Search results for: wikipedia - Bridge of Knowledge


  • Embedded Representations of Wikipedia Categories

    Publication

    - Year 2021

    In this paper, we present an approach to building neural representations of the Wikipedia category graph. We test four different methods and examine the neural embeddings in terms of preservation of graph edges, neighborhood coverage in representation space, and their influence on the results of a task of predicting the parent of two categories. The main contribution of this paper is the application of neural representations for improving the...

    Full text to download in external service

  • Wikipedia Articles Representation with Matrix'u

    Publication

    - Year 2013

    In the article we evaluate different text representation methods used for the task of Wikipedia articles categorization. We present the Matrix’u application used for creating computational datasets of Wikipedia articles. The representations have been evaluated with SVM classifiers used for reconstruction of human-made categories.

    Full text to download in external service

  • Towards Extending Wikipedia with Bidirectional Links

    Publication

    In this paper, we present the results of our WikiLinks project, which aims at extending current Wikipedia linkage mechanisms. Wikipedia has recently become one of the most important information sources on the Internet, yet it is still based on relatively simple linkage facilities. The WikiLinks system extends Wikipedia with bidirectional links between fragments of articles. However, there were several attempts to introduce bidirectional...

    Full text available to download

  • Bidirectional Fragment to Fragment Links in Wikipedia

    Publication

    The paper presents a WikiLinks system that extends the Wikipedia linkage model with bidirectional links between fragments of the articles and overlapping link anchors. The proposed model adopts some ideas from the research conducted in the field of nonlinear, computer-aided writing, often called hypertext. WikiLinks may be considered a web augmentation tool, but it presents a new approach to the problem that addresses the specific...

    Full text available to download

  • Information Retrieval in Wikipedia with Conceptual Directions

    Publication

    - Year 2015

    The paper describes our algorithm used for retrieval of textual information from Wikipedia. The experiments show that the algorithm improves typical evaluation measures of retrieval quality. The improvement of the retrieval results was achieved by a two-phase approach. In the first phase, the algorithm extends the set of content that has been indexed by the specified keywords and thus increases the Recall value. Then, using the...

    Full text to download in external service

  • Collaborative approach to WordNet and Wikipedia integration

    Publication

    In this article we present a collaborative approach to creating mappings between WordNet and Wikipedia. Wikipedia articles have been first matched with WordNet synsets in an automatic way. Then such associations have been evaluated and complemented in a collaborative way using a web application. We describe algorithms used for creating automatic mappings as well as a system for their collaborative development. The outcome enables further...

  • Categorization of Wikipedia articles with spectral clustering

    Abstract. The article reports application of clustering algorithms for creating hierarchical groups within Wikipedia articles. We evaluate three spectral clustering algorithms based on datasets constructed with usage of Wikipedia categories. The selected algorithm has been implemented in a system that categorizes Wikipedia search results on the fly.

  • Towards automatic classification of Wikipedia content

    The article describes an approach to automatic classification of Wikipedia articles. Text representations based on document content and on mutual links were analyzed. The results of applying an SVM classifier are presented.

  • Mining relations between wikipedia categories

    Methods for inducing relations between the categories organizing a document collection are described. The results of applying the proposed approach to improving the Wikipedia category system are presented.

  • Exact-match Based Wikipedia-WordNet Integration

    Publication

    The ability to link between WordNet synsets and Wikipedia articles allows those resources to be used by computers during natural language processing. Much work has been done in this field; however, most of the approaches focus on similarity between Wikipedia articles and WordNet synsets rather than on creation of perfect matches. In this paper we propose a set of methods for automatic perfect-match generation. The proposed methods were...

    Full text available to download

  • Interactive Information Retrieval Algorithm for Wikipedia Articles

    Publication

    - Year 2012

    The article presents an algorithm for retrieving textual information in a document collection. The algorithm employs a category system that organizes the repository and uses interaction with the user to improve search precision. The algorithm was implemented for Simple English Wikipedia, and the first evaluation results indicate that the proposed method can help to retrieve information from large document repositories.

  • Exact-match Based Wikipedia-WordNet Integration

    Publication

    - Year 2019

    Full text to download in external service

  • Wordventure - Developing WordNet in Wikipedia-like Style

    Publication

    - Year 2010

    The article describes an approach for building WordNet semantic dictionary in a collaborative way. The idea of gathering lexical data has been proposed, as well as the system for linguistic data acquisition and management.

  • Spectral Clustering Wikipedia Keyword-Based search Results

    The paper summarizes our research in the area of unsupervised categorization of Wikipedia articles. As a practical result of our research, we present an application of a spectral clustering algorithm used for grouping Wikipedia search results. The main contribution of the paper is a representation method for Wikipedia articles that is based on a combination of words and links and is used for categorization of search results in this...

    Full text available to download

  • Wikipedia and WordNet integration based on words co-occurrences

    Publication

    - Year 2009

    The article presents a method for automatic integration of two lexical resources: the semantic dictionary WordNet and the electronic encyclopaedia Wikipedia. Our goal is to automatically add a semantic tag (a WordNet synset identifier) to the title of a Wikipedia article. We analyze several different approaches to this problem and implement our own solution, based on word occurrences in synset descriptions and the article body....
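
As a rough illustration of the word co-occurrence idea described in the abstract above (this is not the paper's actual algorithm; the glosses, the example article, and the simple overlap scoring are all invented for the sketch), a candidate WordNet synset can be scored by how many distinct words its description shares with the article body:

```python
def overlap_score(article_text, gloss):
    """Score a candidate synset by the number of distinct words
    its gloss shares with the article body (case-insensitive)."""
    article_words = set(article_text.lower().split())
    gloss_words = set(gloss.lower().split())
    return len(article_words & gloss_words)

def best_synset(article_text, candidate_glosses):
    """Pick the candidate gloss with the highest overlap score."""
    return max(candidate_glosses, key=lambda g: overlap_score(article_text, g))

# hypothetical glosses for two senses of "jaguar"
glosses = [
    "a large spotted feline of tropical america",
    "a british manufacturer of luxury cars",
]
article = "The jaguar is a large cat a feline native to tropical America"
best = best_synset(article, glosses)
# the animal gloss shares more words with the article than the car gloss
```

A production system would at least remove stop words and weight rare words more heavily; the raw set intersection here only demonstrates the co-occurrence principle.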

  • Game with a Purpose for Verification of Mappings Between Wikipedia and WordNet

    Publication

    - Year 2017

    The paper presents a Game with a Purpose for verification of automatically generated mappings, focusing on mappings between WordNet synsets and Wikipedia articles. A general description of the idea behind games with a purpose is given. A description of the TGame system, a 2D platform mobile game with the verification process included in the game-play, is provided. Additional mechanisms for anti-cheating, increasing player’s motivation...

    Full text to download in external service

  • An Analysis of Neural Word Representations for Wikipedia Articles Classification

    Publication

    - CYBERNETICS AND SYSTEMS - Year 2019

    One of the current popular methods of generating word representations is an approach based on the analysis of large document collections with neural networks. It creates so-called word-embeddings that attempt to learn relationships between words and encode this information in the form of a low-dimensional vector. The goal of this paper is to examine the differences between the most popular embedding models and the typical bag-of-words...

    Full text to download in external service

  • Self-Organizing Map representation for clustering Wikipedia search results

    Publication

    - Year 2011

    The article presents an approach to automated organization of textual data. The experiments have been performed on a selected subset of Wikipedia. The Vector Space Model representation based on terms has been used to build groups of similar articles extracted from Kohonen Self-Organizing Maps with DBSCAN clustering. To ensure efficiency of the data processing, we performed linear dimensionality reduction of raw data using Principal...

    Full text to download in external service

  • Crowdsourcing-Based Evaluation of Automatic References Between WordNet and Wikipedia

    The paper presents an approach to building references (also called mappings) between WordNet and Wikipedia. We propose four algorithms used for automatic construction of the references. Then, based on an aggregation algorithm, we produce an initial set of mappings that has been evaluated in a cooperative way. For that purpose, we implement a system for the distribution of evaluation tasks that have been solved by the user community....

    Full text available to download

  • 0-step K-means for clustering Wikipedia search results

    Publication

    - Year 2011

    This article describes an improvement of the K-means algorithm and its application in the form of a system that clusters search results retrieved from Wikipedia. The proposed algorithm eliminates K-means disadvantages and allows one to create a cluster hierarchy. The main contributions of this paper include the following: (1) the concept of an improved K-means algorithm and its application for hierarchical clustering....
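
The improved 0-step variant itself is not described in this snippet, but as context, the baseline Lloyd's K-means that such systems build on can be sketched in a few lines of dependency-free Python (the sample points and the naive first-k initialization are purely illustrative):

```python
def kmeans(points, k, iters=20):
    """Plain Lloyd's K-means on 2-D points, no external dependencies."""
    # naive initialization: the first k points (K-means++ would be better)
    centers = list(points[:k])
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # assignment step: each point goes to its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda j: (p[0] - centers[j][0]) ** 2
                                            + (p[1] - centers[j][1]) ** 2)
            clusters[j].append(p)
        # update step: move each center to the mean of its cluster
        for j, c in enumerate(clusters):
            if c:
                centers[j] = (sum(p[0] for p in c) / len(c),
                              sum(p[1] for p in c) / len(c))
    return centers, clusters

# two well-separated toy groups; Lloyd's iteration recovers them
points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers, clusters = kmeans(points, 2)
```

The known weaknesses visible even here (a fixed k, sensitivity to initialization, a flat rather than hierarchical result) are exactly the kind of disadvantages the paper's hierarchical improvement targets.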

  • Relation-based Wikipedia Search System for Factoid Questions Answering

    In this paper we propose an alternative keyword search mechanism for Wikipedia, designed as a prototype solution towards factoid questions answering. The method considers relations between articles for finding the best matching article. Unlike the standard Wikipedia search engine and also Google engine, which search the articles content independently, requiring the entire query to be satisfied by a single article, the proposed...

    Full text available to download

  • Self-Organizing Map representation for clustering Wikipedia search results

    The article presents an approach to automated organization of textual data. The experiments have been performed on a selected subset of Wikipedia. The Vector Space Model representation based on terms has been used to build groups of similar articles extracted from Kohonen Self-Organizing Maps with DBSCAN clustering. To ensure efficiency of the data processing, we performed linear dimensionality reduction of raw data using Principal...

  • Path-based methods on categorical structures for conceptual representation of wikipedia articles

    Machine learning algorithms applied to text categorization mostly employ the Bag of Words (BoW) representation to describe the content of the documents. This method has been successfully used in many applications, but it is known to have several limitations. One way of improving text representation is usage of Wikipedia as the lexical knowledge base – an approach that has already shown promising results in many research studies....

    Full text available to download

  • Automatically created and partially verified Wikipedia - WordNet mappings

    Open Research Data

    Mapping between Wikipedia articles and WordNet synsets. The mappings between Wikipedia articles and WordNet synsets were obtained automatically using 4 algorithms of data processing. The automatically generated mappings were then subject to verification by a group of volunteers using a crowdsourcing approach through so-called Games with a Purpose. The...

  • TF-IDF weighted bag-of-words preprocessed text documents from Simple English Wikipedia

    Open Research Data

    The SimpleWiki2K-scores dataset contains TF-IDF weighted bag-of-words preprocessed text documents (raw strings are not available) [feature matrix] and their multi-label assignments [label-matrix]. Label scores for each document are also provided for an enhanced multi-label KNN [1] and LEML [2] classifiers. The aim of the dataset is to establish a benchmark...
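
As a sketch of how TF-IDF weighted bag-of-words features like those in this dataset are commonly computed (the toy documents and the exact IDF formula, log(N/df), are assumptions here; the dataset's own preprocessing details are not given in this listing):

```python
import math
from collections import Counter

def tfidf(docs):
    """TF-IDF weights for a list of tokenized documents.

    TF is the raw term count within a document; IDF is log(N / df),
    where df is the number of documents containing the term.
    """
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # count each term once per document
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return weights

docs = [
    "wikipedia article about cats".split(),
    "wikipedia article about dogs".split(),
    "dogs and cats".split(),
]
w = tfidf(docs)
# 'wikipedia' appears in 2 of the 3 docs, so its weight in doc 0
# is 1 * log(3/2) ≈ 0.405; a term in every doc would get weight 0
```

Real pipelines usually add sublinear TF scaling, IDF smoothing, and L2 normalization; this minimal form just shows why frequent-everywhere terms are down-weighted.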

  • Przegląd badań na temat Wikipedii oraz z wykorzystaniem Wikipedii jako instrument badawczego

    Publication

    - Year 2020

    In research conducted so far in Poland, Wikipedia has been both a subject of study and a research instrument. Studies about Wikipedia and about the social effects of its use have been carried out by representatives of the humanities and of the social, economic, and legal sciences. For many researchers (especially in computer science), Wikipedia has been an instrument helpful in conducting various analyses and scientific inquiries. This article...

    Full text to download in external service

  • Comparative Analysis of Text Representation Methods Using Classification

    Publication

    In our work, we review and empirically evaluate five different raw methods of text representation that allow automatic processing of Wikipedia articles. The main contribution of the article—evaluation of approaches to text representation for machine learning tasks—indicates that the text representation is fundamental for achieving good categorization results. The analysis of the representation methods creates a baseline that cannot...

    Full text to download in external service

  • WikiPrefs: human preferences dataset built from text edits

    Open Research Data

    The WikiPrefs dataset is a human preferences dataset for Large Language Model alignment. It was built using the EditPrefs method from historical edits of Wikipedia featured articles.

  • Wydobywanie wiedzy z Wikipedii

    Publication

    - Year 2022

    Wikipedia is an enormous source of encyclopedic knowledge gathered by people and intended for people. In information systems, the counterpart of such a knowledge source is an ontology. This chapter shows how Wikipedia is transformed into an ontology and how to extract from it concepts, their properties, and the relations between them.

  • Text classifiers for automatic articles categorization

    Publication

    The article concerns the problem of automatic classification of textual content. We present selected methods for generation of document representations and we evaluate them in classification tasks. The experiments have been performed on Wikipedia articles automatically classified into the categories made by Wikipedia editors.

  • Wordventure - cooperative wordnet editor. Architecture for lexical semantic acquisition

    Publication

    - Year 2009

    This article presents architecture for acquiring lexical semantics in a collaborative approach paradigm. The system enables functionality for editing semantic networks in a wikipedia-like style. The core of the system is a user-friendly interface based on interactive graph navigation. It has been used for semantic network presentation, and brings simultaneously modification functionality.

  • WordVenture - COOPERATIVE WordNet EDITOR Architecture for Lexical Semantic Acquisition

    Publication

    - Year 2017

    This article presents architecture for acquiring lexical semantics in a collaborative approach paradigm. The system enables functionality for editing semantic networks in a wikipedia-like style. The core of the system is a user-friendly interface based on interactive graph navigation. It has been used for semantic network presentation, and brings simultaneously modification functionality.

    Full text to download in external service

  • Elgold: gold standard, multi-genre dataset for named entity recognition and linking

    Open Research Data
    version 1.0 open access

    The dataset contains 276 multi-genre texts with marked named entities, which are linked to corresponding Wikipedia articles if available. Each entity was manually verified by at least three people, which makes the dataset a high-quality gold standard for the evaluation of named entity recognition and linking algorithms.

  • Elgold intermediate: annotated raw

    Open Research Data

    The dataset contains a subset of texts from Elgold intermediate: raw texts with named entities marked and linked to corresponding Wikipedia articles. The texts were annotated by 31 participants during a 1.5-hour session.

  • Self Organizing Maps for Visualization of Categories

    Publication

    - Year 2012

    Visualization of Wikipedia categories using Self Organizing Maps shows an overview of categories and their relations, helping to narrow down search domains. By selecting particular neurons, this approach enables retrieval of conceptually similar categories. Evaluation of neural activations indicates that they form coherent patterns that may be useful for building user interfaces for navigation over category structures.

  • Towards Increasing Density of Relations in Category Graphs

    Publication

    In the chapter we propose methods for identifying new associations between Wikipedia categories. The first method is based on the Bag-of-Words (BOW) representation of Wikipedia articles. Using the similarity of articles belonging to different categories allows calculating information about category similarity. The second method is based on average scores given to categories while categorizing documents by our dedicated score-based...

    Full text to download in external service

  • Elgold partial: News

    Open Research Data

    The dataset contains 37 English texts scraped from news websites. In each text, the named entities are marked. Each named entity is linked to the corresponding Wikipedia article if possible. All entities were manually verified by at least three people, which makes the dataset a high-quality gold standard for the evaluation of named entity recognition and linking...

  • Metody ekstrakcji ustrukturalizowanej treści z Wikipedii

    Publication

    - Year 2022

    Wikipedia has long been a subject of interest for researchers. One area of interest is knowledge acquisition from Wikipedia content, which requires parsing the text of articles. This chapter presents a comparative analysis of various options for parsing Wikipedia content, pointing out the problems that parser authors have to face. This makes it possible to understand why the process of extracting knowledge from Wikipedia is difficult.

  • Elgold partial: Automotive blogs

    Open Research Data

    The dataset contains 34 English texts scraped from automotive blogs. In each text, the named entities are marked. Each named entity is linked to the corresponding Wikipedia article if possible. All entities were manually verified by at least three people, which makes the dataset a high-quality gold standard for the evaluation of named entity recognition and...

  • Elgold partial: Movie reviews

    Open Research Data

    The dataset contains 37 English texts with movie reviews. In each text, the named entities are marked. Each named entity is linked to the corresponding Wikipedia article if possible. All entities were manually verified by at least three people, which makes the dataset a high-quality gold standard for the evaluation of named entity recognition and linking algorithms.

  • Elgold partial: Job offers

    Open Research Data

    The dataset contains 34 English texts scraped from web portals with job offers. In each text, the named entities are marked. Each named entity is linked to the corresponding Wikipedia article if possible. All entities were manually verified by at least three people, which makes the dataset a high-quality gold standard for the evaluation of named entity...

  • Elgold partial: Scientific papers' abstracts

    Open Research Data

    The dataset contains 87 scientific papers' abstracts in English, randomly chosen from the following scientific disciplines: Biomedicine, Life Sciences, Mathematics, Medicine, Science, Humanities, Social Science.

  • Elgold partial: Amazon product reviews

    Open Research Data

    The dataset contains 34 Amazon product reviews in English. In each text, the named entities are marked. Each named entity is linked to the corresponding Wikipedia article if possible. All entities were manually verified by at least three people, which makes the dataset a high-quality gold standard for the evaluation of named entity recognition and linking algorithms.

  • Elgold partial: History blogs

    Open Research Data

    The dataset contains 13 texts from English history blogs. In each text, the named entities are marked. Each named entity is linked to the corresponding Wikipedia article if possible. All entities were manually verified by at least three people, which makes the dataset a high-quality gold standard for the evaluation of named entity recognition and linking algorithms.

  • Management of Textual Data at Conceptual Level

    Publication

    - Year 2011

    The article presents an approach to the management of a large repository of documents at the conceptual level. We describe our approach to representing Wikipedia articles using their categories. The representation has been used to construct groups of similar articles. The proposed approach has been implemented in a prototype system that allows one to organize articles that are search results for a given query. Constructed clusters allow to...

  • Elgold intermediate: verified by the authors

    Open Research Data

    The dataset contains the texts from Elgold intermediate: verified by verification team, additionally verified by the dataset authors, but before the final validation step with the elgold toolset.

  • Elgold intermediate: verified by verification team

    Open Research Data

    The dataset contains the texts from Elgold intermediate: annotated raw, additionally verified by the five-person verification team. Nearly 25% of the mentions were corrected in some aspect.

  • Review on Wikification methods

    Publication

    - AI COMMUNICATIONS - Year 2019

    The paper reviews methods for automatic annotation of texts with Wikipedia entries. The process, called Wikification, aims at building references between concepts identified in the text and Wikipedia articles. Wikification finds many applications, especially in text representation, where it enables one to capture the semantic similarity of documents. Also, it can be considered automatic tagging of the text. We describe typical...

    Full text to download in external service

  • Evaluation of Path Based Methods for Conceptual Representation of the Text

    Publication

    Typical text clustering methods use the bag of words (BoW) representation to describe content of documents. However, this method is known to have several limitations. Employing Wikipedia as the lexical knowledge base has shown an improvement of the text representation for data-mining purposes. Promising extensions of that trend employ hierarchical organization of Wikipedia category system. In this paper we propose three path-based...

    Full text to download in external service

  • Towards Facts Extraction From Texts in Polish Language

    The Polish language differs from English in many ways. It has more complicated conjugation and declension. Because of that, automatic fact extraction from texts is difficult. In this paper we present the basic differences between those languages. The paper presents an algorithm for extraction of facts from articles of the Polish Wikipedia. The algorithm is based on 7 proposed fact schemes that are searched for in the analyzed text....

    Full text available to download
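
The paper's 7 Polish fact schemes are not reproduced in this abstract; as a minimal sketch of the scheme-based idea it describes, here are two invented English surface patterns that map matched text to relation triples:

```python
import re

# Illustrative fact schemes (the paper's actual Polish schemes are not
# listed here); each maps a surface pattern to a relation name.
SCHEMES = [
    (re.compile(r"(?P<subj>[A-Z]\w+(?: [A-Z]\w+)*) was born in "
                r"(?P<obj>[A-Z]\w+)"), "birthplace"),
    (re.compile(r"(?P<subj>[A-Z]\w+(?: [A-Z]\w+)*) is the capital of "
                r"(?P<obj>[A-Z]\w+)"), "capital_of"),
]

def extract_facts(text):
    """Scan the text with every scheme and emit (subject, relation, object)."""
    facts = []
    for pattern, relation in SCHEMES:
        for m in pattern.finditer(text):
            facts.append((m.group("subj"), relation, m.group("obj")))
    return facts

facts = extract_facts(
    "Maria Sklodowska was born in Warsaw. Warsaw is the capital of Poland."
)
```

For an inflected language like Polish, plain regular expressions are not enough; the schemes would have to match over lemmatized or morphologically analyzed text, which is part of what makes the task the paper addresses difficult.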