Search results for: wikipedia - Bridge of Knowledge

  • Automatic construction of a service taxonomy based on natural-language service descriptions and external knowledge sources

    Publication

    - Year 2009

    A method for the automatic construction of a service taxonomy from natural-language service descriptions is proposed, based on Formal Concept Analysis (FCA). In addition, the presented solution can draw on external knowledge sources such as Wikipedia, WordNet, ConceptNet, or the World Wide Web to mitigate the problem of incomplete input data (data sparseness).
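
    Since the abstract names Formal Concept Analysis (FCA) as the core method, here is a minimal, self-contained Python sketch of FCA over a toy service-description context; the services, keywords, and the naive concept enumeration are illustrative assumptions, not the paper's implementation.

      # Formal Concept Analysis sketch over an assumed toy context:
      # services described by keyword attributes; the closed
      # (extent, intent) pairs are the formal concepts of the lattice.
      from itertools import combinations

      context = {
          "HotelBooking":  {"travel", "payment", "reservation"},
          "FlightBooking": {"travel", "payment", "reservation", "airline"},
          "CarRental":     {"travel", "payment"},
          "NewsFeed":      {"content"},
      }
      attributes = sorted(set().union(*context.values()))

      def extent(intent_attrs):
          """Objects having every attribute of the intent."""
          return {o for o, attrs in context.items() if intent_attrs <= attrs}

      def intent(objs):
          """Attributes shared by every object in objs."""
          if not objs:
              return set(attributes)
          return set.intersection(*(context[o] for o in objs))

      # Enumerate formal concepts naively (fine for a toy-sized context).
      concepts = set()
      for r in range(len(attributes) + 1):
          for combo in combinations(attributes, r):
              a = extent(set(combo))
              concepts.add((frozenset(a), frozenset(intent(a))))

      for ext, inten in sorted(concepts, key=lambda c: -len(c[0])):
          print(sorted(ext), "<->", sorted(inten))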

  • Cooperative WordNet Editor for Lexical Semantic Acquisition

    Publication

    - Year 2011

    The article describes an approach to building a WordNet semantic dictionary in a collaborative paradigm. The presented system provides functionality for gathering lexical data in a Wikipedia-like style. The core of the system is a user-friendly interface based on a component for interactive graph navigation. The component has been used to present the WordNet semantic network on a web page, and it brings functionalities...

    Full text to download in external service
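
    As a rough illustration of the kind of semantic network such an editor presents, the sketch below walks WordNet hypernym links with NLTK and prints them as graph edges; it assumes a local NLTK WordNet download and is unrelated to the paper's actual editor component.

      # Illustrative only: WordNet synsets as nodes, hypernym links as edges.
      # Requires: pip install nltk, then nltk.download('wordnet') once.
      from nltk.corpus import wordnet as wn

      def hypernym_edges(word, depth=2):
          """Collect (synset, hypernym) edges up to a small depth."""
          edges, frontier = [], wn.synsets(word)[:1]  # start from the first sense
          for _ in range(depth):
              nxt = []
              for syn in frontier:
                  for hyper in syn.hypernyms():
                      edges.append((syn.name(), hyper.name()))
                      nxt.append(hyper)
              frontier = nxt
          return edges

      for child, parent in hypernym_edges("dog"):
          print(f"{child} --is-a--> {parent}")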

  • Management of Textual Data at Conceptual Level

    Publication

    - Year 2011

    The article presents an approach to the management of a large repository of documents at the conceptual level. We describe our approach to representing Wikipedia articles using their categories. The representation has been used to construct groups of similar articles. The proposed approach has been implemented in a prototype system that organizes articles returned as search results for a given query. The constructed clusters allow...
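
    A minimal sketch of the representation idea described above, with invented article-to-category data: articles are represented by their category sets and grouped greedily by Jaccard similarity. The grouping heuristic and the threshold are assumptions for illustration, not the paper's clustering method.

      # Articles represented by their category sets (assumed toy data).
      article_categories = {
          "Python (programming language)": {"Programming languages", "Dutch inventions"},
          "Java (programming language)":   {"Programming languages", "JVM languages"},
          "Gdansk":                        {"Cities in Poland", "Port cities"},
          "Warsaw":                        {"Cities in Poland", "Capitals in Europe"},
      }

      def jaccard(a, b):
          return len(a & b) / len(a | b)

      # Greedy single-link grouping with a similarity threshold.
      threshold, groups = 0.25, []
      for name, cats in article_categories.items():
          for group in groups:
              if any(jaccard(cats, article_categories[m]) >= threshold for m in group):
                  group.append(name)
                  break
          else:
              groups.append([name])

      print(groups)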

  • Words context analysis for improvement of information retrieval

    Publication

    - Year 2012

    In the article we present an approach to improving information retrieval from large text collections using word context vectors. The vectors have been created by analyzing the English Wikipedia with the Hyperspace Analogue to Language model of word similarity. For test phrases we evaluate retrieval with direct user queries as well as retrieval with the context vectors of these queries. The results indicate that the proposed method can not...
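
    The sketch below illustrates, in a simplified symmetric form, how Hyperspace Analogue to Language (HAL) style context vectors can be built from a sliding window with distance-based weights; the toy text and the cosine comparison stand in for the English Wikipedia processing described above.

      # Simplified, symmetric HAL-style co-occurrence vectors (assumed toy text).
      from collections import defaultdict

      def hal_vectors(tokens, window=4):
          vecs = defaultdict(lambda: defaultdict(float))
          for i, w in enumerate(tokens):
              for d in range(1, window + 1):
                  if i + d < len(tokens):
                      # Closer neighbours get larger weights, as in HAL.
                      vecs[w][tokens[i + d]] += window - d + 1
                      vecs[tokens[i + d]][w] += window - d + 1
          return vecs

      def cosine(u, v):
          common = set(u) & set(v)
          num = sum(u[k] * v[k] for k in common)
          den = (sum(x * x for x in u.values()) ** 0.5) * (sum(x * x for x in v.values()) ** 0.5)
          return num / den if den else 0.0

      text = "the cat sat on the mat the dog sat on the rug".split()
      v = hal_vectors(text)
      print(cosine(v["cat"], v["dog"]))   # similar contexts -> high similarity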

  • Towards Effective Processing of Large Text Collections

    Publication

    In the article we describe an approach to the parallel implementation of elementary operations for textual data categorization. In the experiments we evaluate parallel computations of similarity matrices and the k-means algorithm. The test datasets have been prepared as graphs created from Wikipedia articles related by links. When we create the clustering data packages, we compute pairs of eigenvectors and eigenvalues for visualizations of...
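
    A simplified sketch of one elementary operation mentioned above, computing rows of a document-similarity matrix in parallel with the standard-library process pool; the random document vectors are stand-ins for the Wikipedia-derived data, and the row-wise split is an assumed parallelization scheme.

      # Parallel computation of a cosine-similarity matrix, row by row.
      import numpy as np
      from multiprocessing import Pool

      rng = np.random.default_rng(0)
      docs = rng.random((200, 50))                  # stand-in document vectors
      norms = np.linalg.norm(docs, axis=1)

      def similarity_row(i):
          """Cosine similarities between document i and all documents."""
          return docs @ docs[i] / (norms * norms[i])

      if __name__ == "__main__":
          with Pool(processes=4) as pool:
              rows = pool.map(similarity_row, range(len(docs)))
          sim = np.vstack(rows)
          print(sim.shape)                          # (200, 200)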

  • Game with a Purpose for Mappings Verification

    Mappings verification is a laborious task. The paper presents a Game with a Purpose based system for the verification of automatically generated mappings. A general description of the idea behind games with a purpose is given. A description of the TGame system, a 2D mobile platform game with the verification process embedded in the gameplay, is provided. Additional mechanisms for anti-cheating, increasing player’s motivation and gathering...

    Full text available to download

  • Thresholding Strategies for Large Scale Multi-Label Text Classifier

    Publication

    This article presents an overview of thresholding methods for labeling objects given a list of candidate class scores. These methods are essential in multi-label classification tasks, especially when there are many classes organized in a hierarchy. The presented techniques are evaluated using a state-of-the-art dedicated classifier on medium-scale text corpora extracted from Wikipedia. The obtained results show that the...

    Full text to download in external service
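
    Two of the simplest thresholding strategies over candidate class scores can be sketched as follows: RCut (keep the top-k labels) and a fixed score cutoff. The scores and label names are invented, and neither strategy is claimed to be the one evaluated in the paper.

      # Thresholding over multi-label scores (assumed toy scores and labels).
      import numpy as np

      scores = np.array([0.72, 0.05, 0.40, 0.33, 0.01])
      labels = ["science", "sport", "history", "geography", "music"]

      def rcut(scores, k=2):
          """Assign the k highest-scoring labels."""
          return sorted(np.argsort(scores)[::-1][:k])

      def score_cut(scores, t=0.35):
          """Assign every label whose score exceeds a fixed threshold."""
          return [i for i, s in enumerate(scores) if s > t]

      print([labels[i] for i in rcut(scores)])       # ['science', 'history']
      print([labels[i] for i in score_cut(scores)])  # ['science', 'history']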

  • External Validation Measures for Nested Clustering of Text Documents

    Publication

    This article handles the problem of validating the results of nested (as opposed to "flat") clusterings. It shows that standard external validation indices used for validating partitional clusterings, such as the Rand statistic, Hubert's Γ statistic or the F-measure, are not applicable to nested clustering. In addition to the work in which the F-measure was adapted to hierarchical classification as the hF-measure, here some methods to...
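
    As a worked example of one of the standard "flat" external indices discussed above, the sketch below computes the Rand index over pairs of documents for an invented ground truth and clustering.

      # Rand index: fraction of document pairs on which truth and clustering agree.
      from itertools import combinations

      def rand_index(labels_true, labels_pred):
          agree, pairs = 0, 0
          for i, j in combinations(range(len(labels_true)), 2):
              same_true = labels_true[i] == labels_true[j]
              same_pred = labels_pred[i] == labels_pred[j]
              agree += same_true == same_pred
              pairs += 1
          return agree / pairs

      truth      = ["A", "A", "B", "B", "C"]   # assumed ground-truth classes
      clustering = [ 0,   0,   1,   2,   2 ]   # assumed flat clustering
      print(rand_index(truth, clustering))     # 0.8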

  • Parallel Computations of Text Similarities for Categorization Task

    Publication

    - Year 2013

    In this chapter we describe an approach to the parallel computation of similarities in high-dimensional spaces. The similarity computations have been used for textual data categorization. The test datasets were created from Wikipedia articles which, together with their hyperlinks, form a graph used in our experiments. Similarities based on the Euclidean distance and the cosine measure have been used to process the data with the k-means algorithm...
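
    A compact sketch of the clustering step described above: k-means over toy document vectors, with the cosine variant handled by L2-normalizing the vectors before the Euclidean assignment. The data, cluster count, and iteration budget are illustrative assumptions.

      # k-means with Euclidean assignment; cosine is approximated by
      # normalizing vectors to unit length first (assumed toy data).
      import numpy as np

      def kmeans(X, k, iters=20, seed=0):
          rng = np.random.default_rng(seed)
          centers = X[rng.choice(len(X), k, replace=False)]
          for _ in range(iters):
              d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
              assign = d.argmin(axis=1)
              for c in range(k):
                  if np.any(assign == c):
                      centers[c] = X[assign == c].mean(axis=0)
          return assign

      rng = np.random.default_rng(1)
      docs = rng.random((100, 20))
      euclidean_labels = kmeans(docs, k=4)
      cosine_labels = kmeans(docs / np.linalg.norm(docs, axis=1, keepdims=True), k=4)
      print(np.bincount(euclidean_labels), np.bincount(cosine_labels))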

  • Passing from requirements specification to class model using application domain ontology

    The quality of a classic software engineering process depends on the completeness of the project documents and on inter-phase consistency. In this paper, a method for passing from the requirements specification to the class model is proposed. First, a developer browses the text of the requirements, extracts word sequences, and places them as terms in the glossary. Next, the internal ontology logic for the glossary needs to...
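
    A deliberately simple illustration of the first step described above, extracting candidate glossary terms from requirement sentences; the stop-word list and the word/word-pair heuristic are assumptions made for this sketch, not the paper's method.

      # Candidate glossary terms from requirement sentences (assumed heuristics).
      import re
      from collections import Counter

      requirements = [
          "The librarian registers a new library member using the member form.",
          "A library member borrows a book copy for a fixed loan period.",
          "The system sends an overdue notice to the library member.",
      ]

      STOPWORDS = {"the", "a", "an", "to", "for", "of", "using", "new", "fixed"}

      def candidate_terms(sentence):
          words = [w.lower() for w in re.findall(r"[A-Za-z]+", sentence)]
          words = [w for w in words if w not in STOPWORDS]
          # Keep single words and adjacent word pairs as candidate terms.
          return words + [" ".join(p) for p in zip(words, words[1:])]

      glossary = Counter(t for r in requirements for t in candidate_terms(r))
      for term, freq in glossary.most_common(8):
          print(f"{term}: {freq}")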

  • How Specific Can We Be with k-NN Classifier?

    Publication

    This paper discusses the possibility of designing a two-stage classifier for the large-scale hierarchical and multi-label text classification task that is a compromise between two common approaches to this task. The first is called big-bang, where there is only one classifier that aims to do the whole job at once. The top-down approach is the second popular option, in which at each node of the category hierarchy there is a flat classifier...

    Full text to download in external service
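
    The top-down option mentioned above can be sketched as follows: a stub "flat classifier" at each node of an invented category hierarchy routes a document downwards until a leaf is reached. Both the hierarchy and the keyword-voting stub are assumptions for illustration only.

      # Top-down hierarchical classification with per-node stub classifiers.
      hierarchy = {
          "root":    ["science", "sport"],
          "science": ["physics", "biology"],
          "sport":   ["football", "tennis"],
      }

      def node_classifier(node, document):
          """Stand-in for a per-node flat classifier: keyword voting."""
          keywords = {
              "science": {"experiment", "theory"}, "sport": {"match", "player"},
              "physics": {"quantum"}, "biology": {"cell"},
              "football": {"goal"}, "tennis": {"racket"},
          }
          children = hierarchy.get(node, [])
          scores = {c: len(keywords[c] & set(document.split())) for c in children}
          return max(scores, key=scores.get) if scores else None

      def top_down(document, node="root"):
          while node in hierarchy:               # descend until a leaf is reached
              node = node_classifier(node, document)
          return node

      print(top_down("the experiment confirmed the quantum theory"))  # physics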

  • Selection of Relevant Features for Text Classification with K-NN

    Publication

    In this paper, we describe five feature selection techniques used for text classification. Information gain, the independent significance feature test, the chi-squared test, the odds ratio test, and frequency filtering have been compared on text benchmarks based on Wikipedia. For each method we present the classification quality obtained on the test datasets using a k-NN based approach. The main advantage of the evaluated...

    Full text to download in external service
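
    As an example of one of the compared criteria, the sketch below computes the chi-squared statistic for a term/class 2x2 contingency table; the document counts are invented for illustration.

      # Chi-squared term scoring from a 2x2 contingency table (assumed counts).
      def chi_squared(n11, n10, n01, n00):
          """n11: in-class docs with term, n10: in-class without term,
          n01: out-of-class with term, n00: out-of-class without term."""
          n = n11 + n10 + n01 + n00
          num = n * (n11 * n00 - n10 * n01) ** 2
          den = (n11 + n01) * (n10 + n00) * (n11 + n10) * (n01 + n00)
          return num / den if den else 0.0

      # A class-specific term: frequent inside the class, rare outside.
      print(round(chi_squared(n11=80, n10=20, n01=10, n00=890), 1))
      # A common function word: equally frequent inside and outside the class.
      print(round(chi_squared(n11=95, n10=5, n01=855, n00=45), 1))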

  • Text Categorization Improvement via User Interaction

    Publication

    - Year 2018

    In this paper, we propose an approach to improving text categorization through interaction with the user. The quality of categorization has been defined in terms of the distribution of objects related to the classes and projected onto self-organizing maps. For the experiments, we use articles and categories from a subset of Simple Wikipedia. We test three different approaches to text representation. As a baseline we use...

    Full text to download in external service
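
    A minimal plain-NumPy self-organizing map sketch showing the kind of 2D projection of document vectors mentioned above; the toy vectors, grid size, and training schedule are assumptions, and this is not the system used in the paper.

      # Tiny self-organizing map: project document vectors onto a 2D grid.
      import numpy as np

      rng = np.random.default_rng(0)
      docs = rng.random((300, 16))                      # assumed toy document vectors
      grid_w, grid_h, dim = 6, 6, docs.shape[1]
      weights = rng.random((grid_w, grid_h, dim))
      coords = np.array([[i, j] for i in range(grid_w) for j in range(grid_h)])

      for step in range(2000):
          x = docs[rng.integers(len(docs))]
          # Best-matching unit: grid cell whose weight vector is closest to x.
          dists = np.linalg.norm(weights - x, axis=2)
          bmu = np.unravel_index(dists.argmin(), dists.shape)
          lr = 0.5 * (1 - step / 2000)                  # decaying learning rate
          sigma = 2.0 * (1 - step / 2000) + 0.5         # decaying neighbourhood width
          grid_d = np.linalg.norm(coords - np.array(bmu), axis=1).reshape(grid_w, grid_h)
          influence = np.exp(-(grid_d ** 2) / (2 * sigma ** 2))[:, :, None]
          weights += lr * influence * (x - weights)

      # Project one document onto the map: its best-matching grid cell.
      d0 = np.linalg.norm(weights - docs[0], axis=2)
      print(np.unravel_index(d0.argmin(), d0.shape))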

  • Improving css-KNN Classification Performance by Shifts in Training Data

    Publication

    - Year 2015

    This paper presents a new approach to improving the performance of the css-k-NN classifier for the categorization of text documents. The css-k-NN classifier (i.e., a threshold-based variation of the standard k-NN classifier we proposed in [1]) is a lazy-learning, instance-based classifier. It does not have parameters associated with features and/or classes of objects that would be optimized during off-line learning. In this paper we propose...

  • Follow the Light. Where to search for useful research information

    Architectural Lighting Design (ALD) has never been a standalone professional discipline. Rather, it has existed as the combination of art and the science of light. Today, third generation lighting professionals are already creatively intertwining these fields, and the acceleration in scientific, technological and societal studies has only increased the need for reliable multidisciplinary information. Therefore, a thorough re-examination...

    Full text available to download

  • Identification of category associations using a multilabel classifier

    Describing data using categories allows one to describe it at a higher level of abstraction. In this way, we can operate on aggregated groups of information, revealing relationships that are not explicit when the individual objects are analyzed separately. In this paper we present the automatic identification of associations between the categories used to organize textual data. As experimental data...

    Full text to download in external service
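
    A small sketch, over invented multi-label assignments, of one way category associations can be surfaced: categories that co-occur on the same documents more often than chance receive a high lift score. The lift measure is an illustrative choice, not necessarily the paper's method.

      # Category associations from multi-label assignments via lift (assumed data).
      from collections import Counter
      from itertools import combinations

      doc_categories = [
          {"machine learning", "statistics"},
          {"machine learning", "statistics", "optimization"},
          {"history", "geography"},
          {"history", "geography", "politics"},
          {"machine learning", "optimization"},
      ]

      n = len(doc_categories)
      single = Counter(c for cats in doc_categories for c in cats)
      pairs = Counter(frozenset(p) for cats in doc_categories
                      for p in combinations(sorted(cats), 2))

      for pair, co in pairs.most_common():
          a, b = sorted(pair)
          lift = (co / n) / ((single[a] / n) * (single[b] / n))
          print(f"{a} <-> {b}: lift {lift:.2f}")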