total: 76
Search results for: wikipedia
-
Towards Facts Extraction From Texts in Polish Language
Publication: The Polish language differs from English in many ways: it has more complex conjugation and declension, which makes automatic fact extraction from text difficult. In this paper we outline the basic differences between the two languages and present an algorithm for extracting facts from Polish Wikipedia articles. The algorithm is based on 7 proposed fact schemes that are searched for in the analyzed text...
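The fact schemes themselves are not listed in the abstract, so the sketch below shows only one hypothetical definitional scheme ("X is a Y") matched with a regular expression; the pattern and the example sentence are illustrative assumptions, not the paper's actual schemes.

```python
# Illustrative sketch only: a single hypothetical "X is a Y" scheme
# (Polish: "X to Y" / "X jest Y"); the paper's 7 schemes are not reproduced here.
import re

PATTERNS = [
    re.compile(r"^(?P<subject>[A-ZŁŚŻŹĆ][\w\s-]+?)\s+(?:to|jest)\s+(?P<object>[\w\s-]+?)[\.,]"),
]

def extract_facts(sentence: str):
    """Return (subject, 'is-a', object) triples matched by the patterns."""
    facts = []
    for pattern in PATTERNS:
        match = pattern.match(sentence)
        if match:
            facts.append((match.group("subject").strip(), "is-a", match.group("object").strip()))
    return facts

print(extract_facts("Gdańsk to miasto portowe nad Bałtykiem."))
# [('Gdańsk', 'is-a', 'miasto portowe nad Bałtykiem')]
```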
-
Selecting Features with SVM
Publication: A common problem in feature selection is establishing the minimum number of features that should be retained so that important information is not lost. We describe a method for choosing this number that makes use of Support Vector Machines. The method is based on controlling the angle by which the decision hyperplane is tilted due to feature selection. Experiments were performed on three text datasets generated from a Wikipedia dump. Amount...
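A minimal sketch of the underlying idea, assuming a linear SVM: train on all features, drop the lowest-weight ones, retrain, and measure the angle between the two hyperplane normals. The data and the retention rule are illustrative assumptions; the paper's exact criterion for choosing the number of features is not reproduced.

```python
# Sketch: how much does the SVM decision hyperplane tilt when low-weight
# features are dropped? (Illustrative data, not the paper's setup.)
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=300, n_features=50, n_informative=10, random_state=0)

full = LinearSVC(dual=False).fit(X, y)
w_full = full.coef_.ravel()

# Keep the k features with the largest absolute weights, retrain, and embed
# the reduced hyperplane normal back into the full feature space.
k = 20
kept = np.argsort(np.abs(w_full))[-k:]
reduced = LinearSVC(dual=False).fit(X[:, kept], y)
w_reduced = np.zeros_like(w_full)
w_reduced[kept] = reduced.coef_.ravel()

cos = np.dot(w_full, w_reduced) / (np.linalg.norm(w_full) * np.linalg.norm(w_reduced))
angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
print(f"hyperplane tilt after keeping {k}/50 features: {angle:.1f} degrees")
```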
-
DBpedia As a Formal Knowledge Base – An Evaluation
Publication: DBpedia is widely used by researchers as a means of accessing Wikipedia in a standardized way. In this paper it is characterized from the point of view of a question answering system, and a simple implementation of such a system is presented. The paper also characterizes alternatives to DBpedia in the form of the OpenCyc and YAGO knowledge bases, and compares DBpedia with them.
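As a minimal illustration of accessing DBpedia in a standardized way, the sketch below runs a SPARQL query against the public DBpedia endpoint using the SPARQLWrapper library; it is not the question answering implementation described in the paper.

```python
# Minimal DBpedia access example via its public SPARQL endpoint.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    SELECT ?abstract WHERE {
        <http://dbpedia.org/resource/Berlin> <http://dbpedia.org/ontology/abstract> ?abstract .
        FILTER (lang(?abstract) = "en")
    }
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["abstract"]["value"][:200])
```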
-
Improving css-KNN Classification Performance by Shifts in Training Data
Publication: This paper presents a new approach to improving the performance of the css-k-NN classifier for categorization of text documents. The css-k-NN classifier (i.e., a threshold-based variation of a standard k-NN classifier we proposed in [1]) is a lazy-learning, instance-based classifier. It has no parameters associated with features and/or classes of objects that would be optimized during off-line learning. In this paper we propose...
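The css-k-NN details are given in the cited paper; the sketch below shows only the generic idea of a threshold-based multi-label k-NN decision, where a class is accepted if its share among the k neighbors exceeds a per-class threshold. The data and thresholds are hypothetical.

```python
# Generic sketch of a threshold-based k-NN multi-label decision rule,
# in the spirit of (but not identical to) the css-k-NN classifier.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_multilabel_predict(X_train, Y_train, x, k=5, thresholds=None):
    """Y_train: binary indicator matrix (n_samples, n_classes)."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    _, idx = nn.kneighbors(x.reshape(1, -1))
    scores = Y_train[idx[0]].mean(axis=0)           # fraction of neighbors per class
    if thresholds is None:
        thresholds = np.full(Y_train.shape[1], 0.5)  # hypothetical per-class thresholds
    return np.where(scores >= thresholds)[0]         # indices of accepted classes

rng = np.random.default_rng(0)
X_train = rng.random((100, 20))
Y_train = (rng.random((100, 4)) > 0.7).astype(int)
print(knn_multilabel_predict(X_train, Y_train, rng.random(20)))
```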
-
Automatic construction of a taxonomy of services based on their natural language descriptions and with the use of external knowledge sources
Publication: We propose a method for automatically building a taxonomy of services from their natural language descriptions, based on Formal Concept Analysis (FCA). Additionally, the presented solution allows the use of external knowledge sources such as Wikipedia, WordNet, ConceptNet or the World Wide Web in order to mitigate the problem of incomplete input data (data sparseness).
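A minimal sketch of the Formal Concept Analysis step on a toy "service x term" context: all formal concepts are obtained by closing the set of object intents under intersection. The services and terms are invented placeholders, not data from the paper.

```python
# Minimal FCA sketch on a toy "service x term" context.
# Concepts are computed by closing the set of intents under intersection.
context = {                       # hypothetical service descriptions -> terms
    "hotel_booking": {"travel", "payment"},
    "flight_search": {"travel"},
    "card_payment":  {"payment", "finance"},
}
all_attrs = set().union(*context.values())

intents = {frozenset(all_attrs)}                 # top intent (empty extent)
for attrs in context.values():
    intents |= {frozenset(attrs) & i for i in intents} | {frozenset(attrs)}

for intent in sorted(intents, key=len):
    extent = {obj for obj, attrs in context.items() if intent <= attrs}
    print(sorted(extent), "<->", sorted(intent))
```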
-
Cooperative Word Net Editor for Lexical Semantic Acquisition
Publication: The article describes an approach to building a WordNet semantic dictionary in a collaborative paradigm. The presented system enables gathering lexical data in a Wikipedia-like style. The core of the system is a user-friendly interface based on a component for interactive graph navigation. The component has been used to present the WordNet semantic network on a web page, and it brings functionalities...
-
Words context analysis for improvement of information retrieval
Publication: In the article we present an approach to improving information retrieval from large text collections using word context vectors. The vectors have been created by analyzing the English Wikipedia with the Hyperspace Analogue to Language (HAL) model of word similarity. For test phrases we evaluate retrieval with direct user queries as well as retrieval with context vectors of these queries. The results indicate that the proposed method can not...
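A simplified sketch of building HAL-style context vectors with a sliding window, where closer co-occurrences receive larger weights; the real model was built from the English Wikipedia, while this toy example uses a single sentence.

```python
# Toy sketch of Hyperspace Analogue to Language (HAL) context vectors:
# a sliding window in which closer co-occurrences receive larger weights.
from collections import defaultdict

def hal_vectors(tokens, window=5):
    vectors = defaultdict(lambda: defaultdict(float))
    for i in range(len(tokens)):
        for d in range(1, window + 1):
            if i + d < len(tokens):
                weight = window - d + 1                      # distance-based weight
                vectors[tokens[i]][tokens[i + d]] += weight  # word -> following context
                vectors[tokens[i + d]][tokens[i]] += weight  # word -> preceding context
    return vectors

tokens = "information retrieval from large text collections using context vectors".split()
vecs = hal_vectors(tokens)
print(dict(vecs["retrieval"]))
```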
-
Game with a Purpose for Mappings Verification
Publication: Mapping verification is a laborious task. The paper presents a Game with a Purpose based system for verification of automatically generated mappings. A general description of the idea behind games with a purpose is given, followed by a description of the TGame system, a 2D platform mobile game with the verification process built into the gameplay. Additional mechanisms for anti-cheating, increasing player motivation and gathering...
-
Thresholding Strategies for Large Scale Multi-Label Text Classifier
Publication: This article presents an overview of thresholding methods for labeling objects given a list of candidate class scores. These methods are essential in multi-label classification tasks, especially when there are many classes organized in a hierarchy. The presented techniques are evaluated using a state-of-the-art dedicated classifier on medium-scale text corpora extracted from Wikipedia. The obtained results show that the...
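To make the family of methods concrete, the sketch below shows two common thresholding strategies over a vector of class scores: a fixed score cut (SCut-style) and a rank cut (RCut-style). These are generic illustrations, not the exact strategies evaluated in the article.

```python
# Two common ways of turning class scores into label decisions.
import numpy as np

scores = np.array([0.91, 0.40, 0.75, 0.05, 0.62])   # hypothetical classifier scores

def score_cut(scores, threshold=0.6):
    return np.where(scores >= threshold)[0]          # classes above a fixed threshold

def rank_cut(scores, k=2):
    return np.argsort(scores)[::-1][:k]              # top-k highest-scoring classes

print("SCut labels:", score_cut(scores))
print("RCut labels:", rank_cut(scores))
```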
-
Towards Effective Processing of Large Text Collections
Publication: In the article we describe an approach to the parallel implementation of elementary operations for textual data categorization. In the experiments we evaluate parallel computations of similarity matrices and the k-means algorithm. The test datasets have been prepared as graphs created from Wikipedia articles related by links. When we create the clustering data packages, we compute pairs of eigenvectors and eigenvalues for visualizations of...
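A minimal sketch of one such elementary operation, a cosine similarity matrix computed in parallel over row chunks with Python's multiprocessing; the data and the chunking scheme are illustrative assumptions rather than the paper's implementation.

```python
# Sketch: cosine similarity matrix computed in parallel by splitting rows
# into chunks. Illustrative only; not the paper's code.
import numpy as np
from multiprocessing import Pool
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)
X = rng.random((1000, 64))            # hypothetical document vectors (seeded for workers)

def chunk_similarity(rows):
    return cosine_similarity(X[rows], X)

if __name__ == "__main__":
    chunks = np.array_split(np.arange(X.shape[0]), 4)
    with Pool(4) as pool:
        parts = pool.map(chunk_similarity, chunks)
    S = np.vstack(parts)
    print(S.shape)                    # (1000, 1000)
```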
-
External Validation Measures for Nested Clustering of Text Documents
Publication: This article addresses the problem of validating the results of nested (as opposed to "flat") clusterings. It shows that standard external validation indices used for partitional clustering validation, like the Rand statistic, Hubert's Γ statistic or the F-measure, are not applicable in the nested clustering case. In addition to previous work, where the F-measure was adapted to hierarchical classification as the hF-measure, here some methods to...
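For reference, the hF-measure is commonly computed by augmenting the predicted and true label sets with their ancestors before taking precision and recall; the sketch below follows that common formulation, which may differ in detail from the variant used in the article.

```python
# hF-measure sketch: labels are augmented with their ancestors before
# computing precision and recall. Toy hierarchy, illustrative only.
parent = {"sports": None, "football": "sports", "basketball": "sports"}

def with_ancestors(labels):
    augmented = set()
    for label in labels:
        while label is not None:
            augmented.add(label)
            label = parent[label]
    return augmented

def hf_measure(predicted, true):
    P, T = with_ancestors(predicted), with_ancestors(true)
    hp = len(P & T) / len(P)
    hr = len(P & T) / len(T)
    return 2 * hp * hr / (hp + hr) if hp + hr else 0.0

print(hf_measure({"football"}, {"basketball"}))   # 0.5: they share the 'sports' ancestor
```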
-
Passing from requirements specification to class model using application domain ontology
Publication: The quality of a classic software engineering process depends on the completeness of project documents and on inter-phase consistency. In this paper, a method for passing from the requirements specification to the class model is proposed. First, a developer browses the text of the requirements, extracts word sequences, and places them as terms in a glossary. Next, the internal ontology logic for the glossary needs to...
-
Parallel Computations of Text Similarities for Categorization Task
Publication: In this chapter we describe an approach to the parallel computation of similarities in high-dimensional spaces. The similarity computations have been used for textual data categorization. The test datasets were created from Wikipedia articles that, together with their hyperlinks, form a graph used in our experiments. Similarities based on the Euclidean distance and the cosine measure have been used to process the data with the k-means algorithm...
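A brief sketch contrasting the two measures mentioned: standard Euclidean k-means versus a cosine-based clustering obtained by L2-normalising the vectors first (for unit vectors, squared Euclidean distance is monotone in cosine similarity). The data are random placeholders, not the Wikipedia graph from the experiments.

```python
# Euclidean k-means vs. an approximate cosine-based k-means via L2 normalisation.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

rng = np.random.default_rng(0)
X = rng.random((500, 32))                        # hypothetical document vectors

euclidean_labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
cosine_labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(normalize(X))

print("Euclidean cluster sizes:", np.bincount(euclidean_labels))
print("Cosine cluster sizes:   ", np.bincount(cosine_labels))
```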
-
How Specific Can We Be with k-NN Classifier?
Publication: This paper discusses the possibility of designing a two-stage classifier for the large-scale hierarchical and multilabel text classification task that would be a compromise between two common approaches to this task. The first of them is called big-bang, where a single classifier aims to do the whole job at once. The top-down approach is the second popular option, in which at each node of the category hierarchy there is a flat classifier...
-
Selection of Relevant Features for Text Classification with K-NN
Publication: In this paper, we describe five feature selection techniques used for text classification. Information gain, the independent significance feature test, the chi-squared test, the odds ratio test, and frequency filtering have been compared on text benchmarks based on Wikipedia. For each method we present the classification quality obtained on the test datasets using a k-NN based approach. A main advantage of the evaluated...
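As an illustration of this kind of comparison, the sketch below ranks terms with two of the five listed criteria (information gain, approximated here by mutual information, and the chi-squared test) using scikit-learn on a readily available corpus; the Wikipedia-based benchmarks and the remaining three tests are not reproduced.

```python
# Ranking terms with chi-squared and mutual information (a stand-in for
# information gain). Uses 20 newsgroups (downloaded on first use), not the
# paper's Wikipedia benchmarks.
import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import chi2, mutual_info_classif

data = fetch_20newsgroups(subset="train", categories=["sci.space", "rec.autos"])
vectorizer = CountVectorizer(max_features=5000, stop_words="english")
X = vectorizer.fit_transform(data.data)
vocab = vectorizer.get_feature_names_out()
y = data.target

chi2_scores, _ = chi2(X, y)
mi_scores = mutual_info_classif(X, y, discrete_features=True)

for name, scores in [("chi2", chi2_scores), ("mutual info", mi_scores)]:
    top = np.argsort(scores)[::-1][:5]
    print(name, "top terms:", [vocab[i] for i in top])
```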
-
Text Categorization Improvement via User Interaction
Publication: In this paper, we propose an approach to improving text categorization using interaction with the user. The quality of categorization has been defined in terms of the distribution of objects related to the classes and projected onto self-organizing maps. For the experiments, we use articles and categories from a subset of Simple Wikipedia. We test three different approaches to text representation. As a baseline we use...
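A minimal sketch of projecting document vectors onto a self-organizing map, here using the MiniSom library as an assumed stand-in; the paper's own SOM setup, text representations and interaction loop are not reproduced.

```python
# Projecting document vectors onto a self-organizing map with MiniSom
# (third-party library, assumed available: pip install minisom).
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
docs = rng.random((200, 50))                  # hypothetical document vectors

som = MiniSom(10, 10, input_len=50, sigma=1.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(docs)
som.train_random(docs, num_iteration=1000)

# Each document is mapped to its best-matching unit; the distribution of
# documents over map cells is the kind of projection the quality is judged on.
cells = [som.winner(d) for d in docs]
print(cells[:5])
```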
-
Automatic classification of Wikipedia articles
Publication: Wikipedia, the online encyclopedia, uses a system of categories to organize its articles. At present, the process of assigning an article to the appropriate thematic categories is carried out manually by its editors. This task is time-consuming and requires knowledge of Wikipedia's structure. Manual categorization is also prone to errors resulting from the fact that assigning an article to a category is based on an arbitrary...
-
Follow the Light. Where to search for useful research information
Publication: Architectural Lighting Design (ALD) has never been a standalone professional discipline. Rather, it has existed as the combination of art and the science of light. Today, third generation lighting professionals are already creatively intertwining these fields, and the acceleration in scientific, technological and societal studies has only increased the need for reliable multidisciplinary information. Therefore, a thorough re-examination...
-
Identification of category associations using a multilabel classifier
Publication: Describing data using categories allows one to describe it at a higher level of abstraction. In this way, we can operate on aggregated groups of information, which reveals relationships that are not explicit when individual objects are analyzed separately. In this paper we present automatic identification of associations between categories used for organizing textual data. As experimental data...
-
Two Stage SVM and kNN Text Documents Classifier
Publication: The paper presents an approach to the large-scale text document classification problem in parallel environments. A two-stage classifier is proposed, based on a combination of the k-nearest neighbors and support vector machine classification methods. The details of the classifier and the parallelisation of the classification, learning and prediction phases are described. The classifier makes use of our method named one-vs-near. It is...
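A sketch of the general two-stage idea: k-NN first narrows the set of candidate classes, then an SVM restricted to those classes makes the final decision. The parallelisation and the one-vs-near construction described in the paper are omitted, and the data are random placeholders.

```python
# Two-stage sketch: k-NN proposes candidate classes, an SVM restricted to
# those classes decides. (SVM retrained per query here purely for brevity.)
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X_train = rng.random((600, 40))
y_train = rng.integers(0, 20, 600)            # hypothetical 20-class problem

nn = NearestNeighbors(n_neighbors=15).fit(X_train)

def two_stage_predict(x):
    _, idx = nn.kneighbors(x.reshape(1, -1))
    candidates = np.unique(y_train[idx[0]])   # stage 1: classes seen among neighbors
    if len(candidates) == 1:
        return candidates[0]
    mask = np.isin(y_train, candidates)       # stage 2: SVM over candidate classes only
    svm = LinearSVC(dual=False).fit(X_train[mask], y_train[mask])
    return svm.predict(x.reshape(1, -1))[0]

print(two_stage_predict(rng.random(40)))
```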
-
Interactive Information Search in Text Data Collections
Publication: This article presents a new idea for retrieval in text repositories and describes the general infrastructure of a system created to implement and test those ideas. The implemented system differs from today's standard search engines by introducing a process of interactive search with users and data clustering. We present the basic algorithms behind our system and the measures we used for evaluating the results. The achieved results...
-
Identification of Associations Between Wikipedia Categories Using Article Similarity Measures
Publication: The article describes an approach to identifying associations between categories in a text data repository, based on Wikipedia. By analyzing the similarity between articles, we defined measures that make it possible to identify associations between categories that had not previously been taken into account, and to assign them weights expressing their degree of importance. The obtained results were evaluated automatically against the already existing...
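A toy sketch of the described idea: the association between two categories is scored as the average cosine similarity between the vectors of their articles, and that score can serve as the weight of the link. The category names and vectors below are invented placeholders.

```python
# Category association scored as the mean cosine similarity between the
# article vectors of two categories. Toy data only.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)
categories = {                                  # hypothetical article vectors per category
    "Physics":   rng.random((30, 100)),
    "Chemistry": rng.random((25, 100)),
    "Music":     rng.random((20, 100)),
}

def association(cat_a, cat_b):
    return cosine_similarity(categories[cat_a], categories[cat_b]).mean()

for a, b in [("Physics", "Chemistry"), ("Physics", "Music")]:
    print(f"{a} <-> {b}: {association(a, b):.3f}")
```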
-
Visualization of the Wikipedia structure for supporting information retrieval
Publication: Graphical presentation is an effective way of improving user interaction with a knowledge repository. It allows complex structures to be presented clearly and dependencies that are not directly visible to be captured. Applying such an approach to information retrieval allows data to be presented at a high level of abstraction while at the same time establishing its context, which directly translates into the quality of access...
-
Annotating Words Using WordNet Semantic Glosses
Publication: An approach to word sense disambiguation (WSD) relying on WordNet synsets is proposed. The method uses semantically tagged glosses to perform a process similar to spreading activation in a semantic network, creating a ranking of the most probable meanings for word annotation. A preliminary evaluation shows quite promising results. Comparison with state-of-the-art WSD methods indicates that the use of WordNet relations...
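A minimal gloss-overlap sketch using NLTK's WordNet interface: senses are ranked by how many gloss words they share with the context. The paper's method performs a more elaborate, spreading-activation-like process over semantically tagged glosses.

```python
# Simplified gloss-overlap ranking of WordNet senses (Lesk-style), not the
# paper's spreading-activation method.
from nltk.corpus import wordnet as wn   # requires: nltk.download('wordnet')

def rank_senses(word, context):
    context_words = set(context.lower().split())
    ranking = []
    for synset in wn.synsets(word):
        gloss_words = set(synset.definition().lower().split())
        ranking.append((len(gloss_words & context_words), synset))
    return sorted(ranking, reverse=True, key=lambda pair: pair[0])

for score, synset in rank_senses("bank", "the river bank was flooded with water")[:3]:
    print(score, synset.name(), "-", synset.definition())
```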
-
Improving Effectiveness of SVM Classifier for Large Scale Data
Publication: The paper presents our approach to SVM implementation in a parallel environment. We describe how the classifier's learning and prediction phases were parallelised. We also propose a method for limiting the number of necessary computations during classifier construction. Our method, named one-vs-near, is an extension of the typical one-vs-all approach used to make binary classifiers work with multiclass problems. We perform experiments...
-
Commonly Accessible Web Service Platform - Wiki-WS
Publication: Web Service technology was originally intended to supply complete and reliable system components. Nowadays this technology is commonly used by companies that provide the results of their work to end users while hiding implementation details. This paper presents a SOA-enabled platform, Wiki-WS, that empowers users to deploy, modify, discover and invoke web services. Moreover, it discusses the concepts and functionalities of this open source management...