Julian Szymański - Publications - Bridge of Knowledge

Publications

Year 2015
  • DBpedia As a Formal Knowledge Base – An Evaluation

    DBpedia is widely used by researchers as a means of accessing Wikipedia in a standardized way. In this paper it is characterized from the point of view of a question answering system. A simple implementation of such a system is also presented. The paper also characterizes alternatives to DBpedia in the form of the OpenCyc and YAGO knowledge bases. A comparison between DBpedia and those knowledge bases is presented.

    Full text available to download
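
    A minimal sketch (not from the paper) of what accessing Wikipedia through DBpedia in a standardized way can look like: a SPARQL query against the public DBpedia endpoint, using the third-party SPARQLWrapper package. The entity and property URIs are only illustrative.

```python
# Minimal sketch: query DBpedia's public SPARQL endpoint for a single fact.
# Assumes the third-party SPARQLWrapper package is installed.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    SELECT ?birthPlace WHERE {
        <http://dbpedia.org/resource/Nicolaus_Copernicus>
            <http://dbpedia.org/ontology/birthPlace> ?birthPlace .
    }
""")
sparql.setReturnFormat(JSON)

# Each binding carries the URI of a place recorded as the birth place.
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["birthPlace"]["value"])
```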

  • Improving css-KNN Classification Performance by Shifts in Training Data
    Publication

    - Year 2015

    This paper presents a new approach to improve the performance of a css-k-NN classifier for the categorization of text documents. The css-k-NN classifier (i.e., a threshold-based variation of a standard k-NN classifier we proposed in [1]) is a lazy-learning, instance-based classifier. It does not have parameters associated with features and/or classes of objects that would be optimized during off-line learning. In this paper we propose...
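
    The exact css-k-NN scoring and the training-data shifts are described in the paper; the sketch below only illustrates the general idea of a threshold-based, multi-label k-NN decision rule, using scikit-learn and an assumed binary label-indicator matrix.

```python
# Hedged sketch of a generic threshold-based, multi-label k-NN decision rule;
# the css-k-NN scoring and the training-data shifts from the paper are not
# reproduced here.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def threshold_knn_predict(X_train, Y_train, X_test, k=5, threshold=0.5):
    """Y_train: binary label-indicator matrix of shape (n_samples, n_classes)."""
    Y_train = np.asarray(Y_train)
    nn = NearestNeighbors(n_neighbors=k, metric="cosine").fit(X_train)
    dist, idx = nn.kneighbors(X_test)
    sim = 1.0 - dist                                    # cosine similarity of the k neighbours
    # class score = similarity-weighted share of neighbours carrying that class
    scores = np.einsum("ij,ijc->ic", sim, Y_train[idx]) / sim.sum(axis=1, keepdims=True)
    return (scores >= threshold).astype(int)            # keep every class whose score passes the threshold
```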

  • Improving Effectiveness of SVM Classifier for Large Scale Data

    The paper presents our approach to SVM implementation in a parallel environment. We describe how the classification learning and prediction phases were parallelised. We also propose a method for limiting the number of necessary computations during classifier construction. Our method, named one-vs-near, is an extension of the typical one-vs-all approach used to make binary classifiers work with multiclass problems. We perform experiments...

    Full text to download in external service
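
    The one-vs-near method itself is defined in the paper; the sketch below is only a loose illustration of the underlying idea of restricting each binary subproblem to a class and its nearest classes. Taking neighbourhoods from centroid cosine similarity is an assumption, not the paper's procedure.

```python
# Loose illustration only: one binary LinearSVC per class, trained on that
# class plus its "nearest" classes. Centroid-based neighbourhoods are an
# assumption, not the procedure from the paper.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics.pairwise import cosine_similarity

def train_one_vs_near(X, y, n_near=5):
    """X: dense feature matrix, y: class labels."""
    y = np.asarray(y)
    classes = np.unique(y)
    centroids = np.vstack([X[y == c].mean(axis=0) for c in classes])
    near = np.argsort(-cosine_similarity(centroids), axis=1)[:, :n_near + 1]
    models = {}
    for i, c in enumerate(classes):
        keep = np.isin(y, classes[near[i]])         # the class itself plus its neighbours
        models[c] = LinearSVC().fit(X[keep], (y[keep] == c).astype(int))
    return models
```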

  • Information Retrieval in Wikipedia with Conceptual Directions
    Publication

    - Year 2015

    The paper describes our algorithm used for the retrieval of textual information from Wikipedia. The experiments show that the algorithm improves typical evaluation measures of retrieval quality. The improvement of the retrieval results was achieved with a two-phase approach. In the first phase the algorithm extends the set of content indexed by the specified keywords and thus increases the Recall value. Then, using the...

    Full text to download in external service

  • Retrieval of Heterogeneous Services in C2NIWA Repository
    Publication

    The paper reviews the methods used for the retrieval of information and services. The selected approaches presented in the review inspired us to build retrieval mechanisms in a system for searching the resources stored in the C2NIWA repository. We describe the architecture of the system, its functions and the surrounding subsystems to which it is related. For the retrieval of C2NIWA services we propose three approaches based on: keyword...

    Full text available to download

  • Semantic URL Analytics to Support Efficient Annotation of Large Scale Web Archives
    Publication
    • T. Souza
    • E. Demidova
    • T. Risse
    • H. Holzmann
    • G. Gossen
    • J. Szymański

    - Year 2015

    Long-term Web archives comprise Web documents gathered over longer time periods and can easily reach hundreds of terabytes in size. Semantic annotations such as named entities can facilitate intelligent access to the Web archive data. However, the annotation of the entire archive content on this scale is often infeasible. The most efficient way to access the documents within Web archives is provided through their URLs, which are...

    Full text to download in external service

  • Simulation of parallel similarity measure computations for large data sets

    The paper presents our approach to the implementation of similarity measures for big data analysis in a parallel environment. We describe the algorithm for parallelisation of the computations. We provide results from a real MPI application for computing similarity measures as well as results achieved with our simulation software. The simulation environment allows us to model parallel systems of various sizes with various components...

    Full text to download in external service
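
    A toy sketch of the general pattern, not the paper's MPI application: every rank computes its block of rows of a cosine-similarity matrix and rank 0 gathers the blocks, using mpi4py.

```python
# Toy sketch of the pattern, not the paper's application: each MPI rank
# computes its block of rows of the cosine-similarity matrix; rank 0
# gathers the blocks. Requires mpi4py.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# identical toy data on every rank (seeded), row-normalised for cosine similarity
X = np.random.default_rng(0).random((1000, 64))
X /= np.linalg.norm(X, axis=1, keepdims=True)

rows = np.array_split(np.arange(X.shape[0]), size)[rank]   # this rank's slice of rows
local_block = X[rows] @ X.T                                 # cosine similarities for those rows

blocks = comm.gather(local_block, root=0)
if rank == 0:
    similarity = np.vstack(blocks)
    print("similarity matrix:", similarity.shape)
```

    Run with, for example, mpiexec -n 4 python cosine_mpi.py (the file name is only illustrative).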

  • Two Stage SVM and kNN Text Documents Classifier
    Publication

    - Year 2015

    The paper presents an approach to the large-scale text document classification problem in parallel environments. A two-stage classifier is proposed, based on a combination of the k-nearest neighbors and support vector machine classification methods. The details of the classifier and the parallelisation of the classification, learning and prediction phases are described. The classifier makes use of our method named one-vs-near. It is...
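
    A hedged sketch of a generic two-stage combination in the spirit described above, not the paper's exact classifier: k-NN first narrows the candidate classes for a document, then a one-vs-rest SVM decides among those candidates only.

```python
# Hedged two-stage sketch: k-NN proposes candidate classes, a one-vs-rest
# SVM picks among them. Not the paper's exact setup.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC
from sklearn.multiclass import OneVsRestClassifier

def two_stage_predict(X_train, y_train, X_test, n_candidates=3):
    knn = KNeighborsClassifier(n_neighbors=15).fit(X_train, y_train)
    svm = OneVsRestClassifier(LinearSVC()).fit(X_train, y_train)
    # both estimators see the same label set, so their class ordering matches
    candidates = np.argsort(-knn.predict_proba(X_test), axis=1)[:, :n_candidates]
    margins = svm.decision_function(X_test)
    picks = [cand[np.argmax(marg[cand])] for cand, marg in zip(candidates, margins)]
    return svm.classes_[picks]
```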

Year 2014
  • Automatic Classification of Polish Sign Language Words

    In the article we present an approach to automatic recognition of hand gestures using the eGlove device. We present the research results of the system for detection and classification of static and dynamic words of Polish Sign Language. The results indicate that using eGlove allows good recognition quality to be achieved, which can additionally be improved using other data sources such as RGB cameras.

    Full text available to download

  • Big Data Paradigm Developed in Volunteer Grid System with Genetic Programming Scheduler

    Artificial intelligence techniques are capable of handling the large amounts of information collected over the web. In this paper, the big data paradigm has been studied in a volunteer grid system called Comcute that is optimized by a genetic programming scheduler. This scheduler can optimize load balancing and resource cost. The genetic programming optimizer has been applied to find the Pareto solutions. Finally, some results from numerical...

    Full text to download in external service

  • Comparative Analysis of Text Representation Methods Using Classification
    Publication

    In our work, we review and empirically evaluate five different raw methods of text representation that allow automatic processing of Wikipedia articles. The main contribution of the article—evaluation of approaches to text representation for machine learning tasks—indicates that the text representation is fundamental for achieving good categorization results. The analysis of the representation methods creates a baseline that cannot...

    Full text to download in external service
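
    A hedged sketch of the kind of comparison described above: the same linear classifier evaluated over a few raw text representations, so that only the representation varies.

```python
# Hedged sketch: compare raw text representations (binary bag of words,
# term counts, TF-IDF) by the cross-validated accuracy of one classifier.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def compare_representations(texts, labels):
    representations = {
        "binary bag of words": CountVectorizer(binary=True),
        "term counts": CountVectorizer(),
        "tf-idf": TfidfVectorizer(),
    }
    for name, vectorizer in representations.items():
        X = vectorizer.fit_transform(texts)
        accuracy = cross_val_score(LinearSVC(), X, labels, cv=5).mean()
        print(f"{name:20s} accuracy: {accuracy:.3f}")
```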

  • Evaluation of Path Based Methods for Conceptual Representation of the Text
    Publication

    Typical text clustering methods use the bag of words (BoW) representation to describe the content of documents. However, this method is known to have several limitations. Employing Wikipedia as the lexical knowledge base has been shown to improve the text representation for data-mining purposes. Promising extensions of that trend employ the hierarchical organization of the Wikipedia category system. In this paper we propose three path-based...

    Full text to download in external service

  • How Specific Can We Be with k-NN Classifier?
    Publication

    This paper discusses the possibility of designing a two-stage classifier for the large-scale hierarchical and multi-label text classification task that would be a compromise between two common approaches to this task. The first of them is called big-bang, where there is only one classifier that aims to do the whole job at once. The top-down approach is the second popular option, in which at each node of the category hierarchy there is a flat classifier...

    Full text to download in external service

  • Knowledge Base Suitable for Answering Questions in Natural Language
    Publication

    This paper presents three knowledge bases widely used by researchers working on natural language processing: OpenCyc, DBpedia and YAGO. They are characterized from the point of view of a question answering system. A short description of the implementation of such a system is also presented.

    Full text to download in external service

  • Towards Increasing Density of Relations in Category Graphs
    Publication

    In the chapter we propose methods for identifying new associations between Wikipedia categories. The first method is based on the Bag-of-Words (BOW) representation of Wikipedia articles. Using the similarity of articles belonging to different categories allows information about category similarity to be calculated. The second method is based on the average scores given to categories while categorizing documents by our dedicated score-based...

    Full text to download in external service

Year 2013
  • Bringing Common Sense to WordNet with a Word Game
    Publication

    We present a tool for common sense knowledge acquisition in the form of a twenty questions game. The described approach uses the WordNet dictionary, whose rich taxonomy allows cognitive economy to be maintained and knowledge propagation to be accelerated, although inferences made on hierarchical relations sometimes result in noise. We extend the dictionary with common sense assertions acquired during the games played with humans. The facts added to the...

    Full text to download in external service

  • IDENTYFIKACJA POWIĄZAŃ POMIĘDZY KATEGORIAMI WIKIPEDII Z UŻYCIEM MIAR PODOBIEŃSTWA ARTYKUŁÓW

    The article describes an approach to identifying relations between categories in a text data repository, based on Wikipedia. By analyzing the similarity between articles, measures were defined that make it possible to identify relations between categories that had not previously been taken into account and to assign them weights describing their degree of relevance. An automatic evaluation of the obtained results was carried out against the already existing...

    Full text to download in external service

  • Improvement of Imperfect String Matching Based on Asymmetric n-Grams
    Publication

    Typical approaches to string comparison treat strings as either different or identical, without taking into account the possibility that a word is misspelled. In this article we present an approach we used to improve imperfect string matching that allows potential string distortions to be reconstructed. The proposed method increases the quality of imperfect string matching, allowing the lookup of misspelled words without significant...

    Full text available to download
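
    A hedged sketch of tolerant word matching via character n-gram overlap (Dice coefficient); the asymmetric n-gram weighting proposed in the paper is not reproduced here.

```python
# Hedged sketch: tolerant word matching via character n-gram overlap
# (Dice coefficient). The paper's asymmetric weighting is not reproduced.
def ngrams(word, n=3):
    padded = f" {word.lower()} "                 # pad so prefixes and suffixes form n-grams too
    return {padded[i:i + n] for i in range(len(padded) - n + 1)}

def dice_similarity(a, b, n=3):
    ga, gb = ngrams(a, n), ngrams(b, n)
    return 2 * len(ga & gb) / (len(ga) + len(gb))

def best_match(query, vocabulary, n=3):
    return max(vocabulary, key=lambda word: dice_similarity(query, word, n))

print(best_match("informaton", ["information", "informal", "formation"]))   # -> information
```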

  • Interactive Information Search in Text Data Collections
    Publication

    This article presents a new idea for retrieval in text repositories, and it describes the general infrastructure of a system created to implement and test those ideas. The implemented system differs from today's standard search engines by introducing a process of interactive search with users and data clustering. We present the basic algorithms behind our system and the measures we used for evaluating the results. The achieved results...

    Full text to download in external service

  • Parallel Computations of Text Similarities for Categorization Task
    Publication

    - Year 2013

    In this chapter we describe an approach to the parallel implementation of similarity computations in high-dimensional spaces. The similarity computations have been used for textual data categorization. The test datasets were created from Wikipedia articles which, together with their hyperlinks, formed a graph used in our experiments. Similarities based on Euclidean distance and the cosine measure have been used to process the data with the k-means algorithm....
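
    A hedged, single-machine sketch of the building blocks named above (TF-IDF vectors, Euclidean and cosine similarity matrices, k-means); the parallel implementation itself is not reproduced here, and the example documents are placeholders.

```python
# Hedged single-machine sketch: TF-IDF vectors, pairwise cosine/Euclidean
# matrices, and k-means clustering of the same data.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity, euclidean_distances

documents = ["graph of wikipedia articles", "clustering text documents",
             "parallel similarity computations", "wikipedia category graph"]
X = TfidfVectorizer().fit_transform(documents)

cosine_matrix = cosine_similarity(X)          # pairwise cosine similarities
euclidean_matrix = euclidean_distances(X)     # pairwise Euclidean distances
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)
```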

  • Retrieval with Semantic Sieve
    Publication

    The article presents an algorithm we call Semantic Sieve, applied to refining search results in a text document repository. The algorithm calculates so-called conceptual directions that enable interaction with the user and allow the set of results to be narrowed to the most relevant ones. We present the system in which the algorithm has been implemented. In the presentation layer, the system also offers clustering of the results into...

    Full text to download in external service

  • Selecting Features with SVM
    Publication

    A common problem with feature selection is to establish how many features, at least, should be retained so that important information is not lost. We describe a method for choosing this number that makes use of Support Vector Machines. The method is based on controlling the angle by which the decision hyperplane is tilted due to feature selection. Experiments were performed on three text datasets generated from a Wikipedia dump. The amount...

    Full text to download in external service
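
    A hedged sketch of the general idea: train a linear SVM, keep the features with the largest absolute weights, retrain on the reduced set, and measure the angle between the original hyperplane and the reduced one embedded back into the full space. A binary problem is assumed, and the details differ from the paper.

```python
# Hedged sketch: angle between the full-feature decision hyperplane and the
# one retrained on the top features (removed coordinates set to zero).
# Binary classification assumed; not the paper's exact procedure.
import numpy as np
from sklearn.svm import LinearSVC

def tilt_angle_degrees(X, y, keep_fraction=0.5):
    w_full = LinearSVC().fit(X, y).coef_.ravel()
    kept = np.argsort(-np.abs(w_full))[: int(len(w_full) * keep_fraction)]
    w_reduced = LinearSVC().fit(X[:, kept], y).coef_.ravel()
    w_embedded = np.zeros_like(w_full)
    w_embedded[kept] = w_reduced                        # reduced hyperplane in the full space
    cos = w_full @ w_embedded / (np.linalg.norm(w_full) * np.linalg.norm(w_embedded))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```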

  • Selection of Relevant Features for Text Classification with K-NN
    Publication

    In this paper, we describe five feature selection techniques used for text classification. Information gain, the independent significance feature test, the chi-squared test, the odds ratio test, and frequency filtering have been compared on text benchmarks based on Wikipedia. For each method we present the classification quality obtained on the test datasets using a K-NN based approach. The main advantage of the evaluated...

    Full text to download in external service
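
    A hedged sketch showing one of the listed techniques (the chi-squared test) as a feature selection step in front of a k-NN classifier; train_texts and train_labels are placeholder names.

```python
# Hedged sketch: chi-squared feature selection feeding a k-NN classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

classifier = make_pipeline(
    TfidfVectorizer(),
    SelectKBest(chi2, k=1000),          # keep the 1000 best terms (k must not exceed the vocabulary size)
    KNeighborsClassifier(n_neighbors=5, metric="cosine"),
)
# classifier.fit(train_texts, train_labels); classifier.predict(test_texts)
```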

  • Thresholding Strategies for Large Scale Multi-Label Text Classifier
    Publication

    This article presents an overview of thresholding methods for labeling objects given a list of candidate class scores. These methods are essential to multi-label classification tasks, especially when there are many classes organized in a hierarchy. The presented techniques are evaluated using a state-of-the-art dedicated classifier on medium-scale text corpora extracted from Wikipedia. The obtained results show that the...

    Full text to download in external service
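
    A hedged sketch of two elementary thresholding strategies for turning per-class scores into multi-label decisions; the strategies actually evaluated in the paper are not reproduced here.

```python
# Hedged sketch: a global score cut-off versus a fixed top-k per document.
import numpy as np

def label_by_score_cut(scores, threshold=0.5):
    return (scores >= threshold).astype(int)          # every class above a global cut-off

def label_top_k(scores, k=3):
    decisions = np.zeros_like(scores, dtype=int)
    top = np.argsort(-scores, axis=1)[:, :k]          # the k best-scoring classes per document
    np.put_along_axis(decisions, top, 1, axis=1)
    return decisions
```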

  • Wikipedia Articles Representation with Matrix'u
    Publication

    - Year 2013

    In the article we evaluate different text representation methods used for the task of Wikipedia article categorization. We present the Matrix'u application used for creating computational datasets of Wikipedia articles. The representations have been evaluated with SVM classifiers used for reconstructing human-made categories.

    Full text to download in external service

Year 2012
  • Adaptive Algorithm for Interactive Question-based Search
    Publication

    - Year 2012

    Popular web search engines tend to improve the relevance of their result pages, but the search is still keyword-oriented and far from "understanding" the queries' meaning. In the article we propose an interactive question-based search algorithm that might prove helpful for identifying users' intents. We describe the algorithm, implemented in the form of a questions game. The stress is put mainly on the most critical aspect of this...

  • Annotating Words Using WordNet Semantic Glosses
    Publication

    - Year 2012

    An approach to word sense disambiguation (WSD) relying on WordNet synsets is proposed. The method uses semantically tagged glosses to perform a process similar to spreading activation in a semantic network, creating a ranking of the most probable meanings for word annotation. Preliminary evaluation shows quite promising results. Comparison with state-of-the-art WSD methods indicates that the use of WordNet relations...

  • Collaborative approach to WordNet and Wikipedia integration
    Publication

    In this article we present a collaborative approach to creating mappings between WordNet and Wikipedia. Wikipedia articles have first been matched with WordNet synsets in an automatic way. Then such associations have been evaluated and complemented in a collaborative way using a web application. We describe the algorithms used for creating the automatic mappings as well as a system for their collaborative development. The outcome enables further...

  • Context Search Algorithm for Lexical Knowledge Acquisition
    Publication

    - CONTROL AND CYBERNETICS - Year 2012

    A Context Search algorithm used for lexical knowledge acquisition is presented. Knowledge representation based on psycholinguistic theories of cognitive processes allows for the implementation of a computational model of semantic memory in the form of a semantic network. Knowledge acquisition using supervised dialog templates has been performed in a word game designed to guess the concept a human user is thinking about. The game,...

  • Interactive Information Retrieval Algorithm for Wikipedia Articles
    Publication

    - Year 2012

    The article presents an algorithm for retrieving textual information in a document collection. The algorithm employs a category system that organizes the repository and, using interaction with the user, improves search precision. The algorithm was implemented for Simple English Wikipedia and the first evaluation results indicate that the proposed method can help retrieve information from large document repositories.

  • Matching Exception Class Hierarchies between .NET, Java Environments
    Publication

    The paper presents a methodology for exception classification and for matching exception messages between the .NET and Java environments. The methodology operates on existing exception class hierarchies and proposes two complementing approaches: automated and manual matching. The automated matching uses a similarity measure to find associations between exception messages from the two sets of classes for the considered programming languages....

  • Rozpraszanie obliczeń za pomocą serwerów dystrybucyjnych

    The principles of operation of distribution servers in a grid-class computing system working in volunteer computing mode are discussed. Ways of increasing the performance of this system layer by managing the stream of data packages are discussed. The Map-Reduce concept in the implementation of parallel processing is also addressed.

    Full text to download in external service

  • Self Organizing Maps for Visualization of Categories
    Publication

    - Year 2012

    Visualization of Wikipedia categories using Self Organizing Maps shows an overview of categories and their relations, helping to narrow down search domains. By selecting particular neurons, this approach enables retrieval of conceptually similar categories. Evaluation of the neural activations indicates that they form coherent patterns that may be useful for building user interfaces for navigation over category structures.
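
    A hedged sketch assuming the third-party minisom package: TF-IDF vectors of a few placeholder category labels mapped onto a small self-organizing map, so that nearby neurons collect conceptually similar categories.

```python
# Hedged sketch (third-party minisom package assumed): map category vectors
# onto a small SOM grid and report each category's winning neuron.
from minisom import MiniSom
from sklearn.feature_extraction.text import TfidfVectorizer

categories = ["machine learning", "neural networks", "ancient history",
              "medieval history", "text classification"]
X = TfidfVectorizer().fit_transform(categories).toarray()

som = MiniSom(4, 4, X.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(X, 500)                              # unsupervised training
for category, vector in zip(categories, X):
    print(category, "->", som.winner(vector))         # grid position of the winning neuron
```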

  • Text classifiers for automatic articles categorization
    Publication

    The article concerns the problem of automatic classification of textual content. We present selected methods for generating document representations and we evaluate them in classification tasks. The experiments have been performed on Wikipedia articles classified automatically into the categories made by Wikipedia editors.

  • Towards Effective Processing of Large Text Collections
    Publication

    In the article we describe an approach to the parallel implementation of elementary operations for textual data categorization. In the experiments we evaluate parallel computations of similarity matrices and the k-means algorithm. The test datasets have been prepared as graphs created from Wikipedia articles related by links. When we create the clustering data packages, we compute pairs of eigenvectors and eigenvalues for visualizations of...

  • Words context analysis for improvement of information retrieval
    Publication

    - Year 2012

    In the article we present an approach to improving information retrieval from large text collections using word context vectors. The vectors have been created by analyzing English Wikipedia with the Hyperspace Analogue to Language model of word similarity. For test phrases we evaluate retrieval with direct user queries as well as retrieval with the context vectors of these queries. The results indicate that the proposed method can not...
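
    A hedged toy version of HAL-style word context vectors: distance-weighted co-occurrence counts in a sliding window. The sketch is symmetric, whereas the full Hyperspace Analogue to Language model distinguishes left and right contexts.

```python
# Hedged toy sketch of HAL-style context vectors: distance-weighted
# co-occurrence counts in a sliding window (symmetric simplification).
from collections import defaultdict

def hal_vectors(tokens, window=5):
    vectors = defaultdict(lambda: defaultdict(float))
    for i, word in enumerate(tokens):
        for j in range(max(0, i - window), i):
            weight = window - (i - j) + 1             # closer words contribute more
            vectors[word][tokens[j]] += weight
            vectors[tokens[j]][word] += weight
    return vectors

vectors = hal_vectors("the cat sat on the mat".split())
print(dict(vectors["cat"]))
```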

  • Zastosowanie systemu Comcute do łamania algorytmu DES
    Publication

    - Year 2012

    The application of the Comcute system to breaking the DES cipher is presented. The basic architecture used for distributing the computations is described, and scalability results of the solution as a function of the number of computing units used are presented.

    Full text to download in external service

Year 2011
  • 0-step K-means for clustering Wikipedia search results
    Publication

    - Year 2011

    This article describes an improvement to the K-means algorithm and its application in the form of a system that clusters search results retrieved from Wikipedia. The proposed algorithm eliminates K-means disadvantages and allows one to create a cluster hierarchy. The main contributions of this paper include the following: (1) the concept of an improved K-means algorithm and its application to hierarchical clustering....

  • Categorization of Wikipedia articles with spectral clustering

    The article reports the application of clustering algorithms for creating hierarchical groups within Wikipedia articles. We evaluate three spectral clustering algorithms based on datasets constructed with the usage of Wikipedia categories. The selected algorithm has been implemented in a system that categorizes Wikipedia search results on the fly.
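
    A hedged sketch of one generic spectral clustering setup (TF-IDF vectors with a nearest-neighbour affinity graph); it is not one of the specific algorithms evaluated in the article.

```python
# Hedged sketch: spectral clustering of TF-IDF article vectors over a
# nearest-neighbour affinity graph.
from sklearn.cluster import SpectralClustering
from sklearn.feature_extraction.text import TfidfVectorizer

def cluster_articles(texts, n_clusters=10):
    X = TfidfVectorizer().fit_transform(texts)
    model = SpectralClustering(n_clusters=n_clusters, affinity="nearest_neighbors",
                               n_neighbors=10, random_state=0)
    return model.fit_predict(X)
```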

  • Cooperative Word Net Editor for Lexical Semantic Acquisition
    Publication

    - Year 2011

    The article describes an approach to building the WordNet semantic dictionary in a collaborative paradigm. The presented system enables functionality for gathering lexical data in a Wikipedia-like style. The core of the system is a user-friendly interface based on a component for interactive graph navigation. The component has been used for WordNet semantic network presentation on a web page, and it brings functionalities...

    Full text to download in external service
