Abstract
In this paper, we describe five feature selection techniques used for text classification. Information gain, the independent significance feature test, the chi-squared test, the odds ratio test, and frequency filtering have been compared on text benchmarks based on Wikipedia. For each method we present the classification quality obtained on the test datasets using a K-NN based approach. The main advantage of the evaluated approach is that it reduces the dimensionality of the vector space, which improves the effectiveness of the classification task. The information gain method, which obtained the best results, has been used to evaluate the scalability of feature selection and classification. We also provide results indicating that feature selection is useful for obtaining commonsense features that describe natural categories.
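The abstract summarizes a pipeline of feature scoring followed by K-NN classification on the reduced vector space. The sketch below is a hypothetical illustration using scikit-learn, not the authors' implementation: it substitutes the 20 Newsgroups corpus for the paper's Wikipedia-based benchmarks and uses chi2 (with mutual_info_classif as a stand-in for information gain) in place of the paper's exact implementations of the five selection criteria.

```python
# Minimal sketch (assumptions noted above): score features, keep the top k
# to reduce the vector-space dimensionality, then classify with K-NN.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2  # mutual_info_classif is an alternative scorer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Example corpus; the paper uses Wikipedia-based benchmarks, which are not reproduced here.
data = fetch_20newsgroups(subset="all", remove=("headers", "footers", "quotes"))
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0)

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(max_features=50000)),
    # Keep the 2000 highest-scoring terms; swapping the score_func allows
    # comparison of different selection criteria, as the paper does.
    ("select", SelectKBest(score_func=chi2, k=2000)),
    ("knn", KNeighborsClassifier(n_neighbors=5, metric="cosine")),
])

pipeline.fit(X_train, y_train)
print("macro F1:", f1_score(y_test, pipeline.predict(X_test), average="macro"))
```

With cosine distance, K-NN on the selected term weights operates directly in the reduced vector space, which is the effect of dimensionality reduction described in the abstract.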
Citations
- CrossRef: 2
- Web of Science: 0
- Scopus: 3
Authors (4): J. Balicki, H. Krawczyk, Ł. Rymko, J. Szymański
Full text: not available in the portal
Details
- Category:
- Conference activity
- Type:
- conference proceedings indexed in Web of Science
- Title of issue:
- Artificial Intelligence and Soft Computing, Part 2, pages 477-488
- Language:
- English
- Publication year:
- 2013
- Bibliographic description:
- Balicki J., Krawczyk H., Rymko Ł., Szymański J.: Selection of Relevant Features for Text Classification with K-NN. In: Artificial Intelligence and Soft Computing, Part 2, Springer, 2013, pp. 477-488.
- DOI:
- 10.1007/978-3-642-38610-7_44
- Verified by:
- Gdańsk University of Technology