An experimental multimodal system designed for the polysensory diagnosis and stimulation of persons with impaired communication skills, including non-communicative subjects, is presented. The user interface includes an eye-tracking device and EEG monitoring of the subject. The system further comprises a device for objective hearing testing and an autostereoscopic projection system designed to stimulate subjects by immersing them in a virtual environment. Data analysis methods are described, and experiments on the classification of mental states during listening exercises and audio-visual stimulation are presented and discussed. Feature extraction was based on the discrete wavelet transform, and clustering was performed with the k-means algorithm. All algorithms were implemented in the Python programming language using open-source libraries. Tests of the proposed system were performed at the Special School and Educational Center in Kościerzyna, Poland. Results, including a comparison with data gathered from a control group of healthy subjects, are presented and discussed.
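The abstract's processing pipeline (discrete-wavelet-transform features followed by k-means clustering in Python) can be sketched as below. This is a minimal illustration, not the authors' implementation: the wavelet family (`db4`), decomposition level, sub-band energy features, and cluster count are all assumptions, and the toy signals merely stand in for EEG epochs.

```python
# Hedged sketch: DWT sub-band energies as features, clustered with k-means.
# Wavelet, level, feature choice, and k are illustrative assumptions.
import numpy as np
import pywt
from sklearn.cluster import KMeans

def dwt_features(signal, wavelet="db4", level=4):
    """Return the energy of each DWT sub-band as a feature vector."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

rng = np.random.default_rng(0)
# Toy stand-in for EEG data: 20 one-second epochs sampled at 128 Hz.
epochs = rng.standard_normal((20, 128))
features = np.vstack([dwt_features(e) for e in epochs])

# Cluster the epochs into two putative mental states.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(features.shape, labels.shape)
```

A level-4 decomposition yields five coefficient arrays (one approximation plus four detail bands), so each epoch maps to a five-dimensional feature vector before clustering.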
- Conference activity
- conference proceedings indexed in Web of Science
- Title of issue: Foundations of Intelligent Systems : 23rd International Symposium on Methodologies for Intelligent Systems, ISMIS 2017, pages 47-56
- Publication year: 2017
- Bibliographic description:
- Kurowski A., Odya P., Szczuko P., Lech M., Spaleniak P., Kostek B., Czyżewski A.: Multimodal system for diagnosis and polysensory stimulation of subjects with communication disorders. In: Foundations of Intelligent Systems : 23rd International Symposium on Methodologies for Intelligent Systems, ISMIS 2017, 2017, pp. 47-56.
- Digital Object Identifier: 10.1007/978-3-319-60438-1_5
- Verified by:
- Gdańsk University of Technology