Abstract
Brain–Computer Interfaces (BCIs) based on electroencephalograms (EEG) are systems that aim to provide a communication channel between a person and a computer. They were initially proposed to aid people with disabilities, but wider applications have since been proposed. These devices allow users to send messages or to control devices using brain signals. Different neuro-paradigms evoke brain signals of interest for such purposes. Imagined speech, one of the most recent paradigms and the one explored in this work, consists of the internal pronunciation of a word, i.e., a subject imagines the utterance of a word without emitting sounds or articulating facial movements. Under this neuro-paradigm, extending the initial vocabulary while drastically reducing the training time, using little or no new data, is an open challenge. The proposed method extracts characteristic units (i.e., codewords) from the EEG signals associated with the words of an initial vocabulary. Subsequently, a new imagined word is represented with these codewords, and then a classification algorithm is applied. The method was tested, both with and without calibration examples, on a dataset of 27 subjects. An initial vocabulary of 4 words, with 33 epochs per word, was considered. The results were obtained by averaging the accuracies of all subjects; without calibration data, an accuracy of 65.65% was achieved. Compared with the baseline method, which obtained an average accuracy of 68.9%, the proposed method showed no statistically significant difference.
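The abstract outlines a codeword-based representation: characteristic units are learned from the EEG epochs of the initial vocabulary, new epochs are re-expressed over those codewords, and a classifier is applied. The following is a minimal, hypothetical sketch of that general idea in Python; the window length, the use of raw signal windows as features, the k-means codebook, and the linear SVM are illustrative assumptions and not the exact pipeline described in the paper.

```python
# Minimal sketch of a bag-of-codewords representation for EEG epochs:
# learn codewords from epochs of an initial vocabulary, re-express each
# epoch as a histogram of codeword assignments, then classify.
# The feature choice (raw 32-sample windows), codebook size, and
# classifier are illustrative assumptions, not the paper's exact method.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in data: 4 imagined words x 33 epochs each,
# 14 channels x 256 samples per epoch (arbitrary shapes).
n_words, n_epochs, n_channels, n_samples = 4, 33, 14, 256
X = rng.standard_normal((n_words * n_epochs, n_channels, n_samples))
y = np.repeat(np.arange(n_words), n_epochs)

def windows(epoch, win=32):
    """Slice every channel of one epoch into non-overlapping windows."""
    segs = epoch.reshape(epoch.shape[0], -1, win)   # (channels, n_win, win)
    return segs.reshape(-1, win)                    # one row per window

# 1) Learn the codebook (codewords) from all windows of the initial vocabulary.
all_windows = np.vstack([windows(e) for e in X])
codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(all_windows)

def histogram(epoch):
    """Represent an epoch as a normalized histogram of codeword assignments."""
    labels = codebook.predict(windows(epoch))
    h = np.bincount(labels, minlength=codebook.n_clusters).astype(float)
    return h / h.sum()

# 2) Encode every epoch with the learned codewords and train a classifier.
H = np.array([histogram(e) for e in X])
clf = SVC(kernel="linear").fit(H, y)
print("training accuracy:", clf.score(H, y))

# A new imagined word would be encoded with the same codebook
# before being passed to the classifier.
new_epoch = rng.standard_normal((n_channels, n_samples))
print("predicted class:", clf.predict(histogram(new_epoch)[None, :]))
```

The key design point this sketch illustrates is that the codebook is learned once from the initial vocabulary and then reused to encode new words, which is what allows the vocabulary to grow with little or no additional calibration data.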
Citations: 60 (CrossRef), 0 (Web of Science), 58 (Scopus)
Authors (4): J. Garcia Salinas, L. Villaseñor-Pineda, C. A. Reyes-García, A. A. Torres-García
Full text
- Publication version: Accepted or Published Version
- DOI: 10.1016/j.bspc.2019.01.006
Details
- Category: Articles
- Type: journal articles
- Published in: Biomedical Signal Processing and Control, no. 50, pages 151-157, ISSN: 1746-8094
- Language: English
- Publication year: 2019
- Bibliographic description: Garcia Salinas J., Villaseñor-Pineda L., Reyes-García C. A., Torres-García A. A.: Transfer learning in imagined speech EEG-based BCIs // Biomedical Signal Processing and Control, Vol. 50 (2019), pp. 151-157
- DOI: 10.1016/j.bspc.2019.01.006
- Sources of funding: Free publication
- Verified by: Gdańsk University of Technology