Using deep learning to increase accuracy of gaze controlled prosthetic arm

Abstract

This paper presents how neural networks can be utilized to improve the accuracy of the reach-and-grab functionality of a hybrid prosthetic arm with an eye tracking interface. An LSTM-based autoencoder was introduced to overcome the limited accuracy of the gaze tracking modality in this hybrid interface. Gaze-based interaction strongly depends on the eye tracking hardware; this paper shows how the overall accuracy can be slightly improved by a software solution. A cloud of points corresponding to possible final positions of the arm was created to train the autoencoder. The trained model was then used to refine the position provided by the eye tracker. Using the LSTM-based autoencoder resulted in a nearly 3% improvement of the overall accuracy.
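
The abstract does not give architecture details, so the following is only a minimal sketch, assuming TensorFlow/Keras, of how an LSTM-based autoencoder could be trained on a cloud of feasible target positions and then used to refine a noisy gaze-estimated position. The sequence length, layer sizes, coordinate dimensionality, and the synthetic training data are illustrative assumptions, not the authors' implementation.

import numpy as np
from tensorflow.keras import layers, models

SEQ_LEN = 10   # assumed number of consecutive gaze samples per fixation
N_DIM = 3      # assumed 3-D target coordinates (x, y, z)

# Encoder: compress a noisy sequence of gaze points into a latent vector.
encoder_in = layers.Input(shape=(SEQ_LEN, N_DIM))
latent = layers.LSTM(32)(encoder_in)

# Decoder: reconstruct the sequence; the mean of the reconstruction is
# taken as the corrected reach-and-grab target position.
repeated = layers.RepeatVector(SEQ_LEN)(latent)
decoded = layers.LSTM(32, return_sequences=True)(repeated)
decoder_out = layers.TimeDistributed(layers.Dense(N_DIM))(decoded)

autoencoder = models.Model(encoder_in, decoder_out)
autoencoder.compile(optimizer="adam", loss="mse")

# Placeholder "cloud of points" of feasible final arm positions: each
# sequence fixates one target, perturbed with noise to mimic tracker error.
points = np.random.uniform(-0.5, 0.5, size=(2000, 1, N_DIM)).astype("float32")
clean = np.repeat(points, SEQ_LEN, axis=1)
noisy = clean + np.random.normal(scale=0.05, size=clean.shape).astype("float32")

autoencoder.fit(noisy, clean, epochs=5, batch_size=64, verbose=0)

# Inference: denoise a new gaze sequence and average it into one target.
refined = autoencoder.predict(noisy[:1], verbose=0)
target_position = refined.mean(axis=1)
print("corrected target:", target_position)

In practice the autoencoder would be trained on recorded gaze samples paired with the known reachable target positions rather than on synthetic data; the averaging step shown here is just one plausible way to collapse the reconstructed sequence into a single corrected position.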

Citations

  • CrossRef: 0
  • Web of Science: 0
  • Scopus: 0

Full text

Full text is not available in the portal.

Details

Category:
Conference activity
Type:
Conference proceedings indexed in Web of Science
Language:
English
Publication year:
2021
Bibliographic description:
Kocejko T.: Using deep learning to increase accuracy of gaze controlled prosthetic arm, In: , 2021.
DOI:
10.1109/hsi52170.2021.9538710
Sources of funding:
  • Statutory activity/subsidy
Verified by:
Gdańsk University of Technology
