Search results for: restoration of audio signals
(297 of 1259 records match the current filters)
-
MODALITY corpus - SPEAKER 41 - SEQUENCE S1
Open Research Data: The MODALITY corpus is one of the multimodal databases of word recordings in English. It consists of over 30 hours of multimodal recordings. The database contains high-resolution, high-framerate stereoscopic video streams and audio signals obtained from a microphone array and a laptop microphone. The corpus can be employed to develop an AVSR system,...
-
MODALITY corpus - SPEAKER 37 - COMMANDS C1
Open Research Data
-
MODALITY corpus - SPEAKER 10 - SEQUENCE S4
Open Research Data
-
MODALITY corpus - SPEAKER 10 - COMMANDS C6
Open Research Data
-
MODALITY corpus - SPEAKER 03 - SEQUENCE S3
Open Research Data
-
MODALITY corpus - SPEAKER 34 - SEQUENCE S1
Open Research Data
-
MODALITY corpus - SPEAKER 03 - SEQUENCE S4
Open Research Data
-
MODALITY corpus - SPEAKER 03 - COMMANDS C5
Open Research Data
-
MODALITY corpus - SPEAKER 10 - SEQUENCE S6
Open Research Data
-
MODALITY corpus - SPEAKER 03 - COMMANDS C4
Open Research Data
-
MODALITY corpus - SPEAKER 30 - SEQUENCE S1
Open Research Data
-
MODALITY corpus - SPEAKER 03 - COMMANDS C2
Open Research Data
-
MODALITY corpus - SPEAKER 29 - SEQUENCE S1
Open Research Data
-
MODALITY corpus - SPEAKER 30 - COMMANDS C1
Open Research Data
-
MODALITY corpus - SPEAKER 31 - SEQUENCE S1
Open Research Data
-
MODALITY corpus - SPEAKER 10 - COMMANDS C3
Open Research Data
-
MODALITY corpus - SPEAKER 40 - SEQUENCE S1
Open Research Data
-
MODALITY corpus - SPEAKER 34 - COMMANDS C1
Open Research Data
-
MODALITY corpus - SPEAKER 17 - SEQUENCE S4
Open Research Data
-
MODALITY corpus - SPEAKER 17 - SEQUENCE S2
Open Research Data
-
MODALITY corpus - SPEAKER 17 - SEQUENCE S5
Open Research Data
-
MODALITY corpus - SPEAKER 17 - SEQUENCE S3
Open Research Data
-
MODALITY corpus - SPEAKER 17 - SEQUENCE S6
Open Research Data
-
MODALITY corpus - SPEAKER 17 - SEQUENCE S1
Open Research Data
-
Vident-lab: a dataset for multi-task video processing of phantom dental scenes
Open Research Data: We introduce a new, asymmetrically annotated dataset of natural teeth in phantom scenes for multi-task video processing: restoration, teeth segmentation, and inter-frame homography estimation. Pairs of frames were acquired with a beam splitter. Each dataset sample comprises a low-quality frame, its high-quality counterpart, a teeth segmentation mask, and...
-
Surface EMG-based signal acquisition for decoding hand movements
Open Research Data: Biosignal processing plays a crucial role in modern hand prosthetics. The challenge is to restore the functionality of a lost limb based on signals acquired from the surface of the stump. The number of sensors (EMG channels) used for signal acquisition influences the quality of a prosthetic hand. Modern algorithms (including neural networks) can significantly...
-
Projects of noise level on railway lines
Open Research Data: Projects of acoustic signals from railway lines for further analysis in the BK Connect program.
-
Projects of noise level on railway lines
Open Research Data: Projects of acoustic signals from railway lines for further analysis in the BK Connect software.
-
SEMG signal database for the automated upper limb rehabilitation process
Open Research Data: An automated rehabilitation device control system requires information about the patient's physiological condition. This is possible thanks to the use of biological feedback in the form of surface electromyography (SEMG) signals.
-
Measurements of guided wave propagation in CFRP/steel adhesive joints obtained by scanning laser vibrometry
Open Research Data: The dataset contains guided wave signals measured in adhesive joints of steel plates (steel of type S355 with a thickness of 11.9 mm) and carbon fibre reinforced polymer (CarboDur S1012 with a thickness of 1.24 mm). The samples were as follows: #0 – intact; #1 – 10% disbond; #2 – 20% disbond; #3 – 50% disbond; #4 – 80% disbond; #5 – 100% disbond.
-
Doppler blood flow recordings
Open Research Data: Vital signal registration plays a great role in biomedical engineering and in the education process. Well-acquired data allow future engineers to observe certain physical phenomena as well as learn how to correctly process and interpret the data. This data set was designed for students to learn about the Doppler phenomenon and to demonstrate correctly and incorrectly...
-
Neural oscillations induced by IAPS pictures (session 24)
Open Research Data: The data were collected to perform research on the possibility of controlling the content displayed on the monitor screen using human emotional states extracted from EEG signals. The dataset contains recordings of 14-channel EEG signals gathered from seven persons aged 23–35 within 26 sessions, during which 45 different random photos, taken from the...
-
Neural oscillations induced by IAPS pictures (session 23)
Open Research Data
-
Neural oscillations induced by IAPS pictures (session 25)
Open Research Data
-
Neural oscillations induced by IAPS pictures (session 26)
Open Research Data
-
Neural oscillations induced by IAPS pictures (session 10)
Open Research Data
-
Neural oscillations induced by IAPS pictures (session 1)
Open Research Data
-
Neural oscillations induced by IAPS pictures (session 9)
Open Research Data
-
Neural oscillations induced by IAPS pictures (session 5)
Open Research Data
-
Neural oscillations induced by IAPS pictures (session 2)
Open Research Data
-
Neural oscillations induced by IAPS pictures (session 11)
Open Research Data
-
Neural oscillations induced by IAPS pictures (session 3)
Open Research Data
-
Neural oscillations induced by IAPS pictures (session 6)
Open Research Data
-
Neural oscillations induced by IAPS pictures (session 4)
Open Research Data
-
Neural oscillations induced by IAPS pictures (session 8)
Open Research Data
-
Neural oscillations induced by IAPS pictures (session 7)
Open Research Data
-
Neural oscillations induced by IAPS pictures (session 14)
Open Research Data
-
Neural oscillations induced by IAPS pictures (session 13)
Open Research Data
-
Neural oscillations induced by IAPS pictures (session 16)
Open Research Data
-
Neural oscillations induced by IAPS pictures (session 19)
Open Research Data