Search results for: SPEECH EMOTION RECOGNITION - Bridge of Knowledge


  • A Study of Cross-Linguistic Speech Emotion Recognition Based on 2D Feature Spaces

    Publication
    • G. Tamulevicius
    • G. Korvel
    • A. B. Yayak
    • P. Treigys
    • J. Bernataviciene
    • B. Kostek

    - Electronics - Year 2020

In this research, a study of cross-linguistic speech emotion recognition is performed. For this purpose, emotional data of different languages (English, Lithuanian, German, Spanish, Serbian, and Polish) are collected, resulting in a cross-linguistic speech emotion dataset of more than 10,000 emotional utterances. Despite the bi-modal character of the databases gathered, our focus is on the acoustic representation...

    Full text available to download

  • Uncertainty in emotion recognition

Purpose – The purpose of this paper is to explore uncertainty inherent in emotion recognition technologies and the consequences resulting from that phenomenon. Design/methodology/approach – The paper is a general overview of the concept; however, it is based on a meta-analysis of multiple experimental and observational studies performed over the past couple of years. Findings – The main finding of the paper might be summarized as follows:...

    Full text to download in external service

  • Emotion Recognition

    Open Research Data
    open access - series: Person A

The films presented here were recorded using a Phantom Miro high-speed camera. To play a movie, you need special software, which can be downloaded from https://www.phantomhighspeed.com/resourcesandsupport/phantomresources/pccsoftware. The details of each movie are available in the viewer's description after starting the movie...

  • Integration in Multichannel Emotion Recognition

    Publication

    - Year 2018

The paper concerns integration of results provided by automatic emotion recognition algorithms. It presents both the challenges and the approaches to solving them. The paper shows experimental results of the integration and might be of interest to researchers and practitioners who deal with automatic emotion recognition and use more than one solution or multichannel observation.

    Full text available to download

  • Human emotion recognition with biosignals

    Publication

    - Year 2022

This chapter presents issues in the field of affective computing. Basic preliminary information for the recognition of emotions is given, and models of emotions, various ways of evoking emotions, as well as their theoretical foundations are discussed. Particular attention is given to the use of physiological signals in recognizing emotions. This subject is outlined further by presenting selected biosignals, their relationship...

    Full text to download in external service

  • Emotion Recognition and Its Applications

    The paper proposes a set of research scenarios to be applied in four domains: software engineering, website customization, education and gaming. The goal of applying the scenarios is to assess the possibility of using emotion recognition methods in these areas. It also points out the problems of defining sets of emotions to be recognized in different applications, representing the defined emotional states, gathering the data and...

    Full text to download in external service

  • Using Physiological Signals for Emotion Recognition

    Publication

    - Year 2013

Recognizing users' emotions is a promising area of research in the field of human-computer interaction. It is possible to recognize emotions using facial expressions, audio signals, body poses, gestures, etc., but physiological signals are particularly useful in this field because they are spontaneous and not controllable. In this paper a problem of using physiological signals for emotion recognition is presented. The kinds of physiological...

    Full text to download in external service

  • Emotion Recognition Using Physiological Signals

    Publication

    - Year 2015

    In this paper the problem of emotion recognition using physiological signals is presented. Firstly the problems with acquisition of physiological signals related to specific human emotions are described. It is not a trivial problem to elicit real emotions and to choose stimuli that always, and for all people, elicit the same emotion. Also different kinds of physiological signals for emotion recognition are considered. A set of...

    Full text to download in external service

  • Language Models in Speech Recognition

    Publication

    - Year 2022

This chapter describes language models used in speech recognition. It starts by indicating the role and the place of language models in speech recognition. Measures used to compare language models follow. An overview of n-gram, syntactic, semantic, and neural models is given, accompanied by a list of popular software.

    Full text to download in external service

  • Emotion recognition and its application in software engineering

    In this paper a novel application of multimodal emotion recognition algorithms in software engineering is described. Several application scenarios are proposed concerning program usability testing and software process improvement. Also a set of emotional states relevant in that application area is identified. The multimodal emotion recognition method that integrates video and depth channels, physiological signals and input devices...

    Full text to download in external service

  • Facial emotion recognition using depth data

    Publication

    - Year 2015

    In this paper an original approach is presented for facial expression and emotion recognition based only on depth channel from Microsoft Kinect sensor. The emotional user model contains nine emotions including the neutral one. The proposed recognition algorithm uses local movements detection within the face area in order to recognize actual facial expression. This approach has been validated on Facial Expressions and Emotions Database...

    Full text to download in external service

  • Emotion Recognition Based on Facial Expressions of Gamers

    This article presents an approach to emotion recognition based on facial expressions of gamers. With application of certain methods crucial features of an analyzed face like eyebrows' shape, eyes and mouth width, height were extracted. Afterwards a group of artificial intelligence methods was applied to classify a given feature set as one of the following emotions: happiness, sadness, anger and fear. The approach presented in this...

  • Emotion Recognition for Affect Aware Video Games

In this paper the idea of affect-aware video games is presented. A brief review of automatic multimodal affect recognition of facial expressions and emotions is given. The first results of emotion recognition using depth data, as well as a prototype affect-aware video game, are presented.

    Full text to download in external service

  • Ranking Speech Features for Their Usage in Singing Emotion Classification

    Publication

    - Year 2020

    This paper aims to retrieve speech descriptors that may be useful for the classification of emotions in singing. For this purpose, Mel Frequency Cepstral Coefficients (MFCC) and selected Low-Level MPEG 7 descriptors were calculated based on the RAVDESS dataset. The database contains recordings of emotional speech and singing of professional actors presenting six different emotions. Employing the algorithm of Feature Selection based...

    Full text available to download
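MFCCs of the kind ranked in the study above are straightforward to compute from a waveform. The sketch below is a minimal NumPy implementation of the standard pipeline (framing and windowing, power spectrum, mel filterbank, log, DCT); all parameter values are common illustrative defaults, not the ones used in the paper.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters spaced evenly on the mel scale
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_filters=26, n_coeffs=13):
    # Frame the signal, apply a Hann window, take the power spectrum
    frames = [signal[i:i + n_fft] * np.hanning(n_fft)
              for i in range(0, len(signal) - n_fft, hop)]
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2
    # Pool spectral energy into mel bands and compress with a log
    mel_energy = power @ mel_filterbank(n_filters, n_fft, sr).T
    log_mel = np.log(mel_energy + 1e-10)
    # DCT-II over the filter axis yields the cepstral coefficients
    n = np.arange(n_filters)
    basis = np.cos(np.pi * np.outer(np.arange(n_coeffs), (2 * n + 1) / (2 * n_filters)))
    return log_mel @ basis.T  # shape: (frames, n_coeffs)
```

In practice a library such as librosa provides an equivalent `mfcc` routine; the hand-rolled version above only shows the steps involved.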

  • Mining inconsistent emotion recognition results with the multidimensional model

    Publication

    - IEEE Access - Year 2021

    The paper deals with the challenge of inconsistency in multichannel emotion recognition. The focus of the paper is to explore factors that might influence the inconsistency. The paper reports an experiment that used multi-camera facial expression analysis with multiple recognition systems. The data were analyzed using a multidimensional approach and data mining techniques. The study allowed us to explore camera location, occlusions...

    Full text available to download

  • Bimodal Emotion Recognition Based on Vocal and Facial Features

    Emotion recognition is a crucial aspect of human communication, with applications in fields such as psychology, education, and healthcare. Identifying emotions accurately is challenging, as people use a variety of signals to express and perceive emotions. In this study, we address the problem of multimodal emotion recognition using both audio and video signals, to develop a robust and reliable system that can recognize emotions...

    Full text available to download

  • Scoreboard Architectural Pattern and Integration of Emotion Recognition Results

    Publication

This paper proposes a new design pattern, named Scoreboard, dedicated to applications solving complex, multi-stage, non-deterministic problems. The pattern provides a computational framework for the design and implementation of systems that integrate a large number of diverse specialized modules that may vary in accuracy, solution level, and modality. The Scoreboard is an extension of the Blackboard design pattern and comes under...

    Full text available to download

  • Graph Representation Integrating Signals for Emotion Recognition and Analysis

Data reusability is an important feature of current research in just about every field of science. Modern research in Affective Computing often relies on datasets containing experiment-originated data such as biosignals, video clips, or images. Moreover, conducting experiments with a vast number of participants to build datasets for Affective Computing research is time-consuming and expensive. Therefore, it is extremely important to...

    Full text available to download

  • Automatic Emotion Recognition in Children with Autism: A Systematic Literature Review

    Publication

    - SENSORS - Year 2022

    The automatic emotion recognition domain brings new methods and technologies that might be used to enhance therapy of children with autism. The paper aims at the exploration of methods and tools used to recognize emotions in children. It presents a literature review study that was performed using a systematic approach and PRISMA methodology for reporting quantitative and qualitative results. Diverse observation channels and modalities...

    Full text available to download

  • Multimodal English corpus for automatic speech recognition

A multimodal corpus developed for research on speech recognition based on audio-visual data is presented. Besides the usual video and sound excerpts, the prepared database also contains thermovision images and depth maps. All streams were recorded simultaneously; therefore, the corpus makes it possible to examine the importance of the information provided by different modalities. Based on the recordings, it is also possible to develop a speech...

  • Speech recognition system for hearing impaired people.

    Publication

    - Year 2005

The paper presents results of research in the field of speech recognition. The developed system, using visual and acoustic data, will facilitate correct-speech training for people after cochlear implant surgery and other people with serious hearing impairments. Active Shape Models were used to determine visual parameters based on the analysis of lip shape and movement in video recordings. The acoustic parameters are based on...

  • Limitations of Emotion Recognition in Software User Experience Evaluation Context

This paper concerns how an affective-behavioural-cognitive approach applies to the evaluation of the software user experience. Although it may seem that affect recognition solutions are accurate in determining the user experience, there are several challenges in practice. This paper aims to explore the limitations of automatic affect recognition applied in the usability context as well as...

    Full text available to download

  • Systematic Literature Review for Emotion Recognition from EEG Signals

    Publication

    Researchers have recently become increasingly interested in recognizing emotions from electroencephalogram (EEG) signals and many studies utilizing different approaches have been conducted in this field. For the purposes of this work, we performed a systematic literature review including over 40 articles in order to identify the best set of methods for the emotion recognition problem. Our work collects information about the most...

    Full text to download in external service

  • Emotion Recognition - the need for a complete analysis of the phenomenon of expression formation

    Publication

    This article shows how complex emotions are. This has been proven by the analysis of the changes that occur on the face. The authors present the problem of image analysis for the purpose of identifying emotions. In addition, they point out the importance of recording the phenomenon of the development of emotions on the human face with the use of high-speed cameras, which allows the detection of micro expression. The work that was...

    Full text available to download

  • Applicability of Emotion Recognition and Induction Methods to Study the Behavior of Programmers

    Publication

    Recent studies in the field of software engineering have shown that positive emotions can increase and negative emotions decrease the productivity of programmers. In the field of affective computing, many methods and tools to recognize the emotions of computer users were proposed. However, it has not been verified yet which of them can be used to monitor the emotional states of software developers. The paper describes a study carried...

    Full text available to download

  • Usefulness of Keystroke Dynamics Features in User Authentication and Emotion Recognition

    The study presented in the article focuses on keystroke dynamics analysis applied to recognize emotional states and to authenticate users. An overview of some studies and applications in these areas is presented. Then, an experiment is described, i.e. the way of collecting data, extracting features, training classifiers and finding out the most appropriate feature subsets. The results show that it is difficult to indicate a universal...

    Full text to download in external service
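Keystroke-dynamics features of the kind analyzed in the study above are typically derived from key press/release timestamps. A minimal sketch of the two classic timing features, dwell time and flight time; the event data and feature set are illustrative, not the study's actual ones.

```python
from statistics import mean, stdev

# Each event: (key, press_time, release_time) in seconds; values are illustrative.
events = [
    ("h", 0.00, 0.09), ("e", 0.15, 0.22), ("l", 0.31, 0.41),
    ("l", 0.50, 0.58), ("o", 0.66, 0.74),
]

def keystroke_features(events):
    """Dwell time: how long a key is held down.
    Flight time: gap between releasing one key and pressing the next."""
    dwell = [release - press for _, press, release in events]
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    # Summary statistics form a fixed-length vector usable by any classifier
    return {
        "dwell_mean": mean(dwell), "dwell_std": stdev(dwell),
        "flight_mean": mean(flight), "flight_std": stdev(flight),
    }
```

Such vectors would then be fed to the emotion or authentication classifiers the abstract mentions; the paper's own feature subsets may differ.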

  • Optimizing Medical Personnel Speech Recognition Models Using Speech Synthesis and Reinforcement Learning

Text-to-Speech synthesis (TTS) can be used to generate training data for building Automatic Speech Recognition (ASR) models. Access to medical speech data is limited because it is sensitive data that is difficult to obtain for privacy reasons; TTS can help expand the data set. Speech can be synthesized by mimicking different accents, dialects, and speaking styles that may occur in medical language. Reinforcement Learning (RL), in the...

    Full text available to download

  • Examining Influence of Distance to Microphone on Accuracy of Speech Recognition

    Publication

The problem of controlling a machine by a distant-talking speaker without the necessity of handheld or body-worn equipment is considered. A laboratory setup is introduced for examining the performance of the developed automatic speech recognition system fed by direct speech and by distant speech acquired by microphones placed at three different distances from the speaker (0.5 m to 1.5 m). For feature extraction from the voice signal...

    Full text to download in external service

  • An audio-visual corpus for multimodal automatic speech recognition

A review of available audio-visual speech corpora and a description of a new multimodal corpus of English speech recordings are provided. The new corpus, containing 31 hours of recordings, was created specifically to assist the development of audio-visual speech recognition (AVSR) systems. The database related to the corpus includes high-resolution, high-framerate stereoscopic video streams from RGB cameras, a depth imaging stream utilizing Time-of-Flight...

    Full text available to download

  • Visual Lip Contour Detection for the Purpose of Speech Recognition

    Publication

    A method for visual detection of lip contours in frontal recordings of speakers is described and evaluated. The purpose of the method is to facilitate speech recognition with visual features extracted from a mouth region. Different Active Appearance Models are employed for finding lips in video frames and for lip shape and texture statistical description. Search initialization procedure is proposed and error measure values are...

  • Audiovisual speech recognition for training hearing impaired patients

    Publication

The paper presents a system for recognizing isolated speech sounds using visual and acoustic data. Active Shape Models were used to determine visual parameters based on the analysis of lip shape and movement in video recordings. The acoustic parameters are based on mel-cepstral coefficients. A neural network was used to recognize the pronounced sounds based on a feature vector containing both types...

  • Limitations of Emotion Recognition from Facial Expressions in e-Learning Context

    Publication

    The paper concerns technology of automatic emotion recognition applied in e-learning environment. During a study of e-learning process the authors applied facial expressions observation via multiple video cameras. Preliminary analysis of the facial expressions using automatic emotion recognition tools revealed several unexpected results, including unavailability of recognition due to face coverage and significant inconsistency...

    Full text to download in external service

  • Emotion Recognition from Physiological Channels Using Graph Neural Network

    In recent years, a number of new research papers have emerged on the application of neural networks in affective computing. One of the newest trends observed is the utilization of graph neural networks (GNNs) to recognize emotions. The study presented in the paper follows this trend. Within the work, GraphSleepNet (a GNN for classifying the stages of sleep) was adjusted for emotion recognition and validated for this purpose. The...

    Full text available to download

  • Acquisition and indexing of RGB-D recordings for facial expressions and emotion recognition

    Publication

    In this paper KinectRecorder comprehensive tool is described which provides for convenient and fast acquisition, indexing and storing of RGB-D video streams from Microsoft Kinect sensor. The application is especially useful as a supporting tool for creation of fully indexed databases of facial expressions and emotions that can be further used for learning and testing of emotion recognition algorithms for affect-aware applications....

    Full text to download in external service

  • A review of emotion recognition methods based on keystroke dynamics and mouse movements

    Publication

    - Year 2013

    The paper describes the approach based on using standard input devices, such as keyboard and mouse, as sources of data for the recognition of users’ emotional states. A number of systems applying this idea have been presented focusing on three categories of research problems, i.e. collecting and labeling training data, extracting features and training classifiers of emotions. Moreover the advantages and examples of combining standard...

    Full text to download in external service

  • Comparison of Language Models Trained on Written Texts and Speech Transcripts in the Context of Automatic Speech Recognition

    Publication
    • S. Dziadzio
    • A. Nabożny
    • A. Smywiński-Pohl
    • B. Ziółko

    - Year 2015

    Full text to download in external service

  • Auditory-model based robust feature selection for speech recognition

    Publication

    - Journal of the Acoustical Society of America - Year 2010

    Full text to download in external service

  • Comparison of selected off-the-shelf solutions for emotion recognition based on facial expressions

    The paper concerns accuracy of emotion recognition from facial expressions. As there are a couple of ready off-the-shelf solutions available in the market today, this study aims at practical evaluation of selected solutions in order to provide some insight into what potential buyers might expect. Two solutions were compared: FaceReader by Noldus and Xpress Engine by QuantumLab. The performed evaluation revealed that the recognition...

    Full text to download in external service

  • A Review of Emotion Recognition Methods Based on Data Acquired via Smartphone Sensors

    Publication

    In recent years, emotion recognition algorithms have achieved high efficiency, allowing the development of various affective and affect-aware applications. This advancement has taken place mainly in the environment of personal computers offering the appropriate hardware and sufficient power to process complex data from video, audio, and other channels. However, the increase in computing and communication capabilities of smartphones,...

    Full text available to download

  • Ontological Model for Contextual Data Defining Time Series for Emotion Recognition and Analysis

    Publication

    One of the major challenges facing the field of Affective Computing is the reusability of datasets. Existing affective-related datasets are not consistent with each other, they store a variety of information in different forms, different formats, and the terms used to describe them are not unified. This paper proposes a new ontology, ROAD, as a solution to this problem, by formally describing the datasets and unifying the terms...

    Full text available to download

  • Recognition of Emotions in Speech Using Convolutional Neural Networks on Different Datasets

    Artificial Neural Network (ANN) models, specifically Convolutional Neural Networks (CNN), were applied to extract emotions based on spectrograms and mel-spectrograms. This study uses spectrograms and mel-spectrograms to investigate which feature extraction method better represents emotions and how big the differences in efficiency are in this context. The conducted studies demonstrated that mel-spectrograms are a better-suited...

    Full text available to download
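A spectrogram of the kind fed to the CNNs in the study above is just a 2D array of log-power values over time and frequency, i.e. an "image" the network can convolve over. A minimal NumPy sketch (parameter values are illustrative; a mel-spectrogram would additionally pool the frequency bins through a mel filterbank before taking the log):

```python
import numpy as np

def log_spectrogram(y, n_fft=256, hop=128):
    """Short-time log-power spectrogram: the 2D input a CNN consumes.
    Rows are frequency bins, columns are time frames."""
    frames = [y[i:i + n_fft] * np.hanning(n_fft)
              for i in range(0, len(y) - n_fft, hop)]
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2
    # A mel-spectrogram would insert `power @ mel_fb.T` here before the log
    return np.log(power + 1e-10).T  # shape: (freq_bins, time_frames)
```

The resulting array can be treated as a single-channel image and batched for any 2D CNN; the paper's exact STFT settings and network architecture are not reproduced here.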

  • Comparison of Acoustic and Visual Voice Activity Detection for Noisy Speech Recognition

    Publication

    The problem of accurate differentiating between the speaker utterance and the noise parts in a speech signal is considered. The influence of utilizing a voice activity detection in speech signals on the accuracy of the automatic speech recognition (ASR) system is presented. The examined methods of voice activity detection are based on acoustic and visual modalities. The problem of detecting the voice activity in clean and noisy...

  • Analysis of 2D Feature Spaces for Deep Learning-based Speech Recognition

    Publication

    - JOURNAL OF THE AUDIO ENGINEERING SOCIETY - Year 2018

...convolutional neural network (CNN), which is a class of deep, feed-forward artificial neural network. We decided to analyze audio signal feature maps, namely spectrograms, linear and Mel-scale cepstrograms, and chromagrams. The choice was made based on the fact that CNNs perform well in 2D data-oriented processing contexts. Feature maps were employed in the Lithuanian word recognition task. The spectral analysis led to the highest word...

  • A survey of automatic speech recognition deep models performance for Polish medical terms

    Among the numerous applications of speech-to-text technology is the support of documentation created by medical personnel. There are many available speech recognition systems for doctors. Their effectiveness in languages such as Polish should be verified. In connection with our project in this field, we decided to check how well the popular speech recognition systems work, employing models trained for the general Polish language....

    Full text to download in external service

  • Combining visual and acoustic modalities to ease speech recognition by hearing impaired people

    Publication

    - Year 2005

The article presents a system whose purpose is to facilitate the training of correct pronunciation for people with serious hearing impairments. Acoustic and visual parameters were used in the speech analysis. Active Shape Models were used to determine the visual parameters based on lip shape and movement. The acoustic parameters are based on mel-cepstral coefficients. For classification of the pronounced sounds, a...

  • Hybrid of Neural Networks and Hidden Markov Models as a modern approach to speech recognition systems

The aim of this paper is to present a hybrid algorithm that combines the advantages of artificial neural networks and hidden Markov models in speech recognition for control purposes. The scope of the paper includes a review of currently used solutions and a description and analysis of the implementation of selected artificial neural network (NN) structures and hidden Markov models (HMM). The main part of the paper consists of a description...

    Full text available to download

  • Ontological Modeling for Contextual Data Describing Signals Obtained from Electrodermal Activity for Emotion Recognition and Analysis

    Most of the research in the field of emotion recognition is based on datasets that contain data obtained during affective computing experiments. However, each dataset is described by different metadata, stored in various structures and formats. This research can be counted among those whose aim is to provide a structural and semantic pattern for affective computing datasets, which is an important step to solve the problem of data...

    Full text available to download

  • EXAMINING INFLUENCE OF VIDEO FRAMERATE AND AUDIO/VIDEO SYNCHRONIZATION ON AUDIO-VISUAL SPEECH RECOGNITION ACCURACY

    Publication

    The problem of video framerate and audio/video synchronization in audio-visual speech recognition is considered. The visual features are added to the acoustic parameters in order to improve the accuracy of speech recognition in noisy conditions. The Mel-Frequency Cepstral Coefficients are used on the acoustic side whereas Active Appearance Model features are extracted from the image. The feature fusion approach is employed. The...
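The feature-fusion approach described above concatenates per-frame acoustic and visual vectors; since video runs at a lower frame rate than the acoustic analysis, the visual stream must first be time-aligned. A sketch of the general technique using nearest-frame repetition (array shapes are illustrative, not the paper's):

```python
import numpy as np

def fuse_features(acoustic, visual):
    """Feature-level fusion: upsample the slower visual stream to the
    acoustic frame rate by repeating frames, then concatenate per frame."""
    n = acoustic.shape[0]
    # Map each acoustic frame index to the nearest earlier visual frame
    idx = np.minimum((np.arange(n) * visual.shape[0]) // n, visual.shape[0] - 1)
    return np.hstack([acoustic, visual[idx]])

# e.g. 100 acoustic frames of 13 MFCCs vs 25 video frames of 6 AAM parameters
fused = fuse_features(np.zeros((100, 13)), np.zeros((25, 6)))
```

Interpolating visual features instead of repeating them is another common alignment choice; which one the paper used is not stated in this excerpt.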

  • Intra-subject class-incremental deep learning approach for EEG-based imagined speech recognition

    Publication

    - Biomedical Signal Processing and Control - Year 2023

    Brain–computer interfaces (BCIs) aim to decode brain signals and transform them into commands for device operation. The present study aimed to decode the brain activity during imagined speech. The BCI must identify imagined words within a given vocabulary and thus perform the requested action. A possible scenario when using this approach is the gradual addition of new words to the vocabulary using incremental learning methods....

    Full text to download in external service

  • Language material for English audiovisual speech recognition system development. Materiał językowy do wykorzystania w systemie audiowizualnego rozpoznawania mowy angielskiej

    Publication

    - Year 2013

The bi-modal speech recognition system requires two language samples, for training and for testing algorithms, which precisely depict natural English speech. For the purposes of the audio-visual recordings, a training database of 264 sentences (1730 words without repetitions; 5685 sounds) has been created. The language sample reflects vowel and consonant frequencies in natural speech. The recording material reflects both the...

  • IEEE Automatic Speech Recognition and Understanding Workshop

    Conferences

  • ISCA Tutorial and Research Workshop Automatic Speech Recognition

    Conferences

  • Introduction to the special issue on machine learning in acoustics

    Publication
    • Z. Michalopoulou
    • P. Gerstoft
    • B. Kostek
    • M. A. Roch

    - Journal of the Acoustical Society of America - Year 2021

When we started our Call for Papers for a Special Issue on “Machine Learning in Acoustics” in the Journal of the Acoustical Society of America, our ambition was to invite papers in which machine learning was applied to all areas of acoustics. Topics included, but were not limited to, the following: • Music synthesis and analysis • Music sentiment analysis • Music perception • Intelligent music recognition • Musical source separation • Singing...

    Full text available to download

  • Artur Gańcza mgr inż.

    I received the M.Sc. degree from the Gdańsk University of Technology (GUT), Gdańsk, Poland, in 2019. I am currently a Ph.D. student at GUT, with the Department of Automatic Control, Faculty of Electronics, Telecommunications and Informatics. My professional interests include speech recognition, system identification, adaptive signal processing and linear algebra.

  • USING NEURAL NETWORKS FOR SYNTHESIS OF SPEECH EXPRESSING EMOTIONS

    Publication

    - Year 2018

This article presents an analysis of speech-based emotion recognition solutions and the possibilities of using them in the synthesis of emotional speech with neural networks. Current solutions for recognizing emotions in speech and methods of speech synthesis using neural networks are presented. A significant increase in interest in and use of deep learning in speech-related applications is currently observed...

  • Andrzej Czyżewski prof. dr hab. inż.

Prof. Andrzej Czyżewski is a graduate of the Faculty of Electronics at Gdańsk University of Technology (he completed his M.Sc. studies in 1982). He defended his doctoral dissertation on a topic related to digital sound with distinction at the Faculty of Electronics in 1987. In 1992 he presented his habilitation thesis entitled “Digital Operations on Audio Signals”. His habilitation colloquium was unanimously accepted in June 1992 at the AGH University of Science and Technology...

  • Bożena Kostek prof. dr hab. inż.

  • Emotions in Polish speech recordings

    Open Research Data
    open access

The data set presents emotions recorded in sound files containing expressions of Polish speech. The utterances were produced by five young male speakers aged 21–23. Each person said the following words (nie – no, oddaj – give back, podaj – pass, stop – stop, tak – yes, trzymaj – hold) five times, each time expressing a specific emotion – one of three: anger (a),...

  • Material for Automatic Phonetic Transcription of Speech Recorded in Various Conditions

    Publication

Automatic speech recognition (ASR) is under constant development, especially in cases when speech is casually produced, acquired in various environmental conditions, or recorded in the presence of background noise. Phonetic transcription is an important step in the process of full speech recognition and is the main focus of the presented work. ASR is widely implemented in mobile device technology, but...

    Full text to download in external service

  • Towards New Mappings between Emotion Representation Models

    Publication

    There are several models for representing emotions in affect-aware applications, and available emotion recognition solutions provide results using diverse emotion models. As multimodal fusion is beneficial in terms of both accuracy and reliability of emotion recognition, one of the challenges is mapping between the models of affect representation. This paper addresses this issue by: proposing a procedure to elaborate new mappings,...

    Full text available to download
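One simple form such a mapping between affect models can take (a hedged sketch only; the prototype coordinates below are illustrative assumptions, not values elaborated in the paper) is nearest-prototype lookup in the valence-arousal plane:

```python
import math

# Illustrative valence-arousal prototypes for discrete emotion labels
# (the coordinates are assumptions chosen for this sketch).
PROTOTYPES = {
    "joy":     ( 0.8,  0.5),
    "anger":   (-0.6,  0.7),
    "sadness": (-0.7, -0.4),
    "calm":    ( 0.4, -0.6),
}

def map_to_discrete(valence: float, arousal: float) -> str:
    """Map a valence-arousal point to the nearest discrete emotion label."""
    return min(
        PROTOTYPES,
        key=lambda label: math.dist((valence, arousal), PROTOTYPES[label]),
    )
```

For example, `map_to_discrete(0.7, 0.4)` returns `"joy"`; a real mapping procedure would be elaborated and validated against annotated data rather than fixed by hand.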

  • Towards Emotion Acquisition in IT Usability Evaluation Context

    Publication

    - Year 2015

    The paper concerns extension of IT usability studies with automatic analysis of the emotional state of a user. Affect recognition methods and emotion representation models are reviewed and evaluated for applicability in usability testing procedures. Accuracy of emotion recognition, susceptibility to disturbances, independence on human will and interference with usability testing procedures are...

    Full text to download in external service

  • Emotion monitoring system for drivers

This article describes a new approach to building a driver monitoring system. Existing systems focus, for example, on tracking eyelid and eyebrow movements that result from fatigue. We propose a different approach based on monitoring the driver's emotional state. The system assumes that, by using an emotion model based on our own concept, referred to as the reverse Plutchik’s paraboloid of emotions, the recognition of emotions...

    Full text available to download

  • Investigating Feature Spaces for Isolated Word Recognition

    Publication

    - Year 2018

Much attention has been given by researchers to speech processing tasks in automatic speech recognition (ASR) over the past decades. The study addresses the appropriateness of a two-dimensional representation of speech feature spaces for speech recognition tasks based on deep learning techniques. The approach combines Convolutional Neural Networks (CNNs) and time-frequency signal representation...
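As a rough illustration of the kind of two-dimensional input such approaches feed to a CNN (a sketch under assumed window and hop sizes, not the settings used in the study), a log-magnitude spectrogram can be computed with plain NumPy:

```python
import numpy as np

def log_spectrogram(signal, win=256, hop=128):
    """Short-time log-magnitude spectrogram: a 2D 'image' a CNN can consume."""
    window = np.hanning(win)
    frames = [
        signal[start:start + win] * window
        for start in range(0, len(signal) - win + 1, hop)
    ]
    spectra = np.abs(np.fft.rfft(np.array(frames), axis=1))
    return np.log1p(spectra).T  # shape: (freq_bins, time_frames)

# A synthetic one-second "utterance" at 8 kHz
t = np.linspace(0, 1, 8000, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t)
spec = log_spectrogram(tone)
```

The resulting 2D array can then be treated like an image and passed to a standard CNN input layer.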

  • Jan Daciuk dr hab. inż.

Jan Daciuk received his M.Sc. from the Faculty of Electronics of Gdańsk University of Technology in 1986, and his Ph.D. from the Faculty of Electronics, Telecommunications and Informatics of Gdańsk University of Technology in 1999. He has been working at the Faculty since 1988. His research interests include finite-state methods in natural language processing and computational linguistics, including speech processing. Dr. Daciuk...

  • Remote Estimation of Video-Based Vital Signs in Emotion Invocation Studies

Abstract— The goal of this study is to examine the influence of various imitated and video-invoked emotions on the vital signs (respiratory and pulse rates). We also analyze the possibility of extracting these signals from sequences acquired with cost-effective cameras. The preliminary results show that the respiratory rate allows for better separation of some emotions than the pulse rate, yet this relation highly depends...

    Full text available to download

  • Methodology and technology for the polymodal allophonic speech transcription

A method for automatic audiovisual transcription of speech employing acoustic and visual speech representations is developed. It combines the audio and visual modalities, which provides a synergy effect in terms of speech recognition accuracy. To establish a robust solution, basic research concerning the relation between the allophonic variation of speech, i.e. the changes in the articulatory setting of speech organs for...

    Full text to download in external service

  • Methodology and technology for the polymodal allophonic speech transcription

A method for automatic audiovisual transcription of speech employing acoustic, electromagnetic articulography and visual speech representations is developed. It combines the audio and visual modalities, which provides a synergy effect in terms of speech recognition accuracy. To establish a robust solution, basic research concerning the relation between the allophonic variation of speech, i.e., the changes in the articulatory...

    Full text to download in external service

  • Investigating Feature Spaces for Isolated Word Recognition

    Publication
    • P. Treigys
    • G. Korvel
    • G. Tamulevicius
    • J. Bernataviciene
    • B. Kostek

    - Year 2020

The study addresses the issues related to the appropriateness of a two-dimensional representation of the speech signal for speech recognition tasks based on deep learning techniques. The approach combines Convolutional Neural Networks (CNNs) and time-frequency signal representations converted to the investigated feature spaces. In particular, waveforms and fractal dimension features of the signal were chosen for the time domain, and...

    Full text to download in external service
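One time-domain feature of the kind mentioned above is the fractal dimension of the waveform. As a hedged sketch (the study does not state that this particular estimator was used), the Katz fractal dimension can be computed as follows:

```python
import numpy as np

def katz_fd(x):
    """Katz fractal dimension of a 1-D waveform: higher for more
    irregular signals, lower for smooth ones."""
    steps = np.abs(np.diff(x))           # point-to-point step sizes
    L = steps.sum()                      # total length of the curve
    d = np.abs(x - x[0]).max()           # maximal distance from the first sample
    n = len(steps)
    return np.log10(n) / (np.log10(n) + np.log10(d / L))

t = np.linspace(0, 1, 1000, endpoint=False)
smooth = np.sin(2 * np.pi * 5 * t)                       # smooth tone
rough = np.random.default_rng(0).standard_normal(1000)   # irregular noise
```

An irregular waveform yields a larger value than a smooth one, which is what makes the feature useful for separating signal classes in the time domain.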

  • An Attempt to Create Speech Synthesis Model That Retains Lombard Effect Characteristics

    Publication

    - Year 2019

Speech with the Lombard effect has been extensively studied in the context of speech recognition and speech enhancement. However, few studies have investigated the Lombard effect in the context of speech synthesis. The aim of this paper is to create mathematical models that retain the Lombard effect characteristics. These models could be used as the basis of a formant speech synthesizer. The proposed models are based on dividing...

    Full text available to download

  • SYNTHESIZING MEDICAL TERMS – QUALITY AND NATURALNESS OF THE DEEP TEXT-TO-SPEECH ALGORITHM

    The main purpose of this study is to develop a deep text-to-speech (TTS) algorithm designated for an embedded system device. First, a critical literature review of state-of-the-art speech synthesis deep models is provided. The algorithm implementation covers both hardware and algorithmic solutions. The algorithm is designed for use with the Raspberry Pi 4 board. 80 synthesized sentences were prepared based on medical and everyday...

    Full text available to download

  • Rough Sets Applied to Mood of Music Recognition

    Publication

    - Year 2016

With the growth of accessible digital music libraries over the past decade, there is a need for research into automated systems for searching, organizing and recommending music. The mood of music is considered one of the most intuitive criteria for listeners; thus, this work focuses on the emotional content of music and its automatic recognition. The research study presented in this work contains an attempt at music emotion recognition...

  • Investigation of educational processes with affective computing methods

    Publication

    This paper concerns the monitoring of educational processes with the use of new technologies for the recognition of human emotions. This paper summarizes results from three experiments, aimed at the validation of applying emotion recognition to e-learning. An analysis of the experiments’ executions provides an evaluation of the emotion elicitation methods used to monitor learners. The comparison of affect recognition algorithms...

    Full text available to download

  • Speech Analytics Based on Machine Learning

    Publication

    In this chapter, the process of speech data preparation for machine learning is discussed in detail. Examples of speech analytics methods applied to phonemes and allophones are shown. Further, an approach to automatic phoneme recognition involving optimized parametrization and a classifier belonging to machine learning algorithms is discussed. Feature vectors are built on the basis of descriptors coming from the music information...

    Full text to download in external service

  • AffecTube — Chrome extension for YouTube video affective annotations

    Publication

    - SoftwareX - Year 2023

    The shortage of emotion-annotated video datasets suitable for training and validating machine learning models for facial expression-based emotion recognition stems primarily from the significant effort and cost required for manual annotation. In this paper, we present AffecTube as a comprehensive solution that leverages crowdsourcing to annotate videos directly on the YouTube platform, resulting in ready-to-use emotion-annotated...

    Full text available to download

  • Examining Feature Vector for Phoneme Recognition

    Publication

    - Year 2018

The aim of this paper is to analyze the usability of descriptors coming from music information retrieval for phoneme analysis. The case study presented consists of several steps. First, a short overview of parameters utilized in speech analysis is given. Then, a set of time- and frequency-domain parameters is selected and discussed in the context of the acoustical characteristics of stop consonants. A toolbox created for this purpose...

  • Enhanced voice user interface employing spatial filtration of signals from acoustic vector sensor

Spatial filtration of sound is introduced to enhance speech recognition accuracy in noisy conditions. An acoustic vector sensor (AVS) is employed. The signals from the AVS probe are processed in order to attenuate the surrounding noise. As a result, the signal-to-noise ratio is increased. An experiment is featured in which speech signals are disturbed by babble noise. The signals before and after spatial filtration are processed...

    Full text to download in external service
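The reported effect, attenuating the surrounding noise in order to raise the signal-to-noise ratio, can be sketched numerically (the 6 dB attenuation below is an illustrative assumption, not a figure from the paper):

```python
import numpy as np

def snr_db(speech, noise):
    """Signal-to-noise ratio in decibels."""
    return 10 * np.log10(np.mean(speech**2) / np.mean(noise**2))

rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 200 * np.linspace(0, 1, 8000))
noise = 0.5 * rng.standard_normal(8000)

before = snr_db(speech, noise)
# Suppose spatial filtration attenuates the noise by 6 dB (illustrative value).
after = snr_db(speech, noise * 10 ** (-6 / 20))
```

Since only the noise term is scaled, the SNR after filtration exceeds the SNR before it by exactly the attenuation applied.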

  • KORPUS MOWY ANGIELSKIEJ DO CELÓW MULTIMODALNEGO AUTOMATYCZNEGO ROZPOZNAWANIA MOWY

The paper presents an audiovisual speech corpus containing 31 hours of recordings of English speech. The corpus is dedicated to automatic audiovisual speech recognition. It contains video recordings from a high-frame-rate stereo camera as well as audio registered by a microphone array and the built-in microphone of a laptop. Thanks to the inclusion of recordings registered in noisy conditions, the corpus...

  • DevEmo—Software Developers’ Facial Expression Dataset

    The COVID-19 pandemic has increased the relevance of remote activities and digital tools for education, work, and other aspects of daily life. This reality has highlighted the need for emotion recognition technology to better understand the emotions of computer users and provide support in remote environments. Emotion recognition can play a critical role in improving the remote experience and ensuring that individuals are able...

    Full text available to download

  • A comparative study of English viseme recognition methods and algorithms

The paper concerns an elementary visual unit, the viseme, in the context of preparing the feature vector as the main visual input component of Audio-Visual Speech Recognition systems. The aim of the presented research is a review of various approaches to the problem, the implementation of algorithms proposed in the literature, and a comparative study of their effectiveness. In the course of the study an optimal feature vector construction...

    Full text available to download

  • A comparative study of English viseme recognition methods and algorithm

The paper concerns an elementary visual unit, the viseme, in the context of preparing the feature vector as the main visual input component of Audio-Visual Speech Recognition systems. The aim of the presented research is a review of various approaches to the problem, the implementation of algorithms proposed in the literature, and a comparative study of their effectiveness. In the course of the study an optimal feature vector...

    Full text available to download

  • Bimodal classification of English allophones employing acoustic speech signal and facial motion capture

    A method for automatic transcription of English speech into International Phonetic Alphabet (IPA) system is developed and studied. The principal objective of the study is to evaluate to what extent the visual data related to lip reading can enhance recognition accuracy of the transcription of English consonantal and vocalic allophones. To this end, motion capture markers were placed on the faces of seven speakers to obtain lip...

    Full text to download in external service

  • Design Elements of Affect Aware Video Games

    Publication

    - Year 2015

In this paper, issues of the design and development process of affect-aware video games are presented. Several important design aspects of such games are pointed out. A concept of a middleware framework is proposed that separates the development of affect-aware video games from emotion recognition algorithms and from the support of input sensors. Finally, two prototype affect-aware video games are presented that conform to the presented architecture...

    Full text to download in external service

  • Affect aware video games

    Publication

    - Year 2022

In this chapter the problem of affect-aware video games is described, including such issues as: the emotional model of the player; the design, development and UX testing of affect-aware video games; multimodal emotion recognition; and a featured review of affect-aware video games.

    Full text to download in external service

  • Voice command recognition using hybrid genetic algorithm

    Publication

    Abstract: Speech recognition is a process of converting the acoustic signal into a set of words, whereas voice command recognition consists in the correct identification of voice commands, usually single words. Voice command recognition systems are widely used in the military, control systems, electronic devices, such as cellular phones, or by people with disabilities (e.g., for controlling a wheelchair or operating a computer...

    Full text available to download

  • Affective Learning Manifesto – 10 Years Later

    Publication

    - Year 2014

In 2004 a group of affective computing researchers proclaimed a manifesto of affective learning that outlined the prospects and open problems of research at that time. Ten years have passed, and affective computing has developed many methods and tools for tracking human emotional states, as well as models for the construction of affective systems. There are multiple examples of applications of affective methods in Intelligent Tutoring Systems (ITS)....

  • Robot Eye Perspective in Perceiving Facial Expressions in Interaction with Children with Autism

    Publication

    The paper concerns automatic facial expression analysis applied in a study of natural “in the wild” interaction between children with autism and a social robot. The paper reports a study that analyzed the recordings captured via a camera located in the eye of a robot. Children with autism exhibit a diverse level of deficits, including ones in social interaction and emotional expression. The aim of the study was to explore the possibility...

    Full text to download in external service

  • Noise profiling for speech enhancement employing machine learning models

    Publication

    - Journal of the Acoustical Society of America - Year 2022

    This paper aims to propose a noise profiling method that can be performed in near real-time based on machine learning (ML). To address challenges related to noise profiling effectively, we start with a critical review of the literature background. Then, we outline the experiment performed consisting of two parts. The first part concerns the noise recognition model built upon several baseline classifiers and noise signal features...

    Full text available to download
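As a hedged sketch of what a baseline noise-recognition classifier over simple signal features can look like (the two features and the two noise classes below are illustrative assumptions, not the set used in the paper):

```python
import numpy as np

def features(frame):
    """Two simple noise features: log energy and zero-crossing rate."""
    energy = np.log(np.mean(frame**2) + 1e-12)
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2
    return np.array([energy, zcr])

class NearestCentroid:
    """Baseline noise-type classifier: one centroid per class in feature space."""
    def fit(self, frames, labels):
        X = np.array([features(f) for f in frames])
        y = np.array(labels)
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, frame):
        d = np.linalg.norm(self.centroids_ - features(frame), axis=1)
        return self.classes_[np.argmin(d)]

rng = np.random.default_rng(1)
white = [rng.standard_normal(512) for _ in range(10)]              # broadband noise
hum = [np.sin(2 * np.pi * 50 * np.arange(512) / 8000) for _ in range(10)]  # tonal hum
clf = NearestCentroid().fit(white + hum, ["white"] * 10 + ["hum"] * 10)
```

Broadband noise has a high zero-crossing rate, while a low-frequency hum has a very low one, so even these two crude features separate the illustrative classes cleanly.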

  • Examining Feature Vector for Phoneme Recognition / Analiza parametrów w kontekście automatycznej klasyfikacji fonemów

    Publication

    - Year 2017

The aim of this paper is to analyze the usability of descriptors coming from music information retrieval for phoneme analysis. The case study presented consists of several steps. First, a short overview of parameters utilized in speech analysis is given. Then, a set of time- and frequency-domain parameters is selected and discussed in the context of the acoustical characteristics of stop consonants. A toolbox created for this purpose...

  • Using Different Information Channels for Affect-Aware Video Games - A Case Study

    Publication

    - Year 2018

    This paper presents the problem of creating affect-aware video games that use different information channels, such as image, video, physiological signals, input devices, and player’s behaviour, for emotion recognition. Presented case studies of three affect-aware games show certain conditions and limitations for using specific signals to recognize emotions and lead to interesting conclusions.

    Full text to download in external service

  • Biometria i przetwarzanie mowy 2023

    e-Learning Courses
    • J. Daciuk

The aim of the course is to familiarize the students with: methods of establishing and confirming people's identity on the basis of measurable features of the organism; the features of human speech, in particular Polish; speech recognition methods; and speech synthesis methods.

  • Biometria i przetwarzanie mowy 2024

    e-Learning Courses
    • J. Daciuk

The aim of the course is to familiarize the students with: methods of establishing and confirming people's identity on the basis of measurable features of the organism; the features of human speech, in particular Polish; speech recognition methods; and speech synthesis methods.

  • EMBOA - affective loop in Socially Assistive Robotics as an intervention tool for children with autism

    e-Learning Courses
    • M. Wróbel
    • A. Landowska

    The aim of the training course "Intensive programmes for higher education learner" within the EMBOA project is to familiarise participants with the use of social robots as an intervention tool for children with autism, emotion recognition and the combination of both methods. Students will be informed about the guidelines and results of the project.

  • Vocalic Segments Classification Assisted by Mouth Motion Capture

Visual features convey important information for automatic speech recognition (ASR), especially in a noisy environment. The purpose of this study is to evaluate to what extent visual data (i.e. lip reading) can enhance recognition accuracy in the multi-modal approach. For that purpose, motion capture markers were placed on the speakers' faces to obtain lip tracking data during speaking. Different parameterization strategies were tested...

    Full text to download in external service

  • Affective reactions to playing digital games

    Publication

    - Year 2015

The paper presents a study of emotional states during gameplay. An experiment with a two-player Tetris game is reported, followed by an analysis of the results: the self-reported emotional states as well as the interpretation of physiological signal measurements. The study reveals the diversity of emotional reactions and concludes that a representative emotional model of the player is hard to define. Instead, an adaptive approach to emotion...

    Full text to download in external service

  • Automatic recognition of males and females among web browser users based on behavioural patterns of peripherals usage

    Publication

    - Internet Research - Year 2016

Purpose – The purpose of this paper is to answer the question whether it is possible to recognise the gender of a web browser user on the basis of keystroke dynamics and mouse movements. Design/methodology/approach – An experiment was organised in order to track mouse and keyboard usage using a special web browser plug-in. After collecting the data, a number of parameters describing the users’ keystrokes, mouse movements and clicks...

    Full text to download in external service

  • PHONEME DISTORTION IN PUBLIC ADDRESS SYSTEMS

    Publication

    - Year 2015

The quality of voice messages in speech reinforcement and public address systems is often poor. The sound engineering designs of such systems take care of sound intensity and possible reverberation phenomena in public spaces without, however, considering the influence of acoustic interference related to the number and distribution of loudspeakers. This paper presents the results of measurements and numerical simulations of the...