Laboratorium Akustyki Fonicznej - Administrative Units - Bridge of Knowledge

total: 37

Catalog Publications

  • Computer-assisted pronunciation training—Speech synthesis is almost all you need
    Publication

    - SPEECH COMMUNICATION - Year 2022

    The research community has long studied computer-assisted pronunciation training (CAPT) methods in non-native speech. Researchers focused on studying various model architectures, such as Bayesian networks and deep learning methods, as well as on the analysis of different representations of the speech signal. Despite significant progress in recent years, existing CAPT methods are not able to detect pronunciation errors with high...

    Full text available to download

  • Introduction to the special issue on machine learning in acoustics
    Publication
    • Z. Michalopoulou
    • P. Gerstoft
    • B. Kostek
    • M. A. Roch

    - Journal of the Acoustical Society of America - Year 2021

    When we started our Call for Papers for a Special Issue on “Machine Learning in Acoustics” in the Journal of the Acoustical Society of America, our ambition was to invite papers in which machine learning was applied to all areas of acoustics. The areas listed included, but were not limited to, the following: • Music and synthesis analysis • Music sentiment analysis • Music perception • Intelligent music recognition • Musical source separation • Singing...

    Full text available to download

  • Mispronunciation Detection in Non-Native (L2) English with Uncertainty Modeling
    Publication

    - Year 2021

    A common approach to the automatic detection of mispronunciation in language learning is to recognize the phonemes produced by a student and compare them to the expected pronunciation of a native speaker. This approach makes two simplifying assumptions: a) phonemes can be recognized from speech with high accuracy, b) there is a single correct way for a sentence to be pronounced. These assumptions do not always hold, which can result...

    Full text to download in external service
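
The phoneme-comparison approach described in the abstract above can be illustrated with a minimal sketch: recognize a phoneme sequence, align it against the expected native pronunciation, and report the positions where the two disagree. The function name and the ARPAbet-like symbols below are illustrative, not taken from the paper.

```python
import difflib

def find_mispronunciations(recognized, reference):
    """Align the recognized phoneme sequence against the reference
    pronunciation and return the reference positions that deviate
    (hypothetical helper; symbols are ARPAbet-like, for illustration)."""
    errors = []
    matcher = difflib.SequenceMatcher(a=reference, b=recognized)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag != "equal":
            errors.extend(range(i1, i2))
    return errors

# A learner pronouncing "think" as "tink": /TH/ is replaced by /t/
print(find_mispronunciations(["t", "IH", "NG", "K"], ["TH", "IH", "NG", "K"]))  # → [0]
```

As the abstract notes, this sketch inherits both simplifying assumptions: it trusts the recognizer and admits a single reference pronunciation per sentence.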

  • Evaluation of aspiration problems in L2 English pronunciation employing machine learning
    Publication

    The approach proposed in this study includes methods specifically dedicated to the detection of allophonic variation in English. This study aims to find an efficient method for automatic evaluation of aspiration in the case of Polish second-language (L2) English speakers’ pronunciation when whole words are analyzed instead of particular allophones extracted from words. Sample words including aspirated and unaspirated allophones...

    Full text available to download

  • Acoustic Sensing Analytics Applied to Speech in Reverberation Conditions
    Publication

    The paper aims to discuss a case study of sensing analytics and technology in acoustics when applied to reverberation conditions. Reverberation is one of the issues that makes speech in indoor spaces challenging to understand. This problem is particularly critical in large spaces with few absorbing or diffusing surfaces. One of the natural remedies to improve speech intelligibility in such conditions may be achieved through speaking...

    Full text available to download
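
As background to the reverberation problem discussed above: reverberation is conventionally quantified by the reverberation time RT60, estimated from a room impulse response via Schroeder backward integration. The sketch below is the textbook method under idealized assumptions, not the paper's sensing-analytics pipeline.

```python
import math

def rt60_from_ir(h, fs):
    """Estimate RT60 from an impulse response: Schroeder backward
    integration, a decay-rate fit over the -5..-25 dB range, then
    extrapolation to the standard 60 dB drop."""
    energy = [s * s for s in h]
    edc, total = [], 0.0
    for e in reversed(energy):          # backward-integrated energy
        total += e
        edc.append(total)
    edc.reverse()
    db = [10 * math.log10(e / edc[0]) for e in edc]
    pts = [(n / fs, d) for n, d in enumerate(db) if -25 <= d <= -5]
    (t0, d0), (t1, d1) = pts[0], pts[-1]
    slope = (d1 - d0) / (t1 - t0)       # dB per second (negative)
    return -60.0 / slope

# Synthetic exponentially decaying response with a known RT60 of 0.5 s
fs, rt = 1000, 0.5
h = [math.exp(-6.9078 * n / (fs * rt)) for n in range(fs)]
print(round(rt60_from_ir(h, fs), 2))  # → 0.5
```

Real impulse responses are noisy, so practical implementations truncate the integration at the noise floor before fitting.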

  • Sound intensity distribution around organ pipe
    Publication

    The aim of the paper was to compare the acoustic field around the open and stopped organ pipes. The wooden organ pipe was located in the anechoic chamber and activated with a constant air flow produced by an external air compressor. Thus, a long-term steady-state response could be obtained. A multichannel acoustic vector sensor was used to measure the sound intensity distribution of radiated acoustic energy. Measurements have been...

    Full text available to download

  • Detection of Lexical Stress Errors in Non-Native (L2) English with Data Augmentation and Attention
    Publication

    - Year 2021

    This paper describes two novel complementary techniques that improve the detection of lexical stress errors in non-native (L2) English speech: attention-based feature extraction and data augmentation based on Neural Text-To-Speech (TTS). In a classical approach, audio features are usually extracted from fixed regions of speech such as the syllable nucleus. We propose an attention-based deep learning model that automatically de...

    Full text available to download
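
The attention-based feature extraction idea sketched in the abstract, replacing a fixed region such as the syllable nucleus with learned weights over all frames, amounts to softmax-weighted pooling of frame features. In the sketch below the attention scores are hard-coded for illustration; in the paper's model they are produced by the network itself.

```python
import math

def attention_pool(frames, scores):
    """Softmax-weighted average of per-frame feature vectors; frames with
    higher scores dominate the pooled representation."""
    m = max(scores)
    w = [math.exp(s - m) for s in scores]
    z = sum(w)
    w = [v / z for v in w]
    return [sum(w[i] * frames[i][d] for i in range(len(frames)))
            for d in range(len(frames[0]))]

# Three 2-D frame features; the middle frame receives most of the attention
frames = [[0.0, 0.0], [1.0, 2.0], [0.0, 0.0]]
print([round(v, 2) for v in attention_pool(frames, scores=[0.0, 5.0, 0.0])])  # → [0.99, 1.97]
```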

  • Algorithmically improved microwave radar monitors breathing more accurately than sensorized belt
    Publication

    - Scientific Reports - Year 2022

    This paper describes a novel way to measure, process, analyze, and compare respiratory signals acquired by two types of devices: a wearable sensorized belt and a microwave radar-based sensor. Both devices provide breathing rate readouts. First, the background research is presented. Then, the underlying principles and working parameters of the microwave radar-based sensor, a contactless device for monitoring breathing, are described....

    Full text available to download

  • Mining Knowledge of Respiratory Rate Quantification and Abnormal Pattern Prediction
    Publication

    The described application of granular computing is motivated by the fact that cardiovascular disease (CVD) remains a major killer globally. There is increasing evidence that abnormal respiratory patterns might contribute to the development and progression of CVD. Consequently, a method that would support a physician in respiratory pattern evaluation should be developed. Group decision-making, tri-way reasoning, and rough set–based analysis...

    Full text available to download
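
For context, the respiratory rate readout that both entries above build upon can be derived very simply from a breathing waveform by counting rising zero crossings. This is a toy baseline for illustration, not the radar processing or the granular-computing method of the papers.

```python
import math

def breaths_per_minute(samples, fs):
    """Count rising zero crossings of the mean-removed waveform and
    convert the count to a per-minute rate (simplistic baseline)."""
    mean = sum(samples) / len(samples)
    x = [s - mean for s in samples]
    rises = sum(1 for a, b in zip(x, x[1:]) if a < 0 <= b)
    return rises / (len(samples) / fs / 60.0)

# One minute of a synthetic 0.25 Hz breathing waveform sampled at 10 Hz
fs = 10
sig = [math.sin(2 * math.pi * 0.25 * n / fs - math.pi / 2) for n in range(60 * fs)]
print(round(breaths_per_minute(sig, fs)))  # → 15
```

Real respiratory signals contain motion artifacts and irregular cycles, which is what makes the model-based approaches in these papers necessary.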

  • Rediscovering Automatic Detection of Stuttering and Its Subclasses through Machine Learning—The Impact of Changing Deep Model Architecture and Amount of Data in the Training Set
    Publication

    - Applied Sciences-Basel - Year 2023

    This work deals with automatically detecting stuttering and its subclasses. An effective classification of stuttering along with its subclasses could find wide application in determining the severity of stuttering by speech therapists, preliminary patient diagnosis, and enabling communication with the previously mentioned voice assistants. The first part of this work provides an overview of examples of classical and deep learning...

    Full text available to download

  • Detecting Lombard Speech Using Deep Learning Approach
    Publication
    • K. Kąkol
    • G. Korvel
    • G. Tamulevicius
    • B. Kostek

    - SENSORS - Year 2023

    Robust detection of Lombard speech in noise is challenging. This study proposes a strategy to detect Lombard speech using a machine learning approach for applications such as public address systems that work in near real time. The paper starts with the background concerning the Lombard effect. Then, assumptions of the work performed for Lombard speech detection are outlined. The framework proposed combines convolutional neural networks...

    Full text available to download

  • Automatic music signal mixing system based on one-dimensional Wave-U-Net autoencoders
    Publication

    The purpose of this paper is to show a music mixing system that is capable of automatically mixing separate raw recordings with good quality regardless of the music genre. This work recalls selected methods for automatic audio mixing first. Then, a novel deep model based on one-dimensional Wave-U-Net autoencoders is proposed for automatic music mixing. The model is trained on a custom-prepared database. Mixes created using the...

    Full text available to download

  • Machine learning applied to acoustic-based road traffic monitoring
    Publication

    The motivation behind this study lies in adapting acoustic noise monitoring systems for road traffic monitoring for drivers’ safety. Such a system should recognize a vehicle type and weather-related pavement conditions based on the audio level measurement. The study presents the effectiveness of the selected machine learning algorithms in acoustic-based road traffic monitoring. Bases of the operation of the acoustic road traffic...

    Full text available to download

  • How Machine Learning Contributes to Solve Acoustical Problems
    Publication
    • M. A. Roch
    • P. Gerstoft
    • B. Kostek
    • Z. Michalopoulou

    - Journal of the Acoustical Society of America - Year 2021

    Machine learning is the process of learning functional relationships between measured signals (called percepts in the artificial intelligence literature) and some output of interest. In some cases, we wish to learn very specific relationships from signals such as identifying the language of a speaker (e.g. Zissman, 1996) which has direct applications such as in call center routing or performing a music information retrieval task...

    Full text available to download

  • Predicting emotion from color present in images and video excerpts by machine learning
    Publication

    This work aims at predicting emotion based on the colors present in images and video excerpts using a machine-learning approach. The purpose of this paper is threefold: (a) to develop a machine-learning algorithm that classifies emotions based on the color present in an image, (b) to select the best-performing algorithm from the first phase and apply it to film excerpt emotion analysis based on colors, (c) to design an online survey...

    Full text available to download

  • Noise profiling for speech enhancement employing machine learning models
    Publication

    - Journal of the Acoustical Society of America - Year 2022

    This paper aims to propose a noise profiling method that can be performed in near real-time based on machine learning (ML). To address challenges related to noise profiling effectively, we start with a critical review of the literature background. Then, we outline the experiment performed consisting of two parts. The first part concerns the noise recognition model built upon several baseline classifiers and noise signal features...

    Full text available to download
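
The "several baseline classifiers" mentioned in the abstract above can be stood in for by something as simple as a nearest-centroid model over noise-feature vectors. The sketch below is a deliberately minimal example of such a baseline; the feature names and class labels are invented for illustration.

```python
import math

def train_centroids(X, y):
    """Fit a nearest-centroid model: one mean feature vector per noise class."""
    sums, counts = {}, {}
    for xv, label in zip(X, y):
        acc = sums.setdefault(label, [0.0] * len(xv))
        for i, v in enumerate(xv):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def classify(centroids, x):
    """Predict the class whose centroid is nearest in Euclidean distance."""
    return min(centroids, key=lambda lab: math.dist(centroids[lab], x))

# Toy 2-D noise features (imagined spectral centroid / zero-crossing rate)
X = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
y = ["traffic", "traffic", "babble", "babble"]
model = train_centroids(X, y)
print(classify(model, [0.15, 0.15]), classify(model, [0.85, 0.85]))  # → traffic babble
```

A profiling system of the kind the paper describes would feed such predictions into a noise-specific speech enhancement stage.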

  • Investigating Noise Interference on Speech Towards Applying the Lombard Effect Automatically
    Publication

    - Year 2022

    The aim of this study is two-fold. First, we perform a series of experiments to examine the interference of different noises on speech processing. For that purpose, we concentrate on the Lombard effect, an involuntary tendency to raise the speech level in the presence of background noise. Then, we apply this knowledge to detecting speech with the Lombard effect, in order to prepare a dataset for training a machine learning-based...

    Full text available to download
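
Since the Lombard effect is, at its core, an involuntary rise in speech level under background noise, its most naive detector is a level threshold against a speaker's quiet-condition baseline. The sketch below shows only this crude idea; the detectors in the papers above work on spectral features with neural networks, not on a simple level comparison.

```python
import math

def rms_db(samples):
    """RMS level in dB relative to full scale (amplitude 1.0)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

def is_lombard(utterance, baseline_db, threshold_db=3.0):
    """Flag an utterance whose level exceeds the speaker's quiet-condition
    baseline by more than threshold_db (illustrative level-only heuristic)."""
    return rms_db(utterance) - baseline_db > threshold_db

quiet = [0.1 * math.sin(0.01 * n) for n in range(10000)]
loud = [0.4 * math.sin(0.01 * n) for n in range(10000)]
base = rms_db(quiet)
print(is_lombard(quiet, base), is_lombard(loud, base))  # → False True
```

Level alone is unreliable in practice (gain settings, distance to microphone), which is why spectral cues such as shifted formants and flattened spectral tilt are used instead.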

  • Data, Information, Knowledge, Wisdom Pyramid Concept Revisited in the Context of Deep Learning
    Publication

    - Year 2023

    In this paper, the data, information, knowledge, and wisdom (DIKW) pyramid is revisited in the context of deep learning applied to machine learning-based audio signal processing. A discussion on the DIKW schema is carried out, resulting in a proposal that may supplement the original concept. Parallels between DIKW and audio processing are presented based on examples of the case studies performed by the author and her collaborators....

    Full text to download in external service

  • Pursuing Analytically the Influence of Hearing Aid Use on Auditory Perception in Various Acoustic Situations
    Publication

    - Vibrations in Physical Systems - Year 2022

    The paper presents the development of a method for assessing auditory perception and the effectiveness of applying hearing aids for hard-of-hearing people during short-term (up to 7 days) and longer-term (up to 3 months) use. The method consists of a survey based on the APHAB questionnaire. Additional criteria such as the degree of hearing loss, technological level of hearing aids used, as well as the user experience are taken...

    Full text available to download

  • Pursuing Listeners’ Perceptual Response in Audio-Visual Interactions - Headphones vs Loudspeakers: A Case Study
    Publication

    This study investigates listeners’ perceptual responses in audio-visual interactions concerning binaural spatial audio. Audio stimuli are presented to the listeners with or without accompanying visual cues. The subjective test participants are tasked to indicate the direction of the incoming sound while listening to the audio stimulus via loudspeakers or headphones with the head-related transfer function (HRTF) plugin. First, the methodology...

    Full text available to download