Search results for: audio - Bridge of Knowledge

  • IFE: NN-aided Instantaneous Pitch Estimation

    Publication

    Pitch estimation is still an open issue in contemporary signal processing research. The growing momentum of machine learning applications in today's data-driven society allows this problem to be tackled from a new perspective. This work leverages that opportunity to propose a refined Instantaneous Frequency and power based pitch Estimator method called IFE. It incorporates deep neural network based pitch estimation...

    Full text available to download
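
    The entry above concerns instantaneous-frequency-based pitch estimation. Purely as a generic illustration of that underlying idea (not the IFE method itself, whose network and power-weighting scheme are not described here), a pitch estimate for a voiced frame can be read from the phase derivative of a band-limited analytic signal; the band limits and filter order below are assumptions.

    # Generic illustration of instantaneous-frequency pitch estimation,
    # NOT the IFE method from the publication above.
    import numpy as np
    from scipy.signal import butter, hilbert, sosfiltfilt

    def estimate_pitch_if(frame: np.ndarray, fs: int,
                          fmin: float = 60.0, fmax: float = 400.0) -> float:
        """Return a rough pitch estimate (Hz) for one voiced speech frame."""
        # Band-limit the frame to the expected pitch range so the analytic
        # signal is dominated by the fundamental component.
        sos = butter(4, [fmin, fmax], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, frame)
        # Instantaneous frequency = derivative of the unwrapped analytic phase.
        phase = np.unwrap(np.angle(hilbert(band)))
        inst_freq = np.diff(phase) * fs / (2.0 * np.pi)
        # Power-weighted average emphasises high-energy (reliable) samples.
        power = band[1:] ** 2
        return float(np.sum(inst_freq * power) / np.sum(power))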

  • Igor Garnik dr inż.

    Igor Garnik graduated from the Faculty of Electronics at the Gdańsk University of Technology (1992). He has worked at the Gdańsk University of Technology since 1997, first employed as an assistant in the Department of Ergonomics and Maintenance of Technical Systems at the Faculty of Management and Economics, and then, after obtaining his doctoral degree in 2006, as an assistant professor. In the years 2009–2015 he was the coordinator...

  • Testbed analysis of video and VoIP transmission performance in IEEE 802.11 b/g/n networks

    The aim of the work is to analyze the capabilities and limitations of different implementations of IEEE 802.11 technologies (IEEE 802.11 b/g/n), utilized for both video streaming and VoIP calls directed to mobile devices. Our preliminary research showed that results obtained with currently popular simulation tools can be drastically different from those achievable in a real-world environment, so, in order to correctly evaluate performance...

    Full text to download in external service

  • Multimodal Surveillance Based Personal Protection System

    A novel, multimodal approach for automatic detection of abduction of a protected individual, employing a dedicated personal protection device and a city monitoring system, is proposed and overviewed. The solution is based on combining four modalities (signals coming from: Bluetooth, fixed and PTZ cameras, a thermal camera, and acoustic sensors). The Bluetooth signal is used to continuously monitor the protected person's presence, and in case...

  • Multimodal system for diagnosis and polysensory stimulation of subjects with communication disorders

    An experimental multimodal system, designed for polysensory diagnosis and stimulation of persons with impaired communication skills or even non-communicative subjects, is presented. The user interface includes an eye tracking device and EEG monitoring of the subject. Furthermore, the system consists of a device for objective hearing testing and an autostereoscopic projection system designed to stimulate subjects through their...

  • A Study of Cross-Linguistic Speech Emotion Recognition Based on 2D Feature Spaces

    Publication
    • G. Tamulevicius
    • G. Korvel
    • A. B. Yayak
    • P. Treigys
    • J. Bernataviciene
    • B. Kostek

    - Electronics - Year 2020

    In this research, a study of cross-linguistic speech emotion recognition is performed. For this purpose, emotional data in different languages (English, Lithuanian, German, Spanish, Serbian, and Polish) are collected, resulting in a cross-linguistic speech emotion dataset of more than 10,000 emotional utterances. Despite the bi-modal character of the databases gathered, our focus is on the acoustic representation...

    Full text available to download

  • Ranking Speech Features for Their Usage in Singing Emotion Classification

    Publication

    - Year 2020

    This paper aims to retrieve speech descriptors that may be useful for the classification of emotions in singing. For this purpose, Mel Frequency Cepstral Coefficients (MFCC) and selected Low-Level MPEG 7 descriptors were calculated based on the RAVDESS dataset. The database contains recordings of emotional speech and singing of professional actors presenting six different emotions. Employing the algorithm of Feature Selection based...

    Full text available to download
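
    A minimal sketch of the feature-extraction and ranking idea mentioned above, assuming librosa for MFCC computation and using mutual information as a stand-in ranking criterion; the publication's own feature-selection algorithm and the MPEG-7 low-level descriptors are not reproduced here.

    # Sketch only: MFCC statistics per recording, ranked against emotion labels.
    import numpy as np
    import librosa
    from sklearn.feature_selection import mutual_info_classif

    def mfcc_stats(path: str, n_mfcc: int = 13) -> np.ndarray:
        """Mean and standard deviation of each MFCC over one recording."""
        y, sr = librosa.load(path, sr=None)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
        return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

    def rank_features(paths: list[str], labels: list[int]) -> np.ndarray:
        """Return feature indices sorted from most to least informative."""
        X = np.vstack([mfcc_stats(p) for p in paths])
        scores = mutual_info_classif(X, np.asarray(labels), random_state=0)
        return np.argsort(scores)[::-1]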

  • ANALIZA PARAMETRÓW SYGNAŁU MOWY W KONTEKŚCIE ICH PRZYDATNOŚCI W AUTOMATYCZNEJ OCENIE JAKOŚCI EKSPRESJI ŚPIEWU

    The work concerns the approach to parametrization in the case of classifying emotions in singing, and a comparison with emotion classification in speech. For this purpose, the RAVDESS (Ryerson Audio-Visual Database of Emotional Speech and Song) database of emotionally marked speech and singing was used, containing recordings of professional actors presenting six different emotions. Mel-cepstral coefficients (MFCC) and selected descriptors were then computed...

    Full text available to download

  • SYNAT_MUSIC_GENRE_FV_173

    Open Research Data

    This is the original dataset containing 51,582 music tracks (22 music genres), each described with a 173-element feature vector [1-6,9]. A collection of more than 50,000 music excerpts described with a set of descriptors obtained through the analysis of 30-second mp3 recordings was gathered in a database called SYNAT. The SYNAT database was realized by the Gdansk University...

  • Emotions in Polish speech recordings

    Open Research Data
    open access

    The data set presents emotions recorded in sound files containing expressions of Polish speech. The statements were made by people aged 21-23: young voices of 5 men. Each person said each of the following words (nie - no, oddaj - give back, podaj - pass, stop - stop, tak - yes, trzymaj - hold) five times, presenting a specific emotion, one of three: anger (a),...

  • New Applications of Multimodal Human-Computer Interfaces

    Publication

    - Year 2012

    Multimodal computer interfaces and examples of their applications to educational software and for disabled people are presented. The proposed interfaces include an interactive electronic whiteboard based on video image analysis, an application for controlling computers with gestures, and an audio interface for speech stretching for hearing-impaired and stuttering people. Application of the eye-gaze tracking system to awareness...
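
    The speech-stretching interface mentioned above boils down to time-scale modification that slows speech down without changing its pitch. A minimal sketch of that operation, assuming librosa's phase-vocoder time stretch rather than the system's actual implementation:

    # Sketch only: slow speech down while preserving pitch.
    import librosa
    import soundfile as sf

    def stretch_speech(in_path: str, out_path: str, slowdown: float = 1.5) -> None:
        """Write a copy of the recording played `slowdown` times slower."""
        y, sr = librosa.load(in_path, sr=None)
        # rate < 1.0 lengthens the signal; pitch is preserved by the phase vocoder.
        stretched = librosa.effects.time_stretch(y, rate=1.0 / slowdown)
        sf.write(out_path, stretched, sr)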

  • Rough Sets Applied to Mood of Music Recognition

    Publication

    - Year 2016

    With the growth of accessible digital music libraries over the past decade, there is a need for research into automated systems for searching, organizing and recommending music. Mood of music is considered one of the most intuitive criteria for listeners; thus, this work is focused on the emotional content of music and its automatic recognition. The research study presented in this work contains an attempt at music emotion recognition...

  • Musical Instrument Identification Using Deep Learning Approach

    Publication

    - SENSORS - Year 2022

    The work aims to propose a novel approach for automatically identifying all instruments present in an audio excerpt using sets of individual convolutional neural networks (CNNs), one per tested instrument. The paper starts with a review of tasks related to musical instrument identification, focusing on the tasks performed, input types, algorithms employed, and metrics used. It then presents the background, i.e., metadata...

    Full text available to download
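
    A sketch of the per-instrument detector idea described above: one small binary CNN per instrument, each judging whether its instrument is present in a log-mel spectrogram patch. Layer sizes, input shape, and instrument names are placeholders, not the architecture from the publication.

    # Sketch only: one binary presence detector per instrument.
    import torch
    import torch.nn as nn

    class InstrumentDetector(nn.Module):
        """Binary presence detector for a single instrument."""
        def __init__(self, n_mels: int = 128, n_frames: int = 128):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(32 * (n_mels // 4) * (n_frames // 4), 1),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, 1, n_mels, n_frames) log-mel spectrogram patches.
            return self.classifier(self.features(x)).squeeze(-1)

    # One detector per instrument; an excerpt is tagged with every instrument
    # whose detector's sigmoid output exceeds a chosen threshold.
    detectors = {name: InstrumentDetector() for name in ["piano", "violin", "drums"]}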

  • Architecture Design of a Networked Music Performance Platform for a Chamber Choir

    This paper describes an architecture design process for a Networked Music Performance (NMP) platform for medium-sized conducted music ensembles, based on remote rehearsals of the Academic Choir of the Gdańsk University of Technology. The issues of real-time remote communication, in-person music performance, and NMP are described. Three iterative steps defining and extending the architecture of the NMP platform with additional features to...

    Full text to download in external service

  • Study on CPU and RAM Resource Consumption of Mobile Devices using Streaming Services

    Publication

    Streaming multimedia services have become very popular in recent years due to the development of wireless networks. With the growing number of mobile devices worldwide, service providers offer dedicated applications that deliver on-demand audio and video content anytime and anywhere. The aim of this study was to compare different streaming services and investigate their impact on CPU and RAM resources, with respect...

    Full text to download in external service
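
    The study above targets mobile devices, whose measurement setup is not reproduced here; purely as an illustration of the core operation, periodic sampling of a player process's CPU and RAM use, a desktop-side sketch with psutil (process id and sampling interval are arbitrary) could look like this:

    # Sketch only: sample one process's CPU and RAM usage at fixed intervals.
    import time
    import psutil

    def sample_usage(pid: int, duration_s: float = 60.0, interval_s: float = 1.0):
        """Yield (cpu_percent, rss_megabytes) samples for the given process."""
        proc = psutil.Process(pid)
        proc.cpu_percent(None)                 # prime the CPU counter
        end = time.monotonic() + duration_s
        while time.monotonic() < end:
            time.sleep(interval_s)
            cpu = proc.cpu_percent(None)       # % of CPU since the previous call
            rss_mb = proc.memory_info().rss / (1024 ** 2)
            yield cpu, rss_mb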

  • Subjective and Objective Quality Evaluation Study of BPL-PLC Wired Medium

    Publication

    - Elektronika Ir Elektrotechnika - Year 2020

    This paper presents results of research on the effectiveness of bi-directional voice transmission in a 6 kV mine cable network using BPL-PLC (Broadband over Power Line - Power Line Communication) technology. It concerns both the emergency cable state (supply outage with the cable shorted at both ends) and the cable loaded with distorted current waveforms. The narrowband (0.5 MHz–15 MHz) and broadband (two different modes, frequency range of 3 MHz–7.5...

    Full text available to download

  • Bimodal Emotion Recognition Based on Vocal and Facial Features

    Emotion recognition is a crucial aspect of human communication, with applications in fields such as psychology, education, and healthcare. Identifying emotions accurately is challenging, as people use a variety of signals to express and perceive emotions. In this study, we address the problem of multimodal emotion recognition using both audio and video signals, to develop a robust and reliable system that can recognize emotions...

    Full text available to download
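
    A minimal late-fusion sketch for the bimodal setting described above: each modality yields per-emotion probabilities and the two are combined by a weighted average. The weighting, the emotion set, and the underlying vocal/facial models are assumptions; the publication's actual fusion strategy is not specified here.

    # Sketch only: weighted late fusion of audio and video emotion probabilities.
    import numpy as np

    def fuse_predictions(p_audio: np.ndarray, p_video: np.ndarray,
                         w_audio: float = 0.5) -> int:
        """Combine per-emotion probabilities from both modalities and pick a class."""
        fused = w_audio * p_audio + (1.0 - w_audio) * p_video
        return int(np.argmax(fused))

    # Example: probabilities over (neutral, happy, sad, angry) from two models.
    p_a = np.array([0.1, 0.6, 0.1, 0.2])   # vocal model
    p_v = np.array([0.2, 0.3, 0.1, 0.4])   # facial model
    print(fuse_predictions(p_a, p_v))       # -> 1 (happy)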

  • Halucynacje chatbotów a prawda: główne nurty debaty i ich interpretacje

    Publication
    • J. Kreft
    • M. Boguszewicz-Kreft
    • B. Cyrek

    - Roczniki Nauk Społecznych - Year 2024

    Generative artificial intelligence (AI) systems are able to create media content by applying machine learning to large amounts of training data. These new data may include text (e.g. Google's Bard, Meta's LLaMa, or OpenAI's ChatGPT) as well as visual elements (e.g. Stable Diffusion or OpenAI's DALL-E) and audio (e.g. Microsoft's VALL-E). The sophistication of this content may make it indistinguishable...

    Full text available to download

  • Video recordings of bees at entrance to hives

    Open Research Data
    open access - series: Bees

    Video recordings of bees at the entrance to hives from 2017-04-22, 2017-04-23 and 2018-05-22. All recordings were made using a hand-held full HD camera (Samsung Galaxy S3) and encoded with the H.264 video codec (Standard Baseline Profile for mov files from 2017, High Profile for mp4 files from 2018), 30 FPS, and a bit rate of 14478 kb/s (mov files from 2017) or 16869 kb/s...

  • Smart Virtual Bass Synthesis Algorithm Based on Music Genre Classification

    Publication

    The aim of this paper is to present a novel approach to Virtual Bass Synthesis (VBS) algorithms applied to portable computers. The proposed algorithm employed automatic music genre recognition to determine the optimum parameters for the synthesis of additional frequencies. The synthesis was carried out using the non-linear device (NLD) and phase vocoder (PV) methods, depending on the music excerpt's genre. Classification of musical...
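
    A simplified sketch of the non-linear device (NLD) branch mentioned above: the bass band a small loudspeaker cannot reproduce is rectified to generate harmonics, which are then high-passed and mixed back in. The cut-off, filter orders, nonlinearity, and gain below are illustrative values; in the publication these parameters are selected per music genre, which is not modelled here.

    # Sketch only: NLD-style virtual bass, not the genre-adaptive algorithm above.
    import numpy as np
    from scipy.signal import butter, sosfilt

    def virtual_bass_nld(x: np.ndarray, fs: int,
                         cutoff: float = 120.0, gain: float = 0.5) -> np.ndarray:
        """Add NLD-generated harmonics of the bass band back into the signal."""
        low = sosfilt(butter(4, cutoff, btype="lowpass", fs=fs, output="sos"), x)
        # Half-wave rectification is a classic NLD: it creates harmonics of the
        # bass fundamentals that the ear associates with the missing low end.
        harmonics = np.maximum(low, 0.0)
        harmonics = sosfilt(butter(4, cutoff, btype="highpass", fs=fs, output="sos"),
                            harmonics)
        return x + gain * harmonics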