Search results for: VOICE CONVERSION - MOST Wiedzy

Total results: 3

  • Cross-Lingual Knowledge Distillation via Flow-Based Voice Conversion for Robust Polyglot Text-to-Speech

    Publication
    • D. Piotrowski
    • R. Korzeniowski
    • A. Falai
    • S. Cygert
    • K. Pokora
    • G. Tinchev
    • Z. Zhang
    • K. Yanagisawa

    - Year 2023

    In this work, we introduce a framework for cross-lingual speech synthesis, which involves an upstream Voice Conversion (VC) model and a downstream Text-To-Speech (TTS) model. The proposed framework consists of 4 stages. In the first two stages, we use a VC model to convert utterances in the target locale to the voice of the target speaker. In the third stage, the converted data is combined with the linguistic features and durations...

    Full text available for download from an external service
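
    The abstract above outlines a four-stage pipeline: an upstream VC model first converts recordings in the target locale to the voice of the target speaker (stages 1-2), and the converted audio is then combined with linguistic features and durations (stage 3) as training material for the downstream TTS model. A minimal sketch of how the first three stages could be wired together is given below; Utterance, FlowBasedVC and build_downstream_corpus are hypothetical names for illustration, not code from the paper.

```python
# Illustrative sketch only: hypothetical interfaces, not the authors' implementation.
from dataclasses import dataclass
from typing import List, Sequence, Tuple


@dataclass
class Utterance:
    audio: Sequence[float]              # waveform samples
    linguistic_features: Sequence[str]  # e.g. a phoneme sequence
    durations: Sequence[int]            # e.g. per-phoneme frame counts


class FlowBasedVC:
    """Stand-in for the upstream flow-based voice-conversion model."""

    def convert(self, utterance: Utterance, target_speaker: str) -> Utterance:
        # Stages 1-2: re-synthesise the utterance in the voice of `target_speaker`
        # while keeping its linguistic content (conversion logic omitted here).
        return Utterance(utterance.audio, utterance.linguistic_features,
                         utterance.durations)


def build_downstream_corpus(target_locale_data: List[Utterance],
                            vc: FlowBasedVC,
                            target_speaker: str) -> List[Tuple]:
    """Stage 3: pair converted audio with linguistic features and durations,
    producing training examples for the downstream TTS model."""
    corpus = []
    for utt in target_locale_data:
        converted = vc.convert(utt, target_speaker)
        corpus.append((converted.audio, utt.linguistic_features, utt.durations))
    return corpus
```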

  • Creating new voices using normalizing flows

    Publication
    • P. Biliński
    • T. Merritt
    • A. Ezzerg
    • K. Pokora
    • S. Cygert
    • K. Yanagisawa
    • R. Barra-Chicote
    • D. Korzekwa

    - Year 2022

    Creating realistic and natural-sounding synthetic speech remains a big challenge for voice identities unseen during training. As there is growing interest in synthesizing voices of new speakers, here we investigate the ability of normalizing flows in text-to-speech (TTS) and voice conversion (VC) modes to extrapolate from speakers observed during training to create unseen speaker identities. Firstly, we create an approach for TTS...

    Full text available for download from the portal
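
    A common way a normalizing flow can produce speaker identities unseen during training, consistent with the idea described above (though not necessarily the exact method of the paper), is to sample from the flow's Gaussian base distribution and map the sample back through the inverse transformation. The sketch below shows this mechanism with a single toy affine coupling layer; all names and dimensions are illustrative assumptions.

```python
# Minimal, self-contained sketch (NumPy only) of drawing a new "speaker vector"
# from a normalizing flow: sample z from the Gaussian base, apply the inverse flow.
# This illustrates the general mechanism, not the models used in the paper.
import numpy as np


class AffineCoupling:
    """One affine coupling layer: y1 = x1, y2 = x2 * exp(s(x1)) + t(x1)."""

    def __init__(self, dim: int, rng: np.random.Generator):
        half = dim // 2
        # Toy "networks" s and t: fixed random linear maps.
        self.Ws = 0.1 * rng.standard_normal((half, half))
        self.Wt = 0.1 * rng.standard_normal((half, half))

    def forward(self, x: np.ndarray) -> np.ndarray:
        x1, x2 = np.split(x, 2)
        s, t = self.Ws @ x1, self.Wt @ x1
        return np.concatenate([x1, x2 * np.exp(s) + t])

    def inverse(self, y: np.ndarray) -> np.ndarray:
        y1, y2 = np.split(y, 2)
        s, t = self.Ws @ y1, self.Wt @ y1
        return np.concatenate([y1, (y2 - t) * np.exp(-s)])


rng = np.random.default_rng(0)
flow = AffineCoupling(dim=16, rng=rng)

# "Creating a new voice": sample from the base distribution and invert the flow.
z = rng.standard_normal(16)
new_speaker_vector = flow.inverse(z)
print(new_speaker_vector.shape)  # (16,)
```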

  • Automated detection of pronunciation errors in non-native English speech employing deep learning

    Publication

    - Year 2023

    Despite significant advances in recent years, the existing Computer-Assisted Pronunciation Training (CAPT) methods detect pronunciation errors with a relatively low accuracy (precision of 60% at 40%-80% recall). This Ph.D. work proposes novel deep learning methods for detecting pronunciation errors in non-native (L2) English speech, outperforming the state-of-the-art method in AUC metric (Area under the Curve) by 41%, i.e., from...

    Full text available for download from the portal
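
    The abstract reports detector quality in terms of precision at a given recall and ROC AUC. The snippet below is only an illustration of how such metrics are typically computed with scikit-learn; the labels and scores are synthetic placeholders, not results from the thesis.

```python
# Sketch of computing the metrics quoted in the abstract (ROC AUC, precision at a
# chosen recall) for a pronunciation-error detector; the data here are synthetic.
import numpy as np
from sklearn.metrics import precision_recall_curve, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)                     # 1 = mispronounced phoneme
y_score = np.clip(y_true * 0.3 + rng.random(1000), 0, 1)   # detector scores

print("ROC AUC:", roc_auc_score(y_true, y_score))

precision, recall, _ = precision_recall_curve(y_true, y_score)
# Precision at roughly 60% recall, i.e. the kind of operating point cited above.
idx = np.argmin(np.abs(recall - 0.6))
print("Precision at ~60% recall:", precision[idx])
```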