
NaturalSpeech: End-to-End Text to Speech Synthesis with Human-Level Quality


User: how do i do tts

AI: To implement TTS using NaturalSpeech, follow these steps:

1. Pre-train the phoneme encoder on a large-scale text corpus using masked language modeling on phoneme sequences.
2. Train the fully end-to-end NaturalSpeech system with the proposed loss function, including the waveform reconstruction loss, the bidirectional prior/posterior loss, and end-to-end optimization.
3. At inference, sample a prior latent from the distribution predicted from the input text sequence (see the sketch after this answer).
4. Decode the sampled latent into the speech waveform with the waveform decoder.

Please note that these steps are a high-level overview; for specifics, refer to the paper below.
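To make steps 3 and 4 concrete, here is a minimal PyTorch sketch of the inference flow: encode phonemes to a prior distribution, sample a latent with the reparameterization trick, and decode it to a waveform. All module names, architectures, and dimensions (`PhonemeEncoder`, `WaveformDecoder`, `d_model=192`, `hop=256`) are illustrative assumptions, not the paper's actual code, and the differentiable duration module is omitted for brevity.

```python
# Hypothetical sketch of NaturalSpeech-style inference; not the authors' code.
import torch
import torch.nn as nn

class PhonemeEncoder(nn.Module):
    """Maps phoneme IDs to the parameters (mean, log-variance) of the prior."""
    def __init__(self, n_phonemes=100, d_model=192):
        super().__init__()
        self.embed = nn.Embedding(n_phonemes, d_model)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=2, batch_first=True),
            num_layers=2,
        )
        self.to_stats = nn.Linear(d_model, 2 * d_model)  # mean and log-var

    def forward(self, phoneme_ids):
        h = self.encoder(self.embed(phoneme_ids))
        mean, logvar = self.to_stats(h).chunk(2, dim=-1)
        return mean, logvar

class WaveformDecoder(nn.Module):
    """Toy stand-in for the waveform decoder: upsamples latent frames to audio."""
    def __init__(self, d_model=192, hop=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose1d(d_model, 1, kernel_size=hop, stride=hop),
            nn.Tanh(),
        )

    def forward(self, z):
        # z: (batch, frames, d_model) -> waveform: (batch, 1, samples)
        return self.net(z.transpose(1, 2))

@torch.no_grad()
def synthesize(phoneme_ids, encoder, decoder):
    # Step 3: sample a prior latent from the distribution predicted from text.
    mean, logvar = encoder(phoneme_ids)
    z = mean + torch.randn_like(mean) * torch.exp(0.5 * logvar)
    # Step 4: decode the sampled latent into the speech waveform.
    return decoder(z)

encoder, decoder = PhonemeEncoder(), WaveformDecoder()
wav = synthesize(torch.randint(0, 100, (1, 12)), encoder, decoder)
print(wav.shape)  # torch.Size([1, 1, 3072]) for 12 phoneme frames
```

Note the key design point the paper's title refers to: the decoder produces the waveform directly from the text-derived latent, with no intermediate mel-spectrogram stage and no separately trained vocoder.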

Authors

Xu Tan, Jiawei Chen, Haohe Liu, Jian Cong, Chen Zhang, Yanqing Liu, Xi Wang, Yichong Leng, Yuanhao Yi, Lei He, Frank Soong, Tao Qin, Sheng Zhao, Tie-Yan Liu

Abstract

Text to speech (TTS) has made rapid progress in both academia and industry in recent years. Some questions naturally arise: can a TTS system achieve human-level quality, how should that quality be defined and judged, and how can it be achieved? In this paper, we answer these questions by first defining human-level quality based on the statistical significance of a subjective measure and introducing guidelines to judge it, and then developing a TTS system, called NaturalSpeech, that achieves human-level quality on a benchmark dataset. Specifically, we leverage a variational autoencoder (VAE) for end-to-end text-to-waveform generation, with several key modules that enhance the capacity of the prior from text and reduce the complexity of the posterior from speech, including phoneme pre-training, differentiable duration modeling, bidirectional prior/posterior modeling, and a memory mechanism in the VAE. Evaluations on the popular LJSpeech dataset show that NaturalSpeech achieves −0.01 CMOS (comparative mean opinion score) relative to human recordings at the sentence level, with a Wilcoxon signed-rank test at p-level p >> 0.05, demonstrating for the first time on this dataset no statistically significant difference from human recordings.
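The human-level-quality criterion above reduces to a paired significance test on listener scores. As a minimal sketch of how such a judgment could be run, assuming paired per-sentence ratings of the TTS system and the human recordings from the same listeners (the score arrays below are fabricated placeholders, not the paper's data):

```python
# Sketch of the significance test behind the human-level-quality claim:
# a Wilcoxon signed-rank test on paired per-sentence listener scores.
from scipy.stats import wilcoxon

system_scores = [4.1, 3.9, 4.3, 4.0, 4.2, 3.8, 4.1, 4.0]  # TTS ratings (made up)
human_scores  = [4.2, 3.9, 4.2, 4.1, 4.1, 3.9, 4.0, 4.1]  # recording ratings (made up)

# H0: the paired score differences are symmetric about zero, i.e. no
# quality gap between the system and the recordings. A large p-value
# means we cannot reject H0.
stat, p_value = wilcoxon(system_scores, human_scores)
print(f"statistic={stat:.2f}, p={p_value:.3f}")
if p_value > 0.05:
    print("No statistically significant difference detected.")
```

Under this framing, "human-level quality" is not a CMOS of exactly zero but a failure to reject the null hypothesis of no difference at the chosen significance level.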
