Emilia: An Extensive, Multilingual, and Diverse Speech Dataset for Large-Scale Speech Generation

Emilia Team
Abstract

Recently, speech generation models have made significant progress by using large-scale training data. However, the research community struggles to produce highly spontaneous and human-like speech due to the lack of large-scale, diverse, and spontaneous speech data. This paper presents Emilia, the first multilingual speech generation dataset derived from in-the-wild speech data, and Emilia-Pipe, the first open-source preprocessing pipeline designed to transform in-the-wild speech data into high-quality training data with annotations for speech generation. Emilia starts with over 101k hours of speech in six languages and features diverse speech with varied speaking styles. To facilitate the scale-up of Emilia, the open-source Emilia-Pipe can process one hour of raw speech data into training-ready form in a few minutes, enabling the research community to collaborate on large-scale speech generation research. Experimental results validate the effectiveness of Emilia. Demos are available at: https://emilia-dataset.github.io/Emilia-Demo-Page/.

The Emilia Dataset

Overview

The Emilia dataset is constructed from a vast collection of speech data sourced from diverse video platforms and podcasts on the Internet, covering various content genres such as talk shows, interviews, debates, sports commentary, and audiobooks. This variety ensures the dataset captures a wide array of real human speaking styles. The initial version of the Emilia dataset includes a total of 101,654 hours of multilingual speech data in six different languages: English, French, German, Chinese, Japanese, and Korean. The table and chart below provide the duration statistics for each language in the dataset.

[Table and chart: duration statistics for each language]

The figure below compares the acoustic and semantic diversity of Emilia with that of the MLS dataset, which is sourced from audiobooks. Emilia's more scattered pattern shows that it covers a richer range of acoustic characteristics and broader semantic content than the existing audiobook dataset.

Data Preview

To better understand the performance of the pipeline as well as the diversity and quality of the dataset, we provide a few sampled speech examples below for preview.


The Emilia-Pipe Preprocessing Pipeline

Emilia-Pipe is the first open-source preprocessing pipeline designed to transform in-the-wild speech data into high-quality training data with annotations for speech generation. It consists of six major steps: Standardization, Source Separation, Speaker Diarization, Fine-grained Segmentation by VAD, ASR, and Filtering. The figure below provides an overview of Emilia-Pipe.
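
To make the flow concrete, here is a minimal structural sketch of how these six stages could be chained. It is not the actual Emilia-Pipe implementation: every stage function below is a hypothetical stub standing in for the real tooling, and only the ordering of the steps and the annotations accumulated per clip follow the description above.

```python
"""Structural sketch of an Emilia-Pipe-style flow (not the official code)."""
from dataclasses import dataclass


@dataclass
class Clip:
    """One speech segment plus the annotations gathered by the pipeline."""
    audio: bytes
    speaker: str = ""
    text: str = ""
    language: str = ""


def standardize(raw: bytes) -> bytes:
    # 1. Standardization: unify encoding, sample rate, and channel layout (stub).
    return raw


def separate_vocals(audio: bytes) -> bytes:
    # 2. Source separation: keep vocals, drop background music/noise (stub).
    return audio


def diarize(audio: bytes) -> list[Clip]:
    # 3. Speaker diarization: split the audio into single-speaker regions (stub).
    return [Clip(audio=audio, speaker="S00000")]


def segment_by_vad(clips: list[Clip]) -> list[Clip]:
    # 4. Fine-grained segmentation: cut each region at voice-activity boundaries (stub).
    return clips


def transcribe(clip: Clip) -> Clip:
    # 5. ASR: attach a transcription and a detected language code (stub).
    clip.text, clip.language = "<transcription>", "en"
    return clip


def passes_filters(clip: Clip) -> bool:
    # 6. Filtering: keep only clips that meet quality criteria (placeholder check).
    return bool(clip.text)


def run_pipeline(raw: bytes) -> list[Clip]:
    """Chain the six stages and return training-ready, annotated clips."""
    vocals = separate_vocals(standardize(raw))
    clips = segment_by_vad(diarize(vocals))
    return [c for c in map(transcribe, clips) if passes_filters(c)]
```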

After processing, Emilia-Pipe outputs the speech data in JSON and MP3 formats. The JSON file contains metadata such as the language and transcription, while the MP3 file contains the speech audio. The JSON file is structured as follows:
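
The record below is for illustration only: the field names reflect the annotations described above (segment ID, relative audio path, transcription, duration in seconds, speaker ID, and language code), plus, as an assumption, a speech-quality score recorded by the filtering step; the IDs, path, and values are placeholders rather than real dataset entries.

```json
{
  "id": "EN_B00001_S00001_W000001",
  "wav": "EN_B00001/EN_B00001_S00001/mp3/EN_B00001_S00001_W000001.mp3",
  "text": "An example transcription of this speech segment.",
  "duration": 5.32,
  "speaker": "EN_B00001_S00001",
  "language": "en",
  "dnsmos": 3.31
}
```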

Demos

In this section, we demonstrate the zero-shot TTS performance of models (SoundStorm and VoiceBox) trained on Emilia.

Samples generated by models trained on the full multilingual Emilia dataset (101k hours).
Samples generated by models trained on the English subset of Emilia and on MLS, respectively.