Emilia: An Extensive, Multilingual, and Diverse Speech Dataset for Large-Scale Speech Generation

Haorui He1,★, Zengqiang Shang2,★, Chaoren Wang1,★, Xuyuan Li2,3,★, Yicheng Gu1, Hua Hua2,3, Liwei Liu1, Chen Yang2,3, Jiaqi Li1, Peiyang Shi2, Yuancheng Wang1, Kai Chen4, Pengyuan Zhang2,3,‡, Zhizheng Wu1,4,‡
1 The Chinese University of Hong Kong, Shenzhen, China; 2 Laboratory of Speech & Intelligent Information Processing, Institute of Acoustics, CAS, China; 3 University of Chinese Academy of Sciences, Beijing, China; 4 Shanghai AI Laboratory, Shanghai, China
★ Equal contribution; names are listed in random order. ‡ Corresponding authors.
Abstract

Recent advancements in speech generation models have been significantly driven by the use of large-scale training data. However, producing highly spontaneous, human-like speech remains a challenge due to the scarcity of large, diverse, and spontaneous speech datasets. In response, we introduce Emilia, the first large-scale, multilingual, and diverse speech generation dataset. Emilia starts with over 101k hours of speech across six languages, covering a wide range of speaking styles to enable more natural and spontaneous speech generation. To facilitate the scale-up of Emilia, we also present Emilia-Pipe, the first open-source preprocessing pipeline designed to efficiently transform raw, in-the-wild speech data into high-quality training data with speech annotations. Experimental results demonstrate the effectiveness of both Emilia and Emilia-Pipe. Demos are available at: https://emilia-dataset.github.io/Emilia-Demo-Page/.

The Emilia Dataset

Overview

The Emilia dataset is constructed from a vast collection of speech data sourced from diverse video platforms and podcasts on the Internet, covering various content genres such as talk shows, interviews, debates, sports commentary, and audiobooks. This variety ensures the dataset captures a wide array of real human speaking styles. The initial version of the Emilia dataset includes a total of 101,654 hours of multilingual speech data in six different languages: English, French, German, Chinese, Japanese, and Korean. The table and chart below provide the duration statistics for each language in the dataset.

Lang.   Duration (hours)
En      46,828
Zh      49,922
De      1,590
Fr      1,381
Ja      1,715
Ko      217
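Summing the per-language durations in the table recovers the dataset's roughly 101k-hour total (the per-language figures are rounded, so the sum differs slightly from the headline number). A small sketch computing the total and each language's share:

```python
# Per-language durations (hours) copied from the table above.
durations = {"En": 46828, "Zh": 49922, "De": 1590, "Fr": 1381, "Ja": 1715, "Ko": 217}

total = sum(durations.values())  # roughly 101k hours in total
shares = {lang: round(100 * hours / total, 1) for lang, hours in durations.items()}

print(total)   # total hours across all six languages
print(shares)  # percentage share per language
```

English and Chinese together account for about 95% of the data, which is why the remaining four languages are described as smaller subsets.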

The figure below compares the acoustic and semantic diversity of Emilia with that of the MLS dataset, which is sourced from audiobooks. Emilia's more scattered pattern indicates that it covers richer acoustic characteristics and broader semantic content than the existing audiobook dataset.

Data Preview

To better understand the performance of the pipeline as well as the diversity and quality of the dataset, we have sampled a few speech examples below for preview.

English
Chinese
German
French
Japanese
Korean

The Emilia-Pipe Preprocessing Pipeline

Emilia-Pipe is the first open-source preprocessing pipeline designed to transform in-the-wild speech data into high-quality training data with annotations for speech generation. It consists of six major steps: Standardization, Source Separation, Speaker Diarization, Fine-grained Segmentation by VAD, ASR, and Filtering. The figure below provides an overview of the Emilia-Pipe.
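The six stages above can be sketched as a simple chain of functions. This is a minimal illustration of the control flow only: the stage bodies are placeholders, and a real implementation would call external tools (a source-separation model, a diarization model, a VAD, and an ASR system), none of which are shown here.

```python
# Minimal sketch of the six Emilia-Pipe stages as a processing chain.
# Stage bodies are placeholders for illustration, not the actual implementation.

def standardize(raw_audio):
    # Standardization: convert to a common format (e.g. mono, fixed sample rate).
    return raw_audio

def separate_sources(audio):
    # Source Separation: extract the vocal track, removing background music/noise.
    return audio

def diarize(audio):
    # Speaker Diarization: split the audio into single-speaker regions.
    return [audio]

def segment_by_vad(regions):
    # Fine-grained Segmentation: cut regions into utterance-length segments via VAD.
    return [seg for region in regions for seg in [region]]

def transcribe(segments):
    # ASR: attach a transcription to each segment.
    return [{"audio": seg, "text": "<asr transcript>"} for seg in segments]

def filter_segments(annotated, min_quality=3.0):
    # Filtering: discard segments failing quality checks (e.g. a score threshold).
    return [a for a in annotated if a.get("quality", min_quality) >= min_quality]

def emilia_pipe(raw_audio):
    audio = standardize(raw_audio)
    vocals = separate_sources(audio)
    regions = diarize(vocals)
    segments = segment_by_vad(regions)
    annotated = transcribe(segments)
    return filter_segments(annotated)
```

Calling `emilia_pipe("episode.wav")` would return a list of annotated, quality-filtered segments ready to be written out as training data.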

After processing, Emilia-Pipe outputs the speech data in JSON and MP3 formats. The JSON file contains per-segment metadata such as the language and transcription, while the MP3 file contains the speech audio.
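For illustration, a single metadata record might look like the sketch below. The field names and values here are assumptions chosen for the example, not the official Emilia schema:

```python
import json

# Illustrative sketch of one per-segment metadata record as a pipeline like
# Emilia-Pipe might emit it. All field names/values are hypothetical.
record = {
    "id": "EN_B00000_S00000_W000000",        # hypothetical segment identifier
    "wav": "EN_B00000_S00000_W000000.mp3",   # path to the segment's MP3 file
    "text": "Dealing with family secrets is never easy.",  # ASR transcription
    "language": "en",                         # one of: en, zh, de, fr, ja, ko
    "duration": 4.12,                         # segment length in seconds
    "dnsmos": 3.42,                           # quality score used for filtering
}

print(json.dumps(record, indent=2))
```

Storing one such record per segment keeps the annotations directly alignable with the corresponding MP3 files.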

Demos

In this section, we demonstrate the zero-shot TTS performance of models (SoundStorm and VoiceBox) trained on Emilia.

Samples generated by models trained with the full Emilia (101k hours) multilingual dataset.
Language
Speech Prompt
SoundStorm
VoiceBox
English
Target Text: Dealing with family secrets is never easy. Yet, sometimes, omission is a form of protection, intending to safeguard some from the harsh truths. One day, I hope you understand the reasons behind my actions. Until then, Anna, please, bear with me.
Target Text: I don't really care what you call me. I've been a silent spectator, watching species evolve, empires rise and fall. But always remember, I am mighty and enduring. Respect me and I'll nurture you; ignore me and you shall face the consequences.
Chinese
Target Text: 突然,身边一阵笑声。我看着他们,意气风发地挺直了胸膛,甩了甩那稍显肉感的双臂,轻笑道:"我身上的肉,是为了掩饰我爆棚的魅力,否则,岂不吓坏了你们呢?"
Target Text: 气氛变得沉郁起来。乍看之下,一切的困扰仿佛都围绕在我身边。我皱着眉头,感受着那份压力,但我知道我不能放弃,不能认输。于是,我深吸一口气,心底的声音告诉我:“无论如何,都要冷静下来,重新开始。”
German
Target Text: Er ist damit in der Geschichte des Gerichts der bisher einzige Richter aus Kanada.
Target Text: Der Film bedeutete den Durchbruch der Steadicam.
French
Target Text: Connu comme grand buveur depuis longtemps, il sombre dans l’alcoolisme.
Target Text: Il est le principal traducteur de la Bible en tahitien.
Japanese
Target Text: ここに物を置いてはいけません
Target Text: 答えを書いた紙を出してください
Korean
Target Text: 하나님이 주셔서 나와 함께하게 하신 여자
Target Text: 내가 너로 여자와 원수가 되게하고 너의 후손도 여자의 후손과 원수가 되게 하리니
Samples generated by models trained on the English subset of Emilia and MLS respectively.
Speech Prompt
SoundStorm-Emilia
SoundStorm-MLS
VoiceBox-Emilia
VoiceBox-MLS
Target Text: Probably. I mean, but at nine dollars a piece, we don't want to see too many doubles today. What do we have here? Another dog.
Target Text: "Then, dear," said Mrs. Whitney, "you must be kinder to her than ever; think what it would be for one of you to be away from home even among friends."
Target Text: And then it's those two thick long things are connected to a thicker thing down there. And then it has these