Publications

Conference (International) ESPnet-SE++: Speech Enhancement for Robust Speech Recognition, Translation, and Understanding

Yen-Ju Lu (Academia Sinica), Xuankai Chang (CMU), Chenda Li (SJTU), Wangyou Zhang (SJTU), Samuele Cornell (Università Politecnica delle Marche), Zhaoheng Ni (Meta AI), Yoshiki Masuyama (CMU/TMU), Brian Yan (CMU), Robin Scheibler, Zhong-Qiu Wang (CMU), Yu Tsao (Academia Sinica), Yanmin Qian (SJTU), Shinji Watanabe (CMU)

The 23rd Annual Conference of the International Speech Communication Association (INTERSPEECH 2022)

2022.9.18

This paper presents recent progress on integrating speech separation and enhancement (SSE) into the ESPnet toolkit. Compared with the previous ESPnet-SE work, numerous features have been added, including recent state-of-the-art speech enhancement models with their respective training and evaluation recipes. Importantly, a new interface has been designed to flexibly combine speech enhancement front-ends with other tasks, including automatic speech recognition (ASR), speech translation (ST), and spoken language understanding (SLU). To showcase such integration, we performed experiments on carefully designed synthetic datasets for noisy-reverberant multi-channel ST and SLU tasks, which can be used as benchmark corpora for future research. In addition to these new tasks, we also use CHiME-4 and WSJ0-2Mix to benchmark multi- and single-channel SE approaches. Results show that the integration of SE front-ends with back-end tasks is a promising research direction even for tasks besides ASR, especially in the multi-channel scenario. The code is available online at https://github.com/ESPnet/ESPnet. The multi-channel ST and SLU datasets, which are another contribution of this work, are released on HuggingFace.
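As an illustration (not taken from the paper itself), the sketch below shows how an enhancement front-end trained with an ESPnet2 recipe might be run on a noisy mixture before passing the result to a downstream ASR, ST, or SLU system. SeparateSpeech is ESPnet2's enhancement/separation inference class; the checkpoint and config paths here are hypothetical placeholders that a real recipe under egs2/ would produce.

    # Minimal sketch: ESPnet2 speech enhancement inference (paths are placeholders).
    import soundfile as sf
    from espnet2.bin.enh_inference import SeparateSpeech

    # Load a noisy mixture: (nsamples,) for mono or (nsamples, nchannels) for multi-channel.
    mixture, fs = sf.read("noisy_mixture.wav")

    # Build the enhancement front-end from a trained model (hypothetical paths).
    enhance = SeparateSpeech(
        train_config="exp/enh_train/config.yaml",
        model_file="exp/enh_train/valid.loss.best.pth",
        normalize_output_wav=True,
    )

    # SeparateSpeech expects a batch dimension: (batch, nsamples) or (batch, nsamples, nchannels).
    enhanced = enhance(mixture[None, ...], fs=fs)

    # One waveform per enhanced/separated source; these can feed a back-end task.
    sf.write("enhanced.wav", enhanced[0].squeeze(), fs)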

Paper: ESPnet-SE++: Speech Enhancement for Robust Speech Recognition, Translation, and Understanding (external site)