Publications

Conference (International): Disentangled Speaker and Language Representations Using Mutual Information Minimization and Domain Adaptation for Cross-Lingual TTS

Detai Xin (The University of Tokyo), Tatsuya Komatsu, Shinnosuke Takamichi (The University of Tokyo), Hiroshi Saruwatari (The University of Tokyo)

2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2021)

2021.6.6

We propose a method for obtaining disentangled speaker and language representations via mutual information minimization and domain adaptation for cross-lingual text-to-speech (TTS) synthesis. The proposed method extracts speaker and language embeddings from acoustic features with a speaker encoder and a language encoder. It then applies domain adaptation to the two embeddings to obtain a language-invariant speaker embedding and a speaker-invariant language embedding. To obtain more disentangled representations, the method further minimizes the mutual information between the two embeddings to remove entangled information within each embedding. Disentangled speaker and language representations are critical for cross-lingual TTS synthesis, since entangled representations make it difficult to maintain speaker identity when the language representation is changed and consequently cause performance degradation. We evaluate the proposed method on English and Japanese multi-speaker datasets with a total of 207 speakers. Experimental results demonstrate that the proposed method significantly improves the naturalness and speaker similarity of both intra-lingual and cross-lingual TTS synthesis. Furthermore, we show that the proposed method maintains speaker identity well across languages.
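As a rough illustration of the training setup described in the abstract, the sketch below pairs a speaker encoder and a language encoder with adversarial classifiers behind a gradient reversal layer (a common realization of domain adaptation) and adds a simple cross-correlation penalty as a stand-in for the mutual information minimization term. This is not the authors' implementation: the encoder architectures, classifier heads, loss weighting, and the cross-correlation surrogate (the paper minimizes mutual information proper) are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code): disentangling speaker and language
# embeddings with adversarial domain classifiers (gradient reversal) plus a
# crude decorrelation penalty standing in for MI minimization.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, negated gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)


class Encoder(nn.Module):
    """Toy encoder: mean-pools acoustic frames into a fixed-size embedding."""
    def __init__(self, n_mels=80, dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_mels, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, mels):                  # mels: (B, T, n_mels)
        return self.net(mels).mean(dim=1)     # (B, dim)


class Disentangler(nn.Module):
    def __init__(self, n_mels=80, dim=128, n_speakers=207, n_langs=2):
        super().__init__()
        self.spk_enc = Encoder(n_mels, dim)
        self.lang_enc = Encoder(n_mels, dim)
        # Main heads: speaker embedding predicts speaker, language embedding predicts language.
        self.spk_cls = nn.Linear(dim, n_speakers)
        self.lang_cls = nn.Linear(dim, n_langs)
        # Adversarial heads behind gradient reversal: push the speaker embedding
        # to be language-invariant and the language embedding to be speaker-invariant.
        self.spk_adv_lang = nn.Linear(dim, n_langs)
        self.lang_adv_spk = nn.Linear(dim, n_speakers)

    def forward(self, mels, spk_ids, lang_ids, lambd=1.0):
        s = self.spk_enc(mels)
        l = self.lang_enc(mels)
        loss = (
            F.cross_entropy(self.spk_cls(s), spk_ids)
            + F.cross_entropy(self.lang_cls(l), lang_ids)
            + F.cross_entropy(self.spk_adv_lang(grad_reverse(s, lambd)), lang_ids)
            + F.cross_entropy(self.lang_adv_spk(grad_reverse(l, lambd)), spk_ids)
        )
        # Stand-in for MI minimization between the two embeddings: penalize their
        # squared cross-correlation (the paper uses a proper MI estimator instead).
        s_n = (s - s.mean(0)) / (s.std(0) + 1e-6)
        l_n = (l - l.mean(0)) / (l.std(0) + 1e-6)
        mi_penalty = ((s_n.T @ l_n) / s.size(0)).pow(2).mean()
        return loss + mi_penalty, s, l


if __name__ == "__main__":
    model = Disentangler()
    mels = torch.randn(4, 120, 80)             # 4 utterances, 120 mel frames each
    spk = torch.randint(0, 207, (4,))
    lang = torch.randint(0, 2, (4,))
    total, spk_emb, lang_emb = model(mels, spk, lang)
    total.backward()
    print(total.item(), spk_emb.shape, lang_emb.shape)
```

In a full cross-lingual TTS system the two embeddings would condition the synthesizer, so that the language embedding can be swapped while the speaker embedding preserves speaker identity.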

Paper: Disentangled Speaker and Language Representations Using Mutual Information Minimization and Domain Adaptation for Cross-Lingual TTS (external site)