Publications

CONFERENCE (INTERNATIONAL)

Cross-Modal Multi-Tasking for Speech-to-Text Translation via Hard Parameter Sharing

Brian Yan (Carnegie Mellon University), Xuankai Chang (Carnegie Mellon University), Antonios Anastasopoulos (George Mason University), Yuya Fujita, Shinji Watanabe (Carnegie Mellon University)

2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2024)

April 14, 2024

Recent works in end-to-end speech-to-text translation (ST) have proposed multi-tasking methods with soft parameter sharing which leverage machine translation (MT) data via secondary encoders that map text inputs to an eventual cross-modal representation. In this work, we instead propose a ST/MT multi-tasking framework with hard parameter sharing in which all model parameters are shared cross-modally. Our method reduces the speech-text modality gap via a pre-processing stage which converts speech and text inputs into two discrete token sequences of similar length – this allows models to indiscriminately process both modalities simply using a joint vocabulary. With experiments on MuST-C, we demonstrate that our multi-tasking framework improves attentional encoder-decoder, Connectionist Temporal Classification (CTC), transducer, and joint CTC/attention models by an average of +0.5 BLEU without any external MT data. Further, we show that this framework incorporates external MT data, yielding +0.8 BLEU, and also improves transfer learning from pre-trained textual models, yielding +1.8 BLEU.

Paper: Cross-Modal Multi-Tasking for Speech-to-Text Translation via Hard Parameter Sharing (external link)
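
To make the idea of hard parameter sharing over a joint vocabulary concrete, here is a minimal sketch (not the paper's code) of how a single encoder-decoder could be trained on both ST and MT examples once speech has been discretized. All names and numbers (JOINT_VOCAB, speech_to_units, text_to_tokens, vocabulary sizes, model dimensions) are hypothetical placeholders; the actual system discretizes speech via a separate pre-processing stage and uses its own tokenization and architecture choices.

```python
# Sketch of ST/MT multi-tasking with hard parameter sharing:
# discretized speech and subword text share one vocabulary and one model.
import torch
import torch.nn as nn

# Hypothetical joint vocabulary: discrete speech units and text subwords
# occupy disjoint ID ranges of a single embedding table.
NUM_SPEECH_UNITS = 1000   # e.g. clusters of self-supervised speech features
NUM_TEXT_TOKENS = 8000    # e.g. BPE subwords
JOINT_VOCAB = NUM_SPEECH_UNITS + NUM_TEXT_TOKENS + 4  # plus special tokens


class SharedSTMTModel(nn.Module):
    """One encoder-decoder used for both ST and MT (hard parameter sharing)."""

    def __init__(self, d_model=256):
        super().__init__()
        self.embed = nn.Embedding(JOINT_VOCAB, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=4,
            num_encoder_layers=3, num_decoder_layers=3,
            batch_first=True,
        )
        self.out = nn.Linear(d_model, JOINT_VOCAB)

    def forward(self, src_ids, tgt_ids):
        # The same parameters process src_ids whether they came from
        # discretized speech (ST) or tokenized source text (MT).
        h = self.transformer(self.embed(src_ids), self.embed(tgt_ids))
        return self.out(h)


def speech_to_units(_waveform):
    """Placeholder for the discretization pre-processing step, which maps
    speech to a token sequence of roughly text-like length."""
    return torch.randint(0, NUM_SPEECH_UNITS, (1, 50))


def text_to_tokens(text):
    """Placeholder subword tokenizer mapping into the text ID range."""
    n = len(text.split()) + 2
    return NUM_SPEECH_UNITS + torch.randint(0, NUM_TEXT_TOKENS, (1, n))


model = SharedSTMTModel()
loss_fn = nn.CrossEntropyLoss()

# One ST example (speech units -> target-language text) and one MT example
# (source-language text -> target-language text) update the SAME parameters.
st_src = speech_to_units(None)
mt_src = text_to_tokens("ein Beispiel Satz")
tgt = text_to_tokens("an example sentence")
for src in (st_src, mt_src):
    logits = model(src, tgt[:, :-1])
    loss = loss_fn(logits.reshape(-1, JOINT_VOCAB), tgt[:, 1:].reshape(-1))
    loss.backward()
```

Because both modalities arrive as discrete token sequences of similar length over one vocabulary, no modality-specific encoder is needed; the MT data simply provides additional training batches for the shared model.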