Publications

Other (International) Online Register for Dual-Mode Self-Supervised Speech Models: Mitigating The Lack of Future Context

Keita Goto, Takashi Maekaku, Jin Sakuma, Jinchuan Tian (Carnegie Mellon University), Yusuke Shinohara, Shinji Watanabe (Carnegie Mellon University)

arXiv.org (arXiv)

2026.3.2

Dual-mode self-supervised speech models (S3Ms), which are jointly pre-trained in offline and online modes, suffer from attention mismatch in streaming scenarios due to missing future context. To address this challenge, we propose online registers: learnable tokens appended to each chunk in online mode. These tokens act as virtual placeholders for unseen future frames, enabling the model to compensate for the missing context without introducing additional latency. Furthermore, we introduce a future prediction loss that explicitly guides the registers to capture predictive cues, thereby enriching their ability to retain future information. Experiments on LibriSpeech and out-of-domain benchmarks demonstrate that online registers consistently reduce the performance gap between offline and online modes, especially in low-latency settings, achieving a 3.4% relative improvement on LibriSpeech with 160 ms chunks.
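The core idea of appending register tokens to each streaming chunk can be illustrated with a minimal sketch. All names here (`append_online_registers`, `NUM_REGISTERS`, the `"REG"` placeholder) are illustrative assumptions, not from the paper; in the actual model the registers would be learnable embedding vectors rather than literal tokens.

```python
# Hypothetical sketch: append "online register" placeholders to each chunk
# so that, in online mode, attention has stand-ins for the future frames
# that are not yet available. Names and values are assumptions.

NUM_REGISTERS = 2  # assumed number of register tokens per chunk


def append_online_registers(frames, chunk_size, num_registers=NUM_REGISTERS):
    """Split `frames` into fixed-size chunks and append `num_registers`
    placeholder tokens ("REG") after each chunk. In the real model these
    placeholders would be learnable vectors trained with a future
    prediction loss to capture cues about the unseen frames."""
    chunks = []
    for start in range(0, len(frames), chunk_size):
        chunk = list(frames[start:start + chunk_size])
        chunks.append(chunk + ["REG"] * num_registers)
    return chunks
```

For example, a 16-frame utterance with 8-frame chunks yields two chunks of 10 entries each: 8 real frames followed by 2 register placeholders. Because the registers are appended rather than waiting for real future frames, no extra latency is incurred.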

Paper : Online Register for Dual-Mode Self-Supervised Speech Models: Mitigating The Lack of Future Context