Publications
Conference (International): End-to-end ASR to jointly predict transcriptions and linguistic annotations
Motoi Omachi, Yuya Fujita, Shinji Watanabe (Johns Hopkins University), Matthew Wiesner (Johns Hopkins University)
The 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2021)
2021.6.6
We propose a Transformer-based sequence-to-sequence model for automatic speech recognition (ASR) capable of simultaneously transcribing and annotating audio with linguistic information such as phonemic transcripts or part-of-speech (POS) tags. Since linguistic information is important in natural language processing (NLP), the proposed ASR system is especially useful for speech interface applications, including spoken dialogue systems and speech translation, which combine ASR and NLP. To produce linguistic annotations, we train the ASR system using modified training targets: each grapheme or multi-grapheme unit in the target transcript is followed by an aligned phoneme sequence and/or POS tag. Since our method has access to the underlying audio data, it can estimate linguistic annotations more accurately than pipeline approaches, in which NLP-based methods are applied to a hypothesized ASR transcript. Experimental results on Japanese and English datasets show that the proposed ASR system is capable of simultaneously producing high-quality transcriptions and linguistic annotations.
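For illustration, the sketch below shows one plausible way such joint training targets could be assembled from an aligned transcript. The marker tokens (<ph>, </ph>, <pos:...>), the word-level unit granularity, and the build_joint_target helper are assumptions made for this example; the paper's actual target format may differ.

    # Hypothetical sketch: interleave transcript units with aligned
    # linguistic annotations to form a joint ASR training target.
    # Marker tokens and word-level units are illustrative assumptions.

    def build_joint_target(words, phonemes, pos_tags):
        """Build a target sequence in which each grapheme unit is
        followed by its aligned phoneme sequence and POS tag.

        words    -- grapheme units, e.g. ["the", "cat"]
        phonemes -- per-word phoneme lists, e.g. [["DH", "AH"], ["K", "AE", "T"]]
        pos_tags -- per-word POS tags, e.g. ["DT", "NN"]
        """
        target = []
        for word, phones, pos in zip(words, phonemes, pos_tags):
            target.append(word)                      # grapheme unit
            target += ["<ph>"] + phones + ["</ph>"]  # aligned phoneme sequence
            target.append(f"<pos:{pos}>")            # aligned POS tag
        return target

    print(build_joint_target(
        ["the", "cat"],
        [["DH", "AH"], ["K", "AE", "T"]],
        ["DT", "NN"],
    ))
    # ['the', '<ph>', 'DH', 'AH', '</ph>', '<pos:DT>',
    #  'cat', '<ph>', 'K', 'AE', 'T', '</ph>', '<pos:NN>']

A sequence-to-sequence ASR model trained on such targets emits the annotations inline with the transcription, so no separate NLP pipeline stage is needed at decoding time.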
Paper: End-to-end ASR to jointly predict transcriptions and linguistic annotations (external site)