Publications
Conference (International): Audio Difference Learning for Audio Captioning
Tatsuya Komatsu, Yusuke Fujita, Kazuya Takeda (Nagoya University), Tomoki Toda (Nagoya University)
2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2024)
2024.4.14
This study introduces a novel training paradigm, audio difference learning, for improving audio captioning. The core idea is to build a feature representation space that preserves the relationships between audio signals, enabling the generation of captions that describe fine-grained audio content. The method takes a reference audio clip alongside the input audio and transforms both into feature representations with a shared encoder; captions describing their differences are then generated from these differential features. Furthermore, a mixing technique is proposed in which the input audio is mixed with additional audio and the additional audio itself serves as the reference. The difference between the mixed audio and the reference then reverts to the original input, so the original input's caption can be used as the caption for the difference, eliminating the need for additional annotations. In experiments on the Clotho and ESC50 datasets, the proposed method improved the SPIDEr score by 7% over conventional methods.
Paper: Audio Difference Learning for Audio Captioning (external site)
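Below is a minimal PyTorch-style sketch of the mixing-based difference learning described in the abstract: a shared encoder produces features for both signals, a caption is decoded from their difference, and the original input's caption supervises the step. All module shapes, names, and the mixing weight `alpha` are illustrative assumptions for exposition, not the authors' implementation.

```python
# Hedged sketch of audio difference learning; shapes and modules are toy
# placeholders, not the paper's actual architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DifferenceCaptioner(nn.Module):
    """Encodes input and reference audio with a shared encoder and decodes
    caption logits from their differential features."""

    def __init__(self, feat_dim=64, hidden_dim=128, vocab_size=1000):
        super().__init__()
        # Shared encoder applied to both the input and the reference audio.
        self.encoder = nn.Sequential(nn.Linear(feat_dim, hidden_dim), nn.ReLU())
        # Toy decoder: predicts per-step token logits from the differential
        # features (a real captioner would be an autoregressive decoder).
        self.decoder = nn.Linear(hidden_dim, vocab_size)

    def forward(self, input_feats, reference_feats):
        z_input = self.encoder(input_feats)
        z_ref = self.encoder(reference_feats)
        z_diff = z_input - z_ref            # features describing the difference
        return self.decoder(z_diff)         # (batch, steps, vocab) logits

def difference_training_step(model, input_feats, extra_feats, caption_ids,
                             alpha=0.5):
    # Mixing strategy from the abstract: mix the input with additional audio
    # and use that additional audio as the reference. The difference between
    # the mixed audio and the reference corresponds to the original input,
    # so its existing caption supervises training without new annotations.
    mixed = alpha * input_feats + (1 - alpha) * extra_feats
    logits = model(mixed, extra_feats)
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           caption_ids.reshape(-1))

# Usage with random tensors standing in for audio features and caption ids.
model = DifferenceCaptioner()
x = torch.randn(4, 10, 64)               # input audio features
y = torch.randn(4, 10, 64)               # additional audio used as reference
cap = torch.randint(0, 1000, (4, 10))    # caption token ids for the input
loss = difference_training_step(model, x, y, cap)
loss.backward()
```

Because the reference passes through the same encoder as the mixed input, subtracting the two embeddings is what lets the caption target remain the original input's caption; that is the annotation-free trick the abstract highlights.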