Publications

CONFERENCE (INTERNATIONAL)

Language-based Audio Moment Retrieval

Hokuto Munakata, Taichi Nishimura, Shota Nakada, Tatsuya Komatsu

2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2025)

April 11, 2025

In this paper, we propose and design a new task called audio moment retrieval (AMR). Unlike conventional language-based audio retrieval tasks that search for short audio clips in an audio database, AMR aims to predict relevant moments in untrimmed long audio based on a text query. Given the lack of prior work on AMR, we first build a dedicated dataset, Clotho-Moment, consisting of large-scale simulated audio recordings with moment annotations. We then propose a Detection Transformer-based model, named Audio Moment DETR (AM-DETR), as a fundamental framework for AMR tasks. Inspired by similar video moment retrieval tasks, this model captures temporal dependencies within audio features, thereby surpassing conventional clip-level audio retrieval methods. Additionally, we provide manually annotated datasets to properly measure the effectiveness and robustness of our methods on real data. Experimental results show that AM-DETR, trained with Clotho-Moment, outperforms a baseline model that applies a clip-level audio retrieval method with a sliding window on all metrics, particularly improving Recall1@0.7 by 9.00 points. Our datasets and code are publicly available at https://h-munakata.github.io/Language-based-Audio-Moment-Retrieval.
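For readers unfamiliar with the Recall1@0.7 metric cited above, the sketch below shows the standard way such a metric is computed in moment retrieval: the top-1 predicted moment for a query counts as a hit if its temporal IoU with the annotated ground-truth moment is at least 0.7. This is a minimal illustration of the metric's definition, not the authors' evaluation code; the function names and example values are ours.

```python
# Minimal sketch of Recall1@IoU for moment retrieval (illustrative, not the
# authors' implementation). Moments are (start, end) pairs in seconds.

def temporal_iou(pred, gt):
    """IoU between two (start, end) moments."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

def recall1_at_iou(predictions, ground_truths, threshold=0.7):
    """Fraction of queries whose top-1 predicted moment reaches the IoU threshold.

    predictions: list of top-1 (start, end) moments, one per query.
    ground_truths: list of annotated (start, end) moments, one per query.
    """
    hits = sum(
        temporal_iou(p, g) >= threshold
        for p, g in zip(predictions, ground_truths)
    )
    return hits / len(ground_truths)

# Hypothetical example: the first query is localized with IoU ~0.87 (a hit),
# the second misses entirely, so Recall1@0.7 = 0.5.
preds = [(10.0, 24.0), (3.0, 9.0)]
gts = [(11.0, 25.0), (30.0, 40.0)]
print(recall1_at_iou(preds, gts))  # 0.5
```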

Paper : Language-based Audio Moment Retrieval (external link)

PDF : Language-based Audio Moment Retrieval