Publications

CONFERENCE (INTERNATIONAL)
Weakly-Supervised Sound Event Detection with Self-Attention

Koichi Miyazaki, Tatsuya Komatsu, Tomoki Hayashi (Nagoya University), Shinji Watanabe (Johns Hopkins University), Tomoki Toda (Nagoya University), Kazuya Takeda (Nagoya University)

2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2020)

May 04, 2020

In this paper, we propose a novel sound event detection (SED) method that incorporates the self-attention mechanism of the Transformer for a weakly-supervised learning scenario. The proposed method utilizes the Transformer encoder, which consists of multiple self-attention modules, allowing the model to take both local and global context information of the input feature sequence into account. Furthermore, inspired by the great success of BERT in the natural language processing field, the proposed method introduces a special tag token into the input sequence for weak label prediction, which enables the aggregation of information over the whole sequence. To demonstrate the performance of the proposed method, we conduct an experimental evaluation using the DCASE2019 Task4 dataset. The experimental results demonstrate that the proposed method outperforms the DCASE2019 Task4 baseline method, which is based on a convolutional recurrent neural network, and that the self-attention mechanism works effectively for SED.
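The tag-token idea can be illustrated with a minimal PyTorch sketch, not the authors' implementation: a learnable token, analogous to BERT's [CLS], is prepended to the frame-level feature sequence before the Transformer encoder; its output position is used for the clip-level (weak) prediction while the remaining positions yield frame-level (strong) predictions. All names (SelfAttentionSED, tag_token) and hyperparameters are illustrative assumptions, and positional encoding is omitted for brevity.

import torch
import torch.nn as nn

class SelfAttentionSED(nn.Module):
    """Illustrative sketch of a Transformer-encoder SED model with a tag token."""

    def __init__(self, feat_dim=64, d_model=128, nhead=4, num_layers=3, num_classes=10):
        super().__init__()
        self.input_proj = nn.Linear(feat_dim, d_model)
        # Learnable tag token, analogous to BERT's [CLS], prepended to the sequence.
        self.tag_token = nn.Parameter(torch.randn(1, 1, d_model))
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead,
            dim_feedforward=4 * d_model, batch_first=True,
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.strong_head = nn.Linear(d_model, num_classes)  # frame-level (strong) output
        self.weak_head = nn.Linear(d_model, num_classes)    # clip-level (weak) output

    def forward(self, x):
        # x: (batch, frames, feat_dim), e.g. a log-mel spectrogram sequence.
        h = self.input_proj(x)
        tag = self.tag_token.expand(h.size(0), -1, -1)
        h = self.encoder(torch.cat([tag, h], dim=1))
        weak = torch.sigmoid(self.weak_head(h[:, 0]))       # prediction from the tag token
        strong = torch.sigmoid(self.strong_head(h[:, 1:]))  # predictions from frame positions
        return strong, weak

# Usage: strong has shape (8, 500, 10); weak has shape (8, 10).
model = SelfAttentionSED()
strong, weak = model(torch.randn(8, 500, 64))

Because self-attention connects every frame to every other frame in one step, the tag token can aggregate evidence from the entire clip, which is what makes it suitable for weak (clip-level) label prediction.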

Paper: Weakly-Supervised Sound Event Detection with Self-Attention