Publications

CONFERENCE (INTERNATIONAL) Multichannel Loss Function for Supervised Speech Source Separation by Mask-based Beamforming

Yoshiki Masuyama (Waseda University), Masahito Togami, Tatsuya Komatsu

The 20th Annual Conference of the International Speech Communication Association (INTERSPEECH 2019)

September 15, 2019

In this paper, we propose two mask-based beamforming methods using a deep neural network (DNN) trained with multichannel loss functions. Beamforming techniques using time-frequency (TF) masks estimated by a DNN have been applied in many applications, where the TF masks are used to estimate spatial covariance matrices. To train a DNN for mask-based beamforming, loss functions designed for monaural speech enhancement/separation have typically been employed. Although such a training criterion is simple, it does not directly correspond to the performance of mask-based beamforming. To overcome this problem, we use multichannel loss functions that evaluate the estimated spatial covariance matrices based on the multichannel Itakura-Saito divergence. DNNs trained with the multichannel loss functions can be used to construct several beamformers. Experimental results confirmed their effectiveness and robustness to microphone configurations.
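The two ingredients the abstract mentions, mask-based spatial covariance estimation feeding a beamformer, and a multichannel Itakura-Saito divergence between covariance matrices, can be sketched generically as follows. This is not the authors' implementation: the function names are illustrative, the beamformer shown is a standard Souden-style MVDR built from mask-weighted covariances, and the divergence follows the usual trace/log-det form for positive-definite matrices.

```python
import numpy as np

def mask_based_mvdr(X, speech_mask, ref_ch=0):
    """Mask-based MVDR beamforming for one frequency bin (generic sketch).

    X: (channels, frames) complex STFT observations at one frequency.
    speech_mask: (frames,) TF mask in [0, 1], e.g. a DNN output.
    Returns the enhanced single-channel signal, shape (frames,).
    """
    noise_mask = 1.0 - speech_mask
    # Mask-weighted spatial covariance matrices of speech and noise
    Phi_s = (speech_mask * X) @ X.conj().T / max(speech_mask.sum(), 1e-8)
    Phi_n = (noise_mask * X) @ X.conj().T / max(noise_mask.sum(), 1e-8)
    # MVDR filter from Phi_n^{-1} Phi_s, normalized by its trace,
    # with a reference channel selecting one column
    A = np.linalg.solve(Phi_n, Phi_s)
    w = A[:, ref_ch] / max(np.trace(A).real, 1e-8)
    return w.conj() @ X

def multichannel_is_divergence(R_hat, R, eps=1e-8):
    """Multichannel Itakura-Saito divergence between two Hermitian
    positive-definite matrices: tr(R^{-1} R_hat) - log det(R^{-1} R_hat) - M.
    Zero when R_hat == R."""
    M = R.shape[0]
    A = np.linalg.solve(R + eps * np.eye(M), R_hat)
    _, logdet = np.linalg.slogdet(A)  # log|det A|; det is positive for PSD pairs
    return np.trace(A).real - logdet - M
```

In a full system, a loss like `multichannel_is_divergence` would be evaluated per frequency bin between the mask-estimated covariance matrices and oracle ones, and backpropagated through the mask-producing DNN; the sketch above only shows the matrix-level computations.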

Paper: Multichannel Loss Function for Supervised Speech Source Separation by Mask-based Beamforming (external link)