Publications
CONFERENCE (INTERNATIONAL)
Consistency-Aware Multi-Channel Speech Enhancement Using Deep Neural Networks
Yoshiki Masuyama (Waseda University), Masahito Togami, Tatsuya Komatsu
2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2020)
May 04, 2020
This paper proposes a deep neural network (DNN)-based multi-channel speech enhancement system in which a DNN is trained to maximize the quality of the enhanced time-domain signal. DNN-based multi-channel speech enhancement is often conducted in the time-frequency (T-F) domain because spatial filtering can be efficiently implemented there. In such a case, ordinary objective functions are computed on the estimated T-F mask or spectrogram. However, the estimated spectrogram is often inconsistent, and its amplitude and phase may change when the spectrogram is converted back to the time domain. That is, the objective function does not properly evaluate the enhanced time-domain signal. To address this problem, we propose to use an objective function defined on the reconstructed time-domain signal. Specifically, speech enhancement is conducted by multi-channel Wiener filtering in the T-F domain, and its result is converted back to the time domain. We propose two objective functions computed on the reconstructed signal: the first is defined in the time domain, and the other in the T-F domain. Our experiments demonstrate the effectiveness of the proposed system compared with T-F masking and mask-based beamforming.
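The core idea above can be sketched in a few lines. The following is a minimal, hypothetical single-channel illustration (not the paper's multi-channel Wiener filter): a T-F mask is applied to the noisy spectrogram, the masked spectrogram is inverted back to the time domain with the inverse STFT, and the loss is computed on the resulting waveform, so any inconsistency of the estimated spectrogram is reflected in the objective. The function name and parameters are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import stft, istft

def consistency_aware_loss(est_mask, noisy, clean, fs=16000, nperseg=512):
    """Time-domain loss on the reconstructed signal (single-channel sketch).

    Instead of scoring the masked spectrogram directly, we convert it back
    to the time domain and compare waveforms, so the objective evaluates
    the signal the listener would actually hear.
    """
    # Analysis: noisy waveform -> T-F domain
    _, _, noisy_spec = stft(noisy, fs=fs, nperseg=nperseg)
    # T-F masking (a stand-in for the paper's multi-channel Wiener filter)
    est_spec = est_mask * noisy_spec
    # Synthesis: back to the time domain via the inverse STFT
    _, est_wav = istft(est_spec, fs=fs, nperseg=nperseg)
    # Time-domain mean-squared error on the reconstructed signal
    n = min(len(est_wav), len(clean))
    return np.mean((est_wav[:n] - clean[:n]) ** 2)
```

With an all-ones mask and `clean == noisy`, the STFT/iSTFT pair reconstructs the signal almost exactly, so the loss is near zero; any mask that produces an inconsistent spectrogram is penalized through the reconstruction.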
Paper: Consistency-Aware Multi-Channel Speech Enhancement Using Deep Neural Networks (external link)