Publications
Conference (International) Deep Speech Extraction with Time-Varying Spatial Filtering Guided By Desired Direction Attractor
Yu Nakagome (Waseda University), Masahito Togami, Tetsuji Ogawa (Waseda University), Tetsunori Kobayashi (Waseda University)
2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2020)
2020.5.4
In this investigation, a deep neural network (DNN) based speech extraction method is proposed to enhance a speech signal propagating from the desired direction. The proposed method integrates knowledge of a sound propagation model and the time-varying characteristics of a speech source into a DNN-based separation framework. This approach outputs a separated speech source using time-varying spatial filtering, which achieves superior speech extraction performance compared with time-invariant spatial filtering. Because the gradients of all modules can be calculated, back-propagation can be performed to maximize the speech quality of the output signal in an end-to-end manner. The guiding information is also modeled on the sound propagation model, which facilitates disentangled representations of the target speech source and the noise signals. The experimental results demonstrate that the proposed method extracts the target speech source more accurately than conventional DNN-based speech source separation and conventional speech extraction using time-invariant spatial filtering.
Paper : Deep Speech Extraction with Time-Varying Spatial Filtering Guided By Desired Direction Attractor (external site)
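The paper learns its time-varying filter end to end inside a DNN. As a rough illustration of the underlying signal-processing idea only, the following minimal NumPy sketch applies a per-frame MVDR-style spatial filter whose noise statistics are updated recursively over time and whose look direction is fixed by a plane-wave steering vector, standing in for the desired-direction attractor. The DNN mask estimator is assumed given as an input; `steering_vector`, `time_varying_mvdr`, `alpha`, and `mask_noise` are illustrative names, not from the paper.

```python
import numpy as np

def steering_vector(freqs, mic_pos, doa_deg, c=343.0):
    """Plane-wave steering vector toward the desired direction.

    freqs   : (F,) frequency bin centers in Hz
    mic_pos : (M, 2) microphone xy coordinates in meters
    doa_deg : desired direction of arrival in degrees
    Returns : (F, M) complex steering vectors (the "attractor" here).
    """
    doa = np.deg2rad(doa_deg)
    unit = np.array([np.cos(doa), np.sin(doa)])      # propagation direction
    delays = mic_pos @ unit / c                      # (M,) per-mic delays
    return np.exp(-2j * np.pi * freqs[:, None] * delays[None, :])

def time_varying_mvdr(X, mask_noise, a, alpha=0.95, eps=1e-6):
    """Per-frame MVDR filter with a recursively updated noise covariance.

    X          : (F, T, M) multichannel STFT of the mixture
    mask_noise : (F, T) DNN-estimated noise mask in [0, 1]
    a          : (F, M) steering vector toward the target direction
    Returns y  : (F, T) enhanced single-channel STFT
    """
    F, T, M = X.shape
    y = np.zeros((F, T), dtype=complex)
    # Initialize the noise covariance with a scaled identity per frequency.
    Rn = np.tile(np.eye(M, dtype=complex) * eps, (F, 1, 1))
    for t in range(T):
        x_t = X[:, t, :]                             # (F, M)
        # Rank-1 update weighted by the noise mask -> time-varying statistics.
        outer = np.einsum('fm,fn->fmn', x_t, x_t.conj())
        Rn = alpha * Rn + (1.0 - alpha) * mask_noise[:, t, None, None] * outer
        # MVDR weights: w = Rn^{-1} a / (a^H Rn^{-1} a), recomputed each frame.
        Rn_inv_a = np.linalg.solve(Rn + eps * np.eye(M), a[..., None])[..., 0]
        denom = np.einsum('fm,fm->f', a.conj(), Rn_inv_a).real + eps
        w = Rn_inv_a / denom[:, None]                # (F, M)
        y[:, t] = np.einsum('fm,fm->f', w.conj(), x_t)
    return y
```

In the paper, every module is differentiable, so the filtering stage sits inside a network trained by back-propagation to maximize the output speech quality; the sketch above instead fixes the temporal adaptation with a simple recursive-averaging constant (`alpha`) to keep the example self-contained.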