Seongkyu MUN Minkyu SHIN Suwon SHON Wooil KIM David K. HAN Hanseok KO
Recent acoustic event classification research has focused on training suitable filters to represent acoustic events. However, due to the limited availability of target event databases and the linearity of conventional filters, there is still room to improve performance. By exploiting the non-linear modeling capability of deep neural networks (DNNs) and their ability to generalize beyond pre-trained environments, this letter proposes a DNN-based feature extraction scheme for the classification of acoustic events. The effectiveness and noise robustness of the proposed method are demonstrated on a database recorded in indoor surveillance environments.
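The letter gives no implementation details here, so the following is only a minimal sketch of the general idea of using a hidden DNN layer as a non-linear feature extractor for acoustic events; the layer sizes, the log-mel input, and the hand-off of the extracted features to a back-end classifier are assumptions for illustration, not the authors' configuration.

import torch
import torch.nn as nn

class EventFeatureDNN(nn.Module):
    """Hypothetical DNN whose hidden activations serve as learned acoustic-event features."""
    def __init__(self, n_mels=40, n_hidden=256, n_feat=64, n_classes=5):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_mels, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_feat), nn.ReLU(),   # output of this layer is taken as the feature
        )
        self.classifier = nn.Linear(n_feat, n_classes)

    def forward(self, x):
        feat = self.encoder(x)            # non-linear feature extraction
        return self.classifier(feat), feat

# Usage: extract frame-level features that an external back-end classifier could consume.
model = EventFeatureDNN()
log_mel = torch.randn(100, 40)            # 100 frames of (assumed) 40-band log-mel input
_, features = model(log_mel)              # (100, 64) learned feature vectors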
Minkyu SHIN Seongkyu MUN David K. HAN Hanseok KO
In this paper, a multichannel speech enhancement system that adopts a denoising auto-encoder as part of the beamformer is proposed. The proposed generalized sidelobe canceller structure generates enhanced multi-channel signals, rather than a single channel, to which the subsequent denoising auto-encoder is applied. Because the beamformer exploits spatial information and compensates for differences in the transfer functions of the individual channels, the proposed system avoids the difficulty of modelling relative transfer functions, which consist of complex numbers and are hard to capture with a denoising auto-encoder. The modelling capability of the denoising auto-encoder can therefore concentrate on removing the artefacts introduced by the beamformer. Unlike conventional beamformers, which combine these artefacts into a single channel, the proposed method keeps them separated in each channel, so the denoising auto-encoder can remove them by referring to the other channels. Experimental results show that the proposed structure is effective on the six-channel data of the CHiME challenge, as indicated by improvements in speech enhancement measures and in the word error rate of automatic speech recognition.
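As a rough illustration of the structure described above, the sketch below keeps one beamformed signal per channel and feeds all channels jointly to a small denoising auto-encoder, so the network can refer to the other channels when removing beamforming artefacts. The frame-wise spectral input, the layer sizes, and the training loss are assumptions made for illustration, not the authors' exact system.

import torch
import torch.nn as nn

N_CH, N_FREQ = 6, 257      # assumed: 6 CHiME channels, 257 STFT bins per frame

class MultiChannelDAE(nn.Module):
    """Hypothetical DAE: input is the stacked per-channel GSC outputs of one frame,
    output is a cleaned single-channel spectrum."""
    def __init__(self, n_ch=N_CH, n_freq=N_FREQ, n_hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_ch * n_freq, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_freq),          # enhanced magnitude spectrum
        )

    def forward(self, gsc_frames):                # (batch, n_ch, n_freq)
        flat = gsc_frames.flatten(start_dim=1)    # channels stay separate but are seen jointly
        return self.net(flat)

# Training-step sketch: minimise the distance to a clean reference spectrum.
dae = MultiChannelDAE()
optimiser = torch.optim.Adam(dae.parameters(), lr=1e-3)
gsc_out = torch.rand(8, N_CH, N_FREQ)             # per-channel beamformed magnitudes (placeholder data)
clean = torch.rand(8, N_FREQ)                     # clean target magnitudes (placeholder data)
loss = nn.functional.mse_loss(dae(gsc_out), clean)
loss.backward()
optimiser.step()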
Seongkyu MUN Suwon SHON Wooil KIM David K. HAN Hanseok KO
Various classifiers and feature extraction methods for acoustic scene classification were recently proposed in the IEEE Detection and Classification of Acoustic Scenes and Events (DCASE) 2016 Challenge Task 1. The final evaluation results, however, showed that even the top-10 ranked teams achieved very low accuracy on particular pairs of classes with similar sounds. Because such sound classes are difficult to distinguish even for human listeners, the conventional deep learning based feature extraction methods used by most DCASE participants appear to face performance limitations. To address the low performance on similar class pairs, this letter proposes to employ recurrent neural network (RNN) based source separation for each class prior to the classification step. Since the RNN structure can effectively extract the sound components it was trained on, its mid-layer can be regarded as capturing discriminative information about the trained class. This letter therefore proposes to use this mid-layer information as a novel discriminative feature. The proposed feature yields an average classification rate improvement of 2.3% over the conventional approach, which handles the similar class pair issue with additional classifiers.
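The key idea above is to reuse the hidden activations of a class-wise RNN source-separation model as a discriminative feature. The sketch below shows one plausible reading of that idea with a GRU-based mask estimator whose first-layer (mid-layer) activations are taken as the feature vector; the layer sizes, the masking objective, and the mean-pooling into a clip-level feature are assumptions, not the letter's exact design.

import torch
import torch.nn as nn

class SeparatorFeatureRNN(nn.Module):
    """Hypothetical RNN separator trained to extract one class's sound components;
    its mid-layer hidden activations double as discriminative features."""
    def __init__(self, n_mels=40, n_hidden=128):
        super().__init__()
        self.rnn1 = nn.GRU(n_mels, n_hidden, batch_first=True)   # mid layer
        self.rnn2 = nn.GRU(n_hidden, n_hidden, batch_first=True) # top layer
        self.mask = nn.Sequential(nn.Linear(n_hidden, n_mels), nn.Sigmoid())

    def forward(self, x):                        # x: (batch, frames, n_mels)
        mid, _ = self.rnn1(x)                    # mid-layer activations, reused as features
        top, _ = self.rnn2(mid)
        separated = self.mask(top) * x           # masked (separated) spectrogram
        return separated, mid

# Usage: mean-pool the mid-layer activations into a clip-level feature vector
# that a back-end classifier could use for the confusable class pair.
model = SeparatorFeatureRNN()
mixture = torch.randn(1, 200, 40)                # assumed 200-frame log-mel clip
_, mid = model(mixture)
clip_feature = mid.mean(dim=1)                   # (1, 128) mid-layer feature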