A new type of sound source segregation method using robot-mounted microphones, which is free from strict head-related transfer function (HRTF) estimation, has been proposed and successfully applied to the recognition of three simultaneous speech signals. The proposed segregation method exploits the sound intensity differences that arise from the particular arrangement of the four directional microphones and from the robot head acting as a sound barrier. The method consists of three-layered signal processing: two-line SAFIA (binary masking based on narrow-band sound intensity comparison), two-line spectral subtraction, and their integration. We performed a 20k-vocabulary continuous speech recognition test in the presence of three simultaneous talkers and achieved a word error reduction of more than 70% compared with the case without any segregation processing.
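For readers unfamiliar with the two building blocks named in the abstract, the sketch below illustrates them in NumPy under simplifying assumptions: a per-bin binary mask that keeps only the time-frequency bins where one channel is louder than the other (the intensity comparison underlying SAFIA), followed by a simple magnitude spectral subtraction. It is a minimal two-channel illustration with invented signal and function names, not the authors' four-microphone, two-line implementation or its integration step.

import numpy as np

def stft(x, frame_len=512, hop=256):
    # Short-time Fourier transform with a Hann window (one frame per row).
    win = np.hanning(frame_len)
    starts = range(0, len(x) - frame_len + 1, hop)
    return np.array([np.fft.rfft(x[i:i + frame_len] * win) for i in starts])

def safia_mask(spec_a, spec_b):
    # SAFIA-style binary mask: keep a time-frequency bin of channel A only
    # when its narrow-band intensity exceeds that of channel B.
    return (np.abs(spec_a) > np.abs(spec_b)).astype(float)

def spectral_subtraction(spec, interference_mag, floor=0.01):
    # Subtract an estimated interference magnitude spectrum, keep the phase,
    # and apply a spectral floor to avoid negative magnitudes.
    mag = np.maximum(np.abs(spec) - interference_mag, floor * np.abs(spec))
    return mag * np.exp(1j * np.angle(spec))

# Toy two-channel input standing in for two directional microphones that
# favour different talkers (synthetic sinusoids, 16 kHz sampling).
fs = 16000
t = np.arange(fs) / fs
mic_a = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 1000 * t)
mic_b = 0.3 * np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 1000 * t)

spec_a, spec_b = stft(mic_a), stft(mic_b)
masked = spec_a * safia_mask(spec_a, spec_b)            # binary masking stage
interference = np.abs(spec_b).mean(axis=0)              # crude interference estimate
enhanced = spectral_subtraction(masked, interference)   # spectral subtraction stage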
Naoya MOCHIKI, Tetsuji OGAWA, Tetsunori KOBAYASHI, "Ears of the Robot: Three Simultaneous Speech Segregation and Recognition Using Robot-Mounted Microphones" in IEICE TRANSACTIONS on Information and Systems, vol. E90-D, no. 9, pp. 1465-1468, September 2007, doi: 10.1093/ietisy/e90-d.9.1465.
URL: https://globals.ieice.org/en_transactions/information/10.1093/ietisy/e90-d.9.1465/_p
@ARTICLE{e90-d_9_1465,
author={Naoya MOCHIKI and Tetsuji OGAWA and Tetsunori KOBAYASHI},
journal={IEICE TRANSACTIONS on Information and Systems},
title={Ears of the Robot: Three Simultaneous Speech Segregation and Recognition Using Robot-Mounted Microphones},
year={2007},
volume={E90-D},
number={9},
pages={1465-1468},
keywords={},
doi={10.1093/ietisy/e90-d.9.1465},
ISSN={1745-1361},
month={September},}
TY - JOUR
TI - Ears of the Robot: Three Simultaneous Speech Segregation and Recognition Using Robot-Mounted Microphones
T2 - IEICE TRANSACTIONS on Information and Systems
SP - 1465
EP - 1468
AU - Naoya MOCHIKI
AU - Tetsuji OGAWA
AU - Tetsunori KOBAYASHI
PY - 2007
DO - 10.1093/ietisy/e90-d.9.1465
JO - IEICE TRANSACTIONS on Information and Systems
SN - 1745-1361
VL - E90-D
IS - 9
JA - IEICE TRANSACTIONS on Information and Systems
Y1 - September 2007
ER -