The optimal way to build speech understanding modules depends on the amount of training data available. When only a small amount is available, allocating it effectively is crucial for preventing the overfitting of statistical methods. We have developed a method for allocating a limited amount of training data in accordance with the amount available. Our method exploits rule-based methods when the amount of data is small; these are included in our speech understanding framework based on multiple model combinations, i.e., multiple automatic speech recognition (ASR) modules and multiple language understanding (LU) modules. The method then allocates training data preferentially to the modules that dominate the overall performance of speech understanding. Experimental evaluation showed that our allocation method consistently outperforms baseline methods that use a single ASR module and a single LU module as the amount of training data increases.
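To make the allocation idea concrete, here is a minimal, hypothetical Python sketch, not the authors' implementation: the names (MockModule, combined_score, allocate) and the saturating accuracy curve are assumptions for illustration only. Each mock module mixes a rule-based floor, usable with zero data, with a statistical model whose accuracy improves as it receives training batches; a greedy loop then feeds each batch to whichever module most improves the best (ASR, LU) combination, i.e., the modules that dominate overall performance.

```python
import itertools
import math

# Hypothetical sketch of preferential training-data allocation
# (illustration only; not the paper's algorithm or code).

class MockModule:
    def __init__(self, name, floor, rate):
        self.name = name
        self.floor = floor   # rule-based accuracy available with no training data
        self.rate = rate     # how quickly the statistical model improves
        self.n = 0           # training batches allocated so far

    def score(self):
        # Mocked accuracy: rises from the rule-based floor and saturates toward 1.0.
        return self.floor + (1.0 - self.floor) * (1.0 - math.exp(-self.rate * self.n))

def combined_score(asr_modules, lu_modules):
    """Understanding score of the best (ASR, LU) pair; both stages matter."""
    return max(a.score() * l.score()
               for a, l in itertools.product(asr_modules, lu_modules))

def allocate(n_batches, asr_modules, lu_modules):
    """Greedily give each batch to the module with the largest marginal gain."""
    modules = asr_modules + lu_modules
    for _ in range(n_batches):
        base = combined_score(asr_modules, lu_modules)
        def gain(m):
            m.n += 1                                    # tentative allocation
            g = combined_score(asr_modules, lu_modules) - base
            m.n -= 1                                    # roll back
            return g
        max(modules, key=gain).n += 1                   # commit to the winner
    return {m.name: m.n for m in modules}

if __name__ == "__main__":
    asr = [MockModule("asr_a", floor=0.3, rate=0.05),
           MockModule("asr_b", floor=0.6, rate=0.01)]
    lu = [MockModule("lu_a", floor=0.4, rate=0.08),
          MockModule("lu_b", floor=0.5, rate=0.02)]
    # Most batches flow to the members of the currently best ASR/LU pair.
    print(allocate(60, asr, lu))
```

In this toy run, the batches accumulate on the modules forming the best-performing pair, mirroring the preferential allocation described in the abstract; the paper's actual allocation criterion may differ.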
Kazunori KOMATANI, Mikio NAKANO, Masaki KATSUMARU, Kotaro FUNAKOSHI, Tetsuya OGATA, Hiroshi G. OKUNO, "Automatic Allocation of Training Data for Speech Understanding Based on Multiple Model Combinations," IEICE TRANSACTIONS on Information and Systems, vol. E95-D, no. 9, pp. 2298-2307, September 2012, doi: 10.1587/transinf.E95.D.2298.
Abstract: The optimal way to build speech understanding modules depends on the amount of training data available. When only a small amount is available, allocating it effectively is crucial for preventing the overfitting of statistical methods. We have developed a method for allocating a limited amount of training data in accordance with the amount available. Our method exploits rule-based methods when the amount of data is small; these are included in our speech understanding framework based on multiple model combinations, i.e., multiple automatic speech recognition (ASR) modules and multiple language understanding (LU) modules. The method then allocates training data preferentially to the modules that dominate the overall performance of speech understanding. Experimental evaluation showed that our allocation method consistently outperforms baseline methods that use a single ASR module and a single LU module as the amount of training data increases.
URL: https://globals.ieice.org/en_transactions/information/10.1587/transinf.E95.D.2298/_p
@ARTICLE{e95-d_9_2298,
author={Kazunori KOMATANI and Mikio NAKANO and Masaki KATSUMARU and Kotaro FUNAKOSHI and Tetsuya OGATA and Hiroshi G. OKUNO},
journal={IEICE TRANSACTIONS on Information and Systems},
title={Automatic Allocation of Training Data for Speech Understanding Based on Multiple Model Combinations},
year={2012},
volume={E95-D},
number={9},
pages={2298-2307},
abstract={The optimal way to build speech understanding modules depends on the amount of training data available. When only a small amount is available, allocating it effectively is crucial for preventing the overfitting of statistical methods. We have developed a method for allocating a limited amount of training data in accordance with the amount available. Our method exploits rule-based methods when the amount of data is small; these are included in our speech understanding framework based on multiple model combinations, i.e., multiple automatic speech recognition (ASR) modules and multiple language understanding (LU) modules. The method then allocates training data preferentially to the modules that dominate the overall performance of speech understanding. Experimental evaluation showed that our allocation method consistently outperforms baseline methods that use a single ASR module and a single LU module as the amount of training data increases.},
doi={10.1587/transinf.E95.D.2298},
ISSN={1745-1361},
month={September}
}
TY - JOUR
TI - Automatic Allocation of Training Data for Speech Understanding Based on Multiple Model Combinations
T2 - IEICE TRANSACTIONS on Information and Systems
SP - 2298
EP - 2307
AU - Kazunori KOMATANI
AU - Mikio NAKANO
AU - Masaki KATSUMARU
AU - Kotaro FUNAKOSHI
AU - Tetsuya OGATA
AU - Hiroshi G. OKUNO
PY - 2012
DO - 10.1587/transinf.E95.D.2298
JO - IEICE TRANSACTIONS on Information and Systems
SN - 1745-1361
VL - E95-D
IS - 9
JA - IEICE TRANSACTIONS on Information and Systems
Y1 - September 2012
AB - The optimal way to build speech understanding modules depends on the amount of training data available. When only a small amount is available, allocating it effectively is crucial for preventing the overfitting of statistical methods. We have developed a method for allocating a limited amount of training data in accordance with the amount available. Our method exploits rule-based methods when the amount of data is small; these are included in our speech understanding framework based on multiple model combinations, i.e., multiple automatic speech recognition (ASR) modules and multiple language understanding (LU) modules. The method then allocates training data preferentially to the modules that dominate the overall performance of speech understanding. Experimental evaluation showed that our allocation method consistently outperforms baseline methods that use a single ASR module and a single LU module as the amount of training data increases.
ER -