Kernel methods such as support vector machines map input vectors into a high-dimensional feature space and separate them there with a linear boundary. The dimensionality of the feature space depends on the kernel function and is sometimes infinite; the Gauss kernel is one such example. We discuss the effective dimension of the feature space induced by the Gauss kernel and, by considering the Taylor expansion of the kernel Gram matrix, show that the kernel can be approximated by a sum of polynomial kernels and that the effective dimension is determined by the boundedness of the input space.
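As background for the claims in the abstract, the expansion it refers to is the standard Taylor identity for the Gauss kernel; the equation below is a well-known identity stated here for illustration, not text taken from the paper:

\[
  \exp\!\left(-\frac{\|x-y\|^{2}}{2\sigma^{2}}\right)
  = e^{-\|x\|^{2}/(2\sigma^{2})}\, e^{-\|y\|^{2}/(2\sigma^{2})}
    \sum_{k=0}^{\infty} \frac{\langle x, y\rangle^{k}}{k!\,\sigma^{2k}},
\]

i.e., a weighted sum of homogeneous polynomial kernels \(\langle x, y\rangle^{k}\). If the input space is bounded, say \(\|x\| \le R\), the terms with \(k\) much larger than \(R^{2}/\sigma^{2}\) are negligible, so only finitely many polynomial kernels contribute appreciably and the feature space has a correspondingly finite effective dimension.

The short Python sketch below illustrates this numerically; it is an independent illustration under assumed parameter values (sample size, input bound R, kernel width sigma), not code from the paper. It compares the Gauss-kernel Gram matrix with its degree-d Taylor truncation on inputs drawn from a bounded region, and counts the numerically significant eigenvalues of the Gram matrix as a rough proxy for the effective dimension.

import numpy as np
from math import factorial

rng = np.random.default_rng(0)
n, dim, sigma, R = 200, 2, 1.0, 1.0      # assumed values: sample size, input dim, kernel width, input bound
X = rng.uniform(-R, R, size=(n, dim))    # inputs from the bounded region [-R, R]^dim

def gauss_gram(X, sigma):
    # Gram matrix of the Gauss kernel exp(-||x - y||^2 / (2 sigma^2))
    sq = np.sum(X**2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-D2 / (2.0 * sigma**2))

def taylor_gram(X, sigma, degree):
    # Degree-`degree` Taylor truncation: a weighted sum of polynomial kernels <x, y>^k
    sq = np.sum(X**2, axis=1)
    scale = np.exp(-sq / (2.0 * sigma**2))
    G = X @ X.T / sigma**2
    K = sum(G**k / factorial(k) for k in range(degree + 1))
    return scale[:, None] * K * scale[None, :]

K = gauss_gram(X, sigma)
for d in (1, 2, 3, 5, 8):
    err = np.max(np.abs(K - taylor_gram(X, sigma, d)))
    print(f"degree {d}: max |K - K_d| = {err:.2e}")   # error shrinks quickly once d exceeds R^2/sigma^2

# Number of eigenvalues above a small threshold: a rough effective dimension of the feature space
eigs = np.sort(np.linalg.eigvalsh(K))[::-1]
print("effective dimension:", int(np.sum(eigs > 1e-6 * eigs[0])))

As expected under these assumptions, the truncation error drops rapidly with the degree when the inputs are confined to a bounded region, and the Gram matrix has only a modest number of numerically significant eigenvalues even though the Gauss kernel's feature space is formally infinite-dimensional.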
Kazushi IKEDA, "Boundedness of Input Space and Effective Dimension of Feature Space in Kernel Methods" in IEICE TRANSACTIONS on Information,
vol. E87-D, no. 1, pp. 258-260, January 2004, doi: .
URL: https://globals.ieice.org/en_transactions/information/10.1587/e87-d_1_258/_p
@ARTICLE{e87-d_1_258,
author={Kazushi IKEDA},
journal={IEICE TRANSACTIONS on Information},
title={Boundedness of Input Space and Effective Dimension of Feature Space in Kernel Methods},
year={2004},
volume={E87-D},
number={1},
pages={258-260},
abstract={Kernel methods such as support vector machines map input vectors into a high-dimensional feature space and separate them there with a linear boundary. The dimensionality of the feature space depends on the kernel function and is sometimes infinite; the Gauss kernel is one such example. We discuss the effective dimension of the feature space induced by the Gauss kernel and, by considering the Taylor expansion of the kernel Gram matrix, show that the kernel can be approximated by a sum of polynomial kernels and that the effective dimension is determined by the boundedness of the input space.},
keywords={},
doi={},
ISSN={},
month={January},}
TY - JOUR
TI - Boundedness of Input Space and Effective Dimension of Feature Space in Kernel Methods
T2 - IEICE TRANSACTIONS on Information
SP - 258
EP - 260
AU - Kazushi IKEDA
PY - 2004
DO -
JO - IEICE TRANSACTIONS on Information
SN -
VL - E87-D
IS - 1
JA - IEICE TRANSACTIONS on Information
Y1 - January 2004
AB - Kernel methods such as support vector machines map input vectors into a high-dimensional feature space and separate them there with a linear boundary. The dimensionality of the feature space depends on the kernel function and is sometimes infinite; the Gauss kernel is one such example. We discuss the effective dimension of the feature space induced by the Gauss kernel and, by considering the Taylor expansion of the kernel Gram matrix, show that the kernel can be approximated by a sum of polynomial kernels and that the effective dimension is determined by the boundedness of the input space.
ER -