Gentle AdaBoost is widely used in object detection and pattern recognition because of its efficiency and stability. To focus on instances with small margins, Gentle AdaBoost assigns them larger weights during training. However, small-margin instances can still be misclassified repeatedly, causing their weights to grow larger and larger. Eventually, a few large-weight instances may dominate the whole data distribution, encouraging Gentle AdaBoost to choose weak hypotheses that fit only these instances in the late training phase. This phenomenon, known as "classifier distortion", degrades generalization and easily leads to overfitting, since the late-selected weak hypotheses increase the deviation of the whole ensemble. To solve this problem, we propose a new variant which we call "Penalized AdaBoost". In each iteration, our approach not only penalizes the misclassification of instances with small margins but also restrains the weight increase of instances with minimal margins. Our method outperforms Gentle AdaBoost because it effectively avoids classifier distortion. Experiments show that it achieves far lower generalization errors at a training speed similar to that of Gentle AdaBoost.
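As a rough illustration of the idea (not the paper's exact algorithm), the sketch below runs Gentle AdaBoost rounds with weighted regression stumps and restrains the growth of already-dominant weights; the cap rule (w_cap) and the stump learner are assumptions made for illustration only.

import numpy as np

def fit_stump(X, y, w):
    # Weighted least-squares regression stump, as Gentle AdaBoost uses.
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            left = X[:, j] <= thr
            a = np.average(y[left], weights=w[left]) if left.any() else 0.0
            b = np.average(y[~left], weights=w[~left]) if (~left).any() else 0.0
            err = np.sum(w * (y - np.where(left, a, b)) ** 2)
            if best is None or err < best[0]:
                best = (err, j, thr, a, b)
    return best[1:]

def penalized_gentle_adaboost(X, y, n_rounds=50, w_cap=10.0):
    # y in {-1, +1}. The cap below restrains the weight increase of
    # minimal-margin instances; this particular rule is an illustrative
    # assumption, not the penalty defined in the paper.
    w = np.full(len(y), 1.0 / len(y))
    ensemble = []
    for _ in range(n_rounds):
        j, thr, a, b = fit_stump(X, y, w)
        ensemble.append((j, thr, a, b))
        f = np.where(X[:, j] <= thr, a, b)
        w = w * np.exp(-y * f)               # standard Gentle AdaBoost update
        w = np.minimum(w, w_cap * w.mean())  # restrain dominant weights
        w /= w.sum()
    return ensemble

def predict(ensemble, X):
    F = sum(np.where(X[:, j] <= thr, a, b) for j, thr, a, b in ensemble)
    return np.sign(F)

Without the cap, the exponential update lets a few repeatedly misclassified small-margin instances dominate the distribution, which is exactly the distortion the abstract describes.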
This paper presents a new, effective technique for partitioning a linearly transformed input space in the Adaptive Network-based Fuzzy Inference System (ANFIS). ANFIS is a fuzzy system with a hybrid parameter-learning method that combines gradient descent and least-squares estimation. With the proposed partitioning technique, the input space can be partitioned flexibly using new modeling inputs, which are weighted linear combinations of the original inputs; both the parameter-learning time and the modeling error of ANFIS are thereby reduced. Simulation results illustrate the effectiveness of the proposed technique.
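To make the transformation concrete, here is a minimal sketch of evaluating the fuzzy-partition layers of a Sugeno-type ANFIS on transformed inputs z = Wx rather than on the original inputs. The mixing matrix W and the membership parameters are hypothetical placeholders; how W is actually determined is the paper's contribution and is simply assumed given here.

import numpy as np
from itertools import product

def transformed_firing_strengths(x, W, centers, sigmas):
    # Layers 1-2 of a Sugeno-type ANFIS, evaluated on the new modeling
    # inputs z = W @ x (weighted linear combinations of the original
    # inputs) instead of on x itself.
    z = W @ x
    mu = np.exp(-0.5 * ((z[:, None] - centers) / sigmas) ** 2)  # Gaussian MFs
    m, k = mu.shape
    # Grid partition of the transformed space: one rule per label combination.
    w = np.array([np.prod([mu[i, c] for i, c in enumerate(combo)])
                  for combo in product(range(k), repeat=m)])
    return w / w.sum()

# Hypothetical example: two original inputs mixed into two transformed
# inputs, each covered by two membership functions (four rules).
x = np.array([0.3, -1.2])
W = np.array([[0.8, 0.2], [0.3, -0.7]])         # assumed mixing weights
centers = np.array([[-1.0, 1.0], [-1.0, 1.0]])
sigmas = np.ones((2, 2))
print(transformed_firing_strengths(x, W, centers, sigmas))

The consequent parameters of each rule would then be fitted by the least-squares step of the hybrid learning, exactly as in standard ANFIS; only the space being partitioned changes.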
Akira HIRABAYASHI Hidemitsu OGAWA Akiko NAKASHIMA
In supervised learning, one of the major learning methods is memorization learning (ML). Since it reduces only the training error, ML does not in general guarantee good generalization capability. In practice, however, ML is used with the expectation of good generalization. One of the present authors, H. Ogawa, interpreted this usage of ML as a means of realizing a 'true objective learning' that directly takes generalization capability into account, and introduced the concept of admissibility: if a learning method can provide the same generalization capability as a true objective learning, the objective learning is said to admit the learning method. Hence, when admissibility does not hold, it becomes important to make it hold. In this paper, we introduce the concept of realization of admissibility and devise a method for realizing the admissibility of ML with respect to projection learning, which directly takes generalization capability into account.
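In the operator-theoretic notation commonly used in this line of work, the setting can be sketched as follows; the formulation below is an assumption about the framework, not a quotation from the paper.

\begin{align*}
  y &= Af && \text{(training samples; $A$ is the sampling operator)}\\
  \hat{f} &= Xy && \text{(a learning method is specified by an operator $X$)}
\end{align*}

Memorization learning chooses $X$ so as to reduce only the training error $\|A\hat{f} - y\|^2$, while projection learning requires $XA = P_{\mathcal{R}(A^*)}$, the orthogonal projection onto the range of $A^*$. Projection learning then admits ML when some ML operator attains the same generalization capability as the projection learning solution.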
Akira HIRABAYASHI Hidemitsu OGAWA Yukihiko YAMASHITA
In the learning of feed-forward neural networks, the so-called 'training error' is often minimized. Minimizing it, however, is not directly related to generalization capability, which is one of the major goals of learning; rather, it can be interpreted as a substitute for another learning method that does consider generalization. Admissibility is a concept for discussing whether one learning method can serve as a substitute for another. In this paper, we discuss the case where training-error minimization is used as a substitute for projection learning, which takes generalization capability into account, in the presence of noise. Moreover, we give a method for choosing a training set that satisfies admissibility.
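A hedged sketch of the noisy setting, under the same assumed notation as above:

\begin{align*}
  y &= Af + n && \text{(samples corrupted by additive noise $n$)}\\
  X_P &:\ X_P A = P_{\mathcal{R}(A^*)}, \quad E_n\|X_P n\|^2 \to \min && \text{(projection learning under noise)}
\end{align*}

Training-error minimization picks $\hat{f}$ minimizing $\|A\hat{f} - y\|^2$; choosing the training set amounts to choosing the sampling operator $A$ so that this minimizer achieves the same generalization capability as $X_P$, i.e., so that admissibility holds.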