Keyword Search Result

[Keyword] back-propagation algorithm (3 hits)

  • A Training Algorithm for Multilayer Neural Networks of Hard-Limiting Units with Random Bias

    Hongbing ZHU  Kei EGUCHI  Toru TABATA
    PAPER  Vol: E83-A No:6  Page(s): 1040-1048

    The conventional back-propagation algorithm cannot be applied to networks of units with hard-limiting output functions, because those functions are not differentiable. In this paper, a gradient descent algorithm suitable for training multilayer feedforward networks of hard-limiting units is presented. To obtain a differentiable output function for a hard-limiting unit, we exploit the fact that if the bias of a unit in such a network is a random variable with a smooth distribution function, the probability of the unit's output being in a particular state is a continuously differentiable function of the unit's inputs. Three simulation results are given, showing that the performance of this algorithm is similar to that of conventional back-propagation.
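
    A minimal sketch of the smoothing idea, under the assumption that the random bias follows a logistic distribution: the probability that a hard-limiting unit fires is then a sigmoid of its net input, which is differentiable and can drive gradient descent even though the hard limiter itself cannot. The scale parameter `s`, the trainable bias mean `mu`, and the OR toy task below are illustrative choices, not taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    s = 0.5                        # scale of the logistic bias noise (assumed)
    w = rng.normal(size=2) * 0.1   # unit weights
    mu = 0.0                       # mean of the random bias (trainable)
    lr = 0.5

    def p_fire(X):
        # P(output = 1) = P(bias > -w.x) = sigmoid((w.x + mu) / s) for
        # logistic bias noise -- smooth in both the inputs and the weights.
        return 1.0 / (1.0 + np.exp(-(X @ w + mu) / s))

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    t = np.array([0.0, 1.0, 1.0, 1.0])     # OR target

    for _ in range(5000):
        p = p_fire(X)
        d = (p - t) * p * (1 - p) / s      # dE/d(net) for squared error
        w -= lr * (d @ X)                  # gradient descent on the weights
        mu -= lr * d.sum()                 # ... and on the bias mean

    print((p_fire(X) > 0.5).astype(int))   # hard decisions: [0 1 1 1]
    ```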

  • Introduction of Orthonormal Transform into Neural Filter for Accelerating Convergence Speed

    Isao NAKANISHI  Yoshio ITOH  Yutaka FUKUI
    LETTER  Vol: E83-A No:2  Page(s): 367-370

    As a nonlinear adaptive filter, the neural filter is used to process nonlinear signals and/or systems. However, the neural filter requires a large number of iterations to converge. This letter presents a new structure for the multi-layer neural filter in which an orthonormal transform is introduced into all inter-layers to accelerate convergence. The proposed structure is called the transform domain neural filter (TDNF) for convenience. The weights are basically updated by the Back-Propagation (BP) algorithm, but the algorithm must be modified because the error back-propagates through the orthonormal transform. Moreover, a variable step size, normalized by the transformed signal power, is introduced into the BP algorithm to realize the benefit of the orthonormal transform. Computer simulations confirm that introducing the orthonormal transform effectively speeds up convergence in the neural filter.
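
    The power-normalized step size is the same device used in transform-domain LMS adaptive filtering; the sketch below shows that linear special case, assuming a DCT as the orthonormal transform (the letter does not fix the choice). Each transformed input component keeps a running power estimate that scales its own step size. The frame length `N`, forgetting factor `beta`, and system-identification toy task are illustrative.

    ```python
    import numpy as np

    N = 8
    rng = np.random.default_rng(1)

    # Orthonormal DCT-II matrix (C @ C.T == I); the error would back-propagate
    # through C.T, which is just the inverse transform here.
    k = np.arange(N)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * k[:, None] * (k[None, :] + 0.5) / N)
    C[0, :] /= np.sqrt(2.0)

    w = np.zeros(N)        # adaptive weights in the transform domain
    power = np.ones(N)     # running power of each transformed component
    beta, step, eps = 0.95, 0.05, 1e-8  # forgetting factor, base step, stabilizer

    def adapt(x, target):
        global w, power
        u = C @ x                                  # transform the input frame
        e = target - u @ w                         # output error
        power = beta * power + (1 - beta) * u**2   # per-component power estimate
        w += step * e * u / (power + eps)          # power-normalized update

    # Toy usage: identify an unknown length-N FIR response h from random frames.
    h = rng.normal(size=N)
    for _ in range(5000):
        x = rng.normal(size=N)
        adapt(x, x @ h)
    print(np.allclose(C.T @ w, h, atol=1e-2))      # True once converged
    ```

    In the letter's multilayer setting the same normalization is applied at every inter-layer transform, with the BP error passed back through the transpose of the orthonormal matrix.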

  • Neural Networks with Interval Weights for Nonlinear Mappings of Interval Vectors

    Kitaek KWON  Hisao ISHIBUCHI  Hideo TANAKA
    PAPER-Mapping  Vol: E77-D No:4  Page(s): 409-417

    This paper proposes an approach for approximately realizing nonlinear mappings of interval vectors by interval neural networks. The interval neural networks in this paper are characterized by interval weights and interval biases; that is, the weights and biases are given by intervals instead of real numbers. First, an architecture of interval neural networks is proposed for handling interval input vectors. Interval neural networks with the proposed architecture map interval input vectors to interval output vectors by interval arithmetic. Some characteristic features of the nonlinear mappings realized by interval neural networks are described. Next, a learning algorithm is derived, in which the training data are pairs of interval input vectors and interval target vectors. Last, the proposed approach is illustrated with a numerical example and compared with approaches based on standard back-propagation neural networks with real-number weights.
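
    A short sketch of the interval-arithmetic forward pass described above, with an interval stored as a pair of lower/upper arrays. The layer shape, the sigmoid activation, and the interval widths are illustrative; the key operations are the endpoint products for interval multiplication and the endpoint-wise mapping through a monotone activation.

    ```python
    import numpy as np

    def interval_mul(a_lo, a_hi, b_lo, b_hi):
        # Elementwise product of intervals [a_lo, a_hi] and [b_lo, b_hi]:
        # take the min/max over the four endpoint products.
        c = np.stack([a_lo * b_lo, a_lo * b_hi, a_hi * b_lo, a_hi * b_hi])
        return c.min(axis=0), c.max(axis=0)

    def interval_layer(x_lo, x_hi, W_lo, W_hi, b_lo, b_hi):
        # One layer with interval weights/biases applied to an interval input.
        n_out = W_lo.shape[0]
        lo, hi = np.zeros(n_out), np.zeros(n_out)
        for j in range(n_out):
            p_lo, p_hi = interval_mul(x_lo, x_hi, W_lo[j], W_hi[j])
            lo[j] = p_lo.sum() + b_lo[j]   # interval sums add endpoint-wise
            hi[j] = p_hi.sum() + b_hi[j]
        # The sigmoid is monotone, so it maps an interval endpoint-wise.
        sig = lambda v: 1.0 / (1.0 + np.exp(-v))
        return sig(lo), sig(hi)

    # Usage: a 2-input, 3-unit layer mapping an interval input vector.
    rng = np.random.default_rng(2)
    W, b = rng.normal(size=(3, 2)), rng.normal(size=3)
    out_lo, out_hi = interval_layer(
        np.array([0.1, -0.2]), np.array([0.3, 0.1]),   # interval input vector
        W - 0.05, W + 0.05, b - 0.05, b + 0.05)        # interval weights/biases
    print(np.round(out_lo, 3), np.round(out_hi, 3))    # interval output vector
    ```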
