Keyword Search Result

[Keyword] local minimum (3 hits)

Results 1-3
  • An Approach to Solve Local Minimum Problem in Sound Source and Microphone Localization

    Kazunori KOBAYASHI  Ken'ichi FURUYA  Yoichi HANEDA  Akitoshi KATAOKA  

     
    PAPER-Engineering Acoustics

    Vol: E90-A No:12    Page(s): 2826-2834

    We previously proposed a method for sound source and microphone localization. The method estimates the locations of sound sources and microphones from only the time differences of arrival between signals picked up by the microphones, even when all of these locations are unknown. However, because the method estimates locations iteratively and the error function has multiple minima, some estimates converge to local minimum solutions. In this paper, we present a new iterative method that solves this local minimum problem. The method achieves accurate estimation by selecting effective initial locations from many randomly generated initial locations. Computer simulation and experimental results demonstrate that the presented method eliminates most local minimum solutions. Furthermore, its computational complexity is similar to that of the previous method.
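    As an illustration of the multi-start idea described above, the following Python sketch draws many random initial configurations, keeps only the candidates with the lowest initial TDOA error, and refines each survivor iteratively. The error function, the selection rule, the problem size, and the use of SciPy's L-BFGS-B optimizer are assumptions made for this example; it does not reproduce the authors' formulation.

```python
# Hedged sketch of a multi-start strategy against local minima in
# TDOA-based joint source/microphone localization. The error function,
# the selection rule (keep the initial guesses with the lowest initial
# error), and all dimensions are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

N_MIC, N_SRC, DIM = 5, 4, 2     # assumed problem size
C = 340.0                       # speed of sound [m/s]

# Simulated ground truth and noiseless TDOA measurements (demo only).
true_mics = rng.uniform(0, 5, (N_MIC, DIM))
true_srcs = rng.uniform(0, 5, (N_SRC, DIM))

def tdoa(mics, srcs):
    d = np.linalg.norm(srcs[:, None, :] - mics[None, :, :], axis=2) / C
    return d[:, 1:] - d[:, :1]          # arrival-time differences w.r.t. mic 0

measured = tdoa(true_mics, true_srcs)

def unpack(x):
    mics = x[:N_MIC * DIM].reshape(N_MIC, DIM)
    srcs = x[N_MIC * DIM:].reshape(N_SRC, DIM)
    return mics, srcs

def error(x):
    mics, srcs = unpack(x)
    return np.sum((tdoa(mics, srcs) - measured) ** 2)

# 1) Draw many random initial configurations.
candidates = [rng.uniform(0, 5, (N_MIC + N_SRC) * DIM) for _ in range(200)]
# 2) Keep only the most promising ones (lowest initial error) -- an assumed
#    stand-in for the paper's "effective initial location" selection.
best_starts = sorted(candidates, key=error)[:5]
# 3) Refine each survivor iteratively and keep the overall best solution.
results = [minimize(error, x0, method="L-BFGS-B") for x0 in best_starts]
best = min(results, key=lambda r: r.fun)
print("residual error:", best.fun)
```

    Note that the estimated geometry can only be recovered up to a rigid transform, so the sketch reports the residual error rather than comparing coordinates directly.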

  • A Fast Neural Network Learning with Guaranteed Convergence to Zero System Error

    Teruo AJIMURA  Isao YAMADA  Kohichi SAKANIWA  

     
    PAPER-Stochastic Process/Learning

    Vol: E79-A No:9    Page(s): 1433-1439

    Learning algorithms for neural networks, such as the back-propagation algorithm, are generally considered well established. However, two major issues remain: learning can be trapped at a local minimum, and the convergence rate is too slow. Chang and Ghaffar proposed adding a new hidden node whenever training stops at a local minimum and then retraining the enlarged network until its error converges to zero. Their method designs the newly introduced weights so that the network with the added hidden node has a smaller error than at the original local minimum. In this paper, we propose a new method that improves their convergence rate. At the starting point of the new network, the proposed method is expected to yield a lower system error and a larger error gradient magnitude than their method, which leads to faster convergence. Numerical examples show that the proposed method performs much better than the conventional method of Chang and Ghaffar.
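    The following Python sketch illustrates the general "grow a hidden node when training stalls" strategy that the paper builds on. The plateau test, the toy XOR task, and the choice of giving the new node a zero output weight (so the error is unchanged at the moment of insertion) are assumptions for the example; neither Chang and Ghaffar's weight design nor the proposed improvement is reproduced here.

```python
# Hedged sketch: back-propagation on a tiny network that adds a hidden node
# whenever the training error has effectively stopped decreasing while still
# being far from zero. All sizes, thresholds, and the weight initialization
# of the new node are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # XOR toy task
T = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

H = 1                                   # start deliberately small
W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)
lr, prev_err = 0.5, np.inf

for epoch in range(20000):
    h = sigmoid(X @ W1 + b1)            # forward pass
    y = sigmoid(h @ W2 + b2)
    err = 0.5 * np.sum((y - T) ** 2)

    # Grow the network if the error has plateaued away from zero
    # (assumed test for "stuck at a local minimum").
    if epoch % 500 == 499:
        if prev_err - err < 1e-5 and err > 1e-3:
            W1 = np.hstack([W1, rng.normal(0, 0.5, (2, 1))])
            b1 = np.append(b1, 0.0)
            W2 = np.vstack([W2, np.zeros((1, 1))])   # zero => error unchanged
            H += 1
        prev_err = err

    # Back-propagation for the squared-error loss.
    dy = (y - T) * y * (1 - y)
    dW2 = h.T @ dy; db2 = dy.sum(0)
    dh = dy @ W2.T * h * (1 - h)
    dW1 = X.T @ dh; db1 = dh.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print("hidden nodes:", H, " final error:", float(err))
```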

  • Segmentation of Brain MR Images Based on Neural Networks

    Rachid SAMMOUDA  Noboru NIKI  Hiromu NISHITANI  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

    Vol: E79-D No:4    Page(s): 349-356

    In this paper, we present several contributions that improve a previously proposed approach to segmenting magnetic resonance images of the human brain based on an unsupervised Hopfield neural network. We formulate the segmentation problem as the minimization of an energy function with two terms: a cost term, defined as a sum of squared errors, and a temporary noise term added to the cost term as an excitation that lets the network escape certain local minima and move closer to the global minimum. Furthermore, to ensure convergence of the network and its clinical usefulness, the minimization uses a step function that allows the network to reach a stable state, corresponding to a local minimum close to the global minimum, within a prespecified period of time. We present segmentation results of our approach for a patient diagnosed with a metastatic brain tumor and compare them with results obtained by previous works using Hopfield neural networks, a Boltzmann machine, and the conventional ISODATA clustering technique.
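    The following Python sketch illustrates the "cost term plus temporarily added noise" idea in a much simpler setting: an iterative intensity clustering in which random noise is added to the assignment cost and then switched off after a fixed number of iterations, so that the final state settles into a nearby minimum of the pure cost term. The synthetic data, the number of classes, and the noise schedule are assumptions; this is not the Hopfield-network formulation used in the paper.

```python
# Hedged sketch of energy minimization with a decaying, then switched-off,
# noise term. A stand-in 1-D "image" of pixel intensities is clustered into
# K classes by minimizing a sum-of-squared-errors cost; noise perturbs the
# assignment cost only during the first NOISE_CUTOFF iterations.
import numpy as np

rng = np.random.default_rng(2)
# Stand-in "MR image": pixel intensities drawn from three tissue-like modes.
pixels = np.concatenate([rng.normal(m, 8, 2000) for m in (60, 120, 200)])

K, N_ITER, NOISE_CUTOFF = 3, 30, 15
centers = rng.uniform(pixels.min(), pixels.max(), K)

for it in range(N_ITER):
    # Cost term: squared error between each pixel and each class center.
    cost = (pixels[:, None] - centers[None, :]) ** 2
    # Excitation term: noise that decays and is cut off after NOISE_CUTOFF
    # steps (the "step function" idea, in assumed form).
    if it < NOISE_CUTOFF:
        cost = cost + rng.normal(0, 200.0 * (1 - it / NOISE_CUTOFF), cost.shape)
    labels = cost.argmin(axis=1)                 # winner-take-all assignment
    for k in range(K):                           # update class centers
        if np.any(labels == k):
            centers[k] = pixels[labels == k].mean()

print("estimated class centers:", np.sort(centers))
```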
