Keyword Search Result

[Keyword] convergence (117 hits)

Results 81-100 of 117 hits

  • Analysis of the Sign-Sign Algorithm Based on Gaussian Distributed Tap Weights

    Shin'ichi KOIKE  

     
    PAPER-Adaptive Signal Processing
    Vol: E83-A No:8   Page(s): 1551-1558

    In this paper, a new set of difference equations is derived for transient analysis of the convergence of adaptive FIR filters using the Sign-Sign Algorithm with a Gaussian reference input and additive Gaussian noise. The analysis is based on the assumption that the tap weights are jointly Gaussian distributed. The residual mean squared error after convergence and simpler approximate difference equations are also developed. Experimental results show good agreement between the theoretically calculated convergence and that obtained by simulation over a wide range of adaptive filter parameter values.
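
    As a concrete illustration of the update rule analyzed in this paper, the following is a minimal Sign-Sign adaptive FIR filter in a system-identification setting. The filter length, step size, noise level, and signal model are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, mu, n_samples = 16, 5e-3, 20000          # filter length, step size, iterations (assumed)
h = rng.standard_normal(N)                  # unknown FIR system to be identified
w = np.zeros(N)                             # adaptive tap weights
x = rng.standard_normal(n_samples)          # Gaussian reference input
d = np.convolve(x, h)[:n_samples] + 1e-2 * rng.standard_normal(n_samples)

for n in range(N - 1, n_samples):
    u = x[n - N + 1:n + 1][::-1]            # tap-input vector [x(n), ..., x(n-N+1)]
    e = d[n] - w @ u                        # a priori estimation error
    w += mu * np.sign(e) * np.sign(u)       # Sign-Sign update: only the signs are used

print("residual tap-error power:", np.mean((h - w) ** 2))
```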

  • Studies on the Convergence Speed of Over-Sampled Subband Adaptive Digital Filters

    Shuichi OHNO  

     
    PAPER-Adaptive Signal Processing
    Vol: E83-A No:8   Page(s): 1531-1538

    To evaluate or compare the convergence speed of adaptive digital filters (ADFs) using the least mean square (LMS) algorithm, the condition numbers of the correlation matrices of the tap-input vectors are often used. In this paper, however, comparing the conventional fullband ADF and the subband ADF on the basis of their condition numbers is shown to be invalid: in some cases, the over-sampled subband ADF converges faster than the fullband ADF even though it has larger condition numbers. To explain this phenomenon, an expression for the convergence behavior of the subband ADF is derived and supported by simulation results.

  • Analysis on Convergence Property of INLMS Algorithm Suitable for Fixed Point Processing

    Kensaku FUJII  Juro OHGA  

     
    PAPER-Adaptive Signal Processing
    Vol: E83-A No:8   Page(s): 1539-1544

    The individually normalized least mean square (INLMS) algorithm has been proposed as an adaptive algorithm suitable for fixed-point processing. Its convergence property, however, has not yet been analyzed sufficiently. This paper first derives an equation describing the convergence property by expressing the INLMS algorithm as a first-order infinite impulse response (IIR) filter. From the derived equation, the decrease of the estimation error is represented as the response of another IIR filter. Using this representation, the paper then derives the convergence condition of the INLMS algorithm as the range of step sizes for which the latter IIR filter acts as a low-pass filter. The paper also derives the step size that maximizes the convergence speed, corresponding to the maximum coefficient of the latter IIR filter, and finally clarifies the range of step sizes recommended for practical system design.

  • An Evaluation of Visual Fatigue in 3-D Displays: Focusing on the Mismatching of Convergence and Accommodation

    Toshiaki SUGIHARA  Tsutomu MIYASATO  Ryohei NAKATSU  

     
    PAPER
    Vol: E82-C No:10   Page(s): 1814-1822

    In this paper, we describe an experimental evaluation of visual fatigue in a binocular-disparity 3-D display system. To evaluate this fatigue, we use a subjective assessment method and focus on the mismatch between convergence and accommodation, which is a major weakness of binocular-disparity 3-D displays. For the subjective assessment, we use a newly developed binocular-disparity 3-D display system with a compensation function for accommodation. Because this equipment allows the mismatch itself to be isolated and compared, the evaluation is more accurate than in similar previous work.

  • New High-Order Associative Memory System Based on Newton's Forward Interpolation

    Hiromitsu HAMA  Chunfeng XING  Zhongkan LIU  

     
    PAPER-Algorithms and Data Structures
    Vol: E81-A No:12   Page(s): 2688-2693

    A double-layer Associative Memory System (AMS) based on the Cerebellar Model Articulation Controller (CMAC-AMS) has been used successfully in applications such as real-time intelligent control, signal processing, and pattern recognition, owing to its simple structure, fast searching procedure, and strong mapping capability between multidimensional input/output vectors. However, it still suffers from a large memory requirement and relatively low precision, and the hash code used in its addressing mechanism to reduce memory size can cause data collisions. In this paper, a new high-order Associative Memory System based on Newton's forward interpolation formula (NFI-AMS) is proposed. The NFI-AMS can implement high-precision approximation of multivariable functions with arbitrarily given sampling data. A learning algorithm and a convergence theorem for the NFI-AMS are presented. The network structure and the learning scheme show that the NFI-AMS surpasses the conventional CMAC-type AMS in learning precision and in required memory size, without the data-collision problem, and surpasses multilayer back-propagation (BP) neural networks in computational effort for learning and in convergence rate. Numerical simulations verify these advantages. The proposed NFI-AMS therefore has potential in many application areas as a new kind of associative memory system.
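
    The learning algorithm and network structure of the NFI-AMS are specific to the paper, but the interpolation formula at its core is standard. The sketch below shows Newton's forward interpolation for equally spaced samples; the sampled function and spacing are illustrative assumptions.

```python
import numpy as np

def newton_forward_coeffs(y):
    """Forward-difference coefficients Delta^k y_0 for equally spaced samples."""
    y = np.asarray(y, dtype=float).copy()
    coeffs = [y[0]]
    for _ in range(1, len(y)):
        y = np.diff(y)
        coeffs.append(y[0])
    return np.array(coeffs)

def newton_forward_eval(x0, h, coeffs, x):
    """Evaluate the Newton forward interpolation polynomial at x."""
    s = (x - x0) / h                        # normalized abscissa
    result, term = coeffs[0], 1.0
    for k in range(1, len(coeffs)):
        term *= (s - (k - 1)) / k           # builds the binomial factor C(s, k) incrementally
        result += coeffs[k] * term
    return result

xs = np.linspace(0.0, 1.0, 5)               # equally spaced sample points (assumed)
ys = np.sin(2 * np.pi * xs)                 # arbitrarily chosen sampled function
c = newton_forward_coeffs(ys)
print(newton_forward_eval(xs[0], xs[1] - xs[0], c, 0.3), np.sin(2 * np.pi * 0.3))
```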

  • Dynamic Sample Selection: Theory

    Peter GECZY  Shiro USUI  

     
    PAPER-Neural Networks
    Vol: E81-A No:9   Page(s): 1931-1939

    Conventional approaches to neural network training do not consider the possibility of selecting training samples dynamically during the learning phase: the network is simply presented with the complete training set at each iteration. Learning can then become very costly for large data sets, and heavy redundancy among data samples may lead to an ill-conditioned training problem. Ill-conditioning during training causes rank-deficiencies of the error and Jacobian matrices, which results in slower convergence or, in the worst case, failure of the algorithm to progress. Rank-deficiencies of these essential matrices can be avoided by an appropriate selection of training exemplars at each iteration. This article presents the theoretical grounds for dynamic sample selection (DSS), that is, a mechanism for selecting a subset of the training set at each iteration. The theory is first presented for general objective functions, and then for objective functions satisfying the Lipschitz continuity condition. Furthermore, implementation specifics of DSS for first-order line search techniques are described theoretically.

  • Dynamic Sample Selection: Implementation

    Peter GECZY  Shiro USUI  

     
    PAPER-Neural Networks
    Vol: E81-A No:9   Page(s): 1940-1947

    The computational expense of training techniques, driven by the size of the data set, is among the most important factors in machine learning and neural networks. An oversized data set may cause rank-deficiencies of the Jacobian matrix, which plays an essential role in training techniques; training then becomes not only computationally expensive but also ineffective. In [1] the authors introduced the theoretical grounds for dynamic sample selection, which has the potential to eliminate rank-deficiencies. This study addresses the implementation of dynamic sample selection based on the theoretical material presented in [1]. The authors propose a sample selection algorithm that can be incorporated into an arbitrary optimization technique. The ability of the algorithm to select a proper set of samples at each training iteration proved very beneficial in several experiments. Recently proposed approaches to sample selection work reasonably well if the pattern-weight ratio is close to 1, with small improvements still detectable at pattern-weight ratios of 2 or 3. The dynamic sample selection approach presented in this article can increase the convergence speed of first-order optimization techniques used for training MLP networks even at pattern-weight ratios (E-FP) as high as 15, and possibly higher.
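
    The authors' selection criterion and the MLP training setup are defined in the paper; the sketch below only illustrates the general mechanism of dynamic sample selection, using an assumed criterion (pick the samples with the largest current residual) on a simple linear least-squares model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_params = 2000, 10              # deliberately redundant data set (assumed sizes)
X = rng.standard_normal((n_samples, n_params))
w_true = rng.standard_normal(n_params)
y = X @ w_true + 0.01 * rng.standard_normal(n_samples)

w = np.zeros(n_params)
lr, subset_size = 0.05, 64                  # assumed step size and per-iteration subset size
for _ in range(300):
    residual = X @ w - y
    idx = np.argsort(np.abs(residual))[-subset_size:]    # dynamic selection: worst-fit samples
    grad = X[idx].T @ residual[idx] / subset_size        # gradient over the selected subset only
    w -= lr * grad

print("parameter error:", np.linalg.norm(w - w_true))
```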

  • Prefiltering for LMS Based Adaptive Receivers in DS/CDMA Communications

    Teruyuki MIYAJIMA  Kazuo YAMANAKA  

     
    PAPER
    Vol: E80-A No:12   Page(s): 2357-2365

    In this paper, three issues concerning the linear adaptive receiver using the LMS algorithm for single-user demodulation in direct-sequence/code-division multiple-access (DS/CDMA) systems are considered. First, the convergence rate of the LMS algorithm in a DS/CDMA environment is analyzed theoretically: upper and lower bounds on the eigenvalue spread of the autocorrelation matrix of the receiver input signals are derived, and the results make clear that the LMS algorithm converges slowly when the interferer's signal power is large. Second, a fast-converging technique using a prefilter is considered. An LMS-based adaptive receiver is proposed that uses an adaptive prefilter, adjusted by a Hebbian learning algorithm, to decorrelate the input signals. Computer simulation results show that the proposed receiver converges faster than the plain LMS-based receiver. Third, complexity reduction of the proposed receiver through prefiltering is considered; the reduced-complexity receiver is shown to suffer little performance degradation compared with the full-complexity receiver.
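
    The following sketch illustrates the first point, that interferer power inflates the eigenvalue spread of the input autocorrelation matrix and hence slows LMS convergence. The synchronous two-user model, random spreading codes, and noise power are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 31                                      # processing gain / chips per symbol (assumed)
c_desired = rng.choice([-1.0, 1.0], N)      # random spreading codes (assumed, not from the paper)
c_interf = rng.choice([-1.0, 1.0], N)
sigma2 = 0.01                               # background noise power (assumed)

def eigenvalue_spread(interferer_power):
    """Eigenvalue spread of R = E[r r^T] for one desired user plus one synchronous interferer."""
    R = (np.outer(c_desired, c_desired)
         + interferer_power * np.outer(c_interf, c_interf)
         + sigma2 * np.eye(N))
    eig = np.linalg.eigvalsh(R)             # ascending eigenvalues
    return eig[-1] / eig[0]

for p in [1.0, 10.0, 100.0]:
    print(f"interferer power {p:6.1f} -> eigenvalue spread {eigenvalue_spread(p):10.1f}")
```

    Increasing the interferer power increases the spread, consistent with the slower LMS convergence described above.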

  • Interference Cancellation Characteristics of a BSCMA Adaptive Array Antenna with a DBF Configuration

    Toyohisa TANAKA  Ryu MIURA  Isamu CHIBA  Yoshio KARASAWA  

     
    PAPER-Antennas and Propagation
    Vol: E80-B No:9   Page(s): 1363-1371

    We have developed a Beam Space CMA (Constant Modulus Algorithm) Adaptive Array Antenna system (BSCMA adaptive array antenna) that may be suitable for mobile communications. In this paper, we present experimental results on the interference cancellation characteristics of the developed system. The experiment was carried out in a large radio anechoic chamber while desired and interference signals were transmitted to the system, focusing on the capture, convergence, and tracking characteristics of the adaptive processing. The experimental results show excellent interference cancellation characteristics and demonstrate that the BSCMA adaptive array antenna is highly feasible for practical application in mobile communications.
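
    The developed system uses a beam-space implementation with a DBF configuration that is not reproduced here; the sketch below shows only the basic element-space constant modulus algorithm update on a simulated uniform linear array, with all array and signal parameters assumed.

```python
import numpy as np

rng = np.random.default_rng(3)
M, n_snap, mu = 8, 5000, 2e-3               # array elements, snapshots, step size (assumed)

def steering(theta):
    """Array response of a half-wavelength-spaced uniform linear array (assumed geometry)."""
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

s_d = np.exp(1j * 2 * np.pi * rng.random(n_snap))    # constant-modulus desired signal
s_i = np.exp(1j * 2 * np.pi * rng.random(n_snap))    # constant-modulus interferer
noise = 0.05 * (rng.standard_normal((M, n_snap)) + 1j * rng.standard_normal((M, n_snap)))
X = np.outer(steering(0.3), s_d) + np.outer(steering(-0.5), s_i) + noise

w = np.zeros(M, dtype=complex)
w[0] = 1.0                                  # initial weight vector
for n in range(n_snap):
    x = X[:, n]
    y = np.conj(w) @ x                      # array output
    e = y * (np.abs(y) ** 2 - 1.0)          # CMA(2,2) error term
    w -= mu * np.conj(e) * x                # stochastic-gradient CMA weight update

print("output modulus variance after adaptation:", np.var(np.abs(np.conj(w) @ X)))
```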

  • Absolute Exponential Stability of Neural Networks with Asymmetric Connection Matrices

    Xue-Bin LIANG  Toru YAMAGUCHI  

     
    LETTER-Neural Networks
    Vol: E80-A No:8   Page(s): 1531-1534

    In this letter, an absolute exponential stability result for neural networks with asymmetric connection matrices is obtained by a new proof approach; it generalizes the existing result on the absolute stability of neural networks. It is demonstrated that the network time constant is inversely proportional to the global exponential convergence rate of the network trajectories to the unique equilibrium. A numerical simulation example is also given to illustrate the analysis results.

  • On the Absolute Exponential Stability of Neural Networks with Globally Lipschitz Continuous Activation Functions

    Xue-Bin LIANG  Toru YAMAGUCHI  

     
    LETTER-Bio-Cybernetics and Neurocomputing
    Vol: E80-D No:6   Page(s): 687-690

    In this letter, we obtain an absolute exponential stability result for neural networks with globally Lipschitz continuous, increasing, and bounded activation functions under a sufficient condition that unifies several relevant sufficient conditions for absolute stability in the literature. The obtained result generalizes the existing ones on the absolute stability of neural networks. Moreover, it is demonstrated by a mathematically rigorous proof that the network time constant is inversely proportional to the global exponential convergence rate of the network trajectories to the unique equilibrium. A numerical simulation example is also presented to illustrate the analysis results.

  • Convergence Characteristics of the Adaptive Array Using RLS Algorithm

    Futoshi ASANO  Yoiti SUZUKI  Toshio SONE  

     
    PAPER-Digital Signal Processing
    Vol: E80-A No:1   Page(s): 148-158

    The convergence characteristics of the adaptive beamformer with the RLS algorithm are analyzed in this paper. For the RLS adaptive beamformer, convergence is significantly affected by the spatial characteristics of the signals and noises in the environment, and the purpose of this paper is to show how these physical parameters affect it. A typical environment in which a few directional noises are accompanied by background noise is assumed, and the influence of each component is analyzed separately using rank analysis of the correlation matrix. For the directional components, convergence is faster for a smaller number of noise sources, since the effective rank of the input correlation matrix is reduced. In the presence of background noise, convergence is slowed by the increase of the effective rank; however, the convergence speed can be improved by controlling the initial matrix of the RLS algorithm. The latter part of the paper focuses on the physical interpretation of this initial matrix, in an attempt to elucidate the mechanism behind the convergence characteristics.
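
    As a reminder of where the initial matrix enters the RLS recursion, here is a generic RLS adaptation loop (shown for system identification rather than beamforming); the forgetting factor, the initialization P(0) = delta^{-1} I, and the signal model are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)
N, lam, delta = 8, 0.99, 1e-2               # taps, forgetting factor, initialization parameter (assumed)
h = rng.standard_normal(N)                  # unknown response
w = np.zeros(N)
P = np.eye(N) / delta                       # initial inverse-correlation matrix P(0) = delta^{-1} I

x = rng.standard_normal(4000)
d = np.convolve(x, h)[:4000] + 1e-3 * rng.standard_normal(4000)

for n in range(N - 1, 4000):
    u = x[n - N + 1:n + 1][::-1]
    k = P @ u / (lam + u @ P @ u)           # gain vector
    e = d[n] - w @ u                        # a priori error
    w = w + k * e                           # weight update
    P = (P - np.outer(k, u @ P)) / lam      # inverse-correlation matrix update

print("tap error:", np.linalg.norm(w - h))
```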

  • Derivation and Applications of Difference Equations for Adaptive Filters Based on a General Tap Error Distribution

    Shin'ichi KOIKE  

     
    PAPER-Digital Signal Processing
    Vol: E79-A No:12   Page(s): 2166-2175

    In this paper, stochastic gradient adaptive filters using the Sign or Sign-Sign Algorithm are analyzed based on general assumptions about the reference signal, the additive noise, and, in particular, the joint distribution of the tap errors. A set of difference equations for calculating the convergence of the mean and covariance of the tap errors is derived, with integrals involving the characteristic function of the tap error distribution and its derivative. Examples of echo canceller convergence with jointly Gaussian distributed tap errors show excellent agreement between the empirical results and the theory.

  • Convergence Analysis of Quantizing Method with Correlated Gaussian Data

    Kiyoshi TAKAHASHI  Noriyoshi KUROYANAGI  Shinsaku MORI  

     
    PAPER
    Vol: E79-A No:8   Page(s): 1157-1165

    In this paper, the normalized least mean square (NLMS) algorithm based on clipping input samples at an arbitrary threshold level is studied, and the convergence characteristics of these clipping algorithms with correlated data are presented. In the clipping algorithm, input samples are clipped only when they are greater than or equal to the threshold level; otherwise they are set to zero. The analysis shows that the gain constant ensuring convergence, the convergence speed, and the misadjustment are all functions of the threshold level. Furthermore, an optimum threshold level in terms of convergence speed under a constant-misadjustment condition is derived.
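
    The exact quantizer is defined in the paper; the sketch below uses one plausible reading, in which regressor samples at or above the threshold are replaced by their sign and the rest by zero, inside an NLMS-style update. The threshold, gain constant, and the AR(1) model for the correlated Gaussian data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
N, mu, T = 16, 0.5, 0.8                     # taps, gain constant, clipping threshold (assumed)
h = rng.standard_normal(N)
w = np.zeros(N)

x = np.zeros(20000)                         # correlated Gaussian input: AR(1) process (assumed model)
for n in range(1, len(x)):
    x[n] = 0.9 * x[n - 1] + rng.standard_normal()
d = np.convolve(x, h)[:len(x)] + 1e-2 * rng.standard_normal(len(x))

for n in range(N - 1, len(x)):
    u = x[n - N + 1:n + 1][::-1]
    q = np.where(np.abs(u) >= T, np.sign(u), 0.0)   # three-level quantized regressor
    e = d[n] - w @ u
    denom = q @ q
    if denom > 0:                                   # skip the update when all samples fall below T
        w += mu * e * q / denom                     # NLMS-style update driven by the quantized input

print("tap error:", np.linalg.norm(w - h))
```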

  • Convergence Analysis of Processing Cost Reduction Method of NLMS Algorithm with Correlated Gaussian Data

    Kiyoshi TAKAHASHI  Noriyoshi KUROYANAGI  

     
    PAPER-Digital Signal Processing
    Vol: E79-A No:7   Page(s): 1044-1050

    Reduction of the complexity of the NLMS algorithm has received attention in the area of adaptive filtering. A processing cost reduction method has been proposed in which a component of the weight vector is updated only when the absolute value of the corresponding sample is greater than or equal to an arbitrary threshold level. A convergence analysis of this method with white Gaussian data has been derived previously; however, its convergence with correlated Gaussian data, which is important for practical applications, has not been studied. In this paper, we derive the convergence characteristics of the processing cost reduction method with correlated Gaussian data. The analytical results show that the range of the gain constant ensuring convergence is independent of the correlation of the input samples, as is the misadjustment. Moreover, the convergence rate is shown to be a function of the threshold level and the eigenvalues of the covariance matrix of the input samples, as well as of the gain constant.
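
    A minimal sketch of the partial-update idea described above, assuming an NLMS-style update in which only the weight components whose input samples exceed the threshold are adapted; the threshold, gain constant, and AR(1) input model are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(6)
N, mu, T = 16, 0.5, 1.0                     # taps, gain constant, update threshold (assumed)
h = rng.standard_normal(N)
w = np.zeros(N)

x = np.zeros(20000)                         # correlated Gaussian input: AR(1) process (assumed model)
for n in range(1, len(x)):
    x[n] = 0.9 * x[n - 1] + rng.standard_normal()
d = np.convolve(x, h)[:len(x)] + 1e-2 * rng.standard_normal(len(x))

updates = 0
for n in range(N - 1, len(x)):
    u = x[n - N + 1:n + 1][::-1]
    e = d[n] - w @ u
    mask = np.abs(u) >= T                   # only these weight components are updated
    updates += int(mask.sum())
    w[mask] += mu * e * u[mask] / (u @ u)   # NLMS update restricted to the selected components

print("tap error:", np.linalg.norm(w - h))
print("fraction of component updates actually performed:", updates / ((len(x) - N + 1) * N))
```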

  • A Fast Block-Type Adaptive Filter Algorithm with Short Processing Delay

    Hector PEREZ-MEANA  Mariko NAKANO-MIYATAKE  Laura ORTIZ-BALBUENA  Alejandro MARTINEZ-GONZALEZ  Juan Carlos SANCHEZ-GARCIA  

     
    LETTER-Digital Signal Processing
    Vol: E79-A No:5   Page(s): 721-726

    This letter proposes a fast frequency-domain adaptive filter (FADF) algorithm for applications that require high-order adaptive filters. The proposed FADF algorithm reduces the block delay of conventional FADF algorithms, allowing a more efficient choice of the fast Fourier transform (FFT) size. It also provides faster convergence than conventional FBAF algorithms by using a near-optimum convergence factor derived via the FFT. Computer simulations using white and colored signals illustrate the desirable features of the proposed scheme.

  • Performance Improvement of Variable Stepsize NLMS

    Jirasak TANPREEYACHAYA  Ichi TAKUMI  Masayasu HATA  

     
    PAPER
    Vol: E78-A No:8   Page(s): 905-914

    Improving the convergence characteristics of the NLMS algorithm has received attention in the area of adaptive filtering. A new variable-stepsize NLMS method is proposed in which the stepsize is updated optimally using the variances of the measured error signal and the estimated noise. The optimal stepsize control equation is derived from an approximation of the convergence characteristics. A new condition for judging convergence is introduced to ensure the fastest initial convergence speed by providing precise timing for starting the noise-level estimation. Furthermore, adaptive smoothing is added to the ADF to overcome the saturation of the identification error caused by random deviations. Simulations show that the initial convergence speed and the identification error in the precise identification mode are improved significantly by more precise stepsize adjustment, without an increase in computational cost; the results are the best performance reported to date. The variable-stepsize NLMS-ADF also remains effective under severe conditions, such as noisy or fast-changing environments.
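
    The authors' optimal stepsize control equation is derived in the paper; the sketch below uses one common variance-based rule, mu = max(0, (sigma_e^2 - sigma_v^2)/sigma_e^2), purely to illustrate how error and noise variances can drive the stepsize. All parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 16
h = rng.standard_normal(N)                  # unknown system
w = np.zeros(N)
sigma_v = 0.05                              # additive noise level (assumed known or estimated)

x = rng.standard_normal(20000)
d = np.convolve(x, h)[:len(x)] + sigma_v * rng.standard_normal(len(x))

err_var, beta = 1.0, 0.99                   # smoothed error variance and its forgetting factor (assumed)
for n in range(N - 1, len(x)):
    u = x[n - N + 1:n + 1][::-1]
    e = d[n] - w @ u
    err_var = beta * err_var + (1 - beta) * e * e
    mu = max(0.0, (err_var - sigma_v ** 2) / err_var)   # large step far from convergence, small near it
    w += mu * e * u / (u @ u + 1e-8)

print("tap error:", np.linalg.norm(w - h))
```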

  • A New Adaptive Convergence Factor Algorithm with the Constant Damping Parameter

    Isao NAKANISHI  Yutaka FUKUI  

     
    PAPER
    Vol: E78-A No:6   Page(s): 649-655

    This paper presents a new Adaptive Convergence Factor (ACF) algorithm that requires no adjustment of the damping parameter according to the input signal or the composition of the filter system. The damping parameter in ACF algorithms greatly influences the convergence characteristics. To examine the relation between the damping parameter and the convergence characteristics, a normalization, realized by dividing the related signal terms by their maximum values, is introduced into the ACF algorithm. The normalized algorithm is applied to the modeling of unknown time-variable systems, which makes it possible to examine the relation between the parameters and the misadjustment. Based on the experimental and theoretical results, the optimum value of the damping parameter is defined as the minimum value at which the total misadjustment becomes minimum. To keep the damping parameter optimum under any conditions, the new ACF algorithm is proposed by improving the invariability of the damping parameter in the normalized algorithm. The algorithm is investigated by computer simulations on the modeling of unknown time-variable systems and on system identification. The simulation results show that the proposed algorithm needs no adjustment of the optimum damping parameter and yields stable convergence characteristics even if the filter system is changed.

  • Neural Networks for Digital Sequential Circuits

    Hiroshi NINOMIYA  Hideki ASAI  

     
    LETTER-Neural Networks
    Vol: E77-A No:12   Page(s): 2112-2115

    In this letter, an SR-latch circuit using Hopfield neural networks is introduced. An energy function suited to a neural SR-latch circuit is defined for which global convergence is guaranteed. We also demonstrate how to compose master-slave (M/S) SR and JK flip-flops from the novel SR-latch circuits, and further an asynchronous binary counter from M/S JK flip-flops. Computer simulations illustrate how each presented circuit operates.

  • On Quadratic Convergence of the Katzenelson-Like Algorithm for Solving Nonlinear Resistive Networks

    Kiyotaka YAMAMURA  

     
    PAPER-Nonlinear Circuits and Systems
    Vol: E77-A No:10   Page(s): 1700-1706

    A globally and quadratically convergent algorithm is presented for solving nonlinear resistive networks containing transistors modeled by the Gummel-Poon model or the Shichman-Hodges model. The algorithm is based on the Katzenelson algorithm, which is globally convergent for a broad class of piecewise-linear resistive networks. An effective restart technique is introduced, by which the algorithm converges quadratically to the solutions of nonlinear resistive networks. The quadratic convergence is proved and also verified by numerical examples.

Results 81-100 of 117 hits
