The steady-state and convergence performances are important indicators for evaluating adaptive algorithms, and the step size directly affects both. Many variable step-size adaptive algorithms have been proposed to improve performance. However, these existing algorithms still suffer from problems such as insufficient theoretical analysis, imbalanced performance, and unachievable parameter settings, which greatly degrade their practical performance. Therefore, in this paper we further explore the inherent relationship between the key performance measures and the step size. The variation of the mean square deviation (MSD) is adopted as the cost function. Based on theoretical analysis and derivation, a novel variable step-size algorithm with a dynamic limited function (DLF) is proposed. A thorough theoretical analysis of the weight deviation and the convergence stability is also conducted. The proposed algorithm is compared with several typical algorithms in a variety of environments. Both the theoretical analysis and the experimental results verify that the proposed algorithm achieves superior performance.
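As a minimal illustration of how a step size can be tied to the error (the paper's DLF update itself is not reproduced here), the Python sketch below uses the well-known rule mu(n+1) = alpha*mu(n) + gamma*e^2(n), clipped to a fixed range; the parameters alpha, gamma, mu_min, mu_max and the toy system-identification setup are assumptions for illustration only.

```python
import numpy as np

def vss_lms(x, d, num_taps=16, mu0=0.01, alpha=0.97, gamma=1e-3,
            mu_min=1e-4, mu_max=0.05):
    """Generic variable step-size LMS (illustrative only, not the paper's DLF rule).

    x : input signal, d : desired signal.
    The step size is raised after large errors and decays otherwise,
    then clipped to [mu_min, mu_max] to keep the update stable.
    """
    w = np.zeros(num_taps)                      # adaptive filter weights
    mu = mu0                                    # current step size
    e_hist = np.zeros(len(x))
    for n in range(num_taps, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]     # tap-input vector
        e = d[n] - w @ u                        # a priori error
        mu = np.clip(alpha * mu + gamma * e * e, mu_min, mu_max)
        w += mu * e * u                         # LMS weight update
        e_hist[n] = e
    return w, e_hist

# toy system-identification usage
rng = np.random.default_rng(0)
h = rng.standard_normal(16)                     # unknown system
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w, e = vss_lms(x, d)
```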
Tso-Cho CHEN Erl-Huei LU Chia-Jung LI Kuo-Tsang HUANG
In this paper, a weighted multiple bit flipping (WMBF) algorithm for decoding low-density parity-check (LDPC) codes is proposed first. Then an improved WMBF algorithm, which we call the efficient weighted bit-flipping (EWBF) algorithm, is developed. The EWBF algorithm can dynamically choose either multiple bit flipping or single bit flipping in each iteration according to the log-likelihood ratios of the error probabilities of the received bits. Thus, it can efficiently increase the convergence speed of decoding and prevent the decoding process from falling into loop traps. Compared with the parallel weighted bit-flipping (PWBF) algorithm, the EWBF algorithm achieves significantly lower computational complexity without performance degradation when Euclidean geometry (EG)-LDPC codes are decoded. Furthermore, the flipping criterion does not require any parameter adjustment.
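For orientation, the sketch below implements a plain single-bit weighted bit-flipping (WBF) decoder; the EWBF selection between single and multiple flips described above is not reproduced, and the parity-check matrix H and the BPSK/AWGN mapping are assumptions.

```python
import numpy as np

def wbf_decode(H, y, max_iter=50):
    """Basic weighted bit-flipping decoder (illustrative sketch only).

    H : (m, n) binary parity-check matrix, y : received real-valued samples
    (BPSK over AWGN, bit 0 -> +1, bit 1 -> -1). Returns hard decisions.
    """
    z = (y < 0).astype(int)                 # initial hard decisions
    # reliability of each check = smallest |y| among its variable nodes
    w = np.array([np.min(np.abs(y[H[m] == 1])) for m in range(H.shape[0])])
    for _ in range(max_iter):
        s = (H @ z) % 2                     # syndrome
        if not s.any():
            break                           # all checks satisfied
        # flipping metric: unsatisfied checks add w, satisfied checks subtract w
        E = ((2 * s - 1) * w) @ H
        z[np.argmax(E)] ^= 1                # flip the least reliable bit
    return z
```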
Yufei HAN Mingjiang WANG Boya ZHAO
An improved fractional variable tap-length adaptive algorithm is presented that combines a Sigmoid-limited fluctuation function with an adaptive variable step size for the tap length based on the fragment-full error. The proposed algorithm overcomes several deficiencies of previous algorithms, including a slow convergence rate and weak anti-interference ability. Its parameters can be adjusted appropriately for different situations. The Sigmoid constraint function effectively reduces the fluctuation amplitude of the instantaneous errors and improves robustness to noise interference. Simulations demonstrate that the proposed algorithm achieves better performance.
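The sketch below shows the classical fractional variable tap-length idea on top of ordinary LMS: the tap length is driven by the difference between the squared segmented error of the first L-delta taps and that of all L taps. The Sigmoid limiting and fragment-full error of the proposed algorithm are not reproduced, and all parameter values are illustrative assumptions.

```python
import numpy as np

def ft_lms(x, d, L0=8, delta=2, alpha=0.01, gamma=0.05, mu=0.01,
           L_min=4, L_max=64):
    """Fractional tap-length LMS (a minimal sketch of the classical FT idea).

    The fractional length lf is pushed up when using all L taps gives a
    clearly smaller error than using only the first L-delta taps, and is
    pulled down slowly by the leakage term alpha.
    """
    L, lf = L0, float(L0)
    w = np.zeros(L)
    for n in range(L_max, len(x)):
        u = x[n - L + 1:n + 1][::-1]                  # tap-input vector
        e_full = d[n] - w @ u                         # error with all L taps
        e_seg = d[n] - w[:L - delta] @ u[:L - delta]  # error with L-delta taps
        w += mu * e_full * u                          # ordinary LMS update
        lf = np.clip((lf - alpha) - gamma * (e_full**2 - e_seg**2), L_min, L_max)
        new_L = int(np.rint(lf))
        if new_L > L:                                 # grow: append zero taps
            w = np.concatenate([w, np.zeros(new_L - L)])
            L = new_L
        elif new_L < L:                               # shrink: drop trailing taps
            w = w[:new_L]
            L = new_L
    return w, L

# toy usage: identify an unknown 20-tap system starting from only 8 taps;
# the returned tap length L should settle near the true length
rng = np.random.default_rng(0)
h = rng.standard_normal(20)
x = rng.standard_normal(20000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w, L = ft_lms(x, d)
```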
Min-Ho JANG Beomkyu SHIN Woo-Myoung PARK Jong-Seon NO Dong-Joon SHIN
In this letter, we analyze the convergence speed of layered decoding of block-type low-density parity-check codes and verify that layered decoding converges faster than sequential decoding with randomly selected check node subsets. It is also shown that using more subsets than the maximum variable node degree does not improve the convergence speed.
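To make the layered schedule concrete, the sketch below gives a generic row-layered min-sum decoder in which posterior LLRs are updated immediately after each check row; this only illustrates layered scheduling in general, not the letter's analysis, and treating every row as its own layer is an assumption.

```python
import numpy as np

def layered_min_sum(H, llr_ch, max_iter=20):
    """Row-layered min-sum LDPC decoding (illustrative sketch).

    H : (m, n) binary parity-check matrix, llr_ch : channel LLRs.
    Posterior LLRs are refreshed after every layer, which is what gives the
    layered schedule its convergence-speed advantage over flooding.
    """
    m, n = H.shape
    L = llr_ch.astype(float).copy()          # posterior LLRs
    R = np.zeros((m, n))                     # check-to-variable messages
    rows = [np.flatnonzero(H[c]) for c in range(m)]
    for _ in range(max_iter):
        for c in range(m):                   # process checks layer by layer
            idx = rows[c]
            Q = L[idx] - R[c, idx]           # variable-to-check messages
            sign = np.prod(np.sign(Q))
            mag = np.abs(Q)
            for k, v in enumerate(idx):      # min-sum check update excluding v
                R[c, v] = sign * np.sign(Q[k]) * np.delete(mag, k).min()
                L[v] = Q[k] + R[c, v]        # immediate posterior update
        hard = (L < 0).astype(int)
        if not ((H @ hard) % 2).any():
            break
    return hard

# toy usage with a tiny parity-check matrix and assumed channel LLRs
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
llr = np.array([2.1, -0.4, 1.3, 0.9, 1.7, -1.2])
print(layered_min_sum(H, llr))
```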
Yitao ZHANG Osamu MUTA Yoshihiko AKAIWA
The adaptive predistorter and the negative feedback system are known methods to compensate for the nonlinear distortion of a power amplifier. Although the feedback method is simple, its potential instability limits the feedback gain that can be used and hence the achievable compensation. On the other hand, the adaptive predistorter requires a long time to converge. In this paper, we propose a nonlinear distortion compensation method for a narrow-band signal in which an adaptive predistorter and negative feedback are combined. In addition, to shorten the time needed to converge to minimum nonlinear distortion, a variable step-size (VS) method is applied to the algorithm that determines the parameters of the adaptive predistorter. Computer simulations show that the proposed scheme achieves a convergence speed five times faster than that of the predistorter alone and permits a feedback-loop delay three times longer than that of a negative-feedback-only amplifier.
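As a rough illustration of predistorter adaptation with a variable step size, the sketch below trains an odd-order polynomial postdistorter in an indirect-learning fashion and copies it to the predistorter. The memoryless cubic PA model, the polynomial basis, the normalized VS-LMS rule and all parameter values are assumptions, and the combination with negative feedback proposed in the paper is not reproduced.

```python
import numpy as np

def pa(x):
    """Memoryless PA model with a mild cubic nonlinearity (illustrative only)."""
    return x - 0.1 * np.abs(x)**2 * x

def train_predistorter(x, order=5, mu0=0.1, alpha=0.97, gamma=0.05,
                       mu_min=1e-3, mu_max=0.5):
    """Indirect-learning polynomial predistorter trained with a variable
    step-size normalized LMS (a sketch under assumed models).

    A postdistorter is adapted so that w @ basis(pa(z)) ~= z; its
    coefficients are then copied to the predistorter.
    """
    n_coef = (order + 1) // 2
    def basis(u):                            # odd-order polynomial basis
        return np.array([np.abs(u)**(2 * k) * u for k in range(n_coef)])
    w = np.zeros(n_coef, dtype=complex)
    w[0] = 1.0                               # start as an identity mapping
    mu = mu0
    for u in x:
        z = u                                # first pass: PA driven directly
        y = pa(z)
        phi = basis(y)
        e = z - w @ phi                      # postdistorter output error
        mu = np.clip(alpha * mu + gamma * np.abs(e)**2, mu_min, mu_max)
        w += mu * e * np.conj(phi) / (1e-6 + np.real(phi @ np.conj(phi)))
    return w                                 # predistorter: z = w @ basis(u)

# toy usage on a random real-valued baseband signal
rng = np.random.default_rng(0)
w = train_predistorter(0.5 * rng.standard_normal(5000))
```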
To evaluate or compare the convergence speed of adaptive digital filters (ADFs) with the least mean square (LMS) algorithm, the condition numbers of the correlation matrices of the tap-input vectors are often used. In this paper, however, the comparison of the conventional fullband ADF and the subband ADF based on their condition numbers is shown to be invalid. In some cases, the over-sampled subband ADF converges faster than the fullband ADF even though it has larger condition numbers. To explain this phenomenon, an expression for the convergence behavior of the subband ADF and simulation results are provided.
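The quantity under discussion can be reproduced in a few lines of NumPy: estimate the tap-input autocorrelation sequence, form the Toeplitz correlation matrix, and take its condition number. The AR(1) colouring and the crude two-tap lowpass "subband" below are illustrative assumptions, not the subband structure analyzed in the paper.

```python
import numpy as np

def tap_corr_condition(x, num_taps):
    """Condition number of the num_taps x num_taps tap-input correlation matrix."""
    r = np.array([x[:len(x) - k] @ x[k:] / len(x) for k in range(num_taps)])
    R = np.array([[r[abs(i - j)] for j in range(num_taps)]
                  for i in range(num_taps)])
    return np.linalg.cond(R)

rng = np.random.default_rng(1)
white = rng.standard_normal(20000)
colored = np.empty_like(white)               # AR(1)-coloured fullband input
colored[0] = white[0]
for n in range(1, len(white)):
    colored[n] = 0.9 * colored[n - 1] + white[n]
# crude 2x lowpass "subband": average adjacent samples, then decimate
low = ((colored[:-1] + colored[1:]) / 2)[::2]
print(tap_corr_condition(colored, 16), tap_corr_condition(low, 8))
```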
Conventional approaches to neural network training do not consider the possibility of selecting training samples dynamically during the learning phase; the network is simply presented with the complete training set at each iteration. Learning can then become very costly for large data sets. Moreover, heavy redundancy among data samples may lead to an ill-conditioned training problem. Ill-conditioning during training causes rank deficiencies of the error and Jacobian matrices, which result in slower convergence or, in the worst case, failure of the algorithm to make progress. Rank deficiencies of these essential matrices can be avoided by an appropriate selection of training exemplars at each iteration. This article presents the underlying theoretical grounds for dynamic sample selection (DSS), a mechanism that selects a subset of the training set at each iteration. The theory is first presented for general objective functions and then for objective functions satisfying the Lipschitz continuity condition. Furthermore, implementation specifics of DSS for first-order line-search techniques are described theoretically.
The computational cost of training techniques, driven by the size of the data set, is among the most important factors in machine learning and neural networks. An oversized data set may cause rank deficiencies of the Jacobian matrix, which plays an essential role in training; the training then becomes not only computationally expensive but also ineffective. In [1] the authors introduced the theoretical grounds for dynamic sample selection, which has the potential to eliminate such rank deficiencies. This study addresses the implementation issues of dynamic sample selection based on the theoretical material presented in [1]. The authors propose a sample selection algorithm that can be incorporated into an arbitrary optimization technique. The ability of the algorithm to select a proper set of samples at each iteration of training proves very beneficial, as indicated by several experiments. Recently proposed approaches to sample selection work reasonably well if the pattern-weight ratio is close to 1, and small improvements can still be detected at pattern-weight ratios of 2 or 3. The dynamic sample selection approach presented in this article can increase the convergence speed of first-order optimization techniques used for training MLP networks even at a pattern-weight ratio (E-FP) as high as 15, and possibly higher.
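A minimal sketch of the idea, with a selection rule chosen purely for illustration (half of the subset is the currently worst-fitted samples, half is random; this is not the criterion of [1]), is given below for a toy linear-regression problem.

```python
import numpy as np

def train_with_dss(X, y, predict, grad, w0, subset_size=64, lr=0.05,
                   n_iter=200, rng=None):
    """Gradient training with a simple dynamic sample selection loop.

    At every iteration the subset is re-chosen: the hardest samples keep the
    step informative, the random part keeps the selected set representative.
    """
    rng = rng or np.random.default_rng(0)
    w = w0.copy()
    for _ in range(n_iter):
        err = np.abs(predict(X, w) - y)              # current residuals
        worst = np.argsort(err)[-subset_size // 2:]  # worst-fitted samples
        rand = rng.choice(len(X), subset_size // 2, replace=False)
        idx = np.unique(np.concatenate([worst, rand]))
        w -= lr * grad(X[idx], y[idx], w)            # step on the subset only
    return w

# toy linear-regression usage
rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ true_w + 0.1 * rng.standard_normal(2000)
predict = lambda X, w: X @ w
grad = lambda X, y, w: 2 * X.T @ (X @ w - y) / len(X)
w = train_with_dss(X, y, predict, grad, np.zeros(5))
```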
In this paper, a new structure useful for the detection of multiple sinusoids is presented. The proposed structure is based on a direct-form second-order IIR notch filter using a simplified adaptive algorithm. It is shown that the convergence characteristics of the proposed structure are much improved compared with the previously proposed structure. A cascaded adaptive notch filter using the proposed second-order section is also shown; it takes multiple sinusoids corrupted by white Gaussian noise and produces the individual sinusoids at each of its outputs. Computer simulation results are shown which confirm the theoretical predictions.
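For reference, a direct-form second-order adaptive notch filter with a simplified gradient update can be sketched as below; the transfer function, the update a <- a + mu*y(n)*x(n-1), and all parameter values are generic textbook choices, not the structure proposed in the paper.

```python
import numpy as np

def adaptive_notch(x, rho=0.95, mu=5e-4, a0=0.0):
    """Direct-form second-order adaptive IIR notch filter (illustrative sketch).

    H(z) = (1 - 2a z^-1 + z^-2) / (1 - 2*rho*a z^-1 + rho^2 z^-2);
    the coefficient a tracks cos(w0) of the dominant input sinusoid.
    """
    a = a0
    x1 = x2 = y1 = y2 = 0.0                  # input/output delay lines
    a_hist = np.empty(len(x))
    for n, xn in enumerate(x):
        y = xn - 2 * a * x1 + x2 + 2 * rho * a * y1 - rho**2 * y2
        a = np.clip(a + mu * y * x1, -1.0, 1.0)   # simplified gradient step
        x2, x1 = x1, xn
        y2, y1 = y1, y
        a_hist[n] = a
    return a_hist

# a sinusoid at 0.2*pi rad/sample in white noise;
# the coefficient a should move toward cos(0.2*pi) ~ 0.81
rng = np.random.default_rng(0)
n = np.arange(20000)
x = np.cos(0.2 * np.pi * n) + 0.1 * rng.standard_normal(len(n))
print(adaptive_notch(x)[-1], np.cos(0.2 * np.pi))
```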
In this letter, a new structure of adaptive IIR notch filter is presented. The structure is based on a direct-form realization and uses an adaptation algorithm similar to that given in Ref. (4). A quantitative analysis of the convergence properties is developed. It is shown that the proposed structure achieves superior performance compared with previously proposed designs. The results of computer simulations are presented to substantiate the analysis.