In this paper we extend hyperparameter-free sparse signal reconstruction approaches to permit high-resolution time delay estimation of spread spectrum signals and demonstrate their feasibility, in terms of both performance and computational complexity, by applying them to the ISO/IEC 24730-2.1 real-time locating system (RTLS). Numerical examples show that the sparse asymptotic minimum variance (SAMV) approach outperforms other sparse algorithms and multiple signal classification (MUSIC) regardless of the signal correlation, especially when the incoming signals are closely spaced within a Rayleigh resolution limit. The performance difference among the hyperparameter-free approaches decreases significantly as the signals become more widely separated. SAMV is sometimes strongly influenced by noise correlation, but the degrading effect of correlated noise can be mitigated through a noise-whitening process. The computational complexity of SAMV can be made feasible for practical system use by setting the power update threshold and the grid size properly, and/or via parallel implementations.
Bei ZHAO Chen CHENG Zhenguo MA Feng YU
Cross correlation is a general way to estimate the time delay of arrival (TDOA), with a computational complexity of O(n log n) using the fast Fourier transform (FFT). However, since only one spike is required for time delay estimation, the complexity can be further reduced. Guided by the Chinese Remainder Theorem (CRT), this paper presents a new approach called Co-prime Aliased Sparse FFT (CASFFT), requiring O(n^(1-1/d) log n) multiplications and O(mn) additions, where m is the smooth factor and d is the number of stages. By adjusting these parameters, it can achieve a balance between runtime and noise robustness. Furthermore, it has a clear advantage in parallelism and runtime over a large range of signal-to-noise ratio (SNR) conditions. The accuracy and feasibility of the algorithm are analyzed in theory and verified by experiment.
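As context for the cross-correlation baseline mentioned above, the following is a minimal sketch (not the CASFFT algorithm itself) of FFT-based cross-correlation TDOA estimation, the O(n log n) reference that CASFFT aims to improve on; the function name and parameters are illustrative assumptions.

```python
# Illustrative baseline only: TDOA from the peak of the FFT-based circular
# cross-correlation; this is the O(n log n) reference, not CASFFT itself.
import numpy as np

def tdoa_xcorr(x, y, fs):
    """Estimate the delay of y relative to x (in seconds) from the peak of
    the circular cross-correlation computed with the FFT."""
    n = len(x)
    X = np.fft.fft(x, n)
    Y = np.fft.fft(y, n)
    r = np.fft.ifft(X * np.conj(Y))      # circular cross-correlation sequence
    lag = int(np.argmax(np.abs(r)))      # single dominant spike -> delay
    if lag > n // 2:                     # map to a signed lag
        lag -= n
    return -lag / fs

# Hypothetical usage: y is x circularly delayed by 25 samples
fs = 1.0e6
x = np.random.randn(4096)
y = np.roll(x, 25)
print(tdoa_xcorr(x, y, fs))             # approximately 25/fs seconds
```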
Estimation of the time delay of arrival (TDOA) is important for acoustic source localization. The TDOA estimation problem is defined as finding the relative delay of the direct sound between several microphone signals. The generalized cross-correlation (GCC) method is the most frequently used approach to estimate TDOA, but it performs poorly in reverberant environments. To overcome this problem, the adaptive eigenvalue decomposition (AED) method has been developed, which estimates the room transfer function and finds the direct-path delay. However, the algorithm does not take into account the fact that the room transfer function is a sparse channel, so the estimated transfer function is sometimes too dense, resulting in failure to extract the exact direct path and its delay. In this paper, an enhanced AED algorithm is proposed that makes use of proportionate step-size control and a direct-path constraint instead of a constant step size and the L2-norm constraint. The simulation results show that the proposed algorithm has enhanced performance compared to both the conventional AED method and the phase-transform (PHAT) algorithm.
Zhixin LIU Dexiu HU Yongjun ZHAO Chengcheng LIU
Considering the obvious bias of the traditional interpolation method, a novel time delay estimation (TDE) interpolation method with sub-sample accuracy is presented in this paper. The proposed method uses a generalized extended approximation method to obtain the objective function. The optimized interpolation curve is then generated by second-order cone programming (SOCP), and the optimal TDE is finally obtained from the interpolation curve. The delay estimate of the proposed method is not forced to lie on discrete samples, and the sample points need not lie on the interpolation curve. At an acceptable computational complexity, computer simulation results clearly indicate that the proposed method is less biased and outperforms the other interpolation algorithms in terms of estimation accuracy.
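For context, the sketch below shows the conventional three-point parabolic interpolation of a cross-correlation peak, i.e., the kind of sub-sample baseline whose bias the proposed SOCP-based method targets; it is not the proposed method, and the function name is illustrative.

```python
# Conventional three-point parabolic interpolation of a cross-correlation peak;
# a standard sub-sample baseline, not the SOCP-based method described above.
import numpy as np

def subsample_peak(r):
    """Return the peak location of the cross-correlation r with sub-sample
    resolution using a parabola fitted to the three samples around the max."""
    k = int(np.argmax(r))
    if k == 0 or k == len(r) - 1:
        return float(k)                                # no neighbours to fit
    y0, y1, y2 = r[k - 1], r[k], r[k + 1]
    delta = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)     # vertex of the parabola
    return k + delta                                   # fractional-sample peak
```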
Traditional time delay estimation methods usually assume, implicitly, that the observed signals either propagate only along the direct path or are received coherently. In practice, multipath propagation and incoherent reception always exist simultaneously. In response to this situation, joint maximum likelihood (ML) estimation of the multipath delays and the system error is proposed, and estimation of the number of multipath components is also considered for the specific incoherent signal model. Furthermore, an algorithm based on Gibbs sampling is developed to solve the multi-dimensional nonlinear ML estimation. The efficiency of the proposed estimator is demonstrated by simulation results.
Bo WU Yan WANG Xiuying CAO Pengcheng ZHU
Attenuated and delayed versions of a pulse signal overlap in multipath propagation. Previous algorithms can resolve them only if the signal sampling is ideal, and fail to resolve the overlapping components under non-ideal sampling. In this paper, we propose a novel method which resolves general non-ideally sampled pulse signals in the time domain via Taylor Series Expansion (TSE) and estimates the precise time delays and amplitudes of the multipath signals. In combination with the CLEAN algorithm, the overlapped pulse signal parameters are estimated one by one through an iterative procedure. Simulation results verify the effectiveness of the proposed method.
The generalized cross-correlation (GCC) method is most commonly used for time delay estimation (TDE). However, the GCC method can produce false peak errors (FPEs), especially at low signal-to-noise ratio (SNR). These FPEs significantly degrade TDE, since the estimation error, i.e., the difference between the true and estimated time delays, then exceeds at least one sampling period. This paper introduces an algorithm that estimates two peaks from two cross-correlation functions computed over three signals: a reference signal, a delayed signal, and a delayed signal with an additional time delay of half a sampling period. A peak selection algorithm is also proposed to identify which peak is closer to the true time delay using subsample TDE methods. Simulations compare the algorithms' performance for varying amounts of noise and delay. The proposed algorithms display better performance in terms of the probability of integer TDE errors, as well as the mean and standard deviation of the absolute time delay estimation errors.
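The following is a hedged sketch of the baseline GCC (here with the common PHAT weighting, an assumption not stated in the abstract) whose single-peak picking gives rise to the false peak errors discussed above; the proposed two-correlation, half-sample-shift peak selection is not reproduced.

```python
# Baseline GCC with PHAT weighting (an assumed choice); picking a wrong peak
# of cc at low SNR is exactly the false peak error (FPE) discussed above.
import numpy as np

def gcc_phat_delay(x, y, fs):
    """Integer-sample delay of y relative to x from the GCC-PHAT peak."""
    n = len(x) + len(y)                  # zero-pad to avoid circular wrap
    X = np.fft.rfft(x, n)
    Y = np.fft.rfft(y, n)
    gxy = X * np.conj(Y)
    gxy /= np.abs(gxy) + 1e-12           # PHAT weighting keeps phase only
    cc = np.fft.irfft(gxy, n)
    lag = int(np.argmax(np.abs(cc)))     # a wrong maximum here is an FPE
    if lag > n // 2:                     # map to a signed lag
        lag -= n
    return -lag / fs
```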
In this paper, time-difference estimation of filtered random signals passed through multipath channels is discussed. First, we reformulate the approach based on innovation-rate sampling (IRS) to fit our random signal model, then use the IRS results to drive the nonlinear least-squares (NLS) minimization algorithm. This hybrid approach (referred to as the IRS-NLS method) provides consistent estimates even for cases with sub-Nyquist sampling, assuming the use of compactly supported sampling kernels that satisfy the recently developed non-aliasing condition in the frequency domain. Numerical simulations show that the proposed IRS-NLS method improves performance over the straightforward IRS method and provides approximately the same performance as the NLS method at a reduced sampling rate, even for closely spaced time delays. Given a fixed observation time, this enables a significant reduction in the required number of samples while maintaining the same level of estimation performance.
Seong-Hyun JANG Yeong-Sam KIM Sang-Hoon YOON Jong-Wha CHONG
In this letter, we analyze the effect of the size of the observed data on the performance of time delay estimation (TDE) in the chirp spread spectrum (CSS) system. By adjusting the size of the observed data, we reduce the effect of DC offsets, which would otherwise degrade the performance of CSS-based TDE, and we optimize the TDE performance in the CSS system. Finally, we derive the optimal size of the observed data for TDE in the CSS system.
Kenneth Wing Kin LUI Hing Cheung SO
In this Letter, the problem of estimating the time-difference-of-arrival between signals received at two spatially separated sensors is addressed. By taking the discrete Fourier transform of the sensor outputs, time delay estimation corresponds to finding the frequency of a noisy sinusoid with time-varying amplitude. The generalized weighted linear predictor is utilized to estimate the time delay, and its estimation accuracy is shown to attain the Cramér-Rao lower bound.
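The sketch below illustrates only the underlying reduction, namely that a time delay appears as a linear phase (a complex sinusoid across frequency) in the cross-spectrum; it uses a plain least-squares phase-slope fit rather than the generalized weighted linear predictor of the letter, and the bin selection is an illustrative assumption.

```python
# Illustrative reduction only: the delay shows up as a linear phase across
# frequency in the cross-spectrum, so it can be read off as a phase slope.
# A plain least-squares fit is used here, not the generalized weighted
# linear predictor of the letter.
import numpy as np

def delay_from_phase_slope(x, y, fs):
    n = len(x)
    X = np.fft.rfft(x)
    Y = np.fft.rfft(y)
    cross = X * np.conj(Y)                    # phase(cross) = 2*pi*f*tau
    phase = np.unwrap(np.angle(cross))
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    k = slice(1, n // 4)                      # keep well-conditioned low bins
    slope = np.polyfit(f[k], phase[k], 1)[0]  # fit phase = 2*pi*f*tau + const
    return slope / (2.0 * np.pi)              # delay estimate in seconds
```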
Yanxin YAO Qishan ZHANG Dongkai YANG
A method is proposed for estimating the code and carrier phase parameters of GNSS reflected signals in low signal-to-noise ratio (SNR) environments. Simulation results show that the multipath impact on code and carrier with a delay of 0.022 C/A chips can be estimated at 0 dB SNR with a 46 MHz sampling rate.
Jang Sub KIM Ho Jin SHIN Dong Ryeol SHIN
In this paper, a multiuser receiver based on a Gaussian Mixture Sigma Point Particle Filter (GMSPPF), which can be used for joint channel coefficient estimation and time delay tracking in CDMA communication systems, is introduced. The proposed algorithm has better estimation performance than either the Extended Kalman Filter (EKF) or the Particle Filter (PF). The Cramer-Rao Lower Bound (CRLB) is derived for the estimator, and the simulation results demonstrate that it is almost completely near-far resistant. For this reason, it is believed that the proposed estimator can replace well-known filters such as the EKF or PF.
Chi-Hui HUANG Shyh-Neng LIN Shiunn-Jang CHERN Jiun-Je JIAN
The convergence speed of conventional adaptive LMS algorithms for time delay estimation (TDE) is highly dependent on the spectral distribution of the desired random source signals of interest, so the performance of TDE might degrade dramatically. To solve this problem, in this letter, a DCT-transform-domain constrained adaptive normalized-LMS filtering scheme, referred to as the adaptive constrained DCT-LMS algorithm, is devised for TDE. Computer simulation results verify that the proposed scheme achieves the desired performance for input random signals with different spectral distributions; it outperforms the unconstrained DCT-LMS and time-domain constrained adaptive LMS algorithms.
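For reference, below is a minimal sketch of the conventional time-domain adaptive (normalized) LMS TDE baseline, in which an FIR filter is adapted and its dominant tap marks the integer delay; the DCT-transform-domain constrained scheme proposed above is not reproduced, and the tap count and step size are illustrative.

```python
# Conventional time-domain adaptive (normalized) LMS TDE baseline; the
# DCT-domain constrained scheme described above is not reproduced here.
import numpy as np

def lms_tde(x, y, taps=64, mu=0.5):
    """Adapt an FIR filter w so that (w * x) approximates y; the index of the
    dominant tap is the integer-sample delay estimate of y relative to x."""
    w = np.zeros(taps)
    for n in range(taps - 1, len(x)):
        xn = x[n - taps + 1:n + 1][::-1]        # xn[k] = x[n - k]
        e = y[n] - w @ xn                       # a-priori error
        w += mu * e * xn / (xn @ xn + 1e-12)    # normalized-LMS update
    return int(np.argmax(np.abs(w)))            # dominant tap index = delay
```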
A new explicitly adaptive time delay estimation (EATDE) algorithm is proposed for estimating a time-varying delay parameter. The proposed method is based on the Haar wavelet transform of cross-correlations. The proposed algorithm can be viewed as a gradient-based optimization of lowpass-filtered cross-correlations, but requires less computational power. The algorithm exhibits a global convergence property for wide-band signals with uncorrelated noises. A convergence analysis including the mean behavior, mean-square-error behavior, and steady-state error of the delay estimate is given. Simulation results are also provided to demonstrate the performance of the proposed algorithm.
Wei CHEN Erry GUNAWAN Kah Chan TEH
The space-time array manifold model is usually used in a fast fading channel to estimate delay for radio location. The existing additive white Gaussian noise (AWGN) estimation error model significantly overestimates the delay estimation error. In this paper, we model the estimation error of the space-time array manifold channel impulse response (CIR) matrix as a correlated AWGN matrix, and the resulting model is shown to be closer to the estimation error of practical systems than the existing model.
This paper addresses the estimation of the time delay between two spatially separated noisy signals by system identification modeling, with the input and output corrupted by additive white Gaussian noise. The proposed method is based on a modified adaptive Butler-Cantoni equalizer that decouples noise variance estimation from channel estimation. The bias in the time delay estimates induced by input noise is reduced by an IIR whitening filter whose coefficients are found by the Burg algorithm. For step time-variant delays, a dual-mode operation scheme is adopted in which we define a normal operating (tracking) mode and an interrupt operating (optimization) mode. In the tracking mode, only a few coefficients of the impulse response vector are monitored through L1-normed finite forward difference tracking, while in the optimization mode, the time delay is optimized. Simulation results confirm the superiority of the proposed approach at low signal-to-noise ratios.
Feng-Xiang GE Qun WAN Jian YANG Ying-Ning PENG
The problem of super-resolution time delay estimation of real stationary signals is addressed in this paper. The time delay estimation is first converted into a frequency estimation problem. Then a MUSIC-type algorithm is proposed to estimate the resulting frequency from single-experiment data, which not only avoids mathematical model mismatch but also retains the advantages of subspace-based methods. The mean square errors (MSEs) of the time delay estimates of the MUSIC-type method for varying signal-to-noise ratio (SNR) and separation of the two received signal components are shown to approximately coincide with the corresponding Cramer-Rao bound (CRB). Finally, a comparison between the MUSIC-type method and other conventional methods is presented to show the advantages of the proposed method.
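The sketch below shows standard single-record MUSIC frequency estimation using a Hankel (spatial-smoothing) data matrix, i.e., the generic subspace step that the converted frequency-estimation problem relies on; it is not the paper's MUSIC-type variant for real stationary signals, and the subvector length and grid size are illustrative assumptions.

```python
# Standard single-record MUSIC frequency estimation via a Hankel data matrix;
# a generic subspace baseline, not the paper's MUSIC-type variant.
import numpy as np

def music_spectrum(x, p, m=None, grid=4096):
    """Return (frequency grid, MUSIC pseudospectrum) for p complex
    exponentials in the record x, using subvectors of length m."""
    n = len(x)
    m = m or n // 2
    # Hankel data matrix of overlapping length-m snapshots
    H = np.lib.stride_tricks.sliding_window_view(x, m).T      # m x (n-m+1)
    R = H @ H.conj().T / H.shape[1]                           # sample covariance
    w, V = np.linalg.eigh(R)                                  # ascending order
    En = V[:, : m - p]                                        # noise subspace
    f = np.linspace(-0.5, 0.5, grid, endpoint=False)          # normalized freq
    A = np.exp(2j * np.pi * np.outer(np.arange(m), f))        # steering vectors
    P = 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)    # pseudospectrum
    return f, P                                               # peaks of P -> freqs
```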
In this paper, we propose a new approach to the adaptive MLSE receiver, based on estimating the delays of the paths in the fading channel. The path delays are estimated using the known training sequence, and based on this estimation the proposed MLSE directly tracks the variations of each path in the frequency-selective channel rather than the T-spaced equivalent channel. It is shown through computer simulations that the proposed MLSE can improve the performance of conventional MLSE receivers when the number of paths is small.
Nozomu TOGAWA Yoshiharu KATAOKA Yuichiro MIYAOKA Masao YANAGISAWA Tatsuo OHTSUKI
Hardware/software partitioning is one of the key processes in a hardware/software cosynthesis system for digital signal processor cores. In hardware/software partitioning, area and delay estimation of a processor core plays an important role, since the partitioning process must determine which part of a processor core should be realized by hardware units and which part should be realized by a sequence of instructions, based on the execution time of an input application program and the area of the synthesized processor core. This paper proposes area and delay estimation equations for digital signal processor cores. For area estimation, we show that the total area of a processor core can be derived as the sum of the area of the processor kernel and the area of the additional hardware units. The area of the processor kernel is obtained mainly from the minimum kernel area plus overheads for added hardware units and registers, while the area of a hardware unit is determined mainly by its type and operation bit width. For delay estimation, we show that the critical path delay of a processor core can be derived from the delay of the hardware unit lying on the critical path of the core. Experimental results demonstrate that the area estimation errors are less than 2% and the delay estimation errors are less than 2 ns when comparing estimated area and delay with logic-synthesized area and delay.
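A minimal sketch of the additive area model and max-based critical-path delay model described above is given below; the unit names and numbers are placeholders, not the paper's actual estimation equations or coefficients.

```python
# Minimal sketch of the additive area model and critical-path delay model
# described above; unit names and figures are placeholders, not the paper's
# actual estimation equations or coefficients.

def estimate_area(kernel_area, kernel_overhead, unit_areas):
    """Core area = kernel area + overhead for added units/registers
    + sum of the areas of the additional hardware units."""
    return kernel_area + kernel_overhead + sum(unit_areas.values())

def estimate_delay(unit_delays, critical_path):
    """Core critical-path delay = delay of the slowest hardware unit
    lying on the critical path."""
    return max(unit_delays[u] for u in critical_path)

# Hypothetical example (placeholder units and values)
units = {"mac16": 1.8, "alu32": 1.2, "barrel_shift": 0.9}    # mm^2
delays = {"mac16": 7.5, "alu32": 4.0, "barrel_shift": 3.2}   # ns
print(estimate_area(12.0, 1.5, units), "mm^2")
print(estimate_delay(delays, ["mac16", "alu32"]), "ns")
```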
Sirirat TREETASANATAVORN Toshiyuki YOSHIDA Yoshinori SAKAI
In this paper, we propose an idea for intramedia synchronization control using a method of end-to-end delay monitoring to estimate future delay in a delay compensation protocol. The value estimated by Kalman filtering at the presentation site is used for feedback control to adjust the retrieval schedule at the source according to the network conditions. The proposed approach is applicable to real-time retrieval applications where 'tightness' of temporal synchronization is required. The retrieval schedule adjustment is achieved by two resynchronization mechanisms: retrieval offset adjustment and data unit skipping. The retrieval offset adjustment is performed along with a buffer level check to compensate for changes in delay jitter, while data unit skipping is performed to accelerate recovery from unsynchronized periods under severe conditions. Simulations are performed to verify the effectiveness of the proposed scheme. It is found that, with a limited buffer size and tolerable latency in the initial presentation, using a more efficient delay estimator in the proposed resynchronization scheme improves the synchronization performance, particularly under critically congested network conditions. In the study, Kalman filtering is shown to perform better than the existing estimation methods that use the previously measured jitter or its average value as the estimate.
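As an illustration of the estimator class used above, the following is a minimal scalar Kalman filter sketch for tracking end-to-end delay from noisy per-packet measurements; the random-walk delay model and the noise variances are assumptions, not the paper's actual filter design.

```python
# Minimal scalar Kalman filter for tracking end-to-end delay from noisy
# measurements; the random-walk model and variances q, r are assumptions.
import numpy as np

def kalman_delay_track(measured_delays, q=1e-4, r=1e-2):
    """Filter a sequence of measured delays; q is the assumed process-noise
    variance (random-walk drift), r the measurement-noise variance."""
    d_hat, p = measured_delays[0], 1.0
    estimates = []
    for z in measured_delays:
        p = p + q                          # predict: delay drifts as a random walk
        k = p / (p + r)                    # Kalman gain
        d_hat = d_hat + k * (z - d_hat)    # update with the new measurement
        p = (1.0 - k) * p                  # updated error variance
        estimates.append(d_hat)
    return np.array(estimates)
```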