Junya KIYOHARA Tsutomu KAWABATA
We study the Lempel-Ziv-Yokoo algorithm [1, Algorithm 4] for universal data compression. In this paper, we give a simpler implementation of the Lempel-Ziv-Yokoo algorithm than the original one [1, Algorithm 4] and show its asymptotic optimality for a stationary ergodic source.
Mohammad M. RASHID Tsutomu KAWABATA
Prediction of the actual symbol probability is crucial for statistical data compression that uses an arithmetic coder. The Krichevsky-Trofimov (KT) estimator has been a standard predictor, applied in the CTW and FWCTW methods. However, the KT estimator performs poorly when non-occurring symbols appear. To rectify this, we proposed a zero-redundancy estimator, in particular one with a finite window (Rashid and Kawabata, ISIT 2003), for non-stationary sources. In this paper, we analyze the zero-redundancy estimators in the case of a Markovian source and give an asymptotic evaluation of the redundancy. We show that one of the estimators has a per-symbol redundancy given by one half of the dimension of the positive parameters divided by the window size, when the window size is large.
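The KT estimator mentioned in this abstract has a simple closed form. As a minimal sketch (the function name and interface are ours, not from the paper), the standard add-1/2 predictive rule is:

```python
def kt_probability(count_x, total, alphabet_size=2):
    """Krichevsky-Trofimov (add-1/2) estimate of the probability that the
    next symbol equals x, given that x occurred count_x times among the
    first total symbols.  This is the Dirichlet(1/2, ..., 1/2) predictive
    rule; the default alphabet is binary."""
    return (count_x + 0.5) / (total + alphabet_size / 2)
```

Note that even for a symbol that never occurs, the estimate 0.5/(total + 1) stays strictly positive, so the KT rule always reserves probability mass for unseen symbols; this is the source of the extra redundancy that a zero-redundancy estimator is designed to remove.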
Tsutomu KAWABATA Frans M. J. WILLEMS
We propose a variation of the Context Tree Weighting algorithm for tree sources, modified so that the growth of the context resembles Lempel-Ziv parsing. We analyze this algorithm, give a concise upper bound on the individual redundancy for any tree source, and prove the asymptotic optimality of the data compression rate for any stationary and ergodic source.
We show that the permanent of an m × n rectangular matrix can be computed with O(n² m 3^m) multiplications and additions. Asymptotically, this is better than straightforward extensions of the best known algorithms for the permanent of a square matrix when m/n ≤ log₃ 2 and n → ∞.
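For reference, the permanent of a rectangular matrix can be defined as a sum over injective row-to-column assignments. The sketch below is a brute-force definition-level computation (our own naming), not the faster algorithm of the paper:

```python
from itertools import permutations
from math import prod

def rectangular_permanent(A):
    """Permanent of an m x n matrix with m <= n: the sum, over all
    one-to-one assignments of the m rows to distinct columns, of the
    product of the selected entries.  Brute force with n!/(n-m)! terms,
    for illustration only."""
    m, n = len(A), len(A[0])
    return sum(prod(A[i][cols[i]] for i in range(m))
               for cols in permutations(range(n), m))
```

For a square matrix this reduces to the usual permanent, e.g. per([[1,2],[3,4]]) = 1·4 + 2·3 = 10.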
Ziv-Lempel incremental parsing [1] is a fundamental algorithm for lossless data compression. There is a simple enumerative implementation [7] that preserves a duality between the encoder and the decoder. However, because of its compactness, the implementation, when combined with a complete integer code, allows only input sequences whose lengths are consistent with the parsing boundaries. In this letter, we propose a simple additional mechanism for post-processing a binary file of arbitrary length, provided the file punctuation is externally managed.
The expected lengths of the parsed segments obtained by applying the Lempel-Ziv incremental parsing algorithm to an i.i.d. source satisfy simple recurrence relations. By extracting the combinatorial essence of the previous proof, we obtain a simpler derivation.
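The incremental parsing rule itself is short: each new phrase is the shortest prefix of the remaining input that has not yet appeared as a phrase. A minimal sketch (our own function name):

```python
def lz78_parse(seq):
    """Ziv-Lempel incremental (LZ78) parsing: scan the input and cut a
    phrase as soon as the accumulated string has not been seen before.
    The final phrase may be an incomplete repeat of an earlier one."""
    phrases, seen = [], set()
    current = ""
    for symbol in seq:
        current += symbol
        if current not in seen:
            seen.add(current)
            phrases.append(current)
            current = ""
    if current:
        phrases.append(current)  # trailing, possibly incomplete, phrase
    return phrases
```

For example, "aababbb" parses into the phrases a, ab, abb, b; the segment lengths of such parsings are the quantities whose expectations the abstract refers to.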
Rate-distortion theory for points distributed with uniform density (a Poisson point process) is studied. The per-point rate-distortion function Rn(D) for n neighboring points is introduced, and the function R∞(D) is defined as the limit of Rn(D) as n grows infinitely large. A Shannon lower bound for the rate-distortion function is obtained, and it is shown that the rate-distortion function for the interval length between neighboring points is a better lower bound. The behavior of Dmax(n), the value of D at which Rn(D) first reaches zero, is studied. A coding scheme that constitutes an upper bound on R∞(D) is evaluated, and it is shown that the rate-distortion function of the corresponding Wiener process is a better upper bound for large distortion. The coding theorem for our problem is also discussed.
We consider the optimal average cost of a variable-length source code, averaged with a given probability distribution over the source messages. The problem is discussed in Csiszár and Körner's book. In the special case of a binary alphabet, we find an upper bound on the optimal cost minus an ideal cost, where the ideal cost is the entropy of the source divided by the unique scalar that turns the negated costs into logarithms of probabilities. Our bound is better than the one given in the book.
A multi-input erasure channel is defined as a J × (J+1) discrete memoryless channel (J inputs, J+1 outputs), for which we study a capacity formula via Muroga's method. We first give a simpler capacity formula for the multi-input erasure channel with no cross probability. Next, we give an upper bound on the capacity for the general case. Finally, we remark that the upper bound is actually the capacity when the cross probability is small.
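In the cross-probability-free case, each of the J input symbols is either delivered intact or replaced by the single erasure symbol with probability eps, and the capacity has the standard closed form (1 − eps)·log₂ J. The sketch below (our own naming; not necessarily the formula derived in the paper) checks this closed form against a direct mutual-information computation for the uniform input:

```python
from math import log2

def erasure_capacity(J, eps):
    """Standard closed-form capacity (bits per use) of a J-input erasure
    channel with no cross probability, achieved by the uniform input."""
    return (1 - eps) * log2(J)

def mutual_information_uniform(J, eps):
    """I(X;Y) for uniform input over J symbols, computed directly from
    the transition probabilities, as a numerical cross-check."""
    # Output distribution: each symbol w.p. (1-eps)/J, erasure w.p. eps.
    h_y = 0.0
    if eps < 1:
        h_y -= J * ((1 - eps) / J) * log2((1 - eps) / J)
    if eps > 0:
        h_y -= eps * log2(eps)
    # Given X, the output is X or the erasure symbol: H(Y|X) = h(eps).
    h_y_given_x = 0.0
    if 0 < eps < 1:
        h_y_given_x = -eps * log2(eps) - (1 - eps) * log2(1 - eps)
    return h_y - h_y_given_x
```

The erasure entropy h(eps) cancels between H(Y) and H(Y|X), which is why the closed form contains no entropy term.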
Keigo TAKEUCHI Toshiyuki TANAKA Tsutomu KAWABATA
Kudekar et al. proved an interesting result for low-density parity-check (LDPC) convolutional codes: the belief-propagation (BP) threshold is boosted to the maximum-a-posteriori (MAP) threshold by spatial coupling. Furthermore, the authors showed that the BP threshold for code-division multiple-access (CDMA) systems is improved up to the optimal one via spatial coupling. In this letter, a phenomenological model for elucidating the essence of these phenomena, called threshold improvement, is proposed. The main result implies that threshold improvement occurs for spatially coupled general graphical models.
The uniform switching system is the family of non-linear n × m binary arrays constrained so that every column is a constant-weight-k vector and every row has weight divisible by p > 0. For this system, we present a cardinality formula and an enumerative algorithm.
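The constraint set is concrete enough to enumerate directly for tiny parameters. A brute-force counting sketch (our own naming; exponential in m, so an illustration only, not the paper's enumerative algorithm):

```python
from itertools import combinations, product

def switching_system_count(n, m, k, p):
    """Cardinality of the uniform switching system by exhaustive search:
    n x m binary arrays whose every column has weight exactly k and whose
    every row has weight divisible by p."""
    # Represent a column by the set of row indices holding a 1.
    cols = list(combinations(range(n), k))
    count = 0
    for choice in product(cols, repeat=m):
        row_weights = [sum(r in col for col in choice) for r in range(n)]
        if all(w % p == 0 for w in row_weights):
            count += 1
    return count
```

As a sanity check, with p = 1 the row constraint is vacuous and the count reduces to C(n, k)^m.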
Shuhei HORIO Keigo TAKEUCHI Tsutomu KAWABATA
For low-density parity-check codes, spatial coupling was proved to boost the performance of iterative decoding up to the optimal performance. As an application of spatial coupling, in this paper, bit-interleaved coded modulation (BICM) with spatially coupled (SC) interleaving, called SC-BICM, is considered to improve the performance of iterative channel estimation and decoding for block-fading channels. In the iterative receiver, feedback from the soft-in soft-out decoder is utilized to refine the initial channel estimates in linear minimum mean-squared error (LMMSE) channel estimation. Density evolution in the infinite-code-length limit implies that SC-BICM allows the receiver to attain accurate channel estimates even when the pilot overhead for training is negligibly small. Furthermore, numerical simulations show that SC-BICM can provide a steeper reduction in bit error rate than conventional BICM, as well as a significant improvement in the so-called waterfall performance for high-rate systems.