Tetsunao MATSUTA Tomohiko UYEMATSU
In this paper, we deal with fixed-length lossy compression, where a fixed-length sequence emitted from the information source is encoded into a codeword, and the source sequence is reproduced from the codeword with a certain distortion. We give lower and upper bounds on the minimum number of codewords such that the probability of exceeding a given distortion level is less than a given probability. These bounds are characterized by using the α-mutual information of order infinity. Further, for i.i.d. binary sources, we provide numerical examples of tight upper bounds which are computable in polynomial time in the blocklength.
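For reference, one common definition of the α-mutual information (Sibson's proposal) and its order-∞ limit are shown below; this is only an illustrative formalization, and the exact variant used in the paper may differ.

I_\alpha(X;Y) = \frac{\alpha}{\alpha-1}\log\sum_{y}\Bigl(\sum_{x}P_X(x)\,P_{Y|X}(y|x)^{\alpha}\Bigr)^{1/\alpha},
\qquad
I_\infty(X;Y) = \log\sum_{y}\max_{x:\,P_X(x)>0}P_{Y|X}(y|x).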
This paper presents two different algorithms for random number generation. One algorithm generates a random sequence with an arbitrary distribution from a sequence of pure random numbers, i.e., a sequence with a uniform distribution. The other algorithm generates a sequence of pure random numbers from a sequence emitted by a given i.i.d. source. Both algorithms can be regarded as implementations of the interval algorithm using integer arithmetic with limited precision. We analyze the approximation error measured by the variational distance between the probability distributions of the desired random sequence and the output sequence generated by the algorithms. Further, we give bounds on the expected length of the input sequence per output symbol, and compare it with that of the original interval algorithm.
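For readers unfamiliar with the interval algorithm, the following is a minimal sketch of its basic form for generating a single symbol with a target distribution from fair coin flips, using exact rational arithmetic rather than the limited-precision integer arithmetic analyzed in the paper; the function and variable names are illustrative only.

from fractions import Fraction
import random

def interval_algorithm(target_probs, coin=lambda: random.randint(0, 1)):
    """Generate one symbol with distribution target_probs from fair coin flips.

    Sketch of the basic interval algorithm: the target interval [0,1) is
    partitioned according to target_probs, and the source interval is
    halved by successive coin flips until it lies inside one subinterval.
    """
    # Cumulative boundaries of the target partition.
    cum = [Fraction(0)]
    for p in target_probs:
        cum.append(cum[-1] + Fraction(p))
    lo, hi = Fraction(0), Fraction(1)  # current source interval
    while True:
        # If [lo, hi) is contained in some target subinterval, output its index.
        for j in range(len(target_probs)):
            if cum[j] <= lo and hi <= cum[j + 1]:
                return j
        # Otherwise, refine the source interval with one more fair coin flip.
        mid = (lo + hi) / 2
        if coin() == 0:
            hi = mid
        else:
            lo = mid

# Example: generate one symbol with distribution (1/2, 1/4, 1/4).
print(interval_algorithm([Fraction(1, 2), Fraction(1, 4), Fraction(1, 4)]))

With exact arithmetic the output distribution is exact; the algorithms in the paper instead quantify, via the variational distance, the error introduced when this arithmetic is restricted to limited precision.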
Takayuki FUKATANI Ryutaroh MATSUMOTO Tomohiko UYEMATSU
We propose the use of QR factorization with sorting and Dijkstra's algorithm to decrease the computational complexity of the sphere decoder used for ML detection of signals over the multi-antenna fading channel. QR factorization with sorting decreases the complexity of the search part of the decoder at the cost of a small increase in the complexity of the preprocessing part. Dijkstra's algorithm decreases the complexity of the search part of the decoder at the cost of increased storage complexity. Computer simulations demonstrate that the proposed methods significantly reduce the complexity of the decoder.
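As an illustration of the preprocessing step, the following is a minimal sketch of QR factorization with column sorting (in the style of sorted QR decomposition) for a real-valued channel matrix; this is an assumed, simplified formulation, and all names are illustrative.

import numpy as np

def sorted_qr(H):
    """QR factorization with column sorting (sorted-QR-style preprocessing sketch).

    At each step, the remaining column with the smallest residual norm is
    processed first, which tends to improve the detection order used by the
    subsequent sphere search.  Returns Q, R and the column permutation.
    """
    H = np.asarray(H, dtype=float)
    m, n = H.shape
    Q = H.copy()
    R = np.zeros((n, n))
    perm = np.arange(n)
    for i in range(n):
        # Pick the remaining column with minimum residual norm and swap it into place.
        norms = np.sum(Q[:, i:] ** 2, axis=0)
        k = i + int(np.argmin(norms))
        Q[:, [i, k]] = Q[:, [k, i]]
        R[:, [i, k]] = R[:, [k, i]]
        perm[[i, k]] = perm[[k, i]]
        # Modified Gram-Schmidt step.
        R[i, i] = np.linalg.norm(Q[:, i])
        Q[:, i] /= R[i, i]
        for j in range(i + 1, n):
            R[i, j] = Q[:, i] @ Q[:, j]
            Q[:, j] -= R[i, j] * Q[:, i]
    return Q, R, perm

# Example: the permuted channel matrix satisfies H[:, perm] = Q R.
H = np.random.randn(4, 4)
Q, R, perm = sorted_qr(H)
print(np.allclose(H[:, perm], Q @ R))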
Tomohiko UYEMATSU Junya KAGA Eiji OKAMOTO
This paper investigates the error correcting capabilities of concatenated codes employing algebraic geometry codes as outer codes and time-varying randomly selected inner codes, used on discrete memoryless channels with maximum likelihood decoding. It is proved that Gallager's random coding error exponent can be obtained for all rates by such codes. Further, it is clarified that an error exponent arbitrarily close to Gallager's can be obtained for almost all random selections of inner codes with a properly chosen code length, provided that the length of the outer code is sufficiently large. For a class of regular channels, the result is also valid for linear concatenated codes, and Gallager's expurgated error exponent can be asymptotically obtained for all rates.
Masaaki KATAYAMA Tomohiko UYEMATSU
Tetsunao MATSUTA Tomohiko UYEMATSU
Weissman introduced a coding problem for channels with action-dependent states. In this coding problem, there are two encoders and a decoder. One encoder outputs an action that affects the state of the channel. Then, the other encoder, using the channel state, encodes the message into a codeword and transmits it over the channel. The decoder receives a noisy observation of the codeword and reconstructs the message. In this paper, we show an exponential error bound for channels with action-dependent states based on the random coding argument.
Tetsunao MATSUTA Tomohiko UYEMATSU
In this paper, we consider source coding with side information that is partially used at the decoder through a codeword. We assume that there exists a relative delay (or gap) in the correlation between the source sequence and the side information. We also assume that the delay is unknown but that the maximum possible delay is known to the two encoders and the decoder, where we allow this maximum to vary with the block length. For this source coding problem, we give an inner bound and an outer bound on the achievable rate region, where the achievable rate region is the set of rate pairs of the encoders such that the decoding error probability vanishes as the block length tends to infinity. Furthermore, we clarify that the inner bound coincides with the outer bound when the maximum delay for the block length converges to a constant.
Masashi NAITO Shun WATANABE Ryutaroh MATSUMOTO Tomohiko UYEMATSU
We consider the problem of secret key agreement in Gaussian Maurer's model. In this model, the legitimate receivers, Alice and Bob, and a wire-tapper, Eve, receive signals randomly generated by a satellite through three independent memoryless Gaussian channels, respectively. Then Alice and Bob generate a common secret key from their received signals. In this model, we propose a protocol for generating a common secret key by using the results of soft-decision of Alice and Bob's received signals. Then, we calculate a lower bound on the secret key rate of our proposed protocol. Comparing it with a protocol that uses only hard-decision, we find that a higher rate is obtained by our protocol.
Kouichi YAMAZAKI Osamu HIROTA Masao NAKAGAWA Tomohiko UYEMATSU Masanori OHYA
It is shown that error correcting codes improve an essential performance limitation of photon communications with energy loss. Coded photon signals allow a loss of about 13 dB while keeping the advantage of photon number state signals, whereas uncoded signals allow only about 7 dB. Furthermore, the necessity of controlling the weight distribution of codewords is discussed.
Tetsunao MATSUTA Tomohiko UYEMATSU
We normally hold a lot of confidential information in hard disk drives and solid-state drives. When we want to erase such information to prevent leakage, we have to overwrite the sequence of information with a sequence of symbols independent of the information. Overwriting is needed only at positions where the overwriting symbol differs from the original symbol. Thus, the cost of overwriting, such as the number of overwritten symbols required to erase information, is important. In this paper, we clarify the minimum cost, such as the minimum number of overwrites, needed to erase information under weak and strong independence criteria. The former (resp. latter) criterion requires that the mutual information between the original sequence and the overwritten sequence, normalized (resp. not normalized) by the length of the sequences, be less than a given desired value.
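As a toy illustration of the cost counted this way, the number of overwrites for a single pair of sequences is simply the number of positions at which they differ; the helper below is hypothetical and not taken from the paper.

def overwrite_cost(original, overwriting):
    """Number of positions that must actually be rewritten, i.e., the
    Hamming distance between the original and the overwriting sequence."""
    assert len(original) == len(overwriting)
    return sum(a != b for a, b in zip(original, overwriting))

# Example: only 2 of the 6 positions need to be physically overwritten.
print(overwrite_cost("010110", "011100"))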
Tetsunao MATSUTA Tomohiko UYEMATSU Ryutaroh MATSUMOTO
Low-density parity-check (LDPC) codes have become very popular in channel coding, since they can achieve performance close to that of maximum-likelihood (ML) decoding with complexity linear in the block length. Recently, Muramatsu et al. proposed a code using LDPC matrices for Slepian-Wolf source coding, and showed that their code can achieve any point in the achievable rate region of Slepian-Wolf source coding. However, since they employed ML decoding, their decoder needs to know the probability distribution of the source. Hence, it is an open problem whether there exists a universal code using LDPC matrices, where a universal code means one whose error probability vanishes as the block length tends to infinity for all sources whose achievable rate region contains the rate pair of the encoders. In this paper, we show the existence of a universal Slepian-Wolf source code using LDPC matrices for stationary memoryless sources.
This paper clarifies two variable-to-fixed length codes which achieve the optimum large-deviations performance of the empirical compression ratio. One is a Lempel-Ziv code with a fixed number of phrases, and the other is an arithmetic code with a fixed codeword length. It is shown that the Lempel-Ziv code is asymptotically optimum in the above sense for the class of finite-alphabet finite-state sources, and that the arithmetic code is asymptotically optimum for the class of finite-alphabet unifilar sources.
Kouya TOCHIKUBO Tomohiko UYEMATSU Ryutaroh MATSUMOTO
We propose efficient secret sharing schemes realizing general access structures. Our proposed schemes are perfect secret sharing schemes and include Shamir's (k, n)-threshold schemes as a special case. Furthermore, we show that a verifiable secret sharing scheme for general access structures is realized by one of the proposed schemes.
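For background, the following is a minimal sketch of Shamir's (k, n)-threshold scheme over a prime field, the special case mentioned above; the prime modulus and function names are illustrative choices, not those of the proposed schemes.

import random

P = 2**31 - 1  # a prime modulus (illustrative choice)

def share(secret, k, n):
    """Split `secret` into n shares so that any k of them recover it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    # Share i is the evaluation of the degree-(k-1) polynomial at x = i.
    return [(i, sum(c * pow(i, e, P) for e, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Recover the secret (the polynomial value at x = 0) by Lagrange
    interpolation from any k distinct shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

# Example: a (3, 5)-threshold scheme; any 3 shares recover the secret.
shares = share(123456789, k=3, n=5)
print(reconstruct(shares[:3]), reconstruct(shares[1:4]))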
Tetsunao MATSUTA Tomohiko UYEMATSU
The multiple-access channel (MAC) has become very popular in various communication systems, because multi-terminal communication systems are widely used in practice, e.g., mobile phones and P2P networks. For some MACs, it is known that feedback can enlarge the capacity region, where the capacity region is the set of rate pairs such that the error probability can be made arbitrarily small for sufficiently large block length. The capacity region with perfect feedback for general MACs, which are not required to satisfy ergodicity and stationarity, was first shown by Tatikonda and Mitter without proof, where perfect feedback means that the channel output is perfectly fed back to the senders. In this paper, we generalize Tatikonda and Mitter's result to the case of deterministic feedback, where the values of deterministic functions of past channel outputs are fed back to the senders. We show that the capacity region for general MACs with deterministic feedback can be represented by the information-spectrum formula introduced by Han and Verdu, and the directed information introduced by Massey. We also investigate the compound MAC problem, the ε-coding problem, the strong converse property and the cost constraint problem for general MACs with deterministic feedback.
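For reference, Massey's directed information from the input sequence X^N to the output sequence Y^N is defined (in its standard form, which we assume is the one intended here) as

I(X^N \to Y^N) = \sum_{n=1}^{N} I(X^n ; Y_n \mid Y^{n-1}),

i.e., the sum over time of the mutual information between the inputs up to time n and the output at time n, conditioned on the past outputs.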
Ken-ichi IWATA Masakatu MORII Tomohiko UYEMATSU
This paper describes an efficient and simple coding algorithm for universally optimal codes for stationary (ergodic) sources and a noiseless channel with unequal symbol costs. The symbol cost indicates the time (or space) required for the transmission (or storage) of that symbol, and the cost of any code symbol depends only on that symbol. The proposed coding algorithm mainly consists of two parts. The first part is based on the well-known Ziv-Lempel coding algorithm proposed in 1978 (sometimes called LZ78), and the second part is based on the Varn coding algorithm. The coding algorithm asymptotically achieves the optimal average cost of codes for stationary sources, and also achieves the optimal cost of codes for stationary ergodic sources with probability one. Furthermore, the computational complexity of the proposed coding algorithm is linear with respect to the lengths of the source sequence and the coded sequence.
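As a sketch of the first part, the following shows the LZ78 incremental parsing on which it is based; the mapping of parsed phrases to unequal-cost channel symbols via the Varn code is omitted, and the names are illustrative.

def lz78_parse(s):
    """Parse s into LZ78 phrases: each new phrase is the shortest prefix of
    the remaining input not seen as a phrase before, encoded as
    (index of its longest previously seen proper prefix, its last symbol)."""
    dictionary = {"": 0}       # phrase -> index
    phrases = []
    current = ""
    for c in s:
        if current + c in dictionary:
            current += c       # keep extending a known phrase
        else:
            phrases.append((dictionary[current], c))
            dictionary[current + c] = len(dictionary)
            current = ""
    if current:                # flush an incomplete final phrase
        phrases.append((dictionary[current[:-1]], current[-1]))
    return phrases

# Example: "ababaa" parses into the phrases a, b, ab, aa.
print(lz78_parse("ababaa"))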
Tetsunao MATSUTA Tomohiko UYEMATSU
This paper deals with a broadcast network with a server and many users. The server has files of content such as music and videos, and each user requests one of these files, where each file consists of several separate layers, as in a file encoded by scalable video coding. On the other hand, each user has a local memory, and part of the information of the files is cached (i.e., stored) in these memories in advance of the users' requests. By using the cached information as side information, the server encodes the files based on the users' requests. Then, it sends a codeword through an error-free shared link, over which all users receive a common codeword from the server without error. We assume that the server transmits the layers of the requested files up to a certain level, at a different transmission rate (i.e., codeword length per file size) corresponding to each level. In this paper, we focus on the region of tuples of these rates such that the layers up to any level of the requested files are recovered at the users with an arbitrarily small error probability. Then, we give inner and outer bounds on this region.
Eiji OKAMOTO Tomohiko UYEMATSU Masahiro MAMBO
A permutation cipher scheme using polynomials over a field is presented. Permutation, as well as substitution, plays a major role in almost all conventional cryptosystems, but the security of the permutation depends on how the symbols are permuted. This paper proposes the use of polynomials for the permutation and shows that the scheme satisfies the following security criteria. (1) There are enough encryption keys to defend against exhaustive attacks. (2) The permutation moves almost all samples to places different from their original places. (3) Most samples are shifted differently by different permutations. The permutation cipher scheme can be regarded as a scheme based on Reed-Solomon codes. The information symbols of the codes compose a key of the permutation cipher scheme.
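The proposed scheme is not reproduced here, but as a toy illustration of how a polynomial over a field can define a permutation of symbol positions, consider an affine polynomial over GF(p); this is a hypothetical simplification, not the scheme of the paper.

def affine_permute(message, a, b, p):
    """Permute positions 0..p-1 by the affine polynomial pi(x) = a*x + b over
    GF(p); this map is a bijection whenever a != 0 mod p."""
    assert len(message) == p and a % p != 0
    out = [None] * p
    for x, symbol in enumerate(message):
        out[(a * x + b) % p] = symbol
    return "".join(out)

# Example over GF(7) with key (a, b) = (3, 2).
print(affine_permute("ABCDEFG", a=3, b=2, p=7))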
Tetsunao MATSUTA Tomohiko UYEMATSU
We consider the coding problem for lossy source coding with side information at the decoder, known as the Wyner-Ziv source coding problem. The goal of the coding problem is to find the minimum rate such that the probability of exceeding a given distortion threshold is less than the desired level. We give an equivalent expression for this minimum rate by using the chromatic number and notions of coverings of a set. This allows us to analyze the coding problem in terms of graph coloring and covering.
Shun WATANABE Ryutaroh MATSUMOTO Tomohiko UYEMATSU
Privacy amplification is a technique to distill a secret key from a random variable by a function so that the distilled key and the eavesdropper's random variable are statistically independent. There are three kinds of security criteria for the key distilled by privacy amplification: the normalized divergence criterion, which is also known as the weak security criterion; the variational distance criterion; and the divergence criterion, which is also known as the strong security criterion. As a technique to distill a secret key, it is known that the encoder of a Slepian-Wolf code (source coding with full side information at the decoder) can be used as a function for privacy amplification if we employ the weak security criterion. In this paper, we show that the encoder of a Slepian-Wolf code cannot be used as a function for privacy amplification if we employ criteria other than the weak one.
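For reference, with distilled key K_n, eavesdropper's random variable E_n, and U_n uniform on the key alphabet, the three criteria are commonly formalized as follows; the exact formulation in the paper may differ.

\frac{1}{n} D\bigl(P_{K_n E_n} \,\big\|\, P_{U_n} \times P_{E_n}\bigr) \to 0 \quad (\text{normalized divergence / weak}), \\
\bigl\| P_{K_n E_n} - P_{U_n} \times P_{E_n} \bigr\|_1 \to 0 \quad (\text{variational distance}), \\
D\bigl(P_{K_n E_n} \,\big\|\, P_{U_n} \times P_{E_n}\bigr) \to 0 \quad (\text{divergence / strong}).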
Jun KURIHARA Tomohiko UYEMATSU Ryutaroh MATSUMOTO
This paper precisely characterizes secret sharing schemes based on arbitrary linear codes by using the relative dimension/length profile (RDLP) and the relative generalized Hamming weight (RGHW). We first describe the equivocation Δm of the secret vector