Author Search Result

[Author] Tomoharu SHIBUYA (27 hits)

Showing hits 1-20 of 27

  • Simple Estimation for the Dimension of Subfield Subcodes of AG Codes

    Tomoharu SHIBUYA  Ryutaroh MATSUMOTO  Kohichi SAKANIWA

    PAPER-Coding Theory
    Vol: E80-A No:11
    Page(s): 2058-2065

    In this paper, we present a lower bound for the dimension of subfield subcodes of residue Goppa codes on the curve Cab, which exceeds the lower bound given by Stichtenoth when the number of check symbols is not small. We also give an illustrative example showing that the proposed bound for the dimension of a certain residue Goppa code exceeds the true dimension of a BCH code with the same code length and designed distance.

  • On the Dependence of Error Performance of Spatially Coupled LDPC Codes on Their Design Parameters

    Hiroyuki IHARA  Tomoharu SHIBUYA

    LETTER-Coding Theory
    Vol: E96-A No:12
    Page(s): 2447-2451

    Spatially coupled (SC) low-density parity-check (LDPC) codes are defined by bipartite graphs that are obtained by assembling prototype graphs. The combination and connection of the prototype graphs are designated by specifying some parameters, and Kudekar et al. showed that the BP threshold of the ensemble of SC LDPC codes agrees with the MAP threshold of the ensemble of regular LDPC codes when those parameters grow so that the code length tends to infinity. When we design SC LDPC codes of practical code length, however, it is not clear how to set those parameters to enhance the performance of SC LDPC codes. In this paper, we provide results of numerical experiments that suggest how the error performance of SC LDPC codes over the BEC depends on their design parameters.
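    For intuition about how such design parameters enter the construction, the sketch below builds the base matrix of a spatially coupled ensemble in one common way, assuming a regular protograph coupled over L positions with coupling width w; the parameterization and the function name coupled_base_matrix are illustrative and may differ from the ensemble studied here.

    import numpy as np

    # Illustrative only: each of L variable-node positions spreads its edges
    # over w consecutive check-node positions, giving an (L+w-1) x L base
    # matrix.  (L, w) stand in for the design parameters discussed above.
    def coupled_base_matrix(L, w):
        B = np.zeros((L + w - 1, L), dtype=int)
        for i in range(L):
            B[i:i + w, i] = 1  # position i couples into checks i, ..., i+w-1
        return B

    print(coupled_base_matrix(L=5, w=3))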

  • Detailedly Represented Irregular Low-Density Parity-Check Codes

    Kenta KASAI  Tomoharu SHIBUYA  Kohichi SAKANIWA

    PAPER-Coding Theory
    Vol: E86-A No:10
    Page(s): 2435-2444

    Richardson and Urbanke developed a powerful method, density evolution, which determines, for various channels, the capacity of irregular low-density parity-check code ensembles. We develop a generalized density evolution for minutely represented ensembles and show that it includes the conventional representation as a special case. Furthermore, we present an example of code ensembles, used over the binary erasure channel and the binary-input additive white Gaussian noise channel, which have better thresholds than highly optimized ensembles with the conventional representation.
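    For reference, the conventional density evolution recursion for a (dv,dc)-regular ensemble over the binary erasure channel is simple to compute; the sketch below shows that recursion and a bisection search for the BP threshold. It is not the generalized density evolution developed in the paper, and the function names are illustrative.

    # Conventional density evolution for a (dv, dc)-regular LDPC ensemble
    # over the BEC: x_{l+1} = eps * (1 - (1 - x_l)^(dc-1))^(dv-1).
    def bec_de_converges(eps, dv, dc, iters=2000, tol=1e-12):
        x = eps
        for _ in range(iters):
            x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
            if x < tol:
                return True
        return False

    def bec_de_threshold(dv, dc, lo=0.0, hi=1.0, steps=40):
        # Bisection search for the largest erasure rate at which the
        # recursion still converges to zero (the BP threshold).
        for _ in range(steps):
            mid = (lo + hi) / 2
            if bec_de_converges(mid, dv, dc):
                lo = mid
            else:
                hi = mid
        return lo

    # The (3,6)-regular ensemble has a BP threshold of about 0.4294.
    print(round(bec_de_threshold(3, 6), 4))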

  • On a Characterization of a State of Rank-Modulation Scheme Over Multi-Cell Ranking by a Group Action

    Tomoharu SHIBUYA  Takeru SUDO

    PAPER-Coding Theory and Techniques
    Vol: E100-A No:12
    Page(s): 2558-2571

    In this paper, we propose a group theoretic representation suitable for the rank-modulation (RM) scheme over the multi-cell ranking presented by En Gad et al. By introducing an action of the group of all permutation matrices on the set of all permutations, the scheme is clearly reformulated. Moreover, we introduce the concept of r-dominating sets over the multi-cell ranking, a generalization of conventional dominating sets, in the design of rank-modulation rewriting codes. This concept, together with the proposed group theoretic representation, yields an explicit formula for an upper bound on the size of the set of messages that can be stored in the memory by RM rewriting codes over multi-cell ranking. This bound enables us to consider the trade-off between the size of the storable message set and the rewriting cost more closely. We also provide a concrete example of an RM rewriting code that is not available by conventional approaches and whose storable message set size cannot be achieved by conventional codes.
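    As a rough illustration of the algebraic setting, the sketch below shows one natural action of the group of permutation matrices on the set of permutations (multiplication of the permutation vector by the matrix); the specific action defined in the paper may differ, and the function names are ours.

    import numpy as np
    from itertools import permutations

    def perm_matrix(p):
        # Permutation matrix P with P[i, p[i]] = 1.
        n = len(p)
        P = np.zeros((n, n), dtype=int)
        for i, j in enumerate(p):
            P[i, j] = 1
        return P

    def act(P, sigma):
        # Act on a permutation, given as a vector, by the matrix P.
        return tuple(int(x) for x in P @ np.array(sigma))

    n = 3
    g = perm_matrix((1, 2, 0))                    # one group element
    states = list(permutations(range(n)))         # all n! cell rankings
    print(sorted({act(g, s) for s in states}))    # the action permutes them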

  • On Tanner's Lower Bound for the Minimum Distance of Regular LDPC Codes Based on Combinatorial Designs

    Tomoharu SHIBUYA  Masatoshi ONIKUBO  Kohichi SAKANIWA

    PAPER-Coding Theory
    Vol: E86-A No:10
    Page(s): 2428-2434

    In this paper, we investigate Tanner's lower bound for the minimum distance of regular LDPC codes based on combinatorial designs. We first determine Tanner's lower bound for LDPC codes which are defined by modifying bipartite graphs obtained from combinatorial designs known as Steiner systems. Then we show that Tanner's lower bound agrees with or exceeds conventional lower bounds including the BCH bound, and gives the true minimum distance for some EG-LDPC codes.

  • Detailed Evolution of Degree Distributions in Residual Graphs with Joint Degree Distributions

    Takayuki NOZAKI  Kenta KASAI  Tomoharu SHIBUYA  Kohichi SAKANIWA

    PAPER-Coding Theory
    Vol: E91-A No:10
    Page(s): 2737-2744

    Luby et al. derived the evolution of degree distributions in residual graphs for irregular LDPC code ensembles. The evolution of degree distributions in residual graphs is an important characteristic used in the finite-length analysis of the expected block and bit error probability over the binary erasure channel. In this paper, we derive the detailed evolution of degree distributions in residual graphs for irregular LDPC code ensembles with joint degree distributions.

  • Asymptotic Weight and Stopping Set Distributions for Detailedly Represented Irregular LDPC Code Ensembles

    Ryoji IKEGAYA  Kenta KASAI  Tomoharu SHIBUYA  Kohichi SAKANIWA

    PAPER-Coding Theory
    Vol: E87-A No:10
    Page(s): 2484-2492

    In this paper, we provide explicit representations of the average weight and stopping set distributions, and asymptotic expressions for their exponents, for detailedly represented irregular LDPC code ensembles. Further, we present numerical examples which compare a detailedly represented irregular LDPC code ensemble with a conventional one with respect to both weight and stopping set distributions.

  • Iterative Decoding Based on the Concave-Convex Procedure

    Tomoharu SHIBUYA  Ken HARADA  Ryosuke TOHYAMA  Kohichi SAKANIWA

    PAPER-Coding Theory
    Vol: E88-A No:5
    Page(s): 1346-1364

    New decoding algorithms for binary linear codes based on the concave-convex procedure are presented. Numerical experiments show that the proposed decoding algorithms surpass belief propagation (BP) decoding in error performance. The average computational complexity of one of the proposed decoding algorithms is only a few times greater than that of BP decoding.

  • A Lower Bound for Generalized Hamming Weights and a Condition for t-th Rank MDS

    Tomoharu SHIBUYA  Ryo HASEGAWA  Kohichi SAKANIWA

    PAPER-Information Theory and Coding Theory
    Vol: E82-A No:6
    Page(s): 1090-1101

    In this paper, we introduce a lower bound for the generalized Hamming weights, applicable to an arbitrary linear code, in terms of the notion of well-behaving. We also show that any [n,k] linear code C over a finite field F is t-th rank MDS for t such that g(C)+1 ≤ t ≤ k, where g(C) is easily calculated from a basis of F^n chosen so that its first n-k elements generate C. Finally, we apply our result to Reed-Solomon, Reed-Muller, and algebraic geometry codes on Cab, and determine g(C) for each code.

  • Improvement of the Quality of Visual Secret Sharing Schemes with Constraints on the Usage of Shares

    Mariko FUJII  Tomoharu SHIBUYA

    PAPER
    Publicized: 2019/10/07
    Vol: E103-D No:1
    Page(s): 11-24

    A (k,n)-visual secret sharing scheme ((k,n)-VSSS) is a method to divide a secret image into n images, called shares, that enable us to restore the original image just by stacking at least k of them, without any complicated computation. In this paper, we consider a (2,2)-VSSS that shares two secret images at the same time using only two shares, and investigate methods to improve the quality of the decoded images. More precisely, we consider a (2,2)-VSSS in which the first secret image is decoded by stacking the two shares in the usual way, while the second is decoded by stacking the two shares with one of them used in reverse. Since the shares must contain some subpixels that correspond inconsistently to pixels of the secret images, the decoded pixels do not agree with the corresponding pixels of the secret images, which causes serious degradation of the quality of the decoded images. To reduce such degradation, we propose several methods to construct shares that utilize an 8-neighbor Laplacian filter and halftoning. We then show that the proposed methods effectively improve the quality of the decoded images. Moreover, we demonstrate that the proposed methods extend naturally to a (2,2)-VSSS for RGB images.
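    The sketch below only illustrates the two image-processing ingredients named above, an 8-neighbor Laplacian filter and simple threshold halftoning, applied to a grayscale image with values in [0,1]; it is not the share-construction method proposed in the paper, and all function names are illustrative.

    import numpy as np

    LAPLACIAN_8 = np.array([[1, 1, 1],
                            [1, -8, 1],
                            [1, 1, 1]], dtype=float)

    def filter2d(img, kernel):
        # Naive same-size 2-D filtering with zero padding (3x3 kernel).
        padded = np.pad(img, 1)
        out = np.zeros_like(img, dtype=float)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
        return out

    def sharpen_and_halftone(img, strength=1.0, threshold=0.5):
        # Laplacian sharpening followed by binarization.
        sharpened = np.clip(img - strength * filter2d(img, LAPLACIAN_8), 0, 1)
        return (sharpened >= threshold).astype(np.uint8)

    gray = np.random.rand(8, 8)        # stand-in for a secret image
    print(sharpen_and_halftone(gray))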

  • Characterization of Factor Graph by Mooij's Sufficient Condition for Convergence of the Sum-Product Algorithm

    Tomoharu SHIBUYA

    LETTER-Coding Theory
    Vol: E93-A No:11
    Page(s): 2083-2088

    Recently, Mooij et al. proposed new sufficient conditions for convergence of the sum-product algorithm, and it was also shown that if the factor graph is a tree, Mooij's sufficient condition for convergence is always activated. In this letter, we show that the converse of the above statement is also true under some assumption, and that the assumption holds for sum-product decoding. This newly obtained fact implies that Mooij's sufficient condition for convergence of sum-product decoding is activated if and only if the factor graph of the a posteriori probability of the transmitted codeword is a tree.

  • Average Bit Erasure Probability of Regular LDPC Code Ensembles under MAP Decoding over BEC

    Takayuki ITSUI  Kenta KASAI  Ryoji IKEGAYA  Tomoharu SHIBUYA  Kohichi SAKANIWA

    PAPER
    Vol: E90-A No:9
    Page(s): 1763-1771

    The average bit erasure probability of a binary linear code ensemble under maximum a-posteriori probability (MAP) decoding over the binary erasure channel (BEC) can be calculated from the average support weight distribution of the ensemble via the EXIT function and the shortened information function. In this paper, we formulate the relationship between the average bit erasure probability under MAP decoding over the BEC and the average support weight distribution for a binary linear code ensemble. Then, we formulate the average support weight distribution and the average bit erasure probability under MAP decoding over the BEC for regular LDPC code ensembles.

  • On the Performance of Algebraic Geometric Codes

    Tomoharu SHIBUYA  Hajime JINUSHI  Shinji MIURA  Kohichi SAKANIWA

    PAPER-Information Theory and Coding Theory
    Vol: E79-A No:6
    Page(s): 928-937

    In this paper, we show that conventional BCH codes can be better than AG codes when the number of check symbols is relatively small. More precisely, we consider an AG code on Cab whose number of check symbols is less than min{g+a, n-g}, where n and g denote the code length and the genus of the curve, respectively. It is shown that there always exists an extended BCH code which (i) has the same designed distance as the Feng-Rao designed distance of the AG code, with a code length and rate greater than those of the AG code, or (ii) has the same number of check symbols as the AG code, a designed distance not less than that of the AG code, and a code length longer than that of the AG code.

  • An Improved Bound for the Dimension of Subfield Subcodes

    Tomoharu SHIBUYA  Ryutaroh MATSUMOTO  Kohichi SAKANIWA

    PAPER-Information Theory and Coding Theory
    Vol: E80-A No:5
    Page(s): 876-880

    In this paper, we give a new lower bound for the dimension of subfield subcodes. This bound improves the lower bound given by Stichtenoth. A BCH code and a subfield subcode of an algebraic geometric code on a hyperelliptic curve are discussed as special cases.

  • Ring Theoretic Approach to Reversible Codes Based on Circulant Matrices

    Tomoharu SHIBUYA

    PAPER-Coding Theory
    Vol: E94-A No:11
    Page(s): 2121-2126

    Recently, Haley and Grant introduced the concept of reversible codes, a class of binary linear codes that can reuse the decoder architecture as the encoder and can be encoded by the iterative message-passing algorithm based on the Jacobi method over F2. They also developed a procedure to construct parity check matrices of a class of reversible codes, named type-I reversible codes, by utilizing properties specific to circulant matrices. In this paper, we refine the mathematical framework for reversible codes based on circulant matrices through a ring theoretic approach. This approach enables us to clarify the necessary and sufficient condition under which type-I reversible codes exist. Moreover, a systematic procedure to construct all circulant matrices that constitute parity check matrices of type-I reversible codes is also presented.
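    For intuition, the sketch below shows a generic Jacobi-style iteration for solving Hx = s over F2 when every diagonal entry of H is 1, so that with H = I + A the update reduces to x <- s + Ax (mod 2); it only illustrates the basic iteration that reversible codes reuse for encoding, not the Haley-Grant or type-I construction itself, and the function name is illustrative.

    import numpy as np

    def jacobi_gf2(H, s, max_iters=100):
        # Jacobi iteration over F_2: requires a unit diagonal; over F_2
        # subtraction equals addition, so the update is x <- s + A x (mod 2).
        n = H.shape[0]
        assert np.all(np.diag(H) == 1)
        A = (H - np.eye(n, dtype=int)) % 2        # off-diagonal part of H
        x = np.zeros(n, dtype=int)
        for _ in range(max_iters):
            x_new = (s + A @ x) % 2
            if np.array_equal(x_new, x):          # fixed point reached
                break
            x = x_new
        return x

    H = np.array([[1, 1, 0],
                  [0, 1, 1],
                  [0, 0, 1]], dtype=int)
    s = np.array([1, 0, 1], dtype=int)
    x = jacobi_gf2(H, s)
    print(x, (H @ x) % 2)                         # verify H x = s (mod 2)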

  • A Dual of Well-Behaving Type Designed Minimum Distance

    Tomoharu SHIBUYA  Kohichi SAKANIWA

    PAPER-Coding Theory
    Vol: E84-A No:2
    Page(s): 647-652

    In this paper, we propose a lower bound for the minimum distance of [n,k] linear codes which are specified by generator matrices whose rows are k vectors of a given sequence of n linearly independent vectors over a finite field. The Feng-Rao bound and the order bound give the lower bounds for the minimum distance of the dual codes of the codes considered in this paper. We show that the proposed bound gives the true minimum distance for Reed-Solomon and Reed-Muller codes and exceeds the Goppa bound for some L-type algebraic geometry codes.

  • Performance of a Decoding Algorithm for LDPC Codes Based on the Concave-Convex Procedure

    Tomoharu SHIBUYA  Kohichi SAKANIWA

    LETTER-Coding Theory
    Vol: E86-A No:10
    Page(s): 2601-2606

    In this letter, we show the effectiveness of a double-loop algorithm based on the concave-convex procedure (CCCP) in decoding linear codes. For this purpose, we numerically compare the error performance of CCCP-based decoding algorithm with that of a conventional iterative decoding algorithm based on belief propagation (BP). We also investigate computational complexity and its relation to the error performance.

  • Average Coset Weight Distribution of Multi-Edge Type LDPC Code Ensembles

    Kenta KASAI  Yuji SHIMOYAMA  Tomoharu SHIBUYA  Kohichi SAKANIWA

    PAPER-Coding Theory
    Vol: E89-A No:10
    Page(s): 2519-2525

    Multi-edge type low-density parity-check (MET-LDPC) codes, introduced by Richardson and Urbanke, are generalized LDPC codes which can be seen as LDPC codes obtained by concatenating several standard (ir)regular LDPC codes. We prove in this paper that MET-LDPC code ensembles possess a certain symmetry with respect to their average coset weight distributions (ACWD). Using this symmetry, we derive the ACWD of MET-LDPC code ensembles from the ACWD of their constituent ensembles.

  • Design of Irregular Repeat Accumulate Codes with Joint Degree Distributions

    Kenta KASAI  Shinya MIYAMOTO  Tomoharu SHIBUYA  Kohichi SAKANIWA

    LETTER-Coding Theory
    Vol: E89-A No:11
    Page(s): 3351-3354

    Irregular Repeat-Accumulate (IRA) codes, introduced by Jin et al., have a linear-time encoding algorithm, and their decoding performance is comparable to that of irregular low-density parity-check (LDPC) codes. Meanwhile, the authors have introduced detailedly represented irregular LDPC code ensembles specified by joint degree distributions between variable nodes and check nodes. In this paper, by using the density evolution method [7],[8], we optimize IRA codes specified by joint degree distributions. The resulting codes have higher thresholds than Jin's IRA codes.

  • Efficient Linear Time Encoding for LDPC Codes

    Tomoharu SHIBUYA  Kazuki KOBAYASHI

    PAPER-Coding Theory
    Vol: E97-A No:7
    Page(s): 1556-1567

    In this paper, we propose a new encoding method applicable to any linear code over an arbitrary finite field whose computational complexity is O(δ*n), where δ* and n denote the maximum column weight of a parity check matrix of the code and the code length, respectively. This means that if a code has a parity check matrix with constant maximum column weight, such as an LDPC code, it can be encoded with O(n) computation. We also clarify the relation between the proposed method and conventional methods, and compare their computational complexity. We then show that the proposed encoding method is much more efficient than the conventional ones.
