IEICE TRANSACTIONS on Fundamentals

  • Impact Factor

    0.40

  • Eigenfactor

    0.003

  • Article Influence

    0.1

  • CiteScore

    1.1

Volume E95-A No.5  (Publication Date:2012/05/01)

    Regular Section
  • Stationary and Non-stationary Wide-Band Noise Reduction Using Zero Phase Signal

    Weerawut THANHIKAM  Yuki KAMAMORI  Arata KAWAMURA  Youji IIGUNI  

     
    PAPER-Engineering Acoustics

      Page(s):
    843-852

    This paper proposes a wide-band noise reduction method using a zero phase (ZP) signal, defined as the IDFT of a spectral amplitude. When a speech signal is periodic within a short observation, the corresponding ZP signal also becomes periodic. On the other hand, when a noise spectral amplitude is approximately flat, its ZP signal takes nonzero values only around the origin. Hence, when a periodic speech signal is embedded in flat-spectrum noise in an analysis frame, its ZP signal becomes periodic except around the origin. In the proposed noise reduction method, we replace the ZP signal around the origin with the ZP signal in the second or a later period, yielding an estimated speech ZP signal. The major advantages of this method are that it can reduce non-stationary as well as stationary wide-band noise, and that it does not require prior estimation of the noise spectral amplitude. Simulation results show that the proposed method improves the SNR by more than 5 dB for tunnel noise and 13 dB for clap noise in low-SNR environments.
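
    The replacement step above can be sketched numerically. A minimal sketch in pure Python with a naive DFT; `pitch_period` and `origin_width` are hypothetical inputs assumed to be estimated elsewhere, not part of the paper's algorithm:

```python
import cmath

def dft(x):
    # Naive DFT; adequate for a short analysis frame.
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def zero_phase(frame):
    # ZP signal: IDFT of the spectral amplitude (the phase is discarded).
    return idft([abs(X) for X in dft(frame)])

def denoise_zp(frame, pitch_period, origin_width):
    # Replace the ZP samples around the origin, where flat-spectrum noise
    # concentrates, with the samples one period later, mirroring the
    # paper's replacement idea.
    zp = zero_phase(frame)
    out = list(zp)
    n = len(zp)
    for t in range(-origin_width, origin_width + 1):
        out[t % n] = zp[(t + pitch_period) % n]
    return out
```

    Note how a unit impulse (whose spectrum is flat) has a ZP signal concentrated entirely at the origin, which is the property the replacement step exploits.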

  • Supervised Single-Channel Speech Separation via Sparse Decomposition Using Periodic Signal Models

    Makoto NAKASHIZUKA  Hiroyuki OKUMURA  Youji IIGUNI  

     
    PAPER-Engineering Acoustics

      Page(s):
    853-866

    In this paper, we propose a method for supervised single-channel speech separation through sparse decomposition using periodic signal models. The method employs sparse decomposition, which decomposes a signal into a set of periodic signals under a sparsity penalty. To achieve separation through sparse decomposition, the decomposed periodic signals have to be assigned to their corresponding sources. For this assignment, we introduce clustering with a K-means algorithm to group the decomposed periodic signals into as many clusters as there are speakers. After clustering, each cluster is assigned to its speaker using preliminarily learned codebooks. In separation experiments, we compare our method with MaxVQ, which performs separation in the frequency-spectrum domain. The experimental results in terms of signal-to-distortion ratio show that the proposed sparse decomposition method is comparable to the frequency-domain approach and has a lower computational cost for the assignment of speech components.
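
    The clustering step can be illustrated with plain K-means on one scalar feature per decomposed component; the choice of a single scalar feature (e.g. an estimated fundamental period) is an assumption for illustration, since the paper clusters the decomposed periodic signals themselves:

```python
def kmeans_1d(values, init_centers, iters=50):
    # K-means on scalar features: alternate assignment to the nearest
    # center and center re-estimation, with k = number of speakers.
    centers = list(init_centers)
    k = len(centers)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            j = min(range(k), key=lambda c: abs(v - centers[c]))
            clusters[j].append(v)
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers, clusters
```

    With two well-separated groups of periods, the two returned clusters would then each be matched against the learned per-speaker codebooks.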

  • Implementation of Multimode-Multilevel Block Truncation Coding for LCD Overdrive

    Taegeun OH  Sanghoon LEE  

     
    PAPER-Digital Signal Processing

      Page(s):
    867-875

    The liquid-crystal display (LCD) overdrive technique is used to reduce motion blur on a display by shortening the response time. However, measuring the variation of the pixel amplitudes requires storing the previous frame in a large frame memory. To downscale the frame memory, block truncation coding (BTC) is commonly employed because of its simple implementation, even though visual artifacts may occur for image blocks with high-frequency components. In this paper, we present a multimode-multilevel BTC (MBTC) technique that improves performance while maintaining simplicity. To improve visual quality, we uniquely determine the quantization level and coding mode of each block according to the distribution of the luminance and chrominance amplitudes. For a compression ratio of 6:1, the proposed method achieves higher coding efficiency and overdrive performance, by up to 3.81 dB in PSNR, compared to other methods.
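
    For readers unfamiliar with the baseline, classic one-bit BTC quantizes each block to two levels chosen to preserve the block mean and variance; this is the textbook scheme that MBTC extends with multiple modes and levels, not the paper's MBTC itself:

```python
def btc_encode(block):
    # Classic 1-bit BTC: a bitmap marks pixels at or above the block mean;
    # levels a and b preserve the block's mean and variance.
    n = len(block)
    mean = sum(block) / n
    bitmap = [1 if p >= mean else 0 for p in block]
    q = sum(bitmap)                      # pixels at or above the mean
    if q in (0, n):                      # flat block: a single level suffices
        return bitmap, mean, mean
    var = sum((p - mean) ** 2 for p in block) / n
    sd = var ** 0.5
    a = mean - sd * (q / (n - q)) ** 0.5     # level for bitmap bits == 0
    b = mean + sd * ((n - q) / q) ** 0.5     # level for bitmap bits == 1
    return bitmap, a, b

def btc_decode(bitmap, a, b):
    return [b if bit else a for bit in bitmap]
```

    Storing only the bitmap plus two levels per block is what makes the frame memory small; the artifacts mentioned above arise because every pixel must snap to one of the two levels.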

  • Identification of Quasi-ARX Neurofuzzy Model with an SVR and GA Approach

    Yu CHENG  Lan WANG  Jinglu HU  

     
    PAPER-Systems and Control

      Page(s):
    876-883

    The quasi-ARX neurofuzzy (Q-ARX-NF) model has shown great approximation ability and usefulness in nonlinear system identification and control. It has an ARX-like linear structure whose coefficients are expressed by an incorporated neurofuzzy (InNF) network. However, the Q-ARX-NF model suffers from the curse of dimensionality: the number of fuzzy rules in the InNF network increases exponentially with the input-space dimension, which may result in high computational complexity and over-fitting. In this paper, the curse of dimensionality is addressed in two ways. First, a support vector regression (SVR) based approach reduces computational complexity through the dual form of a quadratic programming (QP) optimization, whose solution is independent of the input dimension. Second, genetic algorithm (GA) based input selection is applied with a novel fitness evaluation function, generating a parsimonious model structure that uses only the important inputs of the InNF network. Mathematical and real-system simulations demonstrate the effectiveness of the proposed method.

  • A Processor Accelerator for Software Decoding of Reed-Solomon Codes

    Kazuhito ITO  Keisuke NASU  

     
    PAPER-VLSI Design Technology and CAD

      Page(s):
    884-893

    Decoding of Reed-Solomon (RS) codes requires many arithmetic operations in the Galois field. While software decoding of RS codes has the advantage of flexibility in supporting RS codes of variable parameters, it is slower than dedicated hardware RS decoders because Galois-field arithmetic on an ordinary processor requires many instruction steps. To achieve fast software decoding of RS codes, it is effective to accelerate Galois operations by both dedicated circuitry and parallel processing. In this paper, an accelerator attached to the base processor is proposed that speeds up software decoding of RS codes through parallel execution of Galois operations.
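
    To see why Galois-field arithmetic costs many instruction steps on a general-purpose processor, consider a GF(2^8) multiplication written in plain code. The reduction polynomial 0x11d is a common choice for RS codes and is an assumption here, not taken from the paper; a dedicated Galois ALU collapses this whole shift/XOR loop into a single operation:

```python
def gf256_mul(a, b, poly=0x11d):
    # Carry-less multiply-and-reduce in GF(2^8): shift, conditional XOR,
    # and modular reduction by the polynomial x^8 + x^4 + x^3 + x^2 + 1.
    r = 0
    while b:
        if b & 1:
            r ^= a          # add (XOR) the current shifted multiplicand
        b >>= 1
        a <<= 1
        if a & 0x100:       # degree reached 8: reduce
            a ^= poly
    return r
```

    Each product takes up to eight shift/test/XOR rounds in software, which is the per-operation overhead the proposed accelerator removes by executing such operations in parallel hardware.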

  • NHPP-Based Software Reliability Models Using Equilibrium Distribution

    Xiao XIAO  Hiroyuki OKAMURA  Tadashi DOHI  

     
    PAPER-Reliability, Maintainability and Safety Analysis

      Page(s):
    894-902

    Non-homogeneous Poisson processes (NHPPs) have gained much popularity in actual software testing phases for estimating software reliability, the number of remaining faults, and the software release timing. In this paper, we propose a new modeling approach for NHPP-based software reliability models (SRMs) to describe the stochastic behavior of software fault-detection processes. The fundamental idea is to apply the equilibrium distribution to the fault-detection time distribution in NHPP-based modeling. We also develop efficient parameter estimation procedures for the proposed NHPP-based SRMs. Numerical experiments show that the proposed NHPP-based SRMs outperform the existing ones on many data sets in terms of goodness-of-fit and prediction performance.
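
    As a pointer to the construct involved, a sketch with generic symbols (not the paper's notation): for a fault-detection time distribution F(t) with finite mean μ, the equilibrium distribution from renewal theory is

```latex
F_e(t) = \frac{1}{\mu}\int_0^t \bigl(1 - F(x)\bigr)\,dx,
\qquad
\mu = \int_0^\infty \bigl(1 - F(x)\bigr)\,dx ,
```

    and the idea is to use F_e in place of F in an NHPP mean value function of the form Λ(t) = ω F(t), where ω denotes the expected total number of detectable faults.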

  • On the Hardness of Subset Sum Problem from Different Intervals

    Jun KOGURE  Noboru KUNIHIRO  Hirosuke YAMAMOTO  

     
    PAPER-Cryptography and Information Security

      Page(s):
    903-908

    The subset sum problem, often called the knapsack problem, is known to be NP-hard, and several cryptosystems are based on it. Assuming an oracle for the shortest vector problem on lattices, the low-density attack of Lagarias and Odlyzko and its variants solve the subset sum problem efficiently when the “density” of the given problem is smaller than some threshold. When the density is defined in the context of knapsack-type cryptosystems, the weights are usually assumed to be chosen uniformly at random from the same interval. In this paper, we focus on general subset sum problems, where this assumption may not hold. We assume that the weights are chosen from different intervals and analyze, both theoretically and experimentally, the effect on the success probability of the above algorithms. A possible application of our result in the context of knapsack cryptosystems is security analysis when the data size of public keys is reduced.
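
    The density in question, in the Lagarias-Odlyzko sense, is d = n / log2(max_i a_i) for weights a_1, ..., a_n; the low-density attack succeeds (given an SVP oracle) when d is below roughly 0.6463, or 0.9408 for the improvement by Coster et al. A small sketch of how weights from different intervals interact with this definition (the interval sizes are arbitrary illustration values):

```python
import math
import random

def density(weights):
    # Lagarias-Odlyzko density: n / log2(largest weight).
    return len(weights) / math.log2(max(weights))

random.seed(0)
n = 40
# All weights from one 100-bit interval (the standard assumption):
w_same = [random.randrange(1, 2 ** 100) for _ in range(n)]
# Weights alternating between a 100-bit and a 50-bit interval; the
# formula only sees the largest interval, which is the kind of
# mismatch the paper analyzes:
w_mixed = [random.randrange(1, 2 ** (100 if i % 2 == 0 else 50)) for i in range(n)]
print(density(w_same), density(w_mixed))
```

    Both instances report nearly the same density even though half the mixed weights are far smaller, which is why the uniform-interval assumption matters for the attack's success probability.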

  • On the Codeword Length Distribution of T-Codes

    Ulrich SPEIDEL  T. Aaron GULLIVER  

     
    PAPER-Information Theory

      Page(s):
    909-917

    In 2008, the authors and Makwakwa demonstrated a close link between variable-length T-codes and cyclic equivalence classes, which introduces a limit on the number of codewords of a particular length that a T-code can have. This paper presents a collection of new results on the codeword length distribution of T-codes based on this link. In particular, the average and average weighted codeword lengths are investigated for systematic T-codes. Several results are presented on subsets of T-code codewords for which the aforementioned limit is reached, and asymptotic expressions are derived for the variance and the coefficient of variation of codeword length distributions.

  • Further Results on the Stopping Distance of Array LDPC Matrices

    Haiyang LIU  Lu HE  Jie CHEN  

     
    PAPER-Coding Theory

      Page(s):
    918-926

    Given an odd prime q and an integer m ≤ q, an array-based parity-check matrix H(m,q) can be constructed for a quasi-cyclic low-density parity-check (LDPC) code C(m,q). For m = 4 and q ≥ 11, we prove that the stopping distance of H(4,q) is 10, which is equal to the minimum Hamming distance of the associated code C(4,q). In addition, a tighter lower bound on the stopping distance of H(m,q) is given for m > 4 and q ≥ 11.

  • A Novel Framework for Extracting Visual Feature-Based Keyword Relationships from an Image Database

    Marie KATSURAI  Takahiro OGAWA  Miki HASEYAMA  

     
    PAPER-Image

      Page(s):
    927-937

    In this paper, a novel framework for extracting visual feature-based keyword relationships from an image database is proposed. Based on the observation that a set of relevant keywords tends to share common visual features, the keyword relationships in a target image database are extracted in two steps. First, the relationship between each keyword and its corresponding visual features is modeled by a classifier; this step enables detection of the visual features related to each keyword. In the second step, the keyword relationships are extracted from the obtained results. Specifically, to measure the relevance between two keywords, the proposed method removes the visual features related to one keyword from the training images and monitors the performance of the classifier obtained for the other keyword. This measurement is the main difference from conventional methods, which focus only on keyword co-occurrences or visual similarities. Experiments conducted on an image database showed the effectiveness of the proposed method.

  • Simple Bitplane Coding and Its Application to Multi-Functional Image Compression

    Hisakazu KIKUCHI  Ryosuke ABE  Shogo MURAMATSU  

     
    PAPER-Image

      Page(s):
    938-951

    A simple image compression scheme is presented for various types of images, including color/grayscale images, color-quantized images, and bilevel images such as documents and digital halftones. It is a bitplane coding scheme composed of new context modeling and adaptive binary arithmetic coding. A target bit to be encoded is conditioned on estimates of the neighboring pixels, including non-causal locations. Several functionalities are also integrated: arbitrarily shaped ROI transmission, selective tile partitioning, accuracy scalability, and others. The proposed bitplane codec is competitive with JPEG-LS in lossless compression of 8-bit grayscale and 24-bit color images, is close to JBIG2 in bilevel image compression, and outperforms the existing standards in compression of 8-bit color-quantized images.

  • Decentralized Supervisory Control of Timed Discrete Event Systems Using a Partition of the Forcible Event Set

    Masashi NOMURA  Shigemasa TAKAI  

     
    PAPER-Concurrent Systems

      Page(s):
    952-960

    In the framework of decentralized supervisory control of timed discrete event systems (TDESs), each local supervisor decides, under its own local observation, the set of events to be enabled and the set of events to be forced to occur so that a given specification is satisfied. In this paper, we focus on fusion rules for the enforcement decisions and adopt a combined fusion rule that uses both the AND rule and the OR rule. We first derive necessary and sufficient conditions for the existence of a decentralized supervisor under the combined fusion rule for a given partition of the set of forcible events. We then study how to find a suitable partition.
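
    The combined fusion rule can be stated compactly: the forcible events are partitioned into an AND part (forced only if every local supervisor forces them) and an OR part (forced if at least one does). A minimal sketch, where the set-based encoding of local decisions is an illustration rather than the paper's formalism:

```python
def fused_forcing(decisions, and_events, or_events):
    # 'decisions' is one set of events per local supervisor: the events
    # that supervisor decides to force under its local observation.
    forced = set()
    for e in and_events:
        if all(e in d for d in decisions):   # AND rule: unanimity required
            forced.add(e)
    for e in or_events:
        if any(e in d for d in decisions):   # OR rule: one vote suffices
            forced.add(e)
    return forced
```

    The paper's question is then which partition of the forcible events into `and_events` and `or_events` admits a decentralized supervisor enforcing the specification.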

  • Reduced-Reference Video Quality Estimation Using Representative Luminance

    Toru YAMADA  Yoshihiro MIYAMOTO  Masahiro SERIZAWA  Takao NISHITANI  

     
    PAPER-Measurement Technology

      Page(s):
    961-968

    This paper proposes a video-quality estimation method based on a reduced-reference model for real-time quality monitoring in video streaming services. The proposed method chooses representative luminance values for individual original-video frames at the server side and transmits those values along with the pixel-position information of the representative luminance values in each frame. On the basis of this information, peak signal-to-noise ratio (PSNR) values can be estimated at the client side, enabling real-time monitoring of video-quality degradation caused by transmission errors. Experimental results show that accurate PSNR estimation can be achieved with additional information at a low bit rate. For SDTV video sequences encoded at 1 to 5 Mbps, accurate PSNR estimation (correlation coefficients of 0.92 to 0.95) is achieved with a small amount of additional information, 10 to 50 kbps. This enables accurate real-time quality monitoring in video streaming services without degrading the average video quality.
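
    The client-side computation implied above can be sketched as follows; this is a hedged illustration, since the server-side rule for selecting the representative pixels is the paper's contribution and is not reproduced here:

```python
import math

def estimate_psnr(rep_values, rep_positions, received, max_lum=255.0):
    # Reduced-reference PSNR estimate: the MSE over only the transmitted
    # representative pixels stands in for the full-frame MSE.
    mse = sum((v - received[p]) ** 2
              for v, p in zip(rep_values, rep_positions)) / len(rep_values)
    if mse == 0.0:
        return float('inf')
    return 10.0 * math.log10(max_lum ** 2 / mse)
```

    Only the representative values and their positions cross the side channel, which is what keeps the reference overhead in the 10 to 50 kbps range reported above.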

  • Iterative Frequency Estimation for Accuracy Improvement of Three DFT Phase-Based Methods

    Hee-Suk PANG  Jun-Seok LIM  Oh-Jin KWON  Bhum Jae SHIN  

     
    LETTER-Digital Signal Processing

      Page(s):
    969-973

    We propose an iterative frequency estimation method that improves the accuracy of discrete Fourier transform (DFT) phase-based methods. It iterates frequency estimation and phase calculation based on the DFT phase-based methods, maximizing the signal-to-noise-floor ratio at the frequency estimation position. We apply it to three methods known to be among the best DFT phase-based methods: phase difference estimation, derivative estimation, and arctan estimation. Experimental results show that the proposed method achieves meaningful reductions of the frequency estimation error compared to the conventional methods, especially at low signal-to-noise ratios.
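
    Of the three methods, the phase-difference estimator is the easiest to sketch: refine the peak-bin frequency from the phase advance between two frames taken `hop` samples apart. This is the textbook version with a naive DFT, not the letter's iterative refinement:

```python
import cmath
import math

def dft_bin(x, k):
    # Single DFT bin of a length-n frame.
    n = len(x)
    return sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))

def phase_diff_freq(signal, fs, n, hop):
    # Two length-n frames, 'hop' samples apart.
    f1 = signal[:n]
    f2 = signal[hop:hop + n]
    # Peak bin of the first frame (positive-frequency bins only).
    k = max(range(1, n // 2), key=lambda b: abs(dft_bin(f1, b)))
    dphi = cmath.phase(dft_bin(f2, k)) - cmath.phase(dft_bin(f1, k))
    expected = 2 * math.pi * k * hop / n     # phase advance of bin k itself
    # Wrap the deviation into (-pi, pi] and convert it to a frequency offset.
    dev = (dphi - expected + math.pi) % (2 * math.pi) - math.pi
    return (k / n + dev / (2 * math.pi * hop)) * fs
```

    The refinement is unambiguous only while the true frequency lies within fs/(2*hop) of the peak-bin frequency, which is one reason iterating the estimate, as the letter proposes, helps.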

  • A Phenomenological Study on Threshold Improvement via Spatial Coupling

    Keigo TAKEUCHI  Toshiyuki TANAKA  Tsutomu KAWABATA  

     
    LETTER-Information Theory

      Page(s):
    974-977

    Kudekar et al. proved an interesting result for low-density parity-check (LDPC) convolutional codes: the belief-propagation (BP) threshold is boosted to the maximum-a-posteriori (MAP) threshold by spatial coupling. Furthermore, the authors showed that the BP threshold for code-division multiple-access (CDMA) systems is improved up to the optimal one via spatial coupling. In this letter, a phenomenological model for elucidating the essence of these phenomena, called threshold improvement, is proposed. The main result implies that threshold improvement occurs for spatially-coupled general graphical models.

  • An Efficient Interpolation Based Erasure-Only Decoder for High-Rate Reed-Solomon Codes

    Qian GUO  Haibin KAN  

     
    LETTER-Coding Theory

      Page(s):
    978-981

    In this paper, we derive a simple formula to generate a wide-sense systematic generator matrix (which we call quasi-systematic) B for a Reed-Solomon code. This formula can be utilized to construct an efficient interpolation-based erasure-only decoder with time complexity O(n²) and space complexity O(n). Specifically, the decoding algorithm requires 3kr + r² - 2r field additions, kr + r² + r field negations, 2kr + r² - r + k field multiplications, and kr + r field inversions. Compared to another interpolation-based erasure-only decoding algorithm derived by D.J.J. Versfeld et al., our algorithm is much more efficient for high-rate Reed-Solomon codes.

  • Importance Sampling for Turbo Codes over Slow Rayleigh Fading Channels

    Takakazu SAKAI  Koji SHIBATA  

     
    LETTER-Coding Theory

      Page(s):
    982-985

    This study presents a fast simulation method for turbo codes over slow Rayleigh fading channels. The reduction of simulation time is achieved by applying importance sampling (IS). The conventional IS method for turbo codes over Rayleigh fading channels modifies only the additive white Gaussian noise (AWGN) sequences. The proposed IS method biases not only the AWGN but also the channel gains of the Rayleigh fading channels. The computer runtime of the proposed method is about 1/5 of that of the conventional IS method when evaluating a frame error rate of 10⁻⁶. Compared with the Monte Carlo method, the proposed method requires only 1/100 of the simulation runtime for the same estimator accuracy.
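
    The weighting mechanism behind importance sampling can be shown on a toy Gaussian tail probability; this illustrates only the likelihood-ratio principle, while biasing turbo-code noise sequences and Rayleigh channel gains as the letter does is considerably more involved:

```python
import math
import random

def is_tail_estimate(a, shift, trials, seed=0):
    # Estimate P[N(0,1) > a] by sampling from the biased density N(shift, 1)
    # and weighting each hit by the likelihood ratio
    #   w(x) = f(x) / g(x) = exp(-shift * x + shift^2 / 2).
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(trials):
        x = rng.gauss(shift, 1.0)
        if x > a:
            acc += math.exp(-shift * x + shift * shift / 2.0)
    return acc / trials

def q_func(a):
    # True Gaussian tail probability for comparison.
    return 0.5 * math.erfc(a / math.sqrt(2.0))
```

    Shifting the sampling mean to the rare-event region makes almost every trial informative, which is exactly why IS needs orders of magnitude fewer runs than plain Monte Carlo at frame error rates like 10⁻⁶.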

  • Joint Symbol Timing and Carrier Frequency Offset Estimation for Mobile-WiMAX

    Yong-An JUNG  Young-Hwan YOU  

     
    LETTER-Communication Theory and Signals

      Page(s):
    986-989

    This letter proposes two efficient schemes for the joint estimation of symbol timing offset (STO) and carrier frequency offset (CFO) in orthogonal frequency division multiplexing (OFDM) based IEEE 802.16e systems. A primary aim of the letter is to avoid the effects of inter-symbol interference (ISI) caused by the delay spread of the multipath fading channel. To this end, the ISI-corrupted cyclic prefix (CP) is excluded when the correlation function for both schemes is devised, which improves performance. To demonstrate the efficiency of the proposed methods, their performance is compared with that of the conventional method and evaluated in terms of mean square error (MSE), CFO acquisition range, and complexity.

  • Knowledge Reuse Method to Improve the Learning of Interference-Preventive Allocation Policies in Multi-Car Elevators

    Alex VALDIVIELSO CHIAN  Toshiyuki MIYAMOTO  

     
    LETTER-Concurrent Systems

      Page(s):
    990-995

    In this letter, we introduce a knowledge reuse method to improve the performance of a learning algorithm developed to prevent interference in multi-car elevators. This method enables the algorithm to use its previously acquired experience in new learning processes. The simulation results confirm the improvement achieved in the algorithm's performance.
