Keyword Search Result

[Keyword] quantization(223hit)

81-100hit(223hit)

  • Video Watermarking by Space-Time Interest Points

    Lei-Da LI  Bao-Long GUO  Jeng-Shyang PAN  

     
    LETTER-Cryptography and Information Security

      Vol:
    E91-A No:8
      Page(s):
    2252-2256

    This letter presents a novel robust video watermarking scheme based on space-time interest points. These points correspond to inherent structures of the video, so they can serve as synchronization signals for watermark embedding and extraction. In the proposed scheme, local regions are generated from the space-time interest points, and the watermark is embedded into all the regions by quantization. The scheme is blind, and the watermark can be extracted from any position in the video. Experimental results show that the watermark is invisible and robustly survives both traditional signal processing attacks and video-oriented attacks.
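The quantization-based embedding mentioned in this abstract is commonly realized as quantization index modulation (QIM). The following is a minimal odd/even QIM sketch, an illustrative stand-in rather than the paper's exact region-based scheme:

```python
def qim_embed(value, bit, step):
    """Embed one bit by moving the value to an even (bit 0) or odd
    (bit 1) multiple of the quantization step."""
    q = round(value / step)
    if q % 2 != bit:
        # Move to the nearer multiple with the correct parity.
        q += 1 if value >= q * step else -1
    return q * step

def qim_extract(value, step):
    """Recover the bit from the parity of the nearest quantization level."""
    return round(value / step) % 2

# The embedded bit survives perturbations smaller than step / 2.
marked = qim_embed(37.3, 1, 4.0)
assert qim_extract(marked + 0.9, 4.0) == 1
```

Larger steps make the watermark more robust at the cost of visibility, which is why the paper pairs quantization with perceptually chosen local regions.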

  • Robustness Analysis of M-ary Quantization Based Watermarking

    Jun-Horng CHEN  

     
    LETTER-Cryptography and Information Security

      Vol:
    E91-A No:8
      Page(s):
    2248-2251

    This work addresses the robustness performance of M-ary quantization watermarking. If the encoded messages are arranged in Gray-code order, so that adjacent messages differ in only one bit, this work demonstrates that robustness is substantially improved in low-DNR scenarios. Furthermore, two-bit quantization watermarking can outperform the LUT approach, which also improves robustness in highly noisy environments.
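The Gray-code arrangement can be sketched directly: under the standard binary-reflected Gray code, successive indices differ in exactly one bit, so a decoding error into an adjacent quantizer cell costs at most one bit error. A brief illustration (not the paper's quantizer):

```python
def gray_encode(n):
    """Binary-reflected Gray code: successive codes differ in one bit."""
    return n ^ (n >> 1)

# 2-bit message labels in quantizer-cell order: 0b00, 0b01, 0b11, 0b10.
labels = [gray_encode(i) for i in range(4)]
# An error into the adjacent cell flips exactly one bit.
for a, b in zip(labels, labels[1:]):
    assert bin(a ^ b).count("1") == 1
```

With natural binary labels, by contrast, the 0b01 to 0b10 transition flips two bits, which is exactly the loss the Gray arrangement avoids at low DNR.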

  • Design of Asymmetric VQ Codebooks Incorporating Channel Coding

    Jong-Ki HAN  Jae-Gon KIM  

     
    PAPER-Communication Theory and Signals

      Vol:
    E91-A No:8
      Page(s):
    2195-2204

    In this paper, a communication system using vector quantization (VQ) and channel coding is considered, and a design scheme is proposed to optimize the source codebooks in the transmitter and the receiver. The proposed algorithm minimizes the overall distortion, including both the quantization error and the channel distortion. It differs from previous work in that a channel encoder is used in the VQ-based communication system and the source VQ codebook used in the transmitter differs from the one used in the receiver, i.e., an asymmetric VQ system. The bounded-distance decoding (BDD) technique is used to combat ambiguity in the channel decoder. Computer simulations show that the system optimized by the proposed algorithm outperforms a conventional system based on a symmetric VQ codebook. The proposed algorithm also enables reliable image communication over noisy channels.

  • Locally Adaptive Perceptual Compression for Color Images

    Kuo-Cheng LIU  Chun-Hsien CHOU  

     
    PAPER-Image

      Vol:
    E91-A No:8
      Page(s):
    2213-2222

    The main idea in perceptual image compression is to remove the perceptual redundancy so that images are represented at the lowest possible bit rate without perceivable distortion. A certain amount of perceptual redundancy is inherent in a color image, since human eyes are not perfect sensors for discriminating small differences in color signals; exploiting this redundancy effectively helps improve the coding efficiency of color image compression. In this paper, a locally adaptive perceptual compression scheme for color images is proposed. The scheme is based on an adaptive quantizer that compresses color images with nearly lossless visual quality at a low bit rate. An effective way to achieve nearly lossless visual quality is to shape the quantization error into a part of the perceptual redundancy, that is, to control the adaptive quantization stage by the perceptual redundancy of the color image. The perceptual redundancy, in the form of a noise detection threshold associated with each coefficient in each subband of the three color components, is derived from perceptually indistinguishable regions of color stimuli in the uniform color space and from various masking effects of human visual perception. The quantizer step size for each target coefficient is adaptively adjusted by the associated noise detection threshold so that the resulting quantization error is not perceivable. Simulation results show that the proposed scheme with adaptive coefficient-wise quantization outperforms band-wise quantization, and that nearly lossless visual quality of the reconstructed image is achieved at lower entropy.
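The core step, tying the quantizer step size to the noise detection threshold so the error stays below it, can be sketched as follows; the uniform quantizer and the threshold value are illustrative assumptions, not the paper's derivation:

```python
def perceptual_quantize(coeff, jnd):
    """Quantize with a step tied to the noise detection threshold.

    With step = 2 * jnd, the maximum quantization error is step / 2 = jnd,
    i.e. the error is shaped to stay within the perceptual redundancy.
    """
    step = 2.0 * jnd
    return round(coeff / step) * step

# Hypothetical subband coefficient and its noise detection threshold.
c, t = 12.7, 1.5
q = perceptual_quantize(c, t)
assert abs(q - c) <= t
```

Coefficients with large thresholds (strong masking) are quantized coarsely, which is where the entropy savings over band-wise quantization come from.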

  • Initial Codebook Algorithm of Vector Quantization

    ShanXue CHEN  FangWei LI  WeiLe ZHU  TianQi ZHANG  

     
    LETTER-Algorithm Theory

      Vol:
    E91-D No:8
      Page(s):
    2189-2191

    A simple and effective design for the initial codebook of vector quantization (VQ) is presented. In existing initial codebook algorithms, such as the random method, the initial codebook is strongly influenced by the selection of initial codewords and has difficulty matching the features of the training vectors. In the proposed method, the training vectors are sorted according to their norms, and the ordered vectors are partitioned into N groups, where N is the size of the codebook. The initial codewords are obtained by calculating the centroid of each group. This initialization method performs robustly and can be combined with a VQ algorithm to further improve the quality of the codebook.
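The initialization described here is concrete enough to sketch: sort by norm, partition into N consecutive groups, and average each group. A plain-Python illustration (the training data is hypothetical):

```python
import math

def initial_codebook(training, N):
    """Sort training vectors by Euclidean norm, partition the ordered
    list into N consecutive groups, and use each group's centroid as
    an initial codeword."""
    ordered = sorted(training, key=lambda v: math.sqrt(sum(x * x for x in v)))
    size = len(ordered) // N
    codebook = []
    for i in range(N):
        # The last group absorbs any remainder vectors.
        group = ordered[i * size:(i + 1) * size] if i < N - 1 else ordered[i * size:]
        dim = len(group[0])
        codebook.append([sum(v[d] for v in group) / len(group) for d in range(dim)])
    return codebook

# Hypothetical 2-D training data.
vectors = [[0.5, 0.5], [1, 0], [0, 2], [3, 3], [5, 1], [4, 4]]
codebook = initial_codebook(vectors, 3)
```

Because the groups follow the norm ordering, every codeword is anchored to a populated region of the training set, unlike random selection.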

  • Differential Energy Based Watermarking Algorithm Using Wavelet Tree Group Modulation (WTGM) and Human Visual System

    Min-Jen TSAI  Chang-Hsing SHEN  

     
    PAPER

      Vol:
    E91-A No:8
      Page(s):
    1961-1973

    Wavelet-tree-based watermarking algorithms use the wavelet coefficient energy difference for copyright protection and ownership verification. The WTQ (Wavelet Tree Quantization) algorithm is the representative energy-difference watermarking technique; however, cryptanalysis of WTQ has shown that the watermark embedded in the protected image can be removed successfully. In this paper, we present a novel differential-energy watermarking algorithm based on a wavelet tree group modulation structure, i.e., WTGM (Wavelet Tree Group Modulation). The wavelet coefficients of the host image are divided into disjoint super trees (each containing two sub-super trees). The watermark is embedded in the relatively high-frequency components using a group strategy such that the energies of the sub-super trees are close. The wavelet tree structure, sum-of-subsets partitioning, and positive/negative modulation effectively remedy the insecurity of the WTQ scheme. Integrating the HVS (Human Visual System) into WTGM yields a better visual quality of the watermarked image. The experimental results demonstrate the effectiveness of our algorithm in terms of robustness and imperceptibility.

  • Fast Searching Algorithm for Vector Quantization Based on Subvector Technique

    ShanXue CHEN  FangWei LI  WeiLe ZHU  TianQi ZHANG  

     
    PAPER-Image Processing and Video Processing

      Vol:
    E91-D No:7
      Page(s):
    2035-2040

    A fast algorithm to speed up the search process of vector quantization encoding is presented. Using the sum and the partial norms of a vector, several eliminating inequalities are constructed. First, the inequality based on the sum is used to determine the bounds of the candidate codeword search. Then, an inequality based on the subvector norm, together with another inequality combining the partial distance with the subvector norm, eliminates more unnecessary codewords without a full distance calculation. The proposed algorithm rejects many codewords while introducing no extra distortion compared with the conventional full search algorithm. Experimental results show that the proposed algorithm outperforms existing state-of-the-art search algorithms in reducing both the computational complexity and the number of distortion calculations.
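The sum-based bound used by this family of algorithms follows from the Cauchy-Schwarz inequality: (sum(x) - sum(c))^2 / k lower-bounds the squared distance d(x, c), so codewords failing the test can be skipped without computing d. A simplified sketch combining this with partial-distance elimination (the paper's subvector-norm tests are not reproduced):

```python
def nearest_codeword(x, codebook):
    """Exact nearest-codeword search with two cheap rejection tests:
    the sum bound (sum(x) - sum(c))**2 / k <= d(x, c), and
    partial-distance elimination while accumulating d."""
    k = len(x)
    sx = sum(x)
    best, best_d = 0, float("inf")
    for i, c in enumerate(codebook):
        # Sum test: a lower bound on the true squared distance.
        if (sx - sum(c)) ** 2 / k >= best_d:
            continue
        d = 0.0
        for a, b in zip(x, c):
            d += (a - b) ** 2
            if d >= best_d:      # partial-distance elimination
                break
        if d < best_d:
            best, best_d = i, d
    return best

cb = [[0, 0], [10, 10], [3, 4]]
assert nearest_codeword([2.9, 4.2], cb) == 2
```

Both tests are conservative, so the result is identical to full search; only the number of multiply-accumulate operations drops.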

  • Quantization Parameter Refinement in H.264 through ρ-Domain Rate Model

    Yutao DONG  Xiangzhong FANG  Jing YANG  

     
    LETTER-Speech and Hearing

      Vol:
    E91-D No:6
      Page(s):
    1834-1837

    This letter proposes a new algorithm for refining the quantization parameter in H.264 real-time encoding. In H.264 encoding, the quantization parameter computed from the quadratic rate model is not accurate enough to meet the target bit rate. To bring the actual encoded bit rate closer to the target, the ρ-domain rate model is introduced into the proposed quantization parameter refinement algorithm. Simulation results show that the proposed algorithm achieves an obvious gain in PSNR and a more stable encoded bit rate compared with Jiang's algorithm.
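The ρ-domain model referred to here predicts the rate as a linear function R = θ(1 − ρ) of the fraction ρ of transform coefficients quantized to zero. A toy sketch of step-size selection under this model (the dead-zone zeroing rule, θ, and the candidate steps are illustrative assumptions, not the letter's H.264 refinement):

```python
def rho(coeffs, qstep):
    """Fraction of coefficients a dead-zone quantizer of step qstep
    maps to zero (an illustrative simplification)."""
    return sum(abs(c) < qstep for c in coeffs) / len(coeffs)

def refine_qstep(coeffs, target_bits, theta, candidates):
    """Choose the candidate step whose predicted rate under the linear
    rho-domain model, R = theta * (1 - rho), is closest to the target."""
    return min(candidates,
               key=lambda q: abs(theta * (1.0 - rho(coeffs, q)) - target_bits))

coeffs = [0.2, 1.5, -3.0, 0.4, 6.0, -0.1, 2.2, 0.9]
best = refine_qstep(coeffs, target_bits=300, theta=1000, candidates=[0.5, 1.0, 2.0, 4.0])
```

Because ρ is monotone in the step size and observable before entropy coding, the linear model tracks the actual rate more tightly than the quadratic model it refines.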

  • Efficient Fingercode Classification

    Hong-Wei SUN  Kwok-Yan LAM  Dieter GOLLMANN  Siu-Leung CHUNG  Jian-Bin LI  Jia-Guang SUN  

     
    INVITED PAPER

      Vol:
    E91-D No:5
      Page(s):
    1252-1260

    In this paper, we present an efficient fingerprint classification algorithm, an essential component in many critical security application systems, e.g., systems in the e-government and e-finance domains. Fingerprint identification is one of the most important security requirements in homeland security systems such as personnel screening and anti-money laundering. The problem of fingerprint identification involves searching (matching) the fingerprint of a person against the fingerprints of all registered persons. To enhance performance and reliability, a common approach is to reduce the search space by first classifying the fingerprints and then performing the search within the respective class. Jain et al. proposed a fingerprint classification algorithm based on a two-stage classifier, which uses a K-nearest neighbor classifier in its first stage. That algorithm is based on the fingercode representation, an encoding of fingerprints that has been demonstrated to be an effective fingerprint biometric scheme because of its ability to capture both local and global details in a fingerprint image. We enhance this approach by improving the efficiency of the K-nearest neighbor classifier for fingercode-based fingerprint classification. Our research first investigates various fast search algorithms in vector quantization (VQ) and their potential application to fingerprint classification, and then proposes two efficient algorithms based on the pyramid-based search algorithms in VQ. Experimental results on DB1 of FVC 2004 demonstrate that our algorithms outperform both the full search algorithm and the original pyramid-based search algorithms in computational efficiency without sacrificing accuracy.

  • Antenna Selective Algebraic STBC Using Error Codebook on Correlated Fading Channels

    Rong RAN  JangHoon YANG  DongKu KIM  

     
    LETTER-Wireless Communication Technologies

      Vol:
    E91-B No:5
      Page(s):
    1653-1656

    In this letter, a simple but effective antenna selection algorithm for orthogonal space-time block codes with a linear complex precoder (OSTBC-LCP) is proposed and compared with two conventional algorithms in temporally and spatially correlated fading channels. The proposed algorithm, which minimizes the pairwise error probability (MinPEP) using an error codebook (EC) constructed by error vector quantization, is shown to provide nearly the same performance as MinPEP based on all possible error vectors, while keeping the complexity close to that of the antenna selection algorithm based on the maximum power criterion (Maxpower).

  • Designing Algebraic Trellis Code as a New Fixed Codebook Module for ACELP Coder

    Jakyong JUN  Sangwon KANG  Thomas R. FISCHER  

     
    LETTER-Multimedia Systems for Communications

      Vol:
    E91-B No:3
      Page(s):
    972-974

    In this paper, a block-constrained trellis coded quantization (BC-TCQ) algorithm is combined with an algebraic codebook to produce an algebraic trellis code (ATC) to be used in ACELP coding. In ATC, the set of allowed algebraic codebook pulse positions is expanded, and the expanded set is partitioned into subsets of pulse positions; the trellis branches are labeled with these subsets. The list Viterbi algorithm (LVA) is used to select the excitation codevector. The combination of an ATC codebook and LVA trellis search algorithm is denoted as an ATC-LVA block code. The ATC-LVA block code is used as the fixed codebook of the AMR-WB 8.85 kbps mode, reducing complexity compared to the conventional algebraic codebook.

  • A Subsampling-Based Digital Image Watermarking Scheme Resistant to Permutation Attack

    Chuang LIN  Jeng-Shyang PAN  Chia-An HUANG  

     
    LETTER-Image

      Vol:
    E91-A No:3
      Page(s):
    911-915

    The letter proposes a novel subsampling-based digital image watermarking scheme that resists the permutation attack. Subsampling-based watermarking schemes have drawn great attention in recent years for their convenience and effectiveness, but traditional subsampling-based schemes are very vulnerable to the permutation attack. In this letter, the watermark information is embedded in the average values of the 1-level DWT coefficients to resist the permutation attack. The embedding itself is achieved by a quantization-based method. Experimental results show that the proposed scheme resists not only the permutation attack but also common image processing attacks.

  • Adaptive Pre-Processing Algorithm to Improve Coding Performance of Seriously Degraded Video Sequences for H.264 Video Coder

    Won-Seon SONG  Min-Cheol HONG  

     
    LETTER-Image

      Vol:
    E91-A No:2
      Page(s):
    713-717

    This paper introduces an adaptive, low-complexity pre-processing filter to improve the coding performance of video sequences seriously degraded by additive noise. The additive noise decreases coding performance because of its high-frequency components. By incorporating local statistics and the quantization parameter into the filtering process, the spurious noise is significantly attenuated and coding efficiency is improved for a given quantization step size. To reduce the complexity of the pre-processing filter, simplified local statistics and quantization parameters are introduced. Simulation results show the capability of the proposed algorithm.

  • Joint Blind Super-Resolution and Shadow Removing

    Jianping QIAO  Ju LIU  Yen-Wei CHEN  

     
    PAPER-Image Processing and Video Processing

      Vol:
    E90-D No:12
      Page(s):
    2060-2069

    Most learning-based super-resolution methods neglect the illumination problem. In this paper we propose a novel method that combines blind single-frame super-resolution and shadow removal into a single operation. First, from the pattern recognition viewpoint, blur identification is treated as a classification problem. We describe three methods, based respectively on Vector Quantization (VQ), Hidden Markov Models (HMM), and Support Vector Machines (SVM), for identifying the blur parameter of the acquisition system from the compressed/uncompressed low-resolution image. Second, after blur identification, a super-resolution image is reconstructed by a learning-based method. In this method, a Logarithmic-wavelet transform is defined for illumination-free feature extraction. An initial estimate is then obtained based on the assumption that small patches in low-resolution space and patches in high-resolution space share a similar local manifold structure. The unknown high-resolution image is reconstructed by projecting the intermediate result onto general reconstruction constraints. The proposed method simultaneously achieves blind single-frame super-resolution and image enhancement, in particular shadow removal. Experimental results demonstrate the effectiveness and robustness of our method.

  • Hiding Secret Information Using Adaptive Side-Match VQ

    Chin-Chen CHANG  Wen-Chuan WU  Chih-Chiang TSOU  

     
    PAPER-Application Information Security

      Vol:
    E90-D No:10
      Page(s):
    1678-1686

    The major application of digital data hiding techniques is to deliver confidential data secretly over public but unreliable computer networks. Most existing data hiding schemes, however, exploit the raw data of cover images to perform secret communications. In this paper, a novel data hiding scheme is presented that manipulates images based on side-match vector quantization (SMVQ) compression. The proposed scheme provides adaptive alternatives for modulating the quantized indices in the compressed domain so that a considerable quantity of secret data can be artfully embedded. Experimental results demonstrate that the proposed scheme provides a larger payload capacity without noticeable distortion in comparison with schemes proposed in earlier works, while also delivering satisfactory compression performance.

  • A Statistical Approach to Error Compensation in Spectral Quantization

    Seung Ho CHOI  Hong Kook KIM  

     
    LETTER-Speech and Hearing

      Vol:
    E90-D No:9
      Page(s):
    1460-1464

    In this paper, we propose a statistical approach to improving the performance of spectral quantization in speech coders. The proposed techniques compensate for the distortion in a decoded line spectrum pair (LSP) vector based on a statistical mapping function between the decoded LSP vector and its corresponding original LSP vector. We first develop two codebook-based probabilistic matching (CBPM) methods by investigating the distribution of LSP vectors, and additionally propose an iterative procedure for the two CBPMs. The proposed techniques are then applied to the predictive vector quantizer (PVQ) used in the IS-641 speech coder. The experimental results show that, compared with the PVQ without compensation, the proposed techniques reduce the average spectral distortion by around 0.064 dB and reduce the percentage of outliers, resulting in transparent quality of spectral quantization. Finally, a comparison of speech quality using the perceptual evaluation of speech quality (PESQ) measure shows that the IS-641 speech coder employing the proposed techniques delivers better decoded speech quality than the standard IS-641 speech coder.

  • Reversible Data Hiding in the VQ-Compressed Domain

    Chin-Chen CHANG  Yung-Chen CHOU  Chih-Yang LIN  

     
    PAPER-Image Processing and Video Processing

      Vol:
    E90-D No:9
      Page(s):
    1422-1429

    Steganographic methods usually produce distortions in cover images due to the process of embedding secret bits. These distortions are hard to remove, and thus the cover image cannot be recovered. Although the distortions are always small, they are unacceptable for some sensitive applications. In this paper, we propose a reversible embedding scheme for VQ-compressed images, which allows the original cover image to be completely recovered after the extraction of the secret bits. The embedded payload comprises the secret bits plus the restoration information. To reduce the payload size, we utilize the spatial correlations in the image as the restoration information and compress them with a lossless compression method. In addition, an alternative codeword pairing method is proposed to improve the stego image quality and control the embedding capacity. Experimental results show that, compared with other schemes, the proposed method offers a highly efficient steganographic process, high image quality, and adaptive embedding capacity.

  • Predictive Trellis-Coded Quantization of the Cepstral Coefficients for the Distributed Speech Recognition

    Sangwon KANG  Joonseok LEE  

     
    LETTER-Multimedia Systems for Communications

      Vol:
    E90-B No:6
      Page(s):
    1570-1572

    In this paper, we propose a predictive block-constrained trellis-coded quantization (BC-TCQ) scheme to quantize cepstral coefficients for distributed speech recognition. For prediction of the cepstral coefficients, a first-order auto-regressive (AR) predictor is used, and BC-TCQ quantizes the prediction error signal effectively. The proposed quantizer is compared with the split vector quantizers used in the ETSI standard and is shown to lower both the cepstral distance and the bit rate.
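Closed-loop predictive quantization with a first-order AR predictor, as described here, keeps the reconstruction error equal to the residual quantization error. A minimal scalar sketch (BC-TCQ itself is not shown; a uniform quantizer stands in for it, and the signal and coefficient are hypothetical):

```python
def predictive_quantize(samples, a, step):
    """Quantize the prediction residual e[n] = x[n] - a * xq[n-1]
    and reconstruct inside the loop, as the decoder would."""
    prev = 0.0
    recon = []
    for x in samples:
        pred = a * prev
        e = x - pred
        eq = round(e / step) * step      # uniform residual quantizer
        prev = pred + eq                 # decoder-side reconstruction
        recon.append(prev)
    return recon

xs = [1.0, 1.1, 1.25, 1.2, 1.3]
ys = predictive_quantize(xs, a=0.9, step=0.1)
# Closed-loop DPCM: reconstruction error is bounded by step / 2.
assert all(abs(x - y) <= 0.05 + 1e-9 for x, y in zip(xs, ys))
```

Because the predictor runs on reconstructed values, quantization errors do not accumulate across frames, which is what makes the small residual codebook sufficient.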

  • Phase Delay Quantization Error Analysis at a Focal Plane for an Ultrasonic Annular Arrays Imaging System

    Jongtaek OH  

     
    LETTER-Ultrasonics

      Vol:
    E90-A No:5
      Page(s):
    1105-1106

    The phase delay quantization error, which impairs image resolution in an ultrasonic annular array imaging system, is analyzed, and a proper sampling rate for reducing system complexity is considered.

  • Required Number of Quantization Bits for CIE XYZ Signals Applied to Various Transforms in Digital Cinema Systems

    Junji SUZUKI  Isao FURUKAWA  

     
    PAPER-Image

      Vol:
    E90-A No:5
      Page(s):
    1072-1084

    To keep in step with the rapid progress of high-quality imaging systems, the Digital Cinema Initiative (DCI) has been issuing digital cinema standards that cover all processes from production to distribution and display. Various measurements are used to assess image quality, and among these, the required number of quantization bits is one of the most important factors in realizing the very high quality images needed for cinema. While DCI defined a bit depth of 12 bits by applying Barten's model to the luminance signal alone, actual cinema applications use color signals, so this value has an insufficient theoretical basis. This paper first investigates the required number of quantization bits through computer simulations in discrete 3-D space for color images defined by CIE XYZ signals. The required number of quantization bits is then formulated by a Taylor expansion in the continuous value region. As a result, we show that 13.04 bits, 11.38 bits, and 10.16 bits are necessary for intensity, density, and gamma-corrected signal quantization, respectively, for digital cinema applications. Since these results coincide with those from calculations in the discrete value region, the proposed analysis method drastically reduces the computer simulation time needed to obtain the required number of quantization bits for color signals.
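A greatly simplified flavor of the density (log-luminance) bit-depth calculation: with a constant contrast threshold (Weber fraction) w and dynamic range D, uniform quantization in the log domain needs about ln(D)/ln(1+w) levels. The numbers below are illustrative assumptions and do not reproduce the paper's Barten-model results:

```python
import math

def density_bits(dynamic_range, weber_fraction):
    """Bits needed when quantizing uniformly in log-luminance so each
    step stays below a constant contrast threshold."""
    levels = math.log(dynamic_range) / math.log(1.0 + weber_fraction)
    return math.ceil(math.log2(levels))

# Hypothetical figures: a 2000:1 range at a 0.3% contrast threshold.
bits = density_bits(2000.0, 0.003)
```

The paper's contribution is replacing a constant threshold with Barten's luminance-dependent model, evaluated over a 3-D color space rather than luminance alone.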

