Yong-Seok SEO Sanghyun JOO Ho-Youl JUNG
A new blind watermarking method based on quantization is proposed. The proposed scheme embeds the watermark in the lowest-frequency wavelet subband to achieve robustness. Experimental results demonstrate the robustness of the algorithm against compression and other image-processing attacks.
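The abstract does not spell out the embedding rule; below is a minimal sketch of one standard quantization-based approach, quantization index modulation (QIM) applied to the lowest-frequency subband, assuming the PyWavelets library and an illustrative step size `delta` (the paper's actual quantizer may differ):

```python
import numpy as np
import pywt  # PyWavelets

def embed_qim(image, bits, delta=8.0, wavelet="haar", level=3):
    """Embed watermark bits into the lowest (approximation) subband
    by QIM: each coefficient is snapped to one of two interleaved
    lattices, chosen by the bit. Assumes len(bits) <= subband size."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    cA = coeffs[0]                        # lowest-frequency subband
    flat = cA.ravel()
    for i, b in enumerate(bits):          # one coefficient per bit
        offset = (delta / 2.0) * b        # lattice shifted for bit 1
        flat[i] = np.round((flat[i] - offset) / delta) * delta + offset
    coeffs[0] = flat.reshape(cA.shape)
    return pywt.waverec2(coeffs, wavelet)

def extract_qim(image, n_bits, delta=8.0, wavelet="haar", level=3):
    """Blind extraction: decide each bit by the nearer lattice."""
    cA = pywt.wavedec2(image.astype(float), wavelet, level=level)[0]
    flat = cA.ravel()
    return [1 if abs(flat[i] % delta - delta / 2.0) < delta / 4.0 else 0
            for i in range(n_bits)]
```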
A decentralized estimation system usually contains a number of remotely located local sensors that pre-process an observed signal and convey the processed data to a fusion center, which makes the final estimate. The local sensors are linked to the fusion center by transmission channels. Because of potential communication constraints on these channels, the observations (or parameter estimates) are quantized at the peripheral sensors. Under the assumption of conditionally independent sensor data, this letter studies the problem of quantization design and bandwidth allocation among the channels linking the local sensors to the fusion center.
Zhe-Ming LU Dian-Guo XU Sheng-He SUN
This letter presents a fast codeword search algorithm based on the ordered Hadamard transform. Before encoding, the ordered Hadamard transform is performed offline on all codewords. During the encoding process, the ordered Hadamard transform is first performed on the input vector, and a new inequality based on characteristic values of the transformed vectors is then used to reject unlikely transformed codewords. Experimental results show that the algorithm outperforms several recently proposed algorithms at high vector dimensions, especially for high-detail images.
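The specific inequality is not reproduced in the abstract; the sketch below (Python with NumPy/SciPy, sequency reordering of the transform rows omitted) illustrates the general idea: an orthonormal Hadamard transform preserves Euclidean distances, so the squared gap between first transform coefficients alone is a cheap lower bound for rejecting codewords.

```python
import numpy as np
from scipy.linalg import hadamard  # dimension k must be a power of two

def transform_codebook(codebook):
    """Offline step: Hadamard-transform every codeword once."""
    k = codebook.shape[1]
    H = hadamard(k) / np.sqrt(k)       # orthonormal => distance-preserving
    return codebook @ H.T, H

def search(x, t_codebook, H):
    """Online step: transform the input, then reject any codeword whose
    first-coefficient gap already exceeds the best distance so far."""
    tx = H @ x
    best, d_min = -1, np.inf
    for i, tc in enumerate(t_codebook):
        if (tx[0] - tc[0]) ** 2 >= d_min:   # lower-bound rejection test
            continue
        d = np.sum((tx - tc) ** 2)          # full distance only if needed
        if d < d_min:
            best, d_min = i, d
    return best, d_min
```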
In this paper, we propose an efficient requantization method for transcoding of MPEG video. Transcoding is the process of converting one compressed video format into another. We propose a simple and efficient transcoder based on requantization, in which MPEG-coded video at a high bit rate is converted into an MPEG bitstream at a lower bit rate. To reduce image-quality degradation, we exploit properties of the human visual system (HVS): image regions of high activity can be quantized coarsely without seriously degrading the perceived image quality. Experimental results show that the proposed method provides good performance.
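The abstract does not give the activity measure; the following sketch is a rough illustration of the idea rather than the paper's method, coarsening the requantizer step according to a simple AC-energy activity proxy (the names `q_base` and `alpha` are assumptions):

```python
import numpy as np

def requantize_block(dct_block, q_base, alpha=0.5):
    """Requantize one 8x8 block of dequantized DCT coefficients,
    using a coarser step size for visually busy (high-activity) blocks,
    where the HVS masks quantization error."""
    ac = dct_block.copy()
    ac[0, 0] = 0.0                         # ignore the DC coefficient
    activity = np.mean(np.abs(ac))         # crude activity proxy
    scale = 1.0 + alpha * activity / (1.0 + activity)
    q = q_base * scale                     # larger step when activity is high
    return np.round(dct_block / q) * q     # requantized, then reconstructed
```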
Hiroyuki TAKIZAWA Taira NAKAJIMA Kentaro SANO Hiroaki KOBAYASHI Tadao NAKAMURA
The equidistortion principle [1] has recently been proposed as a basic principle for the design of an optimal vector quantization (VQ) codebook. The equidistortion principle adjusts all codebook vectors such that they contribute equally to the quantization error. This paper introduces a novel VQ codebook design algorithm based on the equidistortion principle. The proposed algorithm is a variant of the law-of-the-jungle algorithm (LOJ), which duplicates useful codebook vectors and removes useless ones. Owing to the LOJ mechanism, the proposed algorithm can establish the equidistortion condition without wasting learning steps. This is particularly effective in preventing the performance degradation that occurs when the initial codebook vectors are poorly placed for finding an optimal codebook. Therefore, even under improper initialization, the proposed algorithm can minimize the quantization error on the basis of the equidistortion principle. The performance of the proposed algorithm is discussed through experimental results.
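As a rough illustration of the duplicate-and-remove mechanism (not the paper's exact update schedule, which interleaves such steps with competitive learning), one LOJ-style adjustment might look like this:

```python
import numpy as np

def loj_step(codebook, data, eps=1e-3, rng=np.random.default_rng()):
    """One law-of-the-jungle adjustment: the codevector whose Voronoi
    cell carries the largest distortion is duplicated (with a small
    perturbation), overwriting the codevector with the smallest
    contribution, pushing all cells toward equal distortion."""
    d2 = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)             # nearest codevector per sample
    partial = np.array([d2[nearest == i, i].sum()
                        for i in range(len(codebook))])
    worst, best = partial.argmax(), partial.argmin()
    codebook[best] = (codebook[worst]
                      + eps * rng.standard_normal(codebook.shape[1]))
    return codebook
```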
Hyun Joo SO Young Jun JUNG Jong Seog KOH Nam Chul KIM
In this paper, we analyze wavelet-based coding in a rate-distortion (R-D) sense using Laplacian and Markov models, and we verify the results against the performance of the typical embedded coders, EZW and SPIHT, and a non-embedded coder implemented here. The Laplacian model represents the probability density function (pdf) of the wavelet coefficients, and the Markov model represents the statistical dependency within and among subbands. The models allow us to easily understand the behavior of the thresholding-and-quantization part and the lossless-coding part, and to relate the embedded coders to the non-embedded coder, which is the main point of this paper. The analytic results are shown to coincide well with the actual coding results.
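For reference, the zero-mean Laplacian pdf commonly assumed for wavelet coefficients has the standard form (the paper's exact parameterization may differ):

```latex
p(x) = \frac{\lambda}{2}\, e^{-\lambda |x|},
\qquad \lambda = \frac{\sqrt{2}}{\sigma},
```

where $\sigma^2$ is the variance of the subband coefficients; the Markov component additionally models the dependence of a coefficient's statistics on its neighbors within a subband and its parents across subbands.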
Shinfeng D. LIN Shih-Chieh SHIE Kuo-Yuan LEE
A wavelet-based vector quantization scheme for image compression is introduced here. The proposed scheme achieves better compression efficiency in three ways: (1) utilizing the correlation among wavelet coefficients; (2) placing different emphasis on wavelet coefficients at different levels; and (3) preserving the most important information of the image. Simulation results show that this technique outperforms the recent SMVQ-ABC [1] and WTC-NIVQ [2] techniques.
The wavelet transform (WT) has recently emerged as a powerful tool for image compression. In this paper, a new image compression technique combining the genetic algorithm (GA) and a grey-based competitive learning network (GCLN) in the wavelet transform domain is proposed. In the GCLN, grey theory is applied to a two-layer modified competitive learning network in order to generate an optimal solution for VQ. Grey relational analysis is used to measure the degree of similarity between training vectors and codevectors. The GA is used to optimize a specified objective function related to vector quantizer design. The physical processes of competition, selection, and reproduction operating on populations are combined with the GCLN to produce a superior genetic grey-based competitive learning network (GGCLN) for codebook design in image compression. The experimental results show that a promising codebook can be obtained using the proposed GGCLN, with or without wavelet decomposition.
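The abstract does not reproduce the similarity measure; in standard grey relational analysis, the relational coefficient between a reference sequence $x_0$ and a compared sequence $x_i$ at index $k$ takes the form (the GCLN's exact variant may differ):

```latex
\gamma\bigl(x_0(k), x_i(k)\bigr)
  = \frac{\Delta_{\min} + \zeta\,\Delta_{\max}}
         {\Delta_{0i}(k) + \zeta\,\Delta_{\max}},
\qquad \Delta_{0i}(k) = \lvert x_0(k) - x_i(k)\rvert,
```

where $\Delta_{\min}$ and $\Delta_{\max}$ are the smallest and largest $\Delta_{0i}(k)$ over all $i$ and $k$, and $\zeta\in(0,1]$ is a distinguishing coefficient.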
A fast nearest-neighbor codeword search algorithm for vector quantization (VQ) is introduced. The algorithm uses three significant features of a vector, namely the mean, the variance, and the norm, to reduce the search space. It saves a great deal of computation time while requiring no more memory than the equal-average equal-variance codeword search algorithm. With two extra elimination criteria based on the mean and the variance, the proposed algorithm is also more efficient than the so-called norm-ordered search algorithm. Experimental results confirm the effectiveness of the proposed algorithm.
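The abstract does not state the criteria explicitly; the inequalities that typically underlie such mean-variance-norm methods are, for an input $\mathbf{x}$ and codeword $\mathbf{c}_i$ of dimension $k$ (the paper's exact formulation may be stated differently):

```latex
\lVert \mathbf{x}-\mathbf{c}_i \rVert^2
  \;\ge\; k\,(m_x - m_i)^2 + (v_x - v_i)^2,
\qquad
\lVert \mathbf{x}-\mathbf{c}_i \rVert^2
  \;\ge\; \bigl(\lVert\mathbf{x}\rVert - \lVert\mathbf{c}_i\rVert\bigr)^2,
```

where $m$ denotes the mean of a vector and $v = \lVert \mathbf{x} - m_x\mathbf{1}\rVert$ its root-variance; a codeword is rejected whenever either right-hand side already meets or exceeds the current minimum distortion, so the full distance is never computed for it.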
Newaz M. S. RAHIM Takashi YAHAGI
Finite-state vector quantization (FSVQ) is a well-known block encoding technique for digital image compression in low bit rate applications. In this paper, an improved feature-map finite-state vector quantization (IFMFSVQ) algorithm using three-sided side-match prediction is proposed for image coding. The new three-sided side-match improves the prediction quality of input blocks. Precoded blocks are used to alleviate the error propagation of side-match. An edge threshold is used to classify the blocks into nonedge and edge blocks to improve bit-rate performance. Furthermore, an adaptive variant is also derived. Experimental results reveal that the new IFMFSVQ significantly reduces the bit rate while maintaining the same subjective quality, as compared to the basic FMFSVQ method.
Tatsuya YOSHIDA Shirmila MOHOTTALA Masataka KAGESAWA Katsushi IKEUCHI
This paper describes our vehicle classification system, which is based on local-feature configuration. We have already demonstrated that the system works very well for vehicle recognition in outdoor environments. The algorithm is based on our previous work, a generalization of the eigen-window method, and has three advantages: (1) it can detect vehicles even when they are partially occluded; (2) it can detect vehicles even when they are translated in the image because they veer out of their lanes; and (3) it does not require segmentation of vehicle areas from input images. However, the method does have a drawback: because it is view-based, the system requires model images of the target vehicles, and collecting real images of all target vehicles is generally a time-consuming and difficult task. To ease this task, we apply our system to computer graphics (CG) models and use them to recognize vehicles in real images. Through outdoor experiments, we have confirmed that, for our system, using CG models is more effective than collecting real images of vehicles. Experimental results show that a system trained on CG models can recognize vehicles in real images, and confirm that our system can classify vehicles.
Kwang-Deok SEO Kook-Yeol YOO Jae-Kyoon KIM
In this paper, we propose an efficient requantization method for INTRA frames in MPEG-1/MPEG-4 transcoding. The quantizer for an MPEG-1 INTRA block usually uses a quantization weighting matrix, while the quantizer of the MPEG-4 simple profile does not. As a result, the quantization step sizes of the two quantizers may differ even for the same quantization parameter. Owing to this mismatch in quantization step size, a transcoded MPEG-4 sequence can suffer serious quality degradation, and the number of bits produced by transcoding increases relative to the original MPEG-1 video sequence. To solve these problems, an efficient method is proposed for identifying a near-optimum reconstruction level in the transcoder. In addition, a Laplacian-model-based estimation of the probability density function (pdf) of the original DCT coefficients from the input MPEG-1 bitstream is presented, which is required for the proposed requantization. Experimental results show that the proposed method provides a 0.3-0.7 dB improvement in PSNR over the conventional method while also reducing the bit rate by 3-7%.
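For reference, under a Laplacian model the squared-error-optimal reconstruction level for a quantization bin $[a,b)$ with $0 \le a < b$ is the centroid of the pdf over the bin, which evaluates to (this is the textbook centroid condition, not necessarily the paper's exact reconstruction rule):

```latex
\hat{x}
 = \frac{\int_a^b x\,e^{-\lambda x}\,dx}{\int_a^b e^{-\lambda x}\,dx}
 = \frac{1}{\lambda}
   + \frac{a\,e^{-\lambda a} - b\,e^{-\lambda b}}
          {e^{-\lambda a} - e^{-\lambda b}},
```

with $\lambda$ estimated from the DCT coefficients carried in the input bitstream, and the negative half-axis handled symmetrically.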
Zhe-Ming LU Bian YANG Sheng-He SUN
Vector quantization (VQ) is an attractive image compression technique. VQ utilizes the high correlation between neighboring pixels within a block but disregards the high correlation between adjacent blocks. Unlike VQ, side-match VQ (SMVQ) exploits the codeword information of two encoded adjacent blocks, the upper and left blocks, to encode the current input vector. However, SMVQ is a fixed-bit-rate compression technique and does not make full use of edge characteristics to predict the input vector. Classified side-match vector quantization (CSMVQ) is an effective image compression technique with a low bit rate and relatively high reconstruction quality. It exploits a block classifier that decides which class the input vector belongs to using the variances of the neighboring blocks' codewords. As an alternative, this paper proposes three algorithms that use the gradient values of the neighboring blocks' codewords to predict the input block. The first employs a basic gradient-based classifier similar to that of CSMVQ. To achieve lower bit rates, the second exploits a refined two-level classifier structure. To reduce the encoding time further, the third employs a more efficient classifier, in which adaptive class codebooks are defined within a gradient-ordered master codebook according to the various prediction results. Experimental results prove the effectiveness of the proposed algorithms.
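As a minimal sketch of the gradient-based classification idea (the threshold, class names, and choice of border pixels below are illustrative assumptions, not the paper's design):

```python
import numpy as np

def gradient_class(upper_cw, left_cw, threshold=10.0):
    """Classify the current block from the gradients measured on the
    adjacent borders of its already-encoded neighbors' codewords
    (here assumed to be 4x4 arrays), to select a class codebook."""
    gh = np.abs(np.diff(upper_cw[-1, :])).mean()  # horizontal gradient
    gv = np.abs(np.diff(left_cw[:, -1])).mean()   # vertical gradient
    if gh < threshold and gv < threshold:
        return "smooth"
    # a strong horizontal gradient suggests a vertical edge, and vice versa
    return "vertical-edge" if gh >= gv else "horizontal-edge"
```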
Michiharu NIIMI Richard O. EASON Hideki NODA Eiji KAWAGUCHI
In previous work we proposed a steganographic technique for gray-scale images called BPCS-Steganography. We have also applied this technique to full-color images by decomposing the image into its three color component images and treating each as a gray-scale image. This paper proposes a method for applying BPCS-Steganography to palette-based images. In palette-based images, the image data can be decomposed into color component images similar to those of full-color images, and we can then embed into one or more of these component images. However, even if only one of the color component images is used for embedding, the number of colors in the palette after embedding can exceed the maximum allowed, so color quantization is needed to represent the image data in palette-based format. We cannot change the pixel values of the color component image that contains the embedded information; we can only change the pixel values of the other color component images. We assume that the degradation of the color component image with information embedded is smaller than that of the color component images used for color reduction. We therefore embed the secret information into the G component image, because the human visual system is more sensitive to changes in the luminance of a color, and G makes the largest contribution to luminance of the three color components. To reduce the number of colors, the R and B component images are then changed in a way that minimizes the squared error.
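The core of BPCS-Steganography is the black-and-white border complexity that decides whether a bit-plane block is noisy enough to carry data; a minimal sketch follows (the usable-block threshold is an assumption, though values around 0.3 are commonly cited):

```python
import numpy as np

def bpcs_complexity(block):
    """Border complexity of a binary bit-plane block: the number of
    adjacent 0/1 transitions divided by the maximum possible for the
    block size. BPCS embeds only in blocks above a complexity threshold,
    where replacement by message bits is visually imperceptible."""
    h = np.sum(block[:, 1:] != block[:, :-1])   # horizontal transitions
    v = np.sum(block[1:, :] != block[:-1, :])   # vertical transitions
    rows, cols = block.shape
    max_t = rows * (cols - 1) + cols * (rows - 1)
    return (h + v) / max_t
```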
A quasi-periodic signal is a periodic signal with period and amplitude variations. Several physiological signals, including the electrocardiogram (ECG), can be treated as quasi-periodic. Vector quantization (VQ) is a valuable and universal tool for signal compression. However, compressing quasi-periodic signals using VQ presents two problems. First, a pre-trained codebook adapts poorly to signal variations, so the quality of the reconstructed signals cannot be controlled. Second, the periodicity of the signal causes data redundancy in the codebook, where many codevectors are highly correlated. Both problems are solved by the proposed codebook-replenishment VQ (CRVQ) scheme based on a bar-shaped (BS) codebook structure. In the CRVQ, codevectors can be updated online according to the signal variations, and the quality of the reconstructed signals can be specified. With the BS codebook structure, the codebook redundancy is reduced significantly and considerable codebook storage space is saved; moreover, variable-dimension (VD) codevectors can be used to minimize the coding bit rate subject to a distortion constraint. The theoretical rationale and an implementation scheme for the VD-CRVQ are given. ECG data from the MIT-BIH arrhythmia database are tested, and the results are substantially better than those of other VQ compression methods.
Hsiang-Cheh HUANG Feng-Hsing WANG Jeng-Shyang PAN
New methods for digital image watermarking based on the characteristics of vector quantization (VQ) are proposed. In contrast with conventional watermark embedding algorithms, which embed only one watermark at a time into the original source, we present an algorithm that embeds multiple watermarks for copyright protection. The embedding and extraction processes are efficient to implement with conventional VQ techniques, and they can be performed in parallel to shorten the processing time. After embedding, the embedder outputs one watermarked reconstruction image and several secret keys associated with the embedded watermarks. These secret keys are then registered with a third party to preserve the ownership of the original source and to prevent attackers from inserting counterfeit watermarks. Simulation results show that in the absence of attacks the embedded watermarks are extracted perfectly, and that under the intentional attacks in our simulations all the watermarks survive to protect the copyrights. We therefore claim the robustness, usefulness, and ease of implementation of our algorithm.
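The abstract does not detail the embedding rule; one common key-based VQ-domain approach, sketched below under assumed names (`seed`, and the index LSB as the polarity bit), derives each secret key by XOR-ing the watermark bits with polarity bits computed from the VQ indices, so multiple watermarks simply yield multiple independent keys:

```python
import numpy as np

def make_key(indices, watermark_bits, seed):
    """Derive one secret key from the VQ index map: each watermark bit
    is XOR-ed with a polarity bit of a pseudo-randomly chosen index.
    The reconstructed image itself is not modified further."""
    rng = np.random.default_rng(seed)
    pos = rng.choice(indices.size, size=len(watermark_bits), replace=False)
    polarity = indices.ravel()[pos] & 1           # LSB of each chosen index
    return np.bitwise_xor(np.asarray(watermark_bits), polarity)

def recover_watermark(indices, key, seed):
    """Re-derive the polarity bits (e.g., from a re-encoded, possibly
    attacked image) and XOR with the registered key."""
    rng = np.random.default_rng(seed)
    pos = rng.choice(indices.size, size=len(key), replace=False)
    polarity = indices.ravel()[pos] & 1
    return np.bitwise_xor(key, polarity)
```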
Miwa MUTOH Hiroyuki FUKUYAMA Toshihiro ITOH Takatomo ENOKI Tsugumichi SHIBATA
A novel delta-sigma modulator that utilizes a resonant-tunneling diode (RTD) quantizer is proposed, and its operation is investigated by HSPICE simulations. In order to eliminate the signal-to-noise-and-distortion ratio (SINAD) degradation caused by the poor isolation of a single-stage quantizer (1SQ), a three-stage quantizer (3SQ), which consists of three cascaded RTD quantizers, is introduced. At a sampling rate of 10 Gsps (samples per second) and a signal bandwidth of 40 MHz (an oversampling ratio of 128), the modulator achieves a SINAD of 56 dB, which corresponds to an effective number of bits of 9.3.
In this paper, we discuss digital watermarking techniques based on modifying the spectral coefficients of an image, classified into quantization-based and correlation-based techniques. We first present a model of the watermark embedding and extraction processes and examine the robustness of the watermarking system against common image processing. Based on the results, we clarify why detection errors occur in the watermark extraction process and give a method for evaluating the performance of the watermarking system. We then study an improvement of the watermark extraction process using the deconvolution technique and present some concluding remarks in the last section.
Despite the enormous power of present-day computers, digital systems still cannot respond to real-world events in real time. Biological systems, however, although built with very slow chemical transistors, are very fast at tasks such as seeing, recognizing, and taking immediate action. This paper discusses how we can build real-time intelligent systems directly on silicon. An intelligent VLSI system inspired by a psychological brain model is proposed. The system stores past experience in a vast on-chip memory and recalls the event that is the maximum-likelihood match to the current input, based on the associative-processor architecture. Although the system can be implemented in a CMOS digital technology, we propose here to implement it using circuits that operate on the analog/digital-merged decision-making principle. Low-level processing is done in the analog domain in a fully parallel manner and is immediately followed by a binary decision that yields answers in digital form. Such a scheme is very advantageous for achieving high-throughput computation under the limited memory and computational resources usually encountered in mobile applications. Hardware-friendly algorithms have been developed for real-time image recognition using the associative-processor architecture, and some experimental results are demonstrated.
Chao-Tang YU Pramod K. VARSHNEY
In this letter, sampling and quantizer design for the Gaussian detection problem are considered. A constraint on the transmission rate from the remote sensor to the optimal discrete detector is assumed. The trade-off between sampling rate and the number of quantization levels is studied and illustrated by means of an example.