Jinfeng GAO Bilan ZHU Masaki NAKAGAWA
The paper describes how a robust and compact on-line handwritten Japanese text recognizer was developed by compressing each component of an integrated text recognition system, including an SVM classifier to evaluate segmentation points, a combined on-line/off-line character recognizer, a linguistic context processor, and a geometric context evaluation module, so that it can be deployed on hand-held devices. Selecting an elastic-matching-based on-line recognizer and compressing MQDF2 via a combination of LDA, vector quantization and data type transformation have contributed to building a remarkably small yet robust recognizer. The compact text recognizer, covering 7,097 character classes, requires only about 15 MB of memory to maintain 93.11% accuracy on horizontal text lines extracted from the TUAT Kondate database. Compared with the original full-scale Japanese text recognizer, the memory size is reduced from 64.1 MB to 14.9 MB while the accuracy loss is only 0.5%, from 93.6% to 93.11%. The method is scalable, so even systems of less than 11 MB or less than 6 MB still retain 92.80% and 90.02% accuracy, respectively.
Takema SATOH Kazuyoshi ITOH Tsuyoshi KONISHI
We report a trial of 100-GS/s optical quantization with 5-bit resolution using soliton self-frequency shift (SSFS) and spectral compression. We confirm in simulation that 100-GS/s 5-bit optical quantization can quantize a 5.0-GHz sinusoidal electrical signal. To experimentally verify the feasibility of 100-GS/s 5-bit optical quantization, we execute 5-bit optical quantization using two sampled signals at 10-ps intervals.
Chi-Jung HUANG Shaw-Hwa HWANG Cheng-Yu YEH
This study proposes an improvement to the Triangular Inequality Elimination (TIE) algorithm for vector quantization (VQ). The proposed approach uses recursive and intersection (RI) rules to complement and enhance the TIE algorithm. The recursive rule changes reference codewords dynamically and produces the smallest candidate group. The intersection rule removes redundant codewords from these candidate groups. The RI-TIE approach avoids over-reliance on the continuity of the input signal. This study tests the contribution of the RI rules using the VQ-based G.729 standard LSP encoder and some classic images. Results show that the RI rules perform excellently within the TIE algorithm.
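As background, the elimination step that the RI rules build on can be sketched generically: with a precomputed inter-codeword distance table, once a candidate at distance d_min is known, the triangle inequality rules out any codeword farther than 2·d_min from the current reference. This is a minimal illustrative TIE search in Python, not the paper's RI-TIE; the codebook, the use of the running best codeword as the reference, and the distance table are assumptions.

```python
import numpy as np

def tie_search(x, codebook, inter_dist):
    """Nearest-codeword search with triangular inequality elimination.

    inter_dist[i, j] is the precomputed distance between codewords i and j.
    Once a candidate with distance d_min is known, any codeword c_j with
    inter_dist[best, j] >= 2 * d_min cannot be closer to x, because
    d(x, c_j) >= d(c_best, c_j) - d(x, c_best) >= d_min.
    """
    best = 0
    d_min = np.linalg.norm(x - codebook[0])
    for j in range(1, len(codebook)):
        if inter_dist[best, j] >= 2.0 * d_min:
            continue  # eliminated without computing d(x, c_j)
        d = np.linalg.norm(x - codebook[j])
        if d < d_min:
            best, d_min = j, d  # the reference codeword changes dynamically
    return best, d_min

rng = np.random.default_rng(0)
codebook = rng.standard_normal((64, 8))
inter_dist = np.linalg.norm(codebook[:, None] - codebook[None, :], axis=-1)
x = rng.standard_normal(8)
best, d_min = tie_search(x, codebook, inter_dist)
brute = int(np.argmin(np.linalg.norm(codebook - x, axis=1)))
```

The elimination never changes the search result, only skips distance computations; `best` agrees with an exhaustive search.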
In this letter, the problem of feature quantization in robust hashing is studied from the perspective of approximate nearest neighbor (ANN) search. We model the features of perceptually identical media as ANNs in the feature set and show that ANN indexing can well meet the robustness and discrimination requirements of feature quantization. A feature quantization algorithm is then developed by exploiting random-projection-based ANN indexing. For the performance study, the distortion tolerance and randomness of the quantizer are analytically derived. Experimental results demonstrate that the proposed work is superior to state-of-the-art quantizers, and its random nature can provide robust hashing with security against hash forgery.
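For context, the random-projection idea behind such quantizers can be sketched generically: each hash bit is the sign of the feature's projection onto a keyed random direction, so perceptually identical media (approximate nearest neighbors in feature space) agree on most bits while the key randomizes the quantizer. This is a hedged illustrative sketch, not the letter's algorithm; the key, bit count and noise level are assumptions.

```python
import numpy as np

def rp_hash(features, key=0, n_bits=32):
    """Binary feature quantization by random projection: bit i is the
    sign of the dot product with a random direction drawn from a keyed
    generator, so the secret key randomizes the quantizer."""
    rng = np.random.default_rng(key)
    proj = rng.standard_normal((n_bits, len(features)))
    return (proj @ features > 0).astype(np.uint8)

rng = np.random.default_rng(1)
x = rng.standard_normal(64)
x_noisy = x + 0.01 * rng.standard_normal(64)  # a perceptually identical copy
h, h_noisy = rp_hash(x), rp_hash(x_noisy)
hamming = int(np.sum(h != h_noisy))  # small: near neighbors collide on most bits
```

The per-bit disagreement probability is proportional to the angle between the two feature vectors, which is what makes the quantizer both robust and discriminative.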
This study presents an adaptive quantization index modulation scheme applicable to a small audio segment, which in turn allows the watermarking technique to withstand time-shifting and cropping attacks. The exploitation of auditory masking further ensures the robustness and imperceptibility of the embedded watermark. Experimental results confirmed the efficacy of this scheme against common signal processing attacks.
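As background, plain (non-adaptive) quantization index modulation, which the proposed scheme adapts per audio segment, embeds each bit by quantizing a sample onto one of two interleaved lattices. The step size, dither convention and noise level below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def qim_embed(samples, bits, step):
    """Embed one bit per sample by quantizing onto one of two lattices
    interleaved by +/- step/4 (dither modulation)."""
    d = np.where(np.asarray(bits, bool), step / 4.0, -step / 4.0)
    return step * np.round((samples - d) / step) + d

def qim_extract(received, step):
    """Decode each sample to the lattice whose nearest point is closer."""
    d = step / 4.0
    e1 = np.abs(received - (step * np.round((received - d) / step) + d))
    e0 = np.abs(received - (step * np.round((received + d) / step) - d))
    return (e1 < e0).astype(int)

rng = np.random.default_rng(2)
samples = rng.standard_normal(256)
bits = rng.integers(0, 2, 256)
marked = qim_embed(samples, bits, step=1.0)
attacked = marked + rng.uniform(-0.2, 0.2, 256)  # noise kept below step/4
decoded = qim_extract(attacked, step=1.0)
```

Any perturbation smaller than step/4 leaves every bit decodable, which is the robustness margin an adaptive step size (e.g. driven by auditory masking) tunes per segment.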
Misako KOTANI Shingo KAWAMOTO Motohiko ISAKA
Granular gain of low-dimensional lattices based on binary linear codes is estimated using a quantization algorithm which is equivalent to soft-decision decoding of the underlying code. It is shown that a substantial portion of the ultimate granular gain is achieved even in limited dimensions.
In this paper, a block-constrained trellis coded vector quantization (BC-TCVQ) algorithm is combined with an algebraic codebook to produce an algebraic trellis vector code (ATVC) for use in ACELP coding. ATVC expands the set of allowed algebraic codebook pulse positions, and the trellis branches are labeled with subsets of these positions. The Viterbi algorithm is used to select the excitation codevector. A fast codebook search method using an efficient non-exhaustive search technique is also proposed to reduce the complexity of the ATVC search procedure while maintaining the quality of the reconstructed speech. The ATVC block code is used as the fixed codebook of AMR-NB (12.2 kbps), which reduces the computational complexity compared to the conventional algebraic codebook.
Go TANAKA Noriaki SUETAKE Eiji UCHINO
A method for obtaining a monochrome image from which the original colors can be rebuilt is proposed. In this method, the colors in an input image are quantized under a lightness constraint, and a palette representing the relationship between quantized colors and gray-levels is generated. Using the palette, an output monochrome image is obtained. Experiments show that the proposed method yields good monochrome and rebuilt color images.
Lin-Lin TANG Jeng-Shyang PAN Hao LUO Junbao LI
A novel watermarked MDC system based on the SFQ algorithm and the sub-sampling method is proposed in this paper. The sub-sampling algorithm is applied to the transformed image to introduce redundancy between different channels, and the secret information is embedded into the preprocessed sub-images. Experimental results show that the new system performs well in defending against noise and compression attacks.
Yanzhi SUN Muqing WU Jianming LIU Chaoyi ZHANG
In this letter, a quantization-error-aware Tomlinson-Harashima Precoding (THP) scheme is proposed based on the equivalent zero-forcing (ZF) criterion in Multiuser Multiple-Input Single-Output (MU-MISO) systems with limited feedback, where the transmitter has only quantized channel direction information (CDI). This precoding scheme is robust to the channel uncertainties arising from the quantization error and the lack of channel magnitude information (CMI). Our simulation results show that the new THP scheme outperforms the conventional precoding scheme in limited-feedback systems with respect to Bit Error Ratio (BER).
Toru KITAYABU Mao HAGIWARA Hiroyasu ISHIKAWA Hiroshi SHIRAI
A novel delta-sigma modulator that employs a non-uniform quantizer whose spacing is adjusted by reference to the statistical properties of the input signal is proposed. The proposed delta-sigma modulator has less quantization noise than one that uses a uniform quantizer with the same number of output values. For the quantizer on its own, Lloyd proposed a non-uniform quantizer that minimizes the average quantization noise power, applicable when the statistical properties of the input signal (its probability density) are given. However, that procedure cannot be directly applied to the quantizer in a delta-sigma modulator because it jeopardizes the modulator's stability. In this paper, a procedure is proposed that determines the spacing of the quantizer while avoiding instability. Simulation results show that the proposed method reduces quantization noise by up to 3.8 dB and 2.8 dB for input signals having a PAPR of 16 dB and 12 dB, respectively, compared to a modulator employing a uniform quantizer. Two alternative types of probability density function (PDF) are used in the proposed method for the calculation of the output values: the PDF of the input signal to the delta-sigma modulator, and an approximated PDF of the input signal to the quantizer inside the delta-sigma modulator. Both approaches are evaluated, and the latter is found to give lower quantization noise.
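The Lloyd design mentioned above, taken on its own (without the stability constraint this paper adds), can be sketched from samples of the input distribution: alternate placing thresholds at midpoints of adjacent output values and moving each output value to the centroid of its decision region. The Gaussian sample distribution, level count and iteration count here are assumptions for illustration.

```python
import numpy as np

def lloyd_max(samples, n_levels, n_iter=50):
    """Design a non-uniform scalar quantizer (Lloyd-Max) from samples.

    Alternates the two optimality conditions: decision thresholds at the
    midpoints of adjacent output values, and each output value at the
    centroid (mean) of its decision region.
    """
    levels = np.quantile(samples, (np.arange(n_levels) + 0.5) / n_levels)
    for _ in range(n_iter):
        thresholds = 0.5 * (levels[:-1] + levels[1:])
        idx = np.searchsorted(thresholds, samples)
        for k in range(n_levels):
            region = samples[idx == k]
            if region.size:
                levels[k] = region.mean()
    return np.sort(levels)

def mse(samples, levels):
    """Mean squared quantization error for a given set of output values."""
    t = 0.5 * (levels[:-1] + levels[1:])
    return float(np.mean((samples - levels[np.searchsorted(t, samples)]) ** 2))

rng = np.random.default_rng(2)
x = rng.standard_normal(50_000)
lv_lloyd = lloyd_max(x, 8)
lv_uniform = np.linspace(x.min(), x.max(), 8)
```

For a peaked input PDF the Lloyd levels crowd around the mode, which is why they beat a uniform quantizer with the same number of output values.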
Xuemin ZHAO Yuhong GUO Jian LIU Yonghong YAN Qiang FU
In this paper, a logarithmic adaptive quantization projection (LAQP) algorithm for digital watermarking is proposed. Conventional quantization index modulation uses a fixed quantization step in the watermark embedding procedure, which leads to poor fidelity. Moreover, the conventional methods are sensitive to value-metric scaling attacks. The LAQP method combines the quantization projection scheme with a perceptual model. In comparison to some conventional quantization methods with a perceptual model, LAQP only needs to calculate the perceptual model in the embedding procedure, avoiding the decoding errors introduced by differences between the perceptual models used in the embedding and decoding procedures. Experimental results show that the proposed watermarking scheme maintains better fidelity and is robust against common signal processing attacks. More importantly, the proposed scheme is invariant to value-metric scaling attacks.
This paper provides an overview of recent research on networked control with an emphasis on the tight relation between the two fields of control and communication. In particular, we present several results focusing on data rate constraints in networked control systems, which can be modeled as quantization of control-related signals. The motivation is to reduce the data rate as much as possible while achieving control objectives such as stabilization and control performance under certain measures. We also discuss some approaches to control problems based on techniques from signal processing and information theory.
This paper demonstrates a pulse-width-controlled PLL that operates without an LPF. A pulse-width-controlled oscillator accepts the PFD output, whose pulse width controls the oscillation frequency. In this oscillator, the input pulse width is converted into a soft thermometer code through a time-to-soft-thermometer-code converter, and the code controls the ring oscillator frequency. This scheme makes our PLL both LPF-less and free of quantization noise. The prototype chip, fabricated in 65 nm CMOS technology, occupies a 60 µm × 20 µm layout area and achieves 1.73 ps rms jitter while consuming 2.81 mW from a 1.2 V supply at a 3.125 GHz output frequency.
Yusuke UCHIDA Koichi TAKAGI Ryoichi KAWADA
Nearest neighbor search (NNS) among large-scale and high-dimensional vectors plays an important role in recent large-scale multimedia search applications. This paper proposes an optimized multiple codebook construction method for an approximate NNS scheme based on product quantization, where sets of residual sub-vectors are clustered according to their distribution and the codebooks for product quantization are constructed from these clusters. Our approach enables us to adaptively select the number of codebooks to be used by trading between the search accuracy and the amount of memory available.
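Baseline product quantization, which the proposed multiple-codebook construction optimizes, splits each vector into sub-vectors and learns a small codebook per sub-space, so a long vector is stored as a few codeword indices. The sketch below is hedged: the sub-space count, codebook size and plain k-means trainer are illustrative assumptions, and the residual clustering and adaptive codebook selection of the paper are omitted.

```python
import numpy as np

def kmeans(data, k, n_iter=20, seed=0):
    """Plain k-means, used here to train one sub-space codebook."""
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), k, replace=False)].copy()
    for _ in range(n_iter):
        assign = np.linalg.norm(data[:, None] - centers[None], axis=-1).argmin(axis=1)
        for j in range(k):
            members = data[assign == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

def pq_train(data, m, k):
    """Split each training vector into m sub-vectors and learn a
    k-word codebook per sub-space (product quantization)."""
    return [kmeans(sub, k, seed=i) for i, sub in enumerate(np.split(data, m, axis=1))]

def pq_encode(x, codebooks):
    """Code = one codeword index per sub-space (m * log2(k) bits total)."""
    return [int(np.linalg.norm(b - s, axis=1).argmin())
            for b, s in zip(codebooks, np.split(x, len(codebooks)))]

def pq_decode(code, codebooks):
    """Reconstruction = concatenation of the selected sub-codewords."""
    return np.concatenate([b[c] for b, c in zip(codebooks, code)])

rng = np.random.default_rng(3)
train = rng.standard_normal((500, 16))
books = pq_train(train, m=4, k=8)
recon = np.stack([pq_decode(pq_encode(v, books), books) for v in train])
mse_pq = float(np.mean(np.sum((train - recon) ** 2, axis=1)))
mse_base = float(np.mean(np.sum(train ** 2, axis=1)))
```

With 4 sub-spaces of 8 codewords each, a 16-dimensional vector compresses to 12 bits while the effective codebook size is 8^4 = 4096, which is the memory-versus-accuracy trade the abstract refers to.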
Xu YANG De XU Songhe FENG Yingjun TANG Shuoyan LIU
This paper presents an efficient yet powerful codebook model, named the classified codebook model, for natural scene categorization. Current codebook models typically resort to a large codebook to obtain higher performance for scene categorization, which severely limits the practical applicability of the model. Our model formulates the codebook model with the theory of vector quantization, and thus uses the well-known technique of classified vector quantization for scene-category modeling. The significant feature of our model is that it benefits scene categorization, especially at small codebook sizes, while saving much of the computational complexity of quantization. We evaluate the proposed model on a well-known challenging scene dataset: 15 Natural Scenes. The experiments demonstrate that our model decreases the computation time for codebook generation. Moreover, our model achieves better performance for scene categorization, and the performance gain becomes more pronounced at small codebook sizes.
Maduranga LIYANAGE Iwao SASASE
Quantization is an important operation in digital communication systems. It not only introduces quantization noise but also changes the statistical properties of the quantized signal. Furthermore, quantization noise cannot always be treated as an additive Gaussian noise source, because it depends on the probability density function of the input signal. In orthogonal-frequency-division-multiplexing transmission, the signal undergoes several operations that change its statistical properties. In this paper we analyze the statistical transformations of the signal from the transmitter to the receiver and determine how they affect the quantization. The model takes the transceiver parameters and the channel properties into account in describing the quantization noise. Simulation results show that the model agrees well with simulated transmissions. The effects of system and channel properties on the quantization noise, and its impact on the bit-error rate, are shown. This enables the design of a quantizer with an optimal resolution for the required performance metrics.
In this paper, we propose a novel coding scheme for the geometry of the triangular mesh model. Geometry coding schemes can be classified into two groups: schemes with a perfect reconstruction property that maintain connectivity, and schemes without it, in which a remeshing procedure changes the mesh into a semi-regular or regular mesh. The former schemes give good coding performance at higher coding rates, while the latter give excellent coding performance at lower coding rates. We propose a geometry coding scheme that maintains connectivity and has the perfect reconstruction property. We apply a method that successively structures, on a 2-D plane, the surrounding vertices obtained by expanding the vertex sequences neighboring the previous layer. Non-separable component decomposition is applied, in which the 2-D structured data are decomposed into four components depending on whether their locations are even or odd along the horizontal and vertical axes of the 2-D plane. A prediction and an update are then performed for the decomposed components. In the prediction process, the predicted value is obtained from the not-yet-processed vertices neighboring the target vertex in 3-D space. Zero-tree coding is introduced to remove the redundancies between coefficients at similar positions in different resolution levels, and SFQ (Space-Frequency Quantization) is applied, which gives the optimal combination of coefficient pruning for the descendant coefficients of each tree element and a uniform quantization for each coefficient. Experiments applying the proposed method to several polygon meshes of different resolutions show that it gives better coding performance at lower bit rates than the conventional schemes.
Zhenyu XIAO Li SU Depeng JIN Lieguang ZENG
The influence of quantization scaling is seldom considered in narrow-band (NB) communications, because a high-resolution analogue-to-digital converter (ADC) can generally be employed. In ultra-wideband (UWB) systems, however, the resolution of the ADC must be kept low to reduce complexity, cost and power consumption. Consequently, the influence of quantization scaling is significant and should be taken into account. In this letter, the effects of quantization scaling are analyzed in terms of signal-to-noise ratio (SNR) loss based on a uniformly distributed random signal model. For the effects of quantization scaling on bit error rate (BER) performance, however, theoretical analysis is too complicated since quantization is a nonlinear operation, so we employ a simulation method. The simulation results show that there exists an optimum scaling that minimizes the BER for a fixed-resolution receiver; the optimum scaling power is related to the SNR of the input noisy signal and the resolution of the ADC.
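The underlying trade-off can be illustrated with a toy simulation: scaling the input of a low-resolution uniform quantizer too low wastes quantization levels, while scaling it too high causes clipping, so an interior optimum exists. The Gaussian input, 4-bit mid-rise quantizer and gain values below are illustrative assumptions (the letter's analysis uses a uniformly distributed signal model and measures BER rather than SNR).

```python
import numpy as np

def quantize(x, bits, full_scale=1.0):
    """Uniform mid-rise quantizer with clipping at +/- full_scale."""
    step = 2.0 * full_scale / (2 ** bits)
    xc = np.clip(x, -full_scale, full_scale - step)
    return (np.floor(xc / step) + 0.5) * step

def snr_after_scaling(x, gain, bits):
    """SNR in dB after scaling by `gain`, quantizing, and undoing the gain."""
    y = quantize(gain * x, bits) / gain
    err = y - x
    return 10.0 * np.log10(np.mean(x ** 2) / np.mean(err ** 2))

rng = np.random.default_rng(3)
x = rng.standard_normal(100_000)
gains = [0.05, 0.3, 3.0]           # too small, near-optimal, too large
snrs = [snr_after_scaling(x, g, bits=4) for g in gains]
```

The middle gain keeps the signal spread across the quantizer's range without significant clipping, so its SNR is the highest of the three; sweeping the gain more finely would locate the optimum the abstract describes.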
Ali MOQISEH Mahdi HADAVI Mohammad M. NAYEBI
In this paper, an inherent problem of the Hough transform when applied to search radars is considered. This problem makes the detection probability of a target depend not only on the received SNR but also on the length of the target line in the data space. It is shown that the problem results in a non-uniform distribution of noise power in the parameter space; that is, the noise power in some regions of the parameter space is greater than in others, so the detection probability of targets covered by those regions decreases. Our solution is to modify the Hough detector to remove the problem. The modification uses non-uniform quantization of the parameter space based on the Maximum Entropy Quantization method. The details of implementing the modified Hough detector in a search radar are presented according to this quantization method. It is then shown that with this method the detection performance for a target no longer depends on its length in the data space. The performance of the modified Hough detector is also compared with the standard Hough detector in terms of probability of detection and probability of false alarm; this comparison shows the performance improvement of the modified detector.
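Taken in isolation, the Maximum Entropy Quantization idea amounts to choosing cell boundaries so that every quantization cell is equally probable, which maximizes the entropy of the quantizer output and evens out the accumulated noise per cell. The sketch below illustrates only this binning principle on hypothetical parameter-space samples; the Gaussian distribution and bin count are assumptions, and the full Hough detector is not reproduced.

```python
import numpy as np

def max_entropy_edges(samples, n_bins):
    """Equal-probability (maximum-entropy) cell edges: each cell holds
    the same empirical probability mass, so the output entropy is
    log2(n_bins), the maximum possible for n_bins cells."""
    return np.quantile(samples, np.linspace(0.0, 1.0, n_bins + 1))

rng = np.random.default_rng(4)
theta = rng.normal(0.0, 0.5, 200_000)   # hypothetical parameter-space samples
edges = max_entropy_edges(theta, 16)
counts, _ = np.histogram(theta, bins=edges)
p = counts / counts.sum()
entropy = float(-(p * np.log2(p)).sum())  # close to log2(16) = 4
```

Because every cell collects the same probability mass, no region of the quantized parameter space accumulates more noise hits than another, which is the uniformity the modified detector relies on.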