IEICE TRANSACTIONS on Information and Systems

  • Impact Factor

    0.59

  • Eigenfactor

    0.002

  • Article Influence

    0.1

  • CiteScore

    1.4


Volume E96-D No.5 (Publication Date: 2013/05/01)

    Special Section on Data Engineering and Information Management
  • FOREWORD Open Access

    Koji ZETTSU  

     
    FOREWORD

      Page(s):
    1005-1005
  • MPI/OpenMP Hybrid Parallel Inference Methods for Latent Dirichlet Allocation – Approximation and Evaluation

    Shotaro TORA  Koji EGUCHI  

     
    PAPER-Advanced Search

      Page(s):
    1006-1015

    Recently, probabilistic topic models have been applied to various types of data, including text, and their effectiveness has been demonstrated. Latent Dirichlet allocation (LDA) is a well-known topic model. Variational Bayesian inference or collapsed Gibbs sampling is often used to estimate parameters in LDA; however, these inference methods incur high computational cost for large-scale data. Therefore, highly efficient technology is needed for this purpose. We use parallel computation technology for efficient collapsed Gibbs sampling inference for LDA. We assume a symmetric multiprocessing (SMP) cluster, which has been widely used in recent years. In prior work on parallel inference for LDA, either MPI or OpenMP has often been used alone. For an SMP cluster, however, it is more suitable to adopt hybrid parallelization that uses message passing for communication between SMP nodes and loop directives for parallelization within each SMP node. We developed an MPI/OpenMP hybrid parallel inference method for LDA and evaluated its performance under various settings of an SMP cluster. We further investigated an approximation that controls the inter-node communications, and found that it achieves a noticeable increase in inference speed while maintaining inference accuracy.
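
    The paper's inference is based on collapsed Gibbs sampling for LDA. As background only, the following is a minimal single-threaded collapsed Gibbs sampler sketch; the hyperparameters and variable names are illustrative assumptions, and the paper's MPI/OpenMP hybrid parallelization and communication-controlling approximation are not reproduced here.

```python
# Minimal collapsed Gibbs sampler for LDA (illustrative only; not the paper's
# MPI/OpenMP hybrid implementation). alpha and beta are assumed hyperparameters.
import numpy as np

def lda_gibbs(docs, K, V, iters=100, alpha=0.1, beta=0.01, seed=0):
    """docs: list of word-id lists; K: topics; V: vocabulary size."""
    rng = np.random.default_rng(seed)
    ndk = np.zeros((len(docs), K))        # document-topic counts
    nkw = np.zeros((K, V))                # topic-word counts
    nk = np.zeros(K)                      # topic totals
    z = []                                # topic assignment per token
    for d, doc in enumerate(docs):
        zd = rng.integers(K, size=len(doc))
        z.append(zd)
        for w, k in zip(doc, zd):
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                # Collapsed conditional for the topic of this token.
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
                k = rng.choice(K, p=p / p.sum())
                z[d][i] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    return ndk, nkw
```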

  • Efficient Top-k Document Retrieval for Long Queries Using Term-Document Binary Matrix – Pursuit of Enhanced Informational Search on the Web –

    Etsuro FUJITA  Keizo OYAMA  

     
    PAPER-Advanced Search

      Page(s):
    1016-1028

    With the successful adoption of link analysis techniques such as PageRank and web spam filtering, current web search engines support “navigational search” well. However, due to the use of a simple conjunctive Boolean filter in addition to the inappropriateness of user queries, such an engine does not necessarily support “informational search” well. Informational search would be better handled by a web search engine using an informational retrieval model combined with enhancement techniques such as query expansion and relevance feedback. Moreover, the realization of such an engine requires a method to process the model efficiently. In this paper we propose a novel extension of an existing top-k query processing technique to improve search efficiency. We augment it with a technique utilizing a simple data structure called a “term-document binary matrix,” resulting in more efficient evaluation of top-k queries even when the queries have been expanded. On the basis of an experimental evaluation using the TREC GOV2 data set and expanded versions of the evaluation queries attached to this data set, we show that the proposed method can speed up evaluation considerably compared with existing techniques, especially when the number of query terms grows.
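
    The central data structure is a term-document binary matrix. The sketch below (not the authors' index layout or pruning strategy) merely illustrates how such a matrix lets one count, per document, how many expanded query terms it contains, which can then guide top-k candidate pruning.

```python
# Minimal term-document binary matrix sketch (illustrative; not the paper's
# index structure or evaluation algorithm).
import numpy as np

def build_matrix(docs, vocab):
    """docs: list of term lists; vocab: dict term -> column index."""
    M = np.zeros((len(docs), len(vocab)), dtype=bool)
    for d, terms in enumerate(docs):
        for t in terms:
            if t in vocab:
                M[d, vocab[t]] = True
    return M

def candidate_counts(M, vocab, query_terms):
    """Number of query terms each document contains (used to prune candidates)."""
    cols = [vocab[t] for t in query_terms if t in vocab]
    return M[:, cols].sum(axis=1)

docs = [["web", "search", "engine"], ["link", "analysis"], ["search", "query", "expansion"]]
vocab = {t: i for i, t in enumerate(sorted({t for d in docs for t in d}))}
M = build_matrix(docs, vocab)
print(candidate_counts(M, vocab, ["search", "expansion", "web"]))  # [2 0 2]
```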

  • Satisfiability of Simple XPath Fragments under Duplicate-Free DTDs

    Nobutaka SUZUKI  Yuji FUKUSHIMA  Kosetsu IKEDA  

     
    PAPER-XML DB

      Page(s):
    1029-1042

    In this paper, we consider the XPath satisfiability problem under restricted DTDs called “duplicate-free” DTDs. For an XPath expression q and a DTD D, q is satisfiable under D if there exists an XML document t such that t is valid against D and the answer of q on t is nonempty. Evaluating an unsatisfiable XPath expression is meaningless, since such an expression can always be replaced by an empty set without evaluating it. However, it has been shown that the XPath satisfiability problem is intractable for a large number of XPath fragments. In this paper, we consider simple XPath fragments under two restrictions: (i) only a label can be specified as a node test, and (ii) operators such as qualifiers ([]) and path union (∪) are not allowed. We first show that, for some small XPath fragments under the above restrictions, the satisfiability problem is NP-complete under DTDs without any restriction. Then we show that there exist XPath fragments, containing the above small fragments, for which the satisfiability problem is in PTIME under duplicate-free DTDs.

  • Incremental Single-Source Multi-Target A* Algorithm for LBS Based on Road Network Distance

    Htoo HTOO  Yutaka OHSAWA  Noboru SONEHARA  Masao SAKAUCHI  

     
    PAPER-Spatial DB

      Page(s):
    1043-1052

    Searching for the shortest paths from a query point to several target points on a road network is an essential operation for several types of queries in location-based services. This search can be performed using Dijkstra's algorithm. Although the A* algorithm is faster than Dijkstra's algorithm for finding the shortest path from a query point to a single target point, it is not as fast at finding all paths between the query point and each target point when several target points are given. In this case, the search areas on the road network overlap for each search, and the total number of operations at each node increases, especially when the number of target points increases. In the present paper, we propose the single-source multi-target A* (SSMTA*) algorithm, which is a multi-target version of the A* algorithm. The SSMTA* algorithm guarantees at most one operation for each road network node, and its searched area on the road network is smaller than that of Dijkstra's algorithm. Deng et al. proposed the LBC approach with the same objective; however, their method uses several heaps to manage the search area on the road network, and the contents of these heaps must always be kept identical, which requires much processing time. Since the proposed method uses only one heap, such content synchronization is not necessary. The present paper demonstrates through empirical evaluations that the proposed method outperforms other similar methods.
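
    As orientation only, the sketch below shows a single-heap, single-source search over a road-network graph that settles each node at most once and stops when all target points have been reached. It is a plain Dijkstra-style simplification; the authors' SSMTA* algorithm additionally uses A*-style heuristics and an incremental formulation that are not reproduced here.

```python
# Single-source multi-target shortest-path sketch with one heap (Dijkstra-style
# simplification; the paper's SSMTA* additionally uses A* heuristics).
import heapq

def multi_target_search(graph, source, targets):
    """graph: dict node -> list of (neighbor, edge_length); returns dict target -> distance."""
    remaining = set(targets)
    dist = {source: 0.0}
    settled = set()
    heap = [(0.0, source)]
    found = {}
    while heap and remaining:
        d, u = heapq.heappop(heap)
        if u in settled:
            continue
        settled.add(u)                    # each node is settled at most once
        if u in remaining:
            found[u] = d
            remaining.discard(u)
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return found
```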

  • Context-Aware Dynamic Event Processing Using Event Pattern Templates

    Pablo Rosales TEJADA  Jae-Yoon JUNG  

     
    PAPER-Event DB

      Page(s):
    1053-1062

    A variety of ubiquitous computing devices, such as radio frequency identification (RFID) tags and wireless sensor networks (WSNs), are generating huge volumes of significant events that should be rapidly processed for business excellence. In this paper, we describe how complex event processing (CEP) technology can be applied to ubiquitous process management based on context-awareness. To address the issue, we propose a method for context-aware event processing using event processing language (EPL) statements. Specifically, the semantics of a situation drive the transformation of EPL statement templates into executable EPL statements. The proposed method is implemented in the domain of ubiquitous cold chain logistics management. With the proposed method, context-aware event processing can be realized to enhance business performance and excellence in ubiquitous computing environments.
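
    To make the template idea concrete, a hedged illustration follows: a parameterized EPL-like statement template is filled in from context values at run time. The statement text and context keys are invented for illustration and are not taken from the paper or from any particular CEP engine.

```python
# Illustrative EPL-template instantiation (the statement text and context keys
# are hypothetical; real EPL syntax depends on the CEP engine in use).
from string import Template

TEMPLATE = Template(
    "select * from TemperatureEvent(containerId='$container') "
    "where temperature > $threshold"
)

def instantiate(context):
    """Turn a situation context into an executable statement string."""
    return TEMPLATE.substitute(container=context["container"],
                               threshold=context["threshold"])

print(instantiate({"container": "C-042", "threshold": 8.0}))
```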

  • Image Retrieval with Scale Invariant Visual Phrases

    Deying FENG  Jie YANG  Cheng YANG  Congxin LIU  

     
    LETTER-Multimedia DB

      Page(s):
    1063-1067

    We propose a retrieval method using scale invariant visual phrases (SIVPs). Our method encodes spatial information into the SIVPs, which capture translation, rotation, and scale invariance, and employs the SIVPs to determine the spatial correspondences between the query image and database images. To compute the spatial correspondences efficiently, the SIVPs are introduced into the inverted index, and SIVP verification is investigated to refine the candidate images returned from the inverted index. Experimental results demonstrate that our method improves retrieval accuracy while increasing retrieval efficiency.

  • Regular Section
  • Deterministic Message Passing for Distributed Parallel Computing

    Xu ZHOU  Kai LU  Xiaoping WANG  Wenzhe ZHANG  Kai ZHANG  Xu LI  Gen LI  

     
    PAPER-Fundamentals of Information Systems

      Page(s):
    1068-1077

    The nondeterminism of message-passing communication brings challenges to program debugging, testing, and fault-tolerance. This paper proposes a novel deterministic message-passing implementation (DMPI) for parallel programs in a distributed environment. DMPI is compatible with standard MPI in its user interface, and it guarantees the reproducibility of messages with high performance. The basic idea of DMPI is to use logical time to resolve message races and control asynchronous transmissions, thus eliminating the nondeterministic behaviors of the existing message-passing mechanism. We apply a buffering strategy to alleviate the performance slowdown caused by the mismatch between logical time and physical time. To avoid deadlocks introduced by the deterministic mechanisms, we also integrate DMPI with a lightweight deadlock checker to dynamically detect and resolve such deadlocks. We have implemented DMPI and evaluated it using the NPB benchmarks. The results show that DMPI guarantees determinism while incurring only modest runtime overhead (14% on average).
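
    The use of logical time to order message events can be illustrated with a generic Lamport-clock sketch; this is not DMPI's actual protocol, buffering strategy, or deadlock checker.

```python
# Generic Lamport logical-clock sketch (illustrative of ordering message events
# by logical time; not DMPI's protocol).
class Process:
    def __init__(self, pid):
        self.pid = pid
        self.clock = 0

    def local_event(self):
        self.clock += 1
        return self.clock

    def send(self):
        self.clock += 1
        return (self.clock, self.pid)      # timestamp travels with the message

    def receive(self, timestamp):
        # Advance the local clock past the sender's timestamp.
        self.clock = max(self.clock, timestamp[0]) + 1
        return self.clock

# Ties on the clock value can be broken deterministically by process id,
# giving a total order on message events.
p0, p1 = Process(0), Process(1)
msg = p0.send()
p1.receive(msg)
```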

  • Power Failure Protection Scheme for Reliable High-Performance Solid State Disks

    Kwanhu BANG  Kyung-Il IM  Dong-gun KIM  Sang-Hoon PARK  Eui-Young CHUNG  

     
    PAPER-Computer System

      Page(s):
    1078-1085

    Solid-state disks (SSDs) have received much attention as replacements for hard disk drives (HDDs). One of their noticeable advantages is their high-speed read/write operation. To achieve good performance, SSDs have an internal memory hierarchy which includes several volatile memories, such as DRAMs and SRAMs. Furthermore, many SSDs adopt aggressive memory management schemes under the assumption of a stable power supply. Unfortunately, the data stored in the volatile memories are lost when the power supplied to an SSD is abruptly shut off. Such power failures are often observed in portable devices. For this reason, it is critical to provide a power failure protection scheme for reliable SSDs. In this work, we propose a power failure protection scheme for SSDs to increase their reliability. The contribution of our work is three-fold. First, we design a power failure protection circuit which incorporates super-capacitors as well as rechargeable batteries. Second, we provide a method to determine the capacity of the backup power sources. Third, we propose a data backup procedure to be executed when a power failure occurs. We implemented our method on a real board and applied it to a notebook PC with a contemporary SSD. The board measurements and simulation results prove that our method is robust against sudden power failure.

  • On the Numbers of Products in Prefix SOPs for Interval Functions

    Infall SYAFALNI  Tsutomu SASAO  

     
    PAPER-Computer System

      Page(s):
    1086-1094

    First, this paper derives the prefix sum-of-products expression (PreSOP) and the number of products in a PreSOP for an interval function. Second, it derives Ψ(n,τp), the number of n-variable interval functions that can be represented with τp products. Finally, it shows that more than 99.9% of the n-variable interval functions can be represented with ⌈ n - 1 ⌉ products, when n is sufficiently large. These results are useful for a fast PreSOP generator and for estimating the size of ternary content addressable memories (TCAMs) for packet classification.
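
    For orientation, the classic expansion of an integer interval into ternary prefixes (as used in TCAM range encoding) can be sketched as follows; this is the well-known recursive expansion, not necessarily the PreSOP construction analyzed in the paper.

```python
# Classic prefix expansion of an integer interval [lo, hi] over n bits into
# ternary prefixes (illustrative of TCAM range encoding; not necessarily the
# paper's PreSOP construction).
def interval_to_prefixes(lo, hi, n):
    prefixes = []

    def expand(lo, hi, bits):
        if lo > hi:
            return
        # Largest aligned block starting at lo that still fits inside [lo, hi].
        size = 1
        while lo % (size * 2) == 0 and lo + size * 2 - 1 <= hi:
            size *= 2
        free = size.bit_length() - 1               # trailing don't-care bits
        head = "" if free == bits else format(lo >> free, "0{}b".format(bits - free))
        prefixes.append(head + "*" * free)
        expand(lo + size, hi, bits)

    expand(lo, hi, n)
    return prefixes

print(interval_to_prefixes(1, 6, 3))   # ['001', '01*', '10*', '110']
```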

  • A System-Level Network Virtual Platform for IPsec Processor Development

    Chen-Chieh WANG  Chung-Ho CHEN  

     
    PAPER-Software Engineering

      Page(s):
    1095-1104

    Developing a complex network accelerator like an IPsec processor is a great challenge. To this end, we propose a Network Virtual Platform (NetVP) that consists of one or more virtual host (vHOST) modules and virtual local area network (vLAN) modules to support an electronic system level (ESL) top-down design flow as well as provide on-line verification throughout the entire development process. The on-line verification capability of NetVP enables the designed target to communicate with a real network for system validation. For the ESL top-down design flow, we also propose untimed and timed interfaces to support hardware/software co-simulation. In addition, the NetVP can be used in conjunction with any ESL development platform through the untimed/timed interface. System development that uses this NetVP is efficient and flexible since it allows the designer to explore design spaces such as the network bandwidth and system architecture easily. The NetVP can also be applied to the development of other kinds of network accelerators.

  • Robust Hashing of Vector Data Using Generalized Curvatures of Polyline

    Suk-Hwan LEE  Seong-Geun KWON  Ki-Ryong KWON  

     
    PAPER-Information Network

      Page(s):
    1105-1114

    With the rapid expansion of vector data model applications to digital content such as drawings and digital maps, the security and retrieval of vector data models have become important issues. In this paper, we present a vector data-hashing algorithm for the authentication, copy protection, and indexing of vector data models that are composed of a number of layers in CAD family formats. The proposed hashing algorithm groups polylines in a vector data model and generates group coefficients from the curvatures of the first and second type of the polylines. Subsequently, we calculate the feature coefficients by projecting the group coefficients onto a random pattern, and finally generate the binary hash by binarizing the feature coefficients. Based on experimental results using a number of drawings and digital maps, we verified the robustness of the proposed hashing algorithm against various attacks and the uniqueness and security of the random key.
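
    The general pipeline of curvature features, projection onto a keyed random pattern, and binarization can be sketched as below. The curvature measure, projection, and thresholding here are simplified stand-ins, not the paper's group-coefficient construction or key scheme.

```python
# Simplified hashing sketch for a polyline: discrete curvature -> projection
# onto a keyed random pattern -> binarization. Illustrative only; not the
# paper's generalized-curvature construction.
import numpy as np

def turning_angles(points):
    """Discrete curvature proxy: turning angle at each interior vertex."""
    p = np.asarray(points, dtype=float)
    v1 = p[1:-1] - p[:-2]
    v2 = p[2:] - p[1:-1]
    ang = np.arctan2(v2[:, 1], v2[:, 0]) - np.arctan2(v1[:, 1], v1[:, 0])
    return np.mod(ang + np.pi, 2 * np.pi) - np.pi     # wrap to (-pi, pi]

def polyline_hash(points, n_bits=32, key=0):
    feat = turning_angles(points)
    rng = np.random.default_rng(key)                  # keyed random pattern
    proj = rng.standard_normal((n_bits, feat.size)) @ feat
    return (proj > 0).astype(np.uint8)                # binary hash

bits = polyline_hash([(0, 0), (1, 0), (2, 1), (3, 1), (4, 2)], n_bits=16, key=42)
```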

  • The Implications of Overlay Routing for ISPs' Peering Strategies

    Xun SHAO  Go HASEGAWA  Yoshiaki TANIGUCHI  Hirotaka NAKANO  

     
    PAPER-Information Network

      Page(s):
    1115-1124

    The Internet is composed of many distinct networks, operated by independent Internet Service Providers (ISPs). The traffic and economic relationships of ISPs are mainly decided by their routing policies. However, in today's Internet, overlay routing, which changes traffic routing at the application layer, is rapidly increasing and this challenges the validity of ISPs' existing agreements. We study here the economic implications of overlay routing for ISPs, using an ISP interconnection business model based on a simple network. We then study the overlay traffic patterns in the network under various conditions. Combining the business model and traffic patterns, we study the ISPs' cost reductions with Bill-and-Keep peering and paid peering. We also discuss the ISPs' incentive to upgrade the network under each peering strategy.

  • Generalized Feed Forward Shift Registers and Their Application to Secure Scan Design

    Katsuya FUJIWARA  Hideo FUJIWARA  

     
    PAPER-Dependable Computing

      Page(s):
    1125-1133

    In this paper, we introduce generalized feed-forward shift registers (GF2SR) and apply them to secure and testable scan design. Previously, we introduced SR-equivalents and SR-quasi-equivalents which can be used in secure and testable scan design, and showed that inversion-inserted linear feed-forward shift registers (I2LF2SR) are useful circuits for such a design. GF2SR is an extension of I2LF2SR, and since the class of GF2SR is much wider than that of I2LF2SR, the security level of scan designs based on GF2SR is correspondingly higher. We consider how to control/observe GF2SR to guarantee easy scan-in/out operations, i.e., the state-justification and state-identification problems. Both scan-in and scan-out operations can be overlapped in the same way as in conventional scan testing, and hence the test sequence for the proposed scan design is of the same length as that of the conventional scan design. A program called WAGSR (Web Application for Generalized feed-forward Shift Registers) is presented to solve these problems.

  • Artist Agent: A Reinforcement Learning Approach to Automatic Stroke Generation in Oriental Ink Painting

    Ning XIE  Hirotaka HACHIYA  Masashi SUGIYAMA  

     
    PAPER-Artificial Intelligence, Data Mining

      Page(s):
    1134-1144

    Oriental ink painting, called Sumi-e, is one of the most distinctive painting styles and has attracted artists around the world. Major challenges in Sumi-e simulation are abstracting complex scene information and reproducing smooth and natural brush strokes. To automatically generate such strokes, we propose to model the brush as a reinforcement learning agent and let the agent learn the desired brush trajectories by maximizing the sum of rewards in the policy search framework. To achieve better performance, we provide an elaborate design of actions, states, and rewards specifically tailored for a Sumi-e agent. The effectiveness of our proposed approach is demonstrated through experiments on Sumi-e simulation.

  • A Novel Discriminative Method for Pronunciation Quality Assessment

    Junbo ZHANG  Fuping PAN  Bin DONG  Qingwei ZHAO  Yonghong YAN  

     
    PAPER-Speech and Hearing

      Page(s):
    1145-1151

    In this paper, we present a novel method for automatic pronunciation quality assessment. Unlike the popular “Goodness of Pronunciation” (GOP) method, this method does not map decoding confidence into a pronunciation quality score, but differentiates utterances of different pronunciation quality directly. In this method, the student's utterance is decoded twice. The first decoding obtains the time points of each phone of the utterance by forced alignment using a conventionally trained acoustic model (AM). The second decoding differentiates the pronunciation quality of each triphone using a specially trained AM, where triphones of different pronunciation qualities are trained as different units, and the model is trained discriminatively to ensure the best discrimination among triphones whose names are the same but whose pronunciation quality scores differ. The decoding network in the second decoding includes triphones of different pronunciation qualities, so phone-level scores can be obtained directly from the decoding result. The phone-level scores are combined into sentence-level scores using the maximum entropy criterion. The experimental results show that the scoring performance increased significantly compared to the GOP method, especially at the sentence level.

  • Noise Reduction Method for Image Signal Processor Based on Unified Image Sensor Noise Model

    Yeul-Min BAEK  Whoi-Yul KIM  

     
    PAPER-Image Processing and Video Processing

      Page(s):
    1152-1161

    The noise in digital images acquired by image sensors has complex characteristics due to the variety of noise sources. However, most noise reduction methods assume that an image has additive white Gaussian noise (AWGN) with a constant standard deviation, and thus such methods are not effective for use in image signal processors (ISPs). To efficiently reduce the noise in an ISP, we estimate a unified noise model for an image sensor that can handle shot noise, dark-current noise, and fixed-pattern noise (FPN) together, and then we adaptively reduce the image noise using an adaptive Smallest Univalue Segment Assimilating Nucleus (SUSAN) filter based on the unified noise model. Since our noise model is affected only by the image sensor gain, its parameters do not need to be re-configured depending on the contents of the image. Therefore, the proposed noise model is suitable for use in an ISP. Our experimental results indicate that the proposed method reduces image sensor noise efficiently.

  • Statistical Edge Detection in CT Image by Kernel Density Estimation and Mean Square Error Distance

    Xu XU  Yi CUI  Shuxu GUO  

     
    PAPER-Image Processing and Video Processing

      Page(s):
    1162-1170

    In this paper, we develop a novel two-sample test statistic for edge detection in CT images. This test statistic involves the non-parametric estimation of the samples' probability density functions (PDFs) based on the kernel density estimator and the calculation of the mean square error (MSE) distance between the estimated PDFs. In order to extract single-pixel-wide edges, a generic detection scheme combined with non-maximum suppression is also proposed. This new method is applied to a variety of noisy images, and the performance is quantitatively evaluated with edge strength images. The experiments show that the proposed method provides a more effective and robust way of detecting edges in CT images compared with other existing methods.
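
    A hedged sketch of such a statistic: estimate the PDFs of two pixel samples (e.g., the two halves of a local window) with a Gaussian kernel density estimator and take the mean squared difference of the estimates on a common grid. The bandwidth and grid choices below are illustrative, not the paper's settings.

```python
# Two-sample statistic sketch: kernel density estimates of two pixel samples
# compared by mean-square-error distance on a common grid (bandwidth and grid
# are illustrative choices, not the paper's).
import numpy as np
from scipy.stats import gaussian_kde

def kde_mse_distance(sample_a, sample_b, grid=None):
    sample_a = np.asarray(sample_a, dtype=float)
    sample_b = np.asarray(sample_b, dtype=float)
    if grid is None:
        lo = min(sample_a.min(), sample_b.min())
        hi = max(sample_a.max(), sample_b.max())
        grid = np.linspace(lo, hi, 256)
    pdf_a = gaussian_kde(sample_a)(grid)    # note: samples must not be constant
    pdf_b = gaussian_kde(sample_b)(grid)
    # A large distance suggests an intensity edge between the two samples.
    return np.mean((pdf_a - pdf_b) ** 2)
```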

  • Extraction and Tracking Moving Objects in Detail Considering Visual Feature Constraint and Structure Constraint

    Zhu LI  Yoichi TOMIOKA  Hitoshi KITAZAWA  

     
    PAPER-Image Recognition, Computer Vision

      Page(s):
    1171-1181

    Detailed tracking is required for many vision applications. A visual feature-based constraint underlies most conventional motion estimation methods. For example, optical flow methods assume that the brightness of each pixel is constant in two consecutive frames. However, it is difficult to realize accurate extraction and tracking using only visual feature information, because viewpoint changes and inconsistent illumination cause the visual features of some regions of objects to appear different in consecutive frames. A structure-based constraint of objects is also necessary for tracking. In the proposed method, both visual feature matching and structure matching are formulated as a linear assignment problem and then integrated.
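
    The abstract states that visual-feature matching and structure matching are formulated as a linear assignment problem. As a generic illustration (the cost terms and weight are invented here), a combined cost matrix can be solved with the Hungarian algorithm:

```python
# Generic linear-assignment matching sketch: combine a visual-feature cost and
# a structure cost, then solve with the Hungarian algorithm. The cost terms and
# the weight below are illustrative, not the paper's formulation.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_regions(visual_cost, structure_cost, weight=0.5):
    """visual_cost, structure_cost: (n_prev, n_curr) cost matrices."""
    cost = weight * visual_cost + (1.0 - weight) * structure_cost
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))
```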

  • Link Analysis Based on Rhetorical Relations for Multi-Document Summarization

    Nik Adilah Hanin BINTI ZAHRI  Fumiyo FUKUMOTO  Suguru MATSUYOSHI  

     
    PAPER-Natural Language Processing

      Page(s):
    1182-1191

    This paper presents link analysis based on rhetorical relations with the aim of performing extractive summarization for multiple documents. We first extracted sentences with salient terms from individual documents using a statistical model. We then ranked the extracted sentences by measuring their relative importance according to their connectivity among the sentences in the document set, using PageRank based on the rhetorical relations. The rhetorical relations were examined beforehand to determine which relations are crucial to this task, and the relations among sentences from the documents were automatically identified by SVMs. We used the relations to emphasize important sentences during sentence ranking by PageRank and to eliminate redundancy from the summary candidates. Our framework does not require sentences fully annotated by humans, and the evaluation results show that the combination of PageRank with rhetorical relations does help to improve the quality of extractive summarization.
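
    As a generic illustration of ranking sentences with PageRank over a relation graph (the adjacency weights below are placeholders, not the SVM-identified rhetorical relations used in the paper):

```python
# Generic PageRank over a sentence graph (adjacency weights stand in for the
# rhetorical relations identified by SVMs in the paper).
import numpy as np

def pagerank(adj, d=0.85, iters=100):
    """adj: (n, n) nonnegative weights; adj[i, j] = strength of link i -> j."""
    n = adj.shape[0]
    row_sum = adj.sum(axis=1, keepdims=True)
    # Row-normalize; rows with no outgoing links fall back to a uniform row.
    trans = np.divide(adj, row_sum, out=np.full_like(adj, 1.0 / n), where=row_sum > 0)
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        rank = (1 - d) / n + d * (trans.T @ rank)
    return rank

adj = np.array([[0, 1, 1], [1, 0, 0], [1, 1, 0]], dtype=float)
print(np.argsort(-pagerank(adj)))   # sentence indices ordered by importance
```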

  • Dictionary Learning with Incoherence and Sparsity Constraints for Sparse Representation of Nonnegative Signals

    Zunyi TANG  Shuxue DING  

     
    PAPER-Biocybernetics, Neurocomputing

      Page(s):
    1192-1203

    This paper presents a method for learning an overcomplete, nonnegative dictionary and for obtaining the corresponding coefficients so that a group of nonnegative signals can be sparsely represented by them. This is accomplished by posing the learning as a problem of nonnegative matrix factorization (NMF) with maximization of the incoherence of the dictionary and of the sparsity of the coefficients. By incorporating a dictionary-incoherence penalty and a sparsity penalty in the NMF formulation and then adopting a hierarchically alternating optimization strategy, we show that the problem can be cast as two sequential optimization problems over quadratic functions. Each problem can be solved explicitly, so the whole problem can be solved efficiently, which leads to the proposed algorithm, i.e., sparse hierarchical alternating least squares (SHALS). The SHALS algorithm is structured by iteratively solving the two problems, corresponding to the learning process of the dictionary and to the estimation process of the coefficients for reconstructing the signals. Numerical experiments demonstrate that the new algorithm performs better than the nonnegative K-SVD (NN-KSVD) algorithm and several other well-known algorithms, and that its computational cost is remarkably lower than that of the compared algorithms.
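
    For context, a plain hierarchical alternating least squares (HALS) NMF sketch is shown below; it omits the dictionary-incoherence and sparsity penalties that distinguish the paper's SHALS algorithm.

```python
# Plain HALS updates for NMF (X ~ W H, all nonnegative). The incoherence and
# sparsity penalties of the paper's SHALS algorithm are intentionally omitted.
import numpy as np

def hals_nmf(X, k, iters=200, eps=1e-12, seed=0):
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k))
    H = rng.random((k, n))
    for _ in range(iters):
        # Update each column of W, then each row of H, in closed form.
        XHt, HHt = X @ H.T, H @ H.T
        for j in range(k):
            W[:, j] = np.maximum(eps, W[:, j] + (XHt[:, j] - W @ HHt[:, j]) / max(HHt[j, j], eps))
        WtX, WtW = W.T @ X, W.T @ W
        for j in range(k):
            H[j, :] = np.maximum(eps, H[j, :] + (WtX[j, :] - WtW[j, :] @ H) / max(WtW[j, j], eps))
    return W, H
```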

  • Hash-Based Linked-List Histogram Construction

    Yan-Tsung PENG  Fan-Chieh CHENG  Shanq-Jang RUAN  Chang-Hong LIN  

     
    LETTER-Fundamentals of Information Systems

      Page(s):
    1204-1205

    A histogram is a common graphical descriptor for representing the distribution of pixel values in an image. However, in most applications that use histograms, the time complexity of histogram construction is much higher than that of the other parts of the application. Hence, column histograms have been proposed to construct the local histogram in constant time. To improve performance further, this letter proposes a linked-list histogram that avoids generating empty bins, additionally using hash tables of bin entries to map pixels. Experimental results demonstrate the effectiveness of the proposed method and its superiority to the state-of-the-art method.
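
    The idea of tracking only non-empty bins can be illustrated with Python's dict (a hash table) acting as the bin map; this is a simplification of the letter's hash-based linked-list structure.

```python
# Sketch of a histogram that stores only non-empty bins in a hash table
# (a simplification of the letter's hash-based linked-list structure).
def sliding_histogram(pixels_in, pixels_out, hist=None):
    """Incrementally update a sparse histogram as a window slides."""
    hist = {} if hist is None else hist
    for p in pixels_in:                       # pixels entering the window
        hist[p] = hist.get(p, 0) + 1
    for p in pixels_out:                      # pixels leaving the window
        hist[p] -= 1
        if hist[p] == 0:
            del hist[p]                       # keep only non-empty bins
    return hist

h = sliding_histogram([3, 3, 7, 250], [])
h = sliding_histogram([8], [3], h)            # slide window: add 8, remove one 3
print(h)                                      # {3: 1, 7: 1, 250: 1, 8: 1}
```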

  • Partitioned-Tree Nested Loop Join: An Efficient Join for Spatio-Temporal Interval Join

    Jinsoo LEE  Wook-Shin HAN  Jaewha KIM  Jeong-Hoon LEE  

     
    LETTER-Data Engineering, Web Information Systems

      Page(s):
    1206-1210

    A predictive spatio-temporal interval join finds all pairs of moving objects satisfying a join condition on future time interval and space. In this paper, we propose a method called PTJoin. PTJoin partitions the inner index into small sub-trees and performs the join process for each sub-tree to reduce the number of disk page accesses for each window search. Furthermore, to reduce the number of pages accessed by consecutive window searches, we partition the index so that overlapping index pages do not belong to the same partition. Our experiments show that PTJoin reduces the number of page accesses by up to an order of magnitude compared to Interval_STJoin [9], which is the state-of-the-art solution, when the buffer size is small.

  • Flash-Aware Page Management Policy of a Navigation-Specialized Mobile DBMS for an Incremental Map Update

    KyoungWook MIN  JeongDan CHOI  

     
    LETTER-Data Engineering, Web Information Systems

      Page(s):
    1211-1214

    The performance of a mobile database management system (DBMS), in which most queries consist of random data accesses, is degraded when NAND flash memory is used as the storage medium. The reason is that NAND flash memory performs well for sequential writes but poorly for random writes. Thus, a new storage structure and new querying policies are needed in a mobile DBMS when flash memory is used as the storage medium. In this letter, we propose a new database page management policy to enhance the performance of frequent random updates, and we evaluate the performance experimentally.

  • Random Walks on Stochastic and Deterministic Small-World Networks

    Zi-Yi WANG  Shi-Ze GUO  Zhe-Ming LU  Guang-Hua SONG  Hui LI  

     
    LETTER-Information Network

      Page(s):
    1215-1218

    Many deterministic small-world network models have been proposed so far, and they have been proven useful in describing some real-life networks which have fixed interconnections. Search efficiency is an important property to characterize small-world networks. This paper tries to clarify how the search procedure behaves when random walks are performed on small-world networks, including the classic WS small-world network and three deterministic small-world network models: the deterministic small-world network created by edge iterations, the tree-structured deterministic small-world network, and the small-world network derived from the deterministic uniform recursive tree. Detailed experiments are carried out to test the search efficiency of various small-world networks with regard to three different types of random walks. From the results, we conclude that the stochastic model outperforms the deterministic ones in terms of average search steps.
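
    A minimal experiment in the spirit of the letter, using a stochastic Watts-Strogatz network and an unbiased random walk (the deterministic models examined in the letter are not reproduced here, and the parameters are illustrative only):

```python
# Average number of unbiased random-walk steps to reach a random target on a
# Watts-Strogatz small-world network (parameters are illustrative only).
import random
import networkx as nx

def avg_search_steps(n=200, k=4, p=0.1, trials=200, seed=0):
    random.seed(seed)
    g = nx.connected_watts_strogatz_graph(n, k, p, seed=seed)
    total = 0
    for _ in range(trials):
        src, dst = random.sample(range(n), 2)
        node, steps = src, 0
        while node != dst:
            node = random.choice(list(g.neighbors(node)))
            steps += 1
        total += steps
    return total / trials

print(avg_search_steps())
```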

  • Improving Test Coverage by Measuring Path Delay Time Including Transmission Time of FF

    Wenpo ZHANG  Kazuteru NAMBA  Hideo ITO  

     
    LETTER-Dependable Computing

      Page(s):
    1219-1222

    As technology scales to 45 nm and below, the reliability of VLSI declines due to small delay defects, which are hard to detect at the functional clock frequency. To detect small delay defects, a method that measures the delay time of paths in the circuit under test (CUT) was proposed. However, because a large number of FFs exist in recent VLSI, the probability that a resistive defect occurs in the FFs has increased. A test method measuring path delay time including the transmission time of FFs is therefore necessary. However, the path measured by the conventional on-chip path delay time measurement method does not include part of the master latch; thus, testing using the conventional measurement method cannot detect defects occurring in that part. This paper proposes an improved on-chip path delay time measurement method. Test coverage is improved by measuring the path delay time including the transmission time of the master latch. The proposed method uses a duty-cycle-modified clock signal. Evaluation results show that the proposed method improves test coverage by 5.25-11.28% with the same area overhead as the conventional method.

  • Pegasos Algorithm for One-Class Support Vector Machine

    Changki LEE  

     
    LETTER-Artificial Intelligence, Data Mining

      Page(s):
    1223-1226

    Training one-class support vector machines (one-class SVMs) involves solving a quadratic programming (QP) problem. As the number of training samples increases, solving this QP problem becomes intractable. In this paper, we describe a modified Pegasos algorithm for fast training of one-class SVMs. We show that this algorithm is much faster than the standard one-class SVM without loss of performance in the case of the linear kernel.
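
    For reference, the standard Pegasos stochastic sub-gradient step for a linear binary SVM is sketched below; the letter's contribution is a modification of this scheme for the one-class SVM, which is not reproduced here.

```python
# Standard Pegasos for a linear binary SVM (hinge loss). The letter adapts this
# style of stochastic sub-gradient training to the one-class SVM; that
# modification is not shown here.
import numpy as np

def pegasos(X, y, lam=0.01, iters=10000, seed=0):
    """X: (n, d) features; y: labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, iters + 1):
        i = rng.integers(n)
        eta = 1.0 / (lam * t)
        if y[i] * X[i].dot(w) < 1:                    # margin violated
            w = (1 - eta * lam) * w + eta * y[i] * X[i]
        else:
            w = (1 - eta * lam) * w
        # Optional projection onto the ball of radius 1/sqrt(lam).
        norm = np.linalg.norm(w)
        if norm > 0:
            w *= min(1.0, 1.0 / (np.sqrt(lam) * norm))
    return w
```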

  • Finger Vein Verification Based on Neighbor Pattern Coding

    Wenming YANG  Guoli MA  Weifeng LI  Qingmin LIAO  

     
    LETTER-Pattern Recognition

      Page(s):
    1227-1229

    We propose a neighbor pattern coding (NPC) scheme with the aim of fully exploiting structural features to improve the performance of finger vein verification. First, a one-pixel-wide edge is obtained to represent the direction of the binary vein pattern. Second, based on 8-neighbor pattern analysis, we design a feature-coding strategy to characterize the vein edge. Finally, an edge code flooding operation is defined to characterize all other vein pixels according to the nearest neighbor principle. Experimental results demonstrate the effectiveness of the proposed method.

  • Image Retrieval Based on Structured Local Binary Kirsch Pattern

    Guang-Yu KANG  Shi-Ze GUO  De-Chen WANG  Long-Hua MA  Zhe-Ming LU  

     
    LETTER-Image Processing and Video Processing

      Page(s):
    1230-1232

    This letter presents a new feature named the structured local binary Kirsch pattern (SLBKP) for image retrieval. Each input color image is decomposed into Y, Cb and Cr components. For each component image, eight 3×3 Kirsch direction templates are first applied pixel by pixel, so that each pixel is characterized by an 8-dimensional edge-strength vector. Then a binary operation is performed on each edge-strength vector to obtain its integer-valued SLBKP. Finally, three SLBKP histograms are concatenated together as the final feature of each input color image. Experimental results show that, compared with the existing structured local binary Haar pattern (SLBHP)-based feature, the proposed feature can greatly improve retrieval performance.

  • Improvement of JPEG Compression Efficiency Using Information Hiding and Image Restoration

    Kazumi YAMAWAKI  Fumiya NAKANO  Hideki NODA  Michiharu NIIMI  

     
    LETTER-Image Processing and Video Processing

      Page(s):
    1233-1237

    The application of information hiding to image compression is investigated to improve compression efficiency for JPEG color images. In the proposed method, entropy-coded DCT coefficients of chrominance components are embedded into DCT coefficients of the luminance component. To recover an image in the face of the degradation caused by compression and embedding, an image restoration method is also applied. Experiments show that the use of both information hiding and image restoration is most effective to improve compression efficiency.

  • Self-Similarities in Difference Images: A New Cue for Single-Person Oriented Action Recognition

    Guoliang LU  Mineichi KUDO  

     
    LETTER-Image Recognition, Computer Vision

      Page(s):
    1238-1242

    Temporal Self-Similarity Matrix (SSM) based action recognition is one of the important approaches to single-person oriented action analysis in computer vision. In this study, we propose a new kind of SSM and a fast computation method. The computation method does not require time-consuming pre-processing to find bounding boxes of the human body; instead, it processes difference images to obtain action patterns, which can be done very quickly. The proposed SSM is experimentally confirmed to achieve better classification performance than four typical kinds of SSMs.
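
    The general construction of a temporal self-similarity matrix from per-frame features can be sketched as follows; the per-frame feature used here (a flattened difference image) is a stand-in, not the letter's exact descriptor or fast computation method.

```python
# Generic temporal self-similarity matrix (SSM): pairwise distances between
# per-frame feature vectors. The per-frame feature below (a flattened
# difference image) is a stand-in, not the letter's exact descriptor.
import numpy as np

def self_similarity_matrix(frames):
    """frames: (T, H, W) grayscale video; returns a (T-1, T-1) SSM."""
    diffs = np.abs(np.diff(frames.astype(float), axis=0))      # difference images
    feats = diffs.reshape(diffs.shape[0], -1)                  # one vector per frame pair
    # Euclidean distance between every pair of feature vectors.
    sq = (feats ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * feats @ feats.T
    return np.sqrt(np.maximum(d2, 0.0))
```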

  • Saliency Density and Edge Response Based Salient Object Detection

    Huiyun JING  Qi HAN  Xin HE  Xiamu NIU  

     
    LETTER-Image Recognition, Computer Vision

      Page(s):
    1243-1246

    We propose a novel threshold-free salient object detection approach that integrates both saliency density and edge response. A salient object with a well-defined boundary can be automatically detected by our approach. Maximization of saliency density and edge response is used as the quality function to direct the salient object discovery. The globally optimal window containing a salient object is efficiently located through the proposed saliency density and edge response based branch-and-bound search. To extract the salient object with a well-defined boundary, the GrabCut method is applied, initialized by the located window. Experimental results show that our approach outperforms methods using only saliency or edge response and achieves performance comparable to the best state-of-the-art method, while requiring neither a threshold nor multiple iterations of GrabCut.

  • An Adaptive Model for Particle Fluid Surface Reconstruction

    Fengquan ZHANG  Xukun SHEN  Xiang LONG  

     
    LETTER-Computer Graphics

      Page(s):
    1247-1250

    In this letter, we present an efficient method for high-quality surface reconstruction from smoothed particle hydrodynamics (SPH) simulation data. For computational efficiency, instead of computing the scalar field over the entire particle set, we only construct the scalar field around fluid surfaces. Furthermore, an adaptive scalar field model is proposed, which adaptively adjusts the smoothing length of the ellipsoidal kernel by a constraint-correction rule. The isosurfaces are then extracted from the scalar field data. The proposed method can not only effectively preserve fluid details, such as splashes, droplets and surface wave phenomena, but also save computational costs. The experimental results show that our method can reconstruct realistic fluid surfaces with different particle sets.
