IEICE TRANSACTIONS on Information

  • Impact Factor

    0.59

  • Eigenfactor

    0.002

  • Article Influence

    0.1

  • CiteScore

    1.4


Volume E94-D No.11  (Publication Date:2011/11/01)

    Special Section on Information and Communication System Security
  • FOREWORD Open Access

    Yutaka MIYAKE  

     
    FOREWORD

      Page(s):
    2067-2068
  • Secure Key Transfer Protocol Based on Secret Sharing for Group Communications Open Access

    Chia-Yin LEE  Zhi-Hui WANG  Lein HARN  Chin-Chen CHANG  

     
    INVITED PAPER

      Page(s):
    2069-2076

    Group key establishment is an important mechanism for constructing a common session key for group communications. Conventional group key establishment protocols use an on-line trusted key generation center (KGC) to transfer the group key to each participant in each session. However, this approach requires that a trusted server be set up, and it incurs communication overhead costs. In this article, we address some security problems and drawbacks associated with existing group key establishment protocols. In addition, we use the concept of secret sharing to propose a secure key transfer protocol that excludes impersonators from accessing the group communication. Our protocol can resist potential attacks and also reduces the overhead of system implementation. Finally, this article compares the security analysis and functionality of our proposed protocol with those of some recent protocols.
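
    The secret-sharing building block mentioned above can be illustrated with a minimal Shamir-style (t, n) threshold scheme over a prime field (an illustrative Python sketch, not the authors' protocol; the prime and parameter names are assumptions):

```python
import random

P = 2**61 - 1  # a Mersenne prime; all polynomial arithmetic is mod P

def share_secret(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P - 2, P) is the modular inverse, since P is prime
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

In a key-transfer setting, the KGC would distribute shares so that only legitimate group members can recover the session key.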

  • Overview of Traceback Mechanisms and Their Applicability Open Access

    Heung-Youl YOUM  

     
    INVITED PAPER

      Page(s):
    2077-2086

    As an increasing number of businesses and services depend on the Internet, protecting them against DDoS (Distributed Denial of Service) attacks becomes a critical issue. A traceback is used to discover technical information concerning the ingress points, paths, partial paths, or sources of a packet or packets causing a problematic network event. The traceback mechanism is a useful tool for identifying the source of a DDoS attack, which ultimately helps prevent such attacks. Numerous traceback mechanisms have been proposed by many researchers. In this paper, we analyze the existing traceback mechanisms, describe their common security capabilities, and evaluate them in terms of various criteria. In addition, we identify typical applications of traceback mechanisms.

  • Cryptanalysis for RC4 and Breaking WEP/WPA-TKIP Open Access

    Masakatu MORII  Yosuke TODO  

     
    INVITED PAPER

      Page(s):
    2087-2094

    In recent years, wireless LAN systems have come into wide use in campuses, offices, homes, and so on. It is important to discuss the security of wireless LAN networks in order to protect data confidentiality and integrity. The IEEE Standards Association formulated security protocols such as Wired Equivalent Privacy (WEP) and Wi-Fi Protected Access Temporal Key Integrity Protocol (WPA-TKIP). However, these protocols have vulnerabilities that undermine secure communication. In 2008, we proposed an effective key recovery attack against WEP, called the TeAM-OK attack. In this paper, we first present a different interpretation of the TeAM-OK attack against WEP and its relation to other attacks. Second, we present some existing attacks against WPA-TKIP and show that they are not executable in a realistic environment. We then propose an attack against WPA-TKIP that is executable in a realistic environment. This attack exploits a vulnerable implementation of the QoS packet processing feature of IEEE 802.11e: the receiver accepts a falsified packet constructed as part of the attack regardless of its IEEE 802.11e setting. This vulnerability removes the requirement that access points support IEEE 802.11e. We confirm that almost all wireless LAN implementations have this vulnerability. Therefore, almost all WPA-TKIP implementations cannot protect a system against falsification attacks in a realistic environment.
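
    For context, RC4, the stream cipher underlying both WEP and TKIP, is short enough to sketch in full; the key-recovery attacks discussed above exploit statistical biases in this keystream (an illustration of the cipher only, not of the attacks):

```python
def rc4_keystream(key, n):
    """Generate n RC4 keystream bytes: KSA followed by PRGA."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # key-scheduling algorithm (KSA)
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0
    out = []
    for _ in range(n):                        # pseudo-random generation (PRGA)
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)
```

WEP's weakness stems largely from how per-packet keys are built by prepending a public IV to the shared key before the KSA runs.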

  • Threshold Anonymous Password-Authenticated Key Exchange Secure against Insider Attacks

    SeongHan SHIN  Kazukuni KOBARA  Hideki IMAI  

     
    PAPER

      Page(s):
    2095-2110

    An anonymous password-authenticated key exchange (PAKE) protocol is designed to provide both password-only authentication and client anonymity against a semi-honest server, who honestly follows the protocol. At INDOCRYPT 2008, Yang and Zhang [26] proposed a new anonymous PAKE (NAPAKE) protocol and its threshold version (D-NAPAKE), which they claimed to be secure against insider attacks. In this paper, we first show that, contrary to this claim, the D-NAPAKE protocol [26] is completely insecure against insider attacks: a single legitimate client can freely impersonate any subgroup of clients (of threshold t > 1) to the server. After giving a security model that captures insider attacks, we propose a threshold anonymous PAKE protocol (called TAP+) which provides security against insider attacks. Moreover, we prove that the TAP+ protocol achieves semantic security of session keys against active attacks as well as insider attacks under the computational Diffie-Hellman assumption, and provides client anonymity against a semi-honest server. Finally, we provide several discussions: 1) we show another threshold anonymous PAKE protocol obtained by applying our rationale to the non-threshold anonymous PAKE (VEAP) protocol [23]; and 2) we give an efficiency comparison, security considerations, and implementation issues for the TAP+ protocol.

  • Detailed Cost Estimation of CNTW Forgery Attack against EMV Signature Scheme

    Tetsuya IZU  Yumi SAKEMI  Masahiko TAKENAKA  

     
    PAPER

      Page(s):
    2111-2118

    The EMV signature, based on the ISO/IEC 9796-2 signature scheme, is a specification for authenticating credit and debit card data. At CRYPTO 2009, Coron, Naccache, Tibouchi, and Weinmann proposed a new forgery attack against the ISO/IEC 9796-2 signature scheme (the CNTW attack) [2]. They also briefly discussed the applicability of the attack to EMV signatures, estimated the forging cost at $45,000, and concluded that the attack could not forge EMV signatures for operational reasons. However, their results were derived from a partial analysis under only one condition, which represents a typical case; for security evaluation, a full analysis including a worst-case estimation is needed. This paper gives a detailed cost estimation of the CNTW attack against the EMV signature. We construct an evaluation model and present cost estimations under all conditions that Coron et al. did not estimate. This paper contributes on two points. First, our detailed estimation reduces the forgery cost from $45,000 to $35,200 under the same condition as [2]. Second, we clarify that an EMV signature can be forged for less than $2,000 under a certain condition. These facts show that the CNTW attack could be a realistic threat.

  • Rethinking Business Model in Cloud Computing: Concept and Example

    Ping DU  Akihiro NAKAO  

     
    PAPER

      Page(s):
    2119-2128

    In cloud computing, a cloud user pays in proportion to the amount of consumed resources (bandwidth, memory, CPU cycles, etc.). We posit that such a cloud computing system is vulnerable to DDoS (Distributed Denial-of-Service) attacks against quota: attackers can force a cloud user to pay more and more money by exhausting its quota, without crippling its execution system or congesting links. In this paper, we address this issue and claim that clouds should enable users to pay only for their admitted traffic. We design and prototype such a charging model on a CoreLab testbed infrastructure and show an example application.

  • Embedded TaintTracker: Lightweight Run-Time Tracking of Taint Data against Buffer Overflow Attacks

    Yuan-Cheng LAI  Ying-Dar LIN  Fan-Cheng WU  Tze-Yau HUANG  Frank C. LIN  

     
    PAPER

      Page(s):
    2129-2138

    A buffer overflow attack occurs when a program writes data outside the allocated memory in an attempt to invade a system. Approximately forty percent of all software vulnerabilities over the past several years are attributed to buffer overflow. Taint tracking is a novel technique to prevent buffer overflow. Previous studies on taint tracking ran a victim's program on an emulator to dynamically instrument the code, tracking the propagation of taint data in memory and checking whether malicious code is executed. However, the critical problem of this approach is its heavy performance overhead: roughly 60% of the overhead comes from the emulator, and the remaining 40% from dynamic instrumentation and taint-information maintenance. This article proposes a new taint-style system called Embedded TaintTracker that eliminates the overhead of the emulator and dynamic instrumentation by compressing the checking mechanism into the operating system (OS) kernel and moving the instrumentation from runtime to compilation time. Results show that the proposed system outperforms the previous work, TaintCheck, by at least 8 times in terms of throughput degradation, and is about 17.5 times faster than TaintCheck when browsing 1 KB web pages.
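
    The taint-tracking idea underlying such systems can be sketched as value-level tagging: taint attaches to untrusted input, propagates through computation, and is checked before a control transfer (a toy Python illustration; the class and function names are invented here, and real systems such as the one above track taint at the instruction or kernel level):

```python
class Tainted:
    """A value tagged with a taint flag; taint propagates through arithmetic."""
    def __init__(self, value, tainted=False):
        self.value, self.tainted = value, tainted

    def __add__(self, other):
        o_val = other.value if isinstance(other, Tainted) else other
        o_taint = other.tainted if isinstance(other, Tainted) else False
        # the result is tainted if either operand was
        return Tainted(self.value + o_val, self.tainted or o_taint)

def checked_jump(target):
    """Refuse to transfer control to an address derived from taint data."""
    if target.tainted:
        raise RuntimeError("control-flow target derived from taint data")
    return target.value
```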

  • Analysis on the Sequential Behavior of Malware Attacks

    Nur Rohman ROSYID  Masayuki OHRUI  Hiroaki KIKUCHI  Pitikhate SOORAKSA  Masato TERADA  

     
    PAPER

      Page(s):
    2139-2149

    Overcoming the highly organized and coordinated malware threats posed by botnets on the Internet is becoming increasingly difficult. A honeypot is a powerful tool for observing and catching malware and virulent activity in Internet traffic. Because botnets use systematic attack methods, the sequences of malware downloaded by honeypots exhibit particular forms of coordinated patterns. This paper aims to discover new frequent sequential attack patterns in malware automatically. One problem is the difficulty of identifying particular patterns in a full year's logs, because the dataset is too large for individual investigation. This paper proposes the use of a data-mining algorithm to overcome this problem. We implement the PrefixSpan algorithm to analyze malware-attack logs and show some experimental results. Analysis of these results indicates that botnet attacks can be characterized either by the download times or by the source addresses of the bots. Finally, we use entropy analysis to reveal how frequent sequential patterns are involved in coordinated attacks.
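
    PrefixSpan, the algorithm the paper applies, mines frequent sequential patterns by recursively projecting the database on each frequent prefix. A minimal sketch for single-item sequences (illustrative only; the malware logs in the paper would use download events as items):

```python
def prefixspan(sequences, min_support):
    """Return all frequent sequential patterns as (pattern, support) pairs."""
    results = []

    def mine(prefix, projected):
        # count items occurring anywhere in the projected postfixes
        counts = {}
        for seq in projected:
            for item in set(seq):
                counts[item] = counts.get(item, 0) + 1
        for item, sup in sorted(counts.items()):
            if sup < min_support:
                continue
            pattern = prefix + [item]
            results.append((pattern, sup))
            # project: keep the postfix after the first occurrence of item
            new_proj = [seq[seq.index(item) + 1:]
                        for seq in projected if item in seq]
            mine(pattern, new_proj)

    mine([], sequences)
    return results
```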

  • A Novel Malware Clustering Method Using Frequency of Function Call Traces in Parallel Threads

    Junji NAKAZATO  Jungsuk SONG  Masashi ETO  Daisuke INOUE  Koji NAKAO  

     
    PAPER

      Page(s):
    2150-2158

    With the rapid development and proliferation of the Internet, cyber attacks are continually emerging and evolving. Malware – a generic term for computer viruses, worms, trojan horses, spyware, adware, and bots – is a particularly lethal security threat. To cope with this threat appropriately, we need to identify the tendencies and characteristics of malware and analyze its behavior, including its classification. In previous work on classification technologies, malware has been classified using data from dynamic analysis or code analysis. However, these works have not succeeded in obtaining efficient classification with high accuracy. In this paper, we propose a new classification method to cluster malware more effectively and more accurately. We first perform dynamic analysis to automatically obtain the execution traces of malware samples. We then classify the samples into clusters using behavioral characteristics derived from Windows API calls in parallel threads. We evaluated our classification method using 2,312 malware samples with distinct hash values. The samples, which three antivirus programs sorted into 1,221 groups, were classified into 93 clusters, and 90% of them fell into at most 20 clusters. Moreover, 39 samples had characteristics different from all other samples, suggesting that they may be new types of malware. The kinds of Windows API calls confirmed that samples classified into the same cluster share the same characteristics. We also found that antivirus software assigns different names to malware samples that exhibit the same behavior.
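
    The clustering step described above can be approximated by representing each trace as an API-call frequency vector and grouping similar vectors. A greedy cosine-similarity sketch (the threshold, the greedy strategy, and the API names are illustrative assumptions, not the authors' exact method):

```python
import math
from collections import Counter

def cosine(u, v):
    """Cosine similarity between two sparse frequency vectors (dicts)."""
    keys = set(u) | set(v)
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in keys)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def cluster_traces(traces, threshold=0.9):
    """Greedily cluster API-call traces by frequency-vector similarity."""
    vectors = [Counter(t) for t in traces]
    clusters = []          # each cluster: list of trace indices
    for i, vec in enumerate(vectors):
        for cl in clusters:
            # compare against the cluster's first member as representative
            if cosine(vec, vectors[cl[0]]) >= threshold:
                cl.append(i)
                break
        else:
            clusters.append([i])
    return clusters
```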

  • A Step towards Static Script Malware Abstraction: Rewriting Obfuscated Script with Maude

    Gregory BLANC  Youki KADOBAYASHI  

     
    PAPER

      Page(s):
    2159-2166

    Modern web applications incorporate many programmatic frameworks and APIs that are often pushed to the client side along with most of the application logic, while content is the result of mashing up several resources from different origins. Such applications are threatened by attackers who often attempt to inject, directly or by leveraging a stepping-stone website, script code that performs malicious operations. Web-script-based malware proliferation is becoming more and more industrialized, with the drawbacks and advantages that characterize such an approach: on one hand, we witness many samples that exhibit the same characteristics, which makes them easy to detect; on the other hand, professional developers continuously devise new attack techniques. While obfuscation is still a debated issue within the community, it is clear that, with new schemes being designed, the issue can no longer be ignored. Because many proposed countermeasures admit that they perform better on unobfuscated content, we propose a two-stage technique that first relieves the burden of obfuscation by emulating the deobfuscation stage, and then performs a static abstraction of the analyzed sample's functionality in order to reveal its intent. We support our proposal with evidence from applying the technique to real-life examples, discuss its performance in terms of time, and consider other possible applications of the proposed techniques in the areas of web crawling and script classification. Additionally, we claim that this approach can be generalized to other scripting languages similar to JavaScript.

  • Cryptanalysis of Group Key Agreement Protocol Based on Chaotic Hash Function

    Eun-Jun YOON  Kee-Young YOO  

     
    LETTER

      Page(s):
    2167-2170

    In 2010, Guo and Zhang proposed a group key agreement protocol based on the chaotic hash function. This letter points out that Guo-Zhang's protocol is still vulnerable to off-line password guessing attacks, stolen-verifier attacks and reflection attacks.

  • An Improved Authenticated Encryption Scheme

    Fagen LI  Jiang DENG  Tsuyoshi TAKAGI  

     
    LETTER

      Page(s):
    2171-2172

    Authenticated encryption schemes are very useful for private and authenticated communication. In 2010, Rasslan and Youssef showed that Hwang et al.'s authenticated encryption scheme is not secure by presenting a message forgery attack. However, they did not show how to fix the security issue. In this letter, we give an improvement of Hwang et al.'s scheme. The improved scheme not only solves the security issue of the original scheme but also maintains its efficiency.

  • Regular Section
  • FPGA-Specific Custom VLIW Architecture for Arbitrary Precision Floating-Point Arithmetic

    Yuanwu LEI  Yong DOU  Jie ZHOU  

     
    PAPER-Computer System

      Page(s):
    2173-2183

    Many scientific applications require efficient variable-precision floating-point arithmetic. This paper presents a special-purpose Very Long Instruction Word (VLIW) architecture for variable-precision floating-point arithmetic (VV-Processor) on FPGA. The proposed processor uses a unified hardware structure, equipped with multiple custom variable-precision arithmetic units, to implement various variable-precision algebraic and transcendental functions. Performance is improved through the explicit parallelism of VLIW instructions and by dynamically varying the precision of intermediate computations. We take the division and exponential functions as examples to illustrate the design of variable-precision elementary algorithms in the VV-Processor. Finally, we create a prototype of the VV-Processor unit on a Xilinx XC6VLX760-2FF1760 FPGA chip. The experimental results show that one VV-Processor unit, running at 253 MHz, outperforms a software-based library running on an Intel Core i3 530 CPU at 2.93 GHz by a factor of 5X-37X for basic variable-precision arithmetic operations and elementary functions.

  • Modeling and Analysis for Universal Plug and Play Using PIPE2

    Cheng-Min LIN  Shyi-Shiou WU  Tse-Yi CHEN  

     
    PAPER-Computer System

      Page(s):
    2184-2190

    Universal Plug and Play (UPnP) allows automatic discovery and control of the services available on devices connected to a Transmission Control Protocol/Internet Protocol (TCP/IP) network. Although many products are designed using UPnP, little attention has been given to modeling and performance analysis of UPnP. This paper uses a Generalized Stochastic Petri Net (GSPN) framework to model and analyze the behavior of UPnP systems. The framework includes UPnP modeling, reachability decomposition, GSPN analysis, and reward assignment. The Platform Independent Petri net Editor 2 (PIPE2) tool is then used to model and evaluate the controllers in terms of power consumption, system utilization, and network throughput. Quantitative analysis shows that the steady states in the operation and notification stages dominate system performance, and that the control point outperforms the device in power consumption while the device outperforms the control point in utilization. The framework and numerical results are useful for improving the quality of the services provided by UPnP devices.

  • Parameter Tuning of the Protocol Interference Model Using SINR for Time Slot Assignment in Wireless Mesh Networks

    Gyeongyeon KANG  Yoshiaki TANIGUCHI  Go HASEGAWA  Hirotaka NAKANO  

     
    PAPER-Information Network

      Page(s):
    2191-2200

    In time division multiple access (TDMA)-based wireless mesh networks, interference relationships should be considered when time slots are assigned to links. In graph-theory-based time slot assignment algorithms, the protocol interference model is widely used to determine radio interference information, although it models actual radio interference inaccurately. On the other hand, the signal-to-interference-plus-noise-ratio (SINR) model gives more accurate interference relationships but is difficult to apply to time slot assignment algorithms, since the radio interference information cannot be determined before time slot assignment. In this paper, we investigate the effect of the parameters of the protocol interference model on the accuracy of the interference relationships it determines. Specifically, after assigning time slots to links based on the protocol interference model with various values of the interference ratio, the model's major parameter, we compare the interference relationships among links under the protocol interference and SINR models. Through simulation experiments, we show that the accuracy of the protocol interference model is improved by up to 15% by adjusting its interference ratio.
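
    The two interference models being compared can be stated compactly: the protocol model declares interference when an interferer lies within a distance ratio q of the link length, while the SINR model compares received signal power against accumulated interference plus noise (a simplified sketch; the path-loss exponent, transmit powers, and thresholds are illustrative assumptions):

```python
def pathloss(d, alpha=3.0):
    """Simplified path-loss model: received power falls off as d^-alpha."""
    return d ** -alpha

def protocol_interferes(d_tx_rx, d_int_rx, q):
    """Protocol model: interferer within q times the link distance of the receiver."""
    return d_int_rx <= q * d_tx_rx

def sinr_ok(p_tx, d_tx_rx, interferer_dists, noise=1e-9, gamma=10.0):
    """SINR model: signal over interference-plus-noise must exceed gamma."""
    signal = p_tx * pathloss(d_tx_rx)
    interference = sum(p_tx * pathloss(d) for d in interferer_dists)
    return signal / (interference + noise) >= gamma
```

The mismatch the paper measures arises exactly where these two predicates disagree for the same link and interferer geometry.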

  • Indoor Positioning System Using Digital Audio Watermarking

    Yuta NAKASHIMA  Ryosuke KANETO  Noboru BABAGUCHI  

     
    PAPER-Information Network

      Page(s):
    2201-2211

    Recently, a number of location-based services such as navigation and mobile advertising have been proposed. Such services require real-time user positions. Since the global positioning system (GPS), one of the most well-known techniques for real-time positioning, is unsuitable for indoor use due to the unavailability of GPS signals, many indoor positioning systems (IPSs) using WLAN, radio frequency identification tags, and so forth have been proposed. However, most of them suffer from high installation costs. In this paper, we propose a novel IPS for real-time positioning that utilizes a digital audio watermarking technique. The proposed IPS first embeds watermarks into an audio signal to generate watermarked signals, each of which is then emitted from a corresponding speaker installed in a target environment. A user of the proposed IPS receives the watermarked signals with a mobile device equipped with a microphone, and the watermarks are detected in the received signal. For positioning, we model various effects upon watermarks due to propagation in the air, i.e., delays, attenuation, and diffraction. The model enables the proposed IPS to accurately locate the user based on the watermarks detected in the received signal. The proposed IPS can be deployed at low cost because it works with off-the-shelf speakers already installed in most indoor environments such as department stores, amusement arcades, and airports. We experimentally evaluate the accuracy of positioning and show that the proposed IPS locates the user in a 6 m by 7.5 m room with a root-mean-squared error of 2.25 m on average. The results also demonstrate the potential capability of real-time positioning with the proposed IPS.

  • Analyzing Emergence in Complex Adaptive System: A Sign-Based Model of Stigmergy

    Chuanjun REN  Xiaomin JIA  Hongbing HUANG  Shiyao JIN  

     
    PAPER-Artificial Intelligence, Data Mining

      Page(s):
    2212-2218

    The description and analysis of emergence in complex adaptive systems (CAS) has recently become a topic of great interest in the systems field, and many ideas and methods have been proposed. This paper proposes a sign-based model of stigmergy, a mechanism widely used in complex systems. We take "Sign" as the key notion for understanding stigmergy, and give a definition of "Sign" that reveals its nature and exploits the significations and relationships it carries. We then develop a sign-based model of stigmergy that captures its essential characteristics. The basic architecture of stigmergy and its constituents are presented and discussed, and the syntax and operational semantics of stigmergy configurations are given. We illustrate the methodology of analyzing emergence in CAS using our model.

  • Patent Registration Prediction Methodology Using Multivariate Statistics

    Won-Gyo JUNG  Sang-Sung PARK  Dong-Sik JANG  

     
    PAPER-Artificial Intelligence, Data Mining

      Page(s):
    2219-2226

    Whether a patent is registered is usually based on the subjective judgment of patent examiners, who may decide according to their personal knowledge, backgrounds, etc. In this paper, we propose a novel patent registration prediction method based on patent data. The method estimates whether a patent will be registered by utilizing the objective past history of patent data instead of the subjective judgments of existing methods. The proposed method constructs an estimation model by applying a multivariate statistics algorithm. In the prediction model, the application date, activity index, IPC code, and similarity of registration refusal are the input values, and patent registration and rejection are the output values. We believe our method will contribute to improved reliability of patent registration, in that it achieves highly reliable estimation results through the past history of patent data, in contrast to most previous methods of subjective judgment by patent agents.

  • A Supervised Classification Approach for Measuring Relational Similarity between Word Pairs

    Danushka BOLLEGALA  Yutaka MATSUO  Mitsuru ISHIZUKA  

     
    PAPER-Artificial Intelligence, Data Mining

      Page(s):
    2227-2233

    Measuring the relational similarity between word pairs is important in numerous natural language processing tasks such as solving word analogy questions, classifying noun-modifier relations and disambiguating word senses. We propose a supervised classification method to measure the similarity between semantic relations that exist between words in two word pairs. First, each pair of words is represented by a vector of automatically extracted lexical patterns. Then a binary Support Vector Machine is trained to recognize word pairs with similar semantic relations to a given word pair. To train and evaluate the proposed method, we use a benchmark dataset that contains 374 SAT multiple-choice word-analogy questions. To represent the relations that exist between two word pairs, we experiment with 11 different feature functions, including both symmetric and asymmetric feature functions. Our experimental results show that the proposed method outperforms several previously proposed relational similarity measures on this benchmark dataset, achieving an SAT score of 46.9.

  • A Support Vector and K-Means Based Hybrid Intelligent Data Clustering Algorithm

    Liang SUN  Shinichi YOSHIDA  Yanchun LIANG  

     
    PAPER-Artificial Intelligence, Data Mining

      Page(s):
    2234-2243

    Support vector clustering (SVC), a recently developed unsupervised learning algorithm, has been successfully applied to solving many real-life data clustering problems. However, its effectiveness and advantages deteriorate when it is applied to complex real-world problems, e.g., those with a large proportion of noise data points or with connecting clusters. This paper proposes a support vector and K-Means based hybrid algorithm to improve the performance of SVC. A new SVC training method is developed based on analysis of a Gaussian kernel radius function, and an empirical study is conducted to guide better selection of the standard deviation of the Gaussian kernel. In the proposed algorithm, the outliers which increase problem complexity are first identified and removed by training a global SVC. The refined data set is then clustered by a kernel-based K-Means algorithm. Finally, several local SVCs are trained for the clusters, and each removed data point is labeled according to its distance from the local SVCs. Since it exploits the advantages of both SVC and K-Means, the proposed algorithm is capable of clustering compact and arbitrarily organized data sets with increased robustness to outliers and connecting clusters. Experiments are conducted on 2-D data sets generated by mixture models and on benchmark data sets from the UCI machine learning repository. The cluster error rate is lower than 3.0% for all the selected data sets. The results demonstrate that the proposed algorithm compares favorably with existing SVC algorithms.
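
    The K-Means stage of the hybrid algorithm can be sketched in its plain (non-kernel) form on 1-D data; the SVC outlier-removal and labeling stages are omitted, and the initialization and parameters here are illustrative assumptions:

```python
def kmeans(points, k, iters=20):
    """Plain K-Means on 1-D points (the kernel and SVC stages are omitted)."""
    # spread the initial centers across the sorted data
    centers = sorted(points)[::max(1, len(points) // k)][:k]
    clusters = [[] for _ in centers]
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            # assign each point to the nearest center
            idx = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[idx].append(p)
        # recompute each center as its cluster mean (keep old center if empty)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters
```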

  • The Lower Bound for the Nearest Neighbor Estimators with (p,C)-Smooth Regression Functions

    Takanori AYANO  

     
    PAPER-Artificial Intelligence, Data Mining

      Page(s):
    2244-2249

    Let (X,Y) be an Rd × R-valued random vector. In regression analysis one wants to estimate the regression function m(x) := E(Y|X=x) from a data set. In this paper we consider the convergence rate of the error for the k nearest neighbor estimators in the case that m is (p,C)-smooth. It is known that the minimax rate is unachievable by any k nearest neighbor estimator for p > 1.5 and d = 1. We generalize this result to any d ≥ 1. Throughout this paper, we assume that the data are independent and identically distributed, and as an error criterion we use the expected L2 error.
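
    The k nearest neighbor estimator under study averages the responses of the k sample points closest to the query point (a 1-D sketch for the case d = 1; the paper's contribution concerns the L2 convergence rate of this estimator, not its implementation):

```python
def knn_regress(data, x, k):
    """k-nearest-neighbor regression estimate of m(x) = E[Y | X = x].

    data: list of (x_i, y_i) samples; returns the mean y over the k
    samples whose x_i are closest to the query x.
    """
    neighbors = sorted(data, key=lambda p: abs(p[0] - x))[:k]
    return sum(y for _, y in neighbors) / k
```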

  • Decision Tree-Based Acoustic Models for Speech Recognition with Improved Smoothness

    Masami AKAMINE  Jitendra AJMERA  

     
    PAPER-Speech and Hearing

      Page(s):
    2250-2258

    This paper proposes likelihood smoothing techniques to improve decision tree-based acoustic models, where decision trees are used as replacements for Gaussian mixture models to compute the observation likelihoods for a given HMM state in a speech recognition system. Decision trees have a number of advantageous properties, such as not imposing restrictions on the number or types of features, and automatically performing feature selection. This paper describes basic configurations of decision tree-based acoustic models and proposes two methods to improve the robustness of the basic model: DT mixture models and soft decisions for continuous features. Experimental results on the Aurora 2 speech database show that a system using decision trees offers state-of-the-art performance even without taking full advantage of its potential, and that soft decisions improve the performance of DT-based acoustic models with a 16.8% relative error-rate reduction over hard decisions.

  • Amortized Linux Ext3 File System with Fast Writing after Editing for WinXP-Based Multimedia Application

    Seung-Wan JUNG  Young Jin NAM  Dae-Wha SEO  

     
    PAPER-Image Processing and Video Processing

      Page(s):
    2259-2270

    Recently, the need for multimedia devices such as mobile phones, digital TVs, PMPs, digital camcorders, and digital cameras has increased. These devices provide various services for multimedia file manipulation, such as multimedia content playback and multimedia file editing. Additionally, digital TVs can copy a recorded multimedia file to a portable USB disk. However, the Linux Ext3 file system employed by these devices has drawbacks: it requires a considerable amount of time and disk I/O to store large edited multimedia files, and it is hard for typical PC users to access. Therefore, this paper describes the design and implementation of an amortized Ext3 with FWAE (Fast Writing-After-Editing) for WinXP-based multimedia applications. FWAE is a fast and efficient multimedia file editing/storing technique for Ext3 that exploits inode block pointer re-setting and shared data blocks by simply modifying metadata information. Experiments in this research show that the amortized Ext3 with FWAE for WinXP not only improves the write performance of Ext3 by 16 times on average across various types of edited multimedia files but also notably reduces the amount of consumed disk space through data block sharing. It is also easy and comfortable to use for typical PC users unfamiliar with the Linux OS.

  • Compression of Dynamic 3D Meshes and Progressive Displaying

    Bin-Shyan JONG  Chi-Kang KAO  Juin-Ling TSENG  Tsong-Wuu LIN  

     
    PAPER-Computer Graphics

      Page(s):
    2271-2279

    This paper introduces a new dynamic 3D mesh representation that provides progressive display support for 3D animation and drastically reduces the amount of storage space required. The primary purpose of progressive display is to allow viewers to see the animation as quickly as possible, rather than having to wait until all data has been downloaded; in other words, the method allows simultaneous transmission and playback of 3D animation. Experiments show that a coarse 3D animation can be reconstructed with as little as 150 KB of data transferred. Through the sustained transmission of refinement operators, the perceived resolution approaches that of the original animation. The methods used in this study are based on a compression technique commonly used in 3D animation, clustered principal component analysis, which uses the linear independence of principal components so that animation can be stored in smaller amounts of data. This method can be coupled with streaming technology to reconstruct animation through iterative updating. Each principal component is a portion of the streaming data to be stored and transmitted after compression, as well as a refinement operator during the animation update process. This paper considers errors and rate-distortion optimization, and introduces weighted progressive transmitting (WPT), using refinement sequences from optimized principal components, so that each refinement yields an increase in quality. In other words, with identical data size, this method allows each principal component to reduce the allowable error and provide the highest-quality 3D animation.

  • Multi-Dimensional Channel Management Mechanism to Avoid Reader Collision in Dense RFID Networks

    Haoru SU  Sunshin AN  

     
    LETTER-Information Network

      Page(s):
    2280-2283

    To solve the RFID reader collision problem, a Multi-dimensional Channel Management (MCM) mechanism is proposed. A reader selects the idle channel that has the maximum distance from the channels in use. A backoff scheme is applied before channel acquisition. Simulation results show that MCM performs better than other mechanisms.
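
    The channel-selection rule can be sketched directly: among idle channels, pick the one that maximizes the minimum distance to any channel in use (an illustrative one-dimensional sketch; the letter's multi-dimensional management and backoff details are omitted):

```python
def select_channel(free_channels, used_channels):
    """Pick the idle channel whose minimum distance to any used channel is largest."""
    if not used_channels:
        return free_channels[0]          # no constraint: take the first idle channel
    return max(free_channels,
               key=lambda c: min(abs(c - u) for u in used_channels))
```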

  • Strength-Strength and Strength-Degree Correlation Measures for Directed Weighted Complex Network Analysis

    Shi-Ze GUO  Zhe-Ming LU  Zhe CHEN  Hao LUO  

     
    LETTER-Artificial Intelligence, Data Mining

      Page(s):
    2284-2287

    This Letter defines thirteen useful correlation measures for directed weighted complex network analysis. First, in-strength and out-strength are defined for each node in the directed weighted network. Then, one node-based strength-strength correlation measure and four arc-based strength-strength correlation measures are defined. In addition, considering that each node is associated with in-degree, out-degree, in-strength and out-strength, four node-based strength-degree correlation measures and four arc-based strength-degree correlation measures are defined. Finally, we use these measures to analyze the world trade network and the food web. The results demonstrate the effectiveness of the proposed measures for directed weighted networks.
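
    The basic quantities behind these measures are straightforward to compute: in-strength and out-strength sum the arc weights into and out of each node, and the correlation measures are then Pearson correlations over node or arc pairs (a sketch of the building blocks only, not the thirteen specific measures defined in the Letter):

```python
def strengths(arcs):
    """arcs: list of (src, dst, weight). Return per-node in/out strengths."""
    s_in, s_out = {}, {}
    for u, v, w in arcs:
        s_out[u] = s_out.get(u, 0) + w   # weight leaves the source node
        s_in[v] = s_in.get(v, 0) + w     # weight enters the destination node
    return s_in, s_out

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```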

  • Concept Drift Detection for Evolving Stream Data

    Jeonghoon LEE  Yoon-Joon LEE  

     
    LETTER-Artificial Intelligence, Data Mining

      Page(s):
    2288-2292

    In processing stream data, time is one of the most significant factors, not only because the size of the data increases dramatically but also because the characteristics of the data vary over time. To learn effectively from stream data that evolves over time, concept drift must be detected. We present a window adaptation function on domain values (WAV) that determines the size of the windowed batch for stream-data learning algorithms, and a method to detect changes in data characteristics with a criterion function utilizing correlation. When our adaptation function is applied to a clustering task on a multi-stream data model, learning a synopsis of the windowed batch it determines shows its effectiveness. Our criterion function, which uses correlation information of the value distribution over time, provides a reasonable threshold for detecting change between windowed batches.
