To make asynchronous circuit design easier, we propose a method for converting synchronous Register Transfer Level (RTL) models into asynchronous RTL models with bundled-data implementation. The proposed method consists of two steps: generating an intermediate representation from a given synchronous RTL model, and generating an asynchronous RTL model from the intermediate representation. This two-step structure allows us to deal with different representation styles of synchronous RTL models. We use the eXtensible Markup Language (XML) as the intermediate representation. In addition to the asynchronous RTL model, the proposed method generates a simulation model when the target implementation is a Field Programmable Gate Array, as well as a set of non-optimization constraints for the control circuit that are used in logic synthesis and layout synthesis. In the experiments, we demonstrate that the proposed method can convert both manually specified synchronous RTL models and models obtained from a high-level synthesis tool into asynchronous ones.
Kenji HASHIMOTO Ryunosuke TAKAYAMA Hiroyuki SEKI
One of the most promising compression methods for XML documents is to translate a given document into a tree grammar that generates it. A feature of this compression is that the internal structure is retained in the production rules of the grammar, which enables us to manipulate the tree structure directly, without decompression. However, previous studies assume that a given XML document has no data values, because they focus on direct retrieval and manipulation of the tree structure. This paper proposes a direct update method for XML documents with data values and shows the effectiveness of the proposed method through experiments conducted on our implemented tool.
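As a rough illustration of the idea (a sketch, not the authors' grammar formalism or update algorithm), the following Python fragment represents a small document as a tree grammar whose rules share a repeated subtree and then applies an update inside a shared rule without expanding the whole tree; because the rule is shared, the change propagates to every occurrence, and updating only one occurrence would first require splitting the rule.

# Grammar: each nonterminal maps to (label, children); a child is either
# a reference to a nonterminal ("nt") or a data value ("val").
grammar = {
    "Item":  ("item", [("val", "10")]),                               # shared subtree
    "List":  ("list", [("nt", "Item"), ("nt", "Item"), ("nt", "Item")]),
    "Start": ("doc",  [("nt", "List")]),
}

def expand(nt):
    """Decompress a nonterminal into a nested (label, children) tree."""
    label, children = grammar[nt]
    return (label, [expand(c) if kind == "nt" else c for kind, c in children])

# Direct update: rewrite the data value inside the shared rule only.
label, children = grammar["Item"]
grammar["Item"] = (label, [("val", "20")])

print(expand("Start"))
# ('doc', [('list', [('item', ['20']), ('item', ['20']), ('item', ['20'])])])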
Bongjin OH Jongyoul PARK Sunggeun JIN Youngguk HA
We propose a simple but efficient encapsulation architecture. In this architecture, clients can better decode Extensible Markup Language (XML) based service information for TV content by using a schema digest. Our experimental results show the superiority of the proposed architecture by comparing its compression ratios and decoding times with those of existing architectures.
Kazuki MIYAHARA Kenji HASHIMOTO Hiroyuki SEKI
This paper discusses the decidability of node query preservation problems for tree transducers. We assume a transformation given by a deterministic linear top-down data tree transducer (abbreviated as DLTV) and an n-ary query based on runs of a tree automaton. We say that a DLTV Tr strongly preserves a query Q if there is a query Q' such that for every tree t, the answer set of Q' for Tr(t) is equal to the answer set of Q for t. We also say that Tr weakly preserves Q if there is a query Q' such that for every t, the answer set of Q' for Tr(t) includes the answer set of Q for t. We show that the weak preservation problem is coNP-complete and the strong preservation problem is in 2-EXPTIME. We also show that the problems are decidable when a given transducer is a functional extended linear top-down data tree transducer with regular look-ahead, which is a more expressive transducer than DLTV.
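In symbols, writing Tr for the transformation and Q for the original query, the two notions above can be restated as:

\text{strong preservation:}\quad \exists Q'\ \forall t:\ Q'(\mathit{Tr}(t)) = Q(t)
\text{weak preservation:}\quad \exists Q'\ \forall t:\ Q'(\mathit{Tr}(t)) \supseteq Q(t)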
Hao HAN Yinxing XUE Keizo OYAMA Yang LIU
The rendering mechanism plays an indispensable role in browser-based Web applications. It generates active webpages dynamically and provides human-readable layout through template engines, which are used as a standard programming model to separate the business logic and data computations from the webpage presentation. The client-side rendering mechanism, owing to advances in rich application technologies, has been widely adopted. The adoption of client-side rendering brings not only various merits but also new problems. In this paper, we propose and construct “pagelet”, a segment-based template engine for developing flexible and extensible Web applications. By presenting the principles, practice, and usage experience of pagelet, we conduct a comprehensive analysis of the possible advantages and disadvantages brought by the client-side rendering mechanism from the viewpoints of both developers and end-users.
Ahmad Iqbal Hakim SUHAIMI Yuichi GOTO Jingde CHENG
Information Security Management Systems (ISMSs) play important roles in helping organizations manage their information securely. However, establishing, managing, and maintaining an ISMS is not an easy task for most organizations, because an ISMS has many participants and tasks and requires many kinds of documents. Therefore, organizations with ISMSs demand tools that can support them in performing all tasks in the ISMS lifecycle processes consistently and continuously. To realize such support tools, a database system that manages the ISO/IEC 27000 series (the international standards for ISMSs) and ISMS documents (the products of tasks in the ISMS lifecycle processes) is indispensable. The database system should manage the data of the standards and documents for all available versions and translations, the relationships among the standards and documents, authorization to access the standards and documents, and the metadata of the standards and documents. No such database system has existed until now. This paper presents an information security management database system (ISMDS) that manages the ISO/IEC 27000 series and ISMS documents. ISMDS is a meta-database system that manages several databases of standards and documents. It is used by participants in an ISMS as well as by tools supporting those participants in performing tasks in the ISMS lifecycle processes. The users or tools can retrieve data from all versions and translations of the standards and documents. The paper also presents some use cases to show the effectiveness of ISMDS.
Chittaphone PHONHARATH Kenji HASHIMOTO Hiroyuki SEKI
We study a static analysis problem on k-secrecy, which is a metric for the security against inference attacks on XML databases. Intuitively, k-secrecy means that the number of candidates for the sensitive data of a given database instance, i.e., the result of an unauthorized query, cannot be narrowed down to k-1 by using available information such as authorized queries and their results. In this paper, we investigate the decidability of the schema k-secrecy problem, defined as follows: for a given XML database schema, an authorized query, and an unauthorized query, decide whether every database instance conforming to the given schema is k-secret. We first show that the schema k-secrecy problem is undecidable for any finite k>1 even when queries are represented by a simple subclass of linear deterministic top-down tree transducers (LDTT). We next show that the schema ∞-secrecy problem is decidable for queries represented by LDTT. We give an algorithm for deciding the schema ∞-secrecy problem, analyze its time complexity, and show that the problem is EXPTIME-complete for LDTT. Moreover, we show similar results for LDTT with regular look-ahead.
Nobutaka SUZUKI Yuji FUKUSHIMA Kosetsu IKEDA
In this paper, we consider the XPath satisfiability problem under restricted DTDs called “duplicate-free”. For an XPath expression q and a DTD D, q is satisfiable under D if there exists an XML document t such that t is valid against D and the answer of q on t is nonempty. Evaluating an unsatisfiable XPath expression is meaningless, since such an expression can always be replaced by an empty set without being evaluated. However, it has been shown that the XPath satisfiability problem is intractable for a large number of XPath fragments. In this paper, we consider simple XPath fragments under two restrictions: (i) only a label can be specified as a node test, and (ii) operators such as the qualifier ([]) and path union (∪) are not allowed. We first show that, for some small XPath fragments under the above restrictions, the satisfiability problem is NP-complete under DTDs without any restriction. Then we show that there exist XPath fragments, containing the above small fragments, for which the satisfiability problem is in PTIME under duplicate-free DTDs.
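As a toy illustration of this fragment (a sketch under simplifying assumptions, not the paper's algorithm), the following Python snippet checks satisfiability of child-axis-only label paths against a hypothetical non-recursive, duplicate-free DTD, assuming every element listed as a possible child can actually occur in a valid document.

# DTD approximated as: element name -> set of element names it may contain.
dtd = {
    "site":    {"people", "regions"},
    "people":  {"person"},
    "person":  {"name"},
    "regions": {"item"},
    "item":    {"name"},
}

def satisfiable(root, labels):
    """Is the path /root/l1/.../ln nonempty in some document valid against dtd?"""
    current = root
    for label in labels:
        if label not in dtd.get(current, set()):
            return False
        current = label
    return True

print(satisfiable("site", ["people", "person", "name"]))  # True
print(satisfiable("site", ["people", "item", "name"]))    # False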
Hsu-Kuang CHANG King-Chu HUNG I-Chang JOU
Compiling documents in the Extensible Markup Language (XML) increasingly requires data services that provide both rapid response and precise search. An efficient data service should be based on a skillful representation that supports low-complexity and high-precision search capabilities. In this paper, a novel complete path representation (CPR) associated with a modified inverted index is presented to provide efficient XML data services, where queries can be versatile in terms of predicates. CPR can completely preserve hierarchical information, and the new index is used to store semantic information. The CPR approach can provide template-based indexing for fast data searches. An experiment is also conducted to evaluate the CPR approach.
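The following sketch conveys the general flavor of indexing complete root-to-node paths with an inverted index (the names and layout are illustrative assumptions, not the CPR format defined in the paper): each node is keyed by its full path, so a structural lookup becomes a single index access.

import xml.etree.ElementTree as ET
from collections import defaultdict

doc = ET.fromstring("<lib><book><title>A</title></book><book><title>B</title></book></lib>")

index = defaultdict(list)               # complete path -> list of text values
def build(elem, path):
    path = path + "/" + elem.tag
    if elem.text and elem.text.strip():
        index[path].append(elem.text.strip())
    for child in elem:
        build(child, path)

build(doc, "")
print(index["/lib/book/title"])         # ['A', 'B']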
Kenji HASHIMOTO Hiroto KAWAI Yasunori ISHIHARA Toru FUJIWARA
This paper discusses verification of the security against inference attacks on XML databases in the presence of a functional dependency. So far, we have provided a verification method for k-secrecy, which is a metric for the security against inference attacks on databases. Intuitively, k-secrecy means that the number of candidates for the sensitive data (i.e., the result of an unauthorized query) of a given database instance cannot be narrowed down to k-1 by using available information such as authorized queries and their results. In this paper, we consider a functional dependency on database instances as part of the available information. Functional dependencies help attackers reduce the number of candidates for the sensitive information. The verification method we have provided cannot be naively extended to the k-secrecy problem with a functional dependency: the method requires that the candidate set be captured by a tree automaton, but when a functional dependency is considered, the candidate set cannot always be captured by any tree automaton. We show that the ∞-secrecy problem in the presence of a functional dependency is decidable when the unauthorized query is represented by a deterministic top-down tree transducer, without explicitly computing the candidate set.
Chang-Sup PARK Jun Pyo PARK Yon Dohn CHUNG
Wireless broadcasting of heterogeneous XML data has become popular in many applications, where energy-efficient processing of user queries at the mobile client is a critical issue. This paper proposes a new index structure for wireless streams of heterogeneous XML data to enhance tuning-time performance in processing path queries on the stream. The index, called PrefixSummary, stores, for each location path in the XML data, the address of the bucket in the stream that contains the first XML node satisfying that location path. We present algorithms that generate the broadcast stream with the proposed index and process a path query on the stream efficiently by exploiting the index. We also suggest a replication scheme for PrefixSummary within a broadcast cycle to reduce latency in query processing. By analysis and experiment we show that the proposed PrefixSummary approach can reduce the tuning time for processing path queries significantly while achieving reasonable access-time performance by means of replication of the index over the broadcast stream.
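A simplified sketch of the PrefixSummary idea follows (bucket layout and addressing are illustrative assumptions): for each location path, the index keeps the address of the first bucket in the broadcast cycle containing a node that satisfies the path, so a client can skip directly to that bucket instead of scanning from the beginning of the cycle.

# Broadcast stream as a list of buckets, each tagged with the location path
# of the node it carries.
stream = [
    {"addr": 0, "path": "/catalog"},
    {"addr": 1, "path": "/catalog/cd/title"},
    {"addr": 2, "path": "/catalog/book/title"},
    {"addr": 3, "path": "/catalog/book/title"},
]

prefix_summary = {}                      # location path -> first bucket address
for bucket in stream:
    prefix_summary.setdefault(bucket["path"], bucket["addr"])

def tune_in_address(path):
    """Tuning for a path query starts at the first bucket satisfying the path."""
    return prefix_summary.get(path)

print(tune_in_address("/catalog/book/title"))   # 2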
DTDs are continuously updated according to changes in the real world. Let t be an XML document valid against a DTD D, and suppose that D is updated by an update script s. In general, we cannot uniquely "infer" a transformation of t from s, i.e., we cannot uniquely determine the elements in t that should be deleted and/or the positions in t at which new elements should be inserted. In this paper, we consider inferring the K optimum transformations of t from s so that a user can find the most desirable transformation more easily. We first show that the problem of inferring the K optimum transformations of an XML document from an update script is NP-hard even if K = 1. Then, assuming that an update script is of length one, we show an algorithm for solving the problem that runs in time polynomial in |D|, |t|, and K.
Katsuya MASUDA Jun'ichi TSUJII
This paper presents algorithms for searching text regions by specifying annotated information in tag-annotated text, using Region Algebra. The original algebra and its efficient algorithms are extended to handle both nested regions and crossed regions. These extensions are necessary for text search using rich linguistic annotations. We first assign a depth number to every nested tag region to order these regions, and write efficient algorithms using the depth number for the containment operations, which can then treat nested tag regions. Next, we introduce variables for attribute values of tags into the algebra to treat annotations in which attributes refer to other tag regions, and propose an efficient method of treating re-entrancy by incrementally determining values for variables. Our algorithms have been implemented in a text search engine for MEDLINE, a large textbase of abstracts in medical science. Experiments on tag-annotated MEDLINE abstracts demonstrate the effectiveness of specifying annotations and the efficiency of our algorithms. The system is publicly accessible at http://www-tsujii.is.s.u-tokyo.ac.jp/medie/.
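The following naive sketch (quadratic, unlike the efficient algorithms of the paper) illustrates how depth numbers allow containment tests between nested tag regions represented as (start, end, depth) triples.

# Regions of two tag types over the same text; deeper regions have larger depth.
sentence_regions = [(0, 10, 1), (12, 30, 1)]
np_regions       = [(0, 4, 2), (2, 4, 3), (14, 20, 2)]   # NPs may nest inside NPs

def contained_in(inner, outer):
    """Return the inner regions contained in some outer region."""
    result = []
    for (s, e, d) in inner:
        if any(S <= s and e <= E and d > D for (S, E, D) in outer):
            result.append((s, e, d))
    return result

print(contained_in(np_regions, sentence_regions))
# [(0, 4, 2), (2, 4, 3), (14, 20, 2)]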
Kenji HASHIMOTO Kimihide SAKANO Fumikazu TAKASUKA Yasunori ISHIHARA Toru FUJIWARA
This paper discusses verification of the security against inference attacks on XML databases. First, a security definition called k-secrecy against inference attacks on XML databases is proposed. k-secrecy with an integer k > 1 (or k = ∞) means that attackers cannot narrow down the candidates for the value of the sensitive information to k - 1 or fewer (or, when k = ∞, to a finite number), using the results of given authorized queries and schema information. Secondly, an XML query model is presented under which verification can be performed straightforwardly according to the security definition. The query model can represent practical queries that extract nodes according to any of their neighboring nodes, such as ancestors, descendants, and siblings. Thirdly, a further refinement of the verification method is presented, which produces much smaller intermediate results if the schema contains no arbitrarily recursive element. The correctness of the refinement is proved, and its effect on time and space efficiency has been confirmed by experiment.
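As a rough formalization of this notion (our notation, not the paper's), let Q_a be the authorized query, Q_s the query extracting the sensitive value, and S the schema; the attacker's candidate set for an instance t and the k-secrecy condition can be sketched as:

\mathrm{Cand}(t) = \{\, Q_s(t') \mid t' \text{ is valid against } S,\ Q_a(t') = Q_a(t) \,\}
t \text{ is } k\text{-secret} \iff |\mathrm{Cand}(t)| \ge k \qquad (\text{for } k = \infty:\ \mathrm{Cand}(t) \text{ is infinite})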
Umaporn SUPASITTHIMETHEE Toshiyuki SHIMIZU Masatoshi YOSHIKAWA Kriengkrai PORKAEW
One of the most convenient ways to query XML data is keyword search, because it requires no knowledge of the XML structure and no learning of a new user interface. However, keyword search is ambiguous: users may use different terms to search for the same information. Furthermore, it is difficult for a system to decide which node is likely to be chosen as a return node and how much information should be included in the result. To address these challenges, we propose a keyword-based XML semantic search called XSemantic. On the one hand, we introduce three notions to complete the search in terms of semantics. First, with semantic term expansion, our system is robust against ambiguous keywords by using a domain ontology. Second, to return semantically meaningful answers, we automatically infer the return information from the user queries and take advantage of the shortest path to return meaningful connections between keywords. Third, we present a semantic ranking that reflects the degree of similarity as well as the semantic relationship, so that search results with higher relevance are presented to the users first. On the other hand, we investigate the problem of how much information is included in the search results under the LCA and proximity search approaches. We therefore introduce the notion of the Lowest Common Element Ancestor (LCEA) and define a simple rule without any requirement on schema information such as a DTD or XML Schema. The first experiment indicates that XSemantic not only properly infers the return information but also generates compact, meaningful results. The benefits of our proposed semantics are further demonstrated by the second experiment.
Nobutaka SUZUKI Yuji FUKUSHIMA
Finding an appropriate data transformation between two schemas has been an important problem. In this paper, assuming that an update script between original and updated DTDs is available, we consider inferring a transformation algorithm from the original DTD and the update script such that the algorithm transforms each document valid against the original DTD into a document valid against the updated DTD. We first show a transformation algorithm inferred from a DTD and an update script. We next show a sufficient condition under which the transformation algorithm inferred from a DTD d and an update script is unambiguous, i.e., for any document t valid against d, elements to be deleted/inserted can unambiguously be determined. Finally, we show a polynomial-time algorithm for testing the sufficient condition.
Jae-Ho CHOI Sang-Hyun PARK Myong-Soo LEE SangKeun LEE
With the growth of wireless computing and the popularity of the eXtensible Markup Language (XML), wireless XML data management is emerging as an important research area. In this paper, cache invalidation methodology with XML updates is addressed in wireless computing environments. A family of XML cache invalidation strategies, called S-XIR, D-XIR, and E-XIR, is suggested. With S-XIR and D-XIR, when only the structure of the XML data changes, the unchanged parts can be effectively reused in client caching. E-XIR, which uses prefetching, can further improve access time. Simulations are carried out to evaluate the proposed methodology; they show that the proposed strategies improve both tuning time and access time significantly. In particular, the proposed strategies are on average about 4 to 12 times better than the previous approach in terms of tuning time.
Jaehoon KIM Youngsoo KIM Seog PARK
Recently, for more efficient filtering of XML data, the YFilter system has been suggested, which exploits the prefix commonalities that exist among path expressions. Sharing the prefix commonality improves filtering performance through a tremendous reduction in the size of the filtering machine. However, exploiting the postfix commonality can also be useful in XML filtering. For example, when a stream of XML messages does not have any defined schema, or users cannot remember the defined schema exactly, users often write partial matching path queries beginning with the descendant axis ("//"), e.g., '//science/article/title', '//entertainment/article/title', and '//title'. In that case, the registered XPath queries are likely to have postfix commonality, e.g., the sample queries share the partial path expressions 'article/title' and 'title'. Therefore, in this paper, we introduce a bottom-up filtering approach exploiting the postfix commonality, as opposed to the top-down approach of YFilter exploiting the prefix commonality. Experimental results show that our method has better filtering performance when the registered XPath queries mainly consist of partial matching path queries with postfix commonality.
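The sketch below (a simplification, not YFilter or the paper's filtering machine) shows how the sample queries above share their postfix in a trie built over reversed label paths, so that matching can proceed bottom-up from an element toward the document root.

queries = {
    "q1": ["science", "article", "title"],
    "q2": ["entertainment", "article", "title"],
    "q3": ["title"],
}

trie = {}
for qid, path in queries.items():
    node = trie
    for label in reversed(path):                  # 'title' -> 'article' -> ...
        node = node.setdefault(label, {})
    node.setdefault("$match", []).append(qid)

def bottom_up_match(labels_to_root):
    """Collect queries matched while walking labels from an element up to the root."""
    matched, node = [], trie
    for label in labels_to_root:
        if label not in node:
            break
        node = node[label]
        matched.extend(node.get("$match", []))
    return matched

# Element reached via /news/science/article/title, read bottom-up:
print(bottom_up_match(["title", "article", "science", "news"]))   # ['q3', 'q1']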
Stanislav STANKOVIC Jaakko ASTOLA
Decision diagrams are often used for efficient representation of discrete functions in terms of needed storage space and processing time. In this paper, we propose an XML (Extensible Markup Language) based standard for the structural description of various types of decision diagrams. The proposed standard describes elements of the structure common to various types of decision diagrams. It also provides facilities for storing additional information, specific to particular types of decision diagrams. Properties of XML enable us to define a standard that is flexible enough to be applicable to various existing types of decision diagrams as well as new types that could be defined in the future. The existence of such a standard permits efficient storage and exchange of data in decision diagram form between various software systems. In this way, it supports benchmarking, testing and verification of various procedures using decision diagrams as a basic data structure.
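A small illustration of the general approach follows (the element and attribute names are assumptions made for this sketch, not the schema defined by the proposed standard): a BDD for f(x1, x2) = x1 AND x2 is stored as XML and evaluated by following its low/high edges.

import xml.etree.ElementTree as ET

bdd_xml = """
<decisiondiagram type="BDD" variables="x1 x2">
  <node id="n1" var="x1" low="t0" high="n2"/>
  <node id="n2" var="x2" low="t0" high="t1"/>
  <terminal id="t0" value="0"/>
  <terminal id="t1" value="1"/>
  <root ref="n1"/>
</decisiondiagram>
"""

dd = ET.fromstring(bdd_xml)
nodes = {n.get("id"): n for n in dd if n.tag in ("node", "terminal")}

def evaluate(assignment):
    """Follow low/high edges from the root according to the variable assignment."""
    node = nodes[dd.find("root").get("ref")]
    while node.tag == "node":
        edge = "high" if assignment[node.get("var")] else "low"
        node = nodes[node.get(edge)]
    return int(node.get("value"))

print(evaluate({"x1": 1, "x2": 1}))   # 1
print(evaluate({"x1": 1, "x2": 0}))   # 0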
Katsuhisa MARUYAMA Shinichiro YAMAMOTO
Recent IDEs have become more extensible tool platforms but do not concern themselves with how the tools running on them collaborate with each other. They compel developers to use proprietary representations or the classical abstract syntax tree (AST) to build source code tools. Although these representations contain sufficient information, they are neither portable nor extensible. This paper proposes a tool platform that manages commonly used, fine-grained information about Java source code by using an XML representation. Our representation is suitable for developing tools that browse and manipulate actual source code, since the original code is annotated with tags based on its structure and is retained within the tags. Additionally, it exposes information resulting from global semantic analysis, which is never provided by the typical AST. The proposed platform allows developers to extend the representation for the purpose of sharing or exchanging various kinds of information about the source code, and also enables them to build new tools by using existing XML utilities.
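The fragment below sketches the core idea of retaining the original code inside structural tags (tag names here are illustrative assumptions, not the platform's actual representation): the same annotated tree supports both a structural view and verbatim recovery of the source text.

import xml.etree.ElementTree as ET

annotated = ET.fromstring(
    "<class name='Counter'>class Counter {"
    "<field name='n'> int n = 0; </field>"
    "<method name='inc'> void inc() { n++; } </method>"
    "}</class>"
)

# Structural view: enumerate declared methods.
print([m.get("name") for m in annotated.iter("method")])   # ['inc']

# Textual view: recover the original source verbatim from the annotated tree.
print("".join(annotated.itertext()))
# class Counter { int n = 0;  void inc() { n++; } }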