Keyword Search Result

[Keyword] algorithm and computational complexity (10 hits)

Showing 1-10 of 10 hits
  • Parallel Algorithms for Refutation Tree Problem on Formal Graph Systems

    Tomoyuki UCHIDA  Takayoshi SHOUDAI  Satoru MIYANO  

     
    PAPER-Algorithm and Computational Complexity

      Vol:
    E78-D No:2
      Page(s):
    99-112

    We define a new framework for rewriting graphs, called a formal graph system (FGS), which is a logic program having hypergraphs instead of terms in first-order logic. We first prove that a class of graphs is generated by a hyperedge replacement grammar if and only if it is defined by an FGS of a special form called a regular FGS. In the same way as for logic programs, we can define a refutation tree for an FGS. The classes of TTSP graphs and outerplanar graphs are definable by regular FGSs. Then, we consider the problem of constructing a refutation tree of a graph for these FGSs. For the FGS defining TTSP graphs, we present a refutation tree algorithm that runs in O(log^2 n log m) time with O(nm) processors on an EREW PRAM. For the FGS defining outerplanar graphs, we show that the refutation tree problem can be solved in O(log^2 n) time with O(nm) processors on an EREW PRAM. Here, n and m are the numbers of vertices and edges of the input graph, respectively.
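
    The TTSP case can be made concrete with a small sequential sketch: a refutation tree for the regular FGS defining TTSP graphs corresponds to a series-parallel decomposition tree, which the greedy reduction below constructs. This is only an illustration of that structure, not the paper's parallel EREW PRAM algorithm; the function and the example instance are invented.

    ```python
    # Build a series-parallel decomposition tree of a TTSP multigraph by
    # repeated series/parallel reductions; returns None if not TTSP.
    from collections import defaultdict

    def ttsp_decompose(edges, source, sink):
        # Each live edge carries its subtree: a leaf ("E", u, v) or ("S"/"P", l, r).
        live = [((u, v), ("E", u, v)) for u, v in edges]
        changed = True
        while changed and len(live) > 1:
            changed = False
            # Parallel reduction: two edges with the same endpoints merge into one.
            seen, merged = {}, []
            for (u, v), t in live:
                key = frozenset((u, v))
                if key in seen:
                    (pu, pv), pt = merged[seen[key]]
                    merged[seen[key]] = ((pu, pv), ("P", pt, t))
                    changed = True
                else:
                    seen[key] = len(merged)
                    merged.append(((u, v), t))
            live = merged
            # Series reduction: eliminate an internal vertex of degree 2.
            deg = defaultdict(list)
            for i, ((u, v), _) in enumerate(live):
                deg[u].append(i); deg[v].append(i)
            for w, idxs in deg.items():
                if w not in (source, sink) and len(idxs) == 2 and idxs[0] != idxs[1]:
                    i, j = idxs
                    (a, b), t1 = live[i]
                    (c, d), t2 = live[j]
                    x = a if b == w else b   # endpoint of edge i other than w
                    y = c if d == w else d   # endpoint of edge j other than w
                    live = [e for k, e in enumerate(live) if k not in (i, j)]
                    live.append(((x, y), ("S", t1, t2)))
                    changed = True
                    break
        return live[0][1] if len(live) == 1 else None

    # Example: two series paths in parallel between terminals 0 and 3.
    print(ttsp_decompose([(0, 1), (1, 3), (0, 2), (2, 3)], 0, 3))
    ```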

  • Unification-Failure Filter for Natural Language

    Alfredo M. MAEDA  Hideto TOMABECHI  Jun-ichi AOE  

     
    PAPER-Software Systems

      Vol:
    E78-D No:1
      Page(s):
    19-26

    Graph unification is by far the most expensive process in unification-based grammar parsing, accounting for the vast majority of the total parsing time of natural language sentences. Much of this cost is wasted: in general, no less than 60% of the graph unifications performed actually fail. Thus, one way to speed up unification is to deal with such failures efficiently. In this paper, a process that runs prior to unification itself and filters out a considerably high percentage of the graphs that would fail to unify is proposed. This unification-filtering process compares signatures computed for each of the graphs to be unified. The unification filter (hereafter UF) stops around 87% of the non-unifiable graphs before unification itself takes place, and it takes significantly less time to detect and discard graphs that do not unify than it would take unification to fail on the same graphs. As a result of using UF, unification runs in around 71% of the time of the fastest known unification algorithm.
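
    A minimal sketch of the filtering idea, with an assumed signature scheme (the paper's actual signature encoding is not reproduced here): two feature graphs cannot unify if the same feature carries two distinct atomic values, and that clash can be detected from cheap per-graph signatures before full unification is attempted.

    ```python
    # Feature graphs as nested dicts whose leaves are atomic values.  A
    # signature records, per top-level feature, the atom it leads to, or a
    # marker for a complex substructure (assumed scheme, for illustration).
    def signature(graph):
        return {f: (v if not isinstance(v, dict) else "*complex*")
                for f, v in graph.items()}

    def may_unify(g1, g2):
        """Cheap pre-check: False means unification is guaranteed to fail."""
        s1, s2 = signature(g1), signature(g2)
        for f in s1.keys() & s2.keys():
            a, b = s1[f], s2[f]
            if a != "*complex*" and b != "*complex*" and a != b:
                return False  # two distinct atoms under one feature never unify
        return True  # may still fail deeper; full unification must decide

    # Example: the clash on 'num' is caught without full unification.
    g1 = {"cat": "np", "agr": {"num": "sg"}, "num": "sg"}
    g2 = {"cat": "np", "num": "pl"}
    print(may_unify(g1, g2))  # False
    ```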

  • Complexity of Finding Alphabet Indexing

    Shinichi SHIMOZONO  Satoru MIYANO  

     
    PAPER-Algorithm and Computational Complexity

      Vol:
    E78-D No:1
      Page(s):
    13-18

    For two finite disjoint sets P and Q of strings over an alphabet Σ, an alphabet indexing for P, Q by an indexing alphabet Γ with |Γ| < |Σ| is a mapping φ : Σ → Γ satisfying φ̃(P) ∩ φ̃(Q) = ∅, where φ̃ : Σ* → Γ* is the homomorphism derived from φ. We introduced this notion through experiments on knowledge acquisition from amino acid sequences of proteins by learning algorithms. This paper analyzes the complexity of finding an alphabet indexing. We first show that the problem is NP-complete. Then we give a local search algorithm for this problem and show a result on PLS-completeness.
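
    The definition can be checked on a toy instance by brute force. The exponential search below is consistent with the NP-completeness result and is not the paper's local search algorithm; the instance is invented.

    ```python
    # Search all mappings phi: Sigma -> Gamma and return one whose
    # homomorphic images of P and Q are disjoint, or None if none exists.
    from itertools import product

    def find_alphabet_indexing(P, Q, sigma, gamma):
        for values in product(gamma, repeat=len(sigma)):
            phi = dict(zip(sigma, values))
            image = lambda words: {"".join(phi[c] for c in w) for w in words}
            if image(P).isdisjoint(image(Q)):
                return phi
        return None

    # Toy instance over Sigma = {a, b, c} indexed by Gamma = {0, 1}.
    P = {"ab", "ac"}
    Q = {"ba", "cc"}
    print(find_alphabet_indexing(P, Q, "abc", "01"))  # e.g. {'a': '0', 'b': '1', 'c': '1'}
    ```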

  • Exhaustive Computation to Derive the Lower Bound for Sorting 13 Items

    Shusaku SAWATO  Takumi KASAI  Shigeki IWATA  

     
    PAPER-Algorithm and Computational Complexity

      Vol:
    E77-D No:9
      Page(s):
    1027-1031

    We have made an exhaustive computation to establish that 33 comparisons never suffice to sort 13 items. The computation was carried out within 10 days on a workstation. Since merge insertion sort [Ford, et al., A tournament problem, Amer. Math. Monthly, vol. 66, (1959)] uses 34 comparisons for sorting 13 items, our result guarantees the optimality of that procedure for sorting 13 items as far as the number of comparisons is concerned. The problem had been open for nearly three decades since Mark Wells discovered in 1965 that 30 comparisons are required to sort 12 items.
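
    The numbers in the abstract can be checked with two standard formulas (not the paper's exhaustive search): the information-theoretic lower bound ceil(log2(13!)) = 33, and the Ford-Johnson (merge insertion) comparison count sum_{k=1}^{n} ceil(log2(3k/4)) = 34 for n = 13.

    ```python
    # Information-theoretic lower bound vs. merge insertion count for n = 13.
    import math

    n = 13
    info_bound = math.ceil(math.log2(math.factorial(n)))
    merge_insertion = sum(math.ceil(math.log2(3 * k / 4)) for k in range(1, n + 1))
    print(info_bound, merge_insertion)  # 33 34
    ```

    The gap between 33 and 34 is exactly what the exhaustive computation closes: it rules out every 33-comparison procedure, so the information-theoretic bound is not attainable for n = 13.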

  • Some Two-Person Game is Complete for AC^k Under Many-One NC^1 Reducibility

    Shigeki IWATA  

     
    PAPER-Automata, Languages and Theory of Computing

      Vol:
    E77-D No:9
      Page(s):
    1022-1026

    AC^k is the class of problems solvable by an alternating Turing machine in space O(log n) and alternation depth O(log^k n) [S. A. Cook, A taxonomy of problems with fast parallel algorithms, Inform. Contr. vol. 64]. We consider a game played by two persons: the players alternately move a marker along an edge of a given digraph, and the first player who cannot move loses the game. It is shown that the problem of determining whether the first player can win the game on a digraph with n nodes after exactly log^k n moves is complete for AC^k under many-one NC^1 reducibility.
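
    A memoized sketch of the game's decision problem appears below. How the game is scored when the move budget runs out is a modeling assumption here (the player to move at that point is taken to lose), and the digraph instance is invented; the abstract's point is the complexity classification, not this sequential recursion.

    ```python
    # Decide whether the player to move from `start` can force a win within
    # r moves, where a player who cannot move loses immediately.
    from functools import lru_cache

    def first_player_wins(adj, start, r):
        @lru_cache(maxsize=None)
        def win(v, moves_left):
            if moves_left == 0:
                return False  # budget exhausted: player to move loses (assumption)
            # The player to move wins iff some edge leads to a position
            # from which the opponent loses.
            return any(not win(u, moves_left - 1) for u in adj.get(v, ()))
        return win(start, r)

    # Path 0 -> 1: the first player moves to 1 and the second player is stuck.
    adj = {0: (1,), 1: ()}
    print(first_player_wins(adj, 0, 2))  # True
    ```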

  • Stochastic Relaxation for Continuous Values--Standard Regularization Based on Gaussian MRF--

    Sadayuki HONGO  Isamu YOROIZAWA  

     
    PAPER-Regularization

      Vol:
    E77-D No:4
      Page(s):
    425-432

    We propose a fast computation method of stochastic relaxation for the continuous-valued Markov random field (MRF) whose energy function is represented in quadratic form. In the case of regularization in visual information processing, the probability density function of a state transition can be transformed into a Gaussian function; therefore, the probabilistic state transition can be realized with Gaussian random numbers whose mean and variance are calculated from the input data and the neighborhood. Early visual information processing can be represented with a coupled MRF model which consists of continuity and discontinuity processes. Each of the continuity and discontinuity processes represents a visual property, such as an intensity pattern or a discontinuity of the continuity process. Since most energy functions for early visual information processing can be represented by a quadratic form in the continuity process, the probability density of local computation variables in the continuity process is equivalent to a Gaussian function. Using this characteristic, the discrimination function computation need not calculate the summation of the probabilities corresponding to all possible states; therefore, the computation load for the state transition is drastically decreased. Furthermore, if a continuous-valued discontinuity process is introduced, the MRF model can directly represent the strength of a discontinuity. Moreover, the discrimination function of this energy function in the discontinuity process, which is linear, can also be calculated without probability summation. In this paper, a fast method for calculating the state transition probability for the continuous-valued MRF in visual information processing is explained theoretically. Then, initial condition dependency, computation time, and dependency on the statistical estimation of the condition are investigated in comparison with conventional methods, using the examples of data restoration for a corrupted square wave and a corrupted one-dimensional slice of a natural image.
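
    A minimal sketch of the key point for a 1-D restoration problem with the continuity process only (the energy form, parameter values, and test signal below are assumptions for illustration, not the paper's experiments): with a quadratic energy, the conditional density of each variable given its neighbors is Gaussian, so each state transition is a single Gaussian draw with closed-form mean and variance, with no summation over states.

    ```python
    # Gibbs-style stochastic relaxation for E(x) = sum (x_i - y_i)^2
    #                                            + lam * sum (x_i - x_{i+1})^2.
    import numpy as np

    rng = np.random.default_rng(0)
    y = np.where(np.arange(64) < 32, 0.0, 1.0) + 0.3 * rng.standard_normal(64)
    x, lam, T = y.copy(), 4.0, 1.0

    for sweep in range(200):
        for i in range(len(x)):
            nbrs = [x[i - 1]] if i > 0 else []
            if i < len(x) - 1:
                nbrs.append(x[i + 1])
            # Completing the square in x_i gives the precision and mean of
            # the conditional Gaussian at temperature T.
            prec = 2.0 * (1.0 + lam * len(nbrs)) / T
            mean = 2.0 * (y[i] + lam * sum(nbrs)) / (T * prec)
            x[i] = rng.normal(mean, np.sqrt(1.0 / prec))

    print(np.round(x[:8], 2), np.round(x[-8:], 2))  # roughly near 0 ... near 1
    ```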

  • An Architecture for Parallelism of OPS5 Production Systems

    Tsuyoshi KAWAGUCHI  Etsuro HONDA  

     
    PAPER-Artificial Intelligence and Cognitive Science

      Vol:
    E76-D No:8
      Page(s):
    935-946

    In this paper we propose an architecture and an algorithm for the parallel execution of OPS5 production systems. It is known that current OPS5 production system interpreters spend almost 90% of their execution time in the match step. Thus, in this paper we focus on speeding up the match step. The match algorithm used in OPS5 is called Rete, and it uses a special kind of data-flow network compiled from the left-hand sides of rules. To achieve the maximum degree of parallelism of a given OPS5 program with as few processors as possible, the proposed parallel machine uses loosely coupled multiprocessors. Parallel machines designed for fine-grain parallelism, such as DADO, also use loosely coupled multiprocessors. However, the proposed machine differs from such machines in the following points: the use of powerful processors with large memories and small cycle times; the use of a shared Rete network (parallel machines designed for fine-grain parallelism use an unshared Rete network); and high hardware utilization. The basic ideas of the proposed parallel machine are as follows. (1) Use of a modified Rete network in which node sharing is applied only to constant-test nodes and each memory node is lumped with its child two-input node. (2) Static allocation of the nodes of the modified Rete network onto processors. (3) Partition of the set of processors into three subsets: constant-test node processors, two-input node processors, and conflict-set processors. (4) Use of a ring network as the interconnection network among two-input node processors. In addition to an architecture for parallel execution of OPS5 production systems, we propose a scheme for mapping the modified Rete network onto the proposed architecture. The results of simulation experiments show that the proposed architecture is promising for the parallel execution of OPS5 production systems.
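
    A toy sequential sketch of the Rete ingredients mentioned above, with node sharing for constant tests (the rule and working-memory elements are invented; this is not the proposed parallel machine): constant-test nodes filter working-memory elements, and a two-input node joins tokens that agree on a shared variable.

    ```python
    def const_test(attr, value):
        """Constant-test node (shareable across rules): passes WMEs whose
        attribute equals value."""
        return lambda wmes: [w for w in wmes if w.get(attr) == value]

    def join(left, right, var_l, var_r):
        """Two-input node (with its memories lumped in): pairs tokens that
        agree on the given attributes."""
        return [(l, r) for l in left for r in right if l[var_l] == r[var_r]]

    wm = [
        {"class": "block", "id": "a", "on": "b"},
        {"class": "block", "id": "b", "on": "table"},
        {"class": "block", "id": "c", "on": "a"},
    ]
    # Rule LHS: (block ?x on ?y) and (block ?y on table).
    blocks = const_test("class", "block")(wm)      # shared by both conditions
    on_table = const_test("on", "table")(blocks)   # second condition
    print(join(blocks, on_table, "on", "id"))      # ?x = a sits on ?y = b
    ```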

  • A Parallel Algorithm for the Maximal Co-Hitting Set Problem

    Takayoshi SHOUDAI  Satoru MIYANO  

     
    LETTER-Algorithm and Computational Complexity

      Vol:
    E76-D No:2
      Page(s):
    296-298

    Let C = {c_1, …, c_m} be a family of subsets of a finite set S = {1, …, n}. A subset S' of S is a co-hitting set if S' contains no element of C as a subset. By using an O((log n)^2) time EREW PRAM algorithm for the maximal independent set problem (MIS), we show that a maximal co-hitting set for S can be computed on an EREW PRAM in time O(αβ(log(nm))^2) using O(n^2 m) processors, where α = max{|c_i| : i = 1, …, m} and β = max{|d_j| : j = 1, …, n} with d_j = {c_i | j ∈ c_i}. This implies that if αβ = O((log(nm))^k) then the problem is solvable in NC.
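
    The definition admits a simple greedy sequential sketch (not the paper's MIS-based EREW PRAM algorithm); the instance is invented.

    ```python
    # Grow S' element by element, keeping the invariant that S' contains no
    # c_i as a subset; the result is maximal by construction.
    def maximal_co_hitting_set(n, C):
        Csets = [set(c) for c in C]
        s = set()
        for x in range(1, n + 1):
            candidate = s | {x}
            if not any(c <= candidate for c in Csets):
                s = candidate
        return s

    # Example: S = {1,...,5}, C = {{1,2}, {3}}.
    print(maximal_co_hitting_set(5, [{1, 2}, {3}]))  # {1, 4, 5}
    ```

    The result {1, 4, 5} is maximal: adding 2 would make it contain {1, 2}, and adding 3 would make it contain {3}.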

  • Relationships between PAC-Learning Algorithms and Weak Occam Algorithms

    Eiji TAKIMOTO  Akira MARUOKA  

     
    PAPER

      Vol:
    E75-D No:4
      Page(s):
    442-448

    In the approximate learning model introduced by Valiant, it has been shown by Blumer et al. that an Occam algorithm is immediately a PAC-learning algorithm. An Occam algorithm is a polynomial time algorithm that produces, for any sequence of examples, a simple hypothesis consistent with the examples. So an Occam algorithm can be thought of as a procedure that compresses the information in the examples. Weakening the compression ability of Occam algorithms, we introduce a notion of weak Occam algorithms and investigate the relationship between weak Occam algorithms and PAC-learning algorithms. It is shown that although a weak Occam algorithm is immediately a (probably) consistent PAC-learning algorithm, the converse does not hold. On the other hand, we show how to construct a weak Occam algorithm from a PAC-learning algorithm under some natural conditions. This result implies the equivalence between the existence of a weak Occam algorithm and that of a PAC-learning algorithm. Since the weak Occam algorithms constructed from PAC-learning algorithms are deterministic, our result improves a result of Board and Pitt that the existence of a PAC-learning algorithm is equivalent to that of a randomized Occam algorithm.
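
    For scale, the standard Occam-style sample bound of Blumer et al. can be evaluated numerically: a consistent learner outputting hypotheses from a finite class H is PAC with m ≥ (1/ε)(ln|H| + ln(1/δ)) examples. The worked instance below uses invented numbers and is not the paper's construction.

    ```python
    # Sample size sufficient for a consistent learner over a finite class H.
    import math

    def occam_sample_size(size_H, eps, delta):
        return math.ceil((math.log(size_H) + math.log(1.0 / delta)) / eps)

    # Hypotheses: monotone conjunctions over 20 Boolean variables, |H| = 2^20.
    print(occam_sample_size(2 ** 20, eps=0.1, delta=0.05))  # 169
    ```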

  • Circuit Complexity and Approximation Method

    Akira MARUOKA  

     
    INVITED PAPER

      Vol:
    E75-D No:1
      Page(s):
    5-21

    The circuit complexity of a Boolean function is defined to be the minimum number of gates in a circuit computing the function. In general, the circuit complexity is established by deriving two types of bounds on the complexity. On one hand, an upper bound is derived by exhibiting a circuit, of the size given by the bound, that computes the function. On the other hand, a lower bound is established by proving that the function cannot be computed by any circuit of that size. There has been much success in obtaining good upper bounds, while, in spite of much effort, little progress has been made toward establishing strong lower bounds. In this paper, after surveying general results concerning circuit complexity of Boolean functions, we explain recent results about lower bounds, focusing on the method of approximation.
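
    The upper-bound direction can be illustrated with the standard DNF construction (a textbook gate count, not from the paper): exhibiting the DNF circuit of a function gives an explicit, if crude, upper bound on its circuit complexity.

    ```python
    # Gate count of the DNF circuit for an n-variable Boolean function:
    # one fan-in-2 AND chain per satisfying assignment, plus an OR chain.
    from itertools import product

    def dnf_gate_count(f, n):
        minterms = sum(1 for xs in product([0, 1], repeat=n) if f(*xs))
        if minterms == 0:
            return 1                      # a single constant-0 gate
        and_gates = minterms * (n - 1)    # AND of n literals per minterm
        or_gates = minterms - 1           # OR the minterms together
        return and_gates + or_gates      # NOT gates for literals not counted

    # Majority of 3 variables has 4 satisfying assignments.
    maj3 = lambda a, b, c: int(a + b + c >= 2)
    print(dnf_gate_count(maj3, 3))  # 4*2 + 3 = 11 gates
    ```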
