Hiroshi NAGAMOCHI Koji MOCHIZUKI Toshihide IBARAKI
We consider a single-vehicle scheduling problem on a tree T, where each vertex has a job with a release time and a processing time, and each edge has a travel time. There is a single vehicle which starts from a start vertex s and reaches a goal vertex g after finishing all jobs. In particular, s is called a home location if s = g. The objective of the problem is to find a depth-first routing on T that minimizes the completion time. In this paper, we first show that the minimum completion times of the problem for all home locations s ∈ V can be simultaneously computed in O(n) time, once the problem with a specified home location s ∈ V has been solved, where n is the number of vertices. We also show that, given a specified start vertex s, the minimum completion times for all goal vertices g can be computed in O(n) time.
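To make the objective concrete, the following minimal sketch (in Python, with hypothetical dict-based inputs; this is not the paper's O(n) algorithm) evaluates the completion time of one fixed depth-first routing that starts and ends at the home location s, waiting at each vertex for the job's release time if the vehicle arrives early.

# A minimal sketch (not the authors' algorithm): given a tree with a
# release time r[v] and processing time p[v] at each vertex and a travel
# time w[(u,v)] on each edge, evaluate the completion time of one fixed
# depth-first routing that starts and ends at the home location s.

def completion_time(children, r, p, w, s):
    t = 0.0  # current time

    def dfs(v):
        nonlocal t
        t = max(t, r[v]) + p[v]      # wait for release, then process job v
        for u in children[v]:        # visit subtrees in the given order
            t += w[(v, u)]           # travel down the edge
            dfs(u)
            t += w[(u, v)]           # travel back up

    dfs(s)
    return t

# Example: a star with center s = 0 and leaves 1, 2.
children = {0: [1, 2], 1: [], 2: []}
r = {0: 0, 1: 5, 2: 0}
p = {0: 1, 1: 2, 2: 2}
w = {(0, 1): 1, (1, 0): 1, (0, 2): 1, (2, 0): 1}
print(completion_time(children, r, p, w, 0))   # 12.0

The paper's contribution is to optimize over all depth-first routings (and all home locations) efficiently; the sketch merely evaluates one of them.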
Toshihide IBARAKI Hiroshi KISE Hisashi MINE
Let J = {1, 2, ..., n} be a set of jobs. Each job j can be processed on one of m identical machines in one unit of time, and has a ready time and a due time. It is shown that the problem of finding an optimal schedule minimizing the sum of costs associated with the jobs j, where each cost is a monotone function of the completion time of j, can be reduced to an assignment problem with O(n) vertices; hence it can be solved in O(n^3) steps. As important special cases, this cost includes the weighted number of late jobs, the weighted tardiness, and the weighted flow time. In addition, it is shown that the problem has an O(n log n) algorithm if the objective is to minimize the (unweighted) number of late jobs. This problem lies on the boundary of efficient solvability, in the sense that generalizing it in various directions easily results in NP-complete problems. As an example, it is shown that the problem becomes NP-complete if precedence constraints are imposed.
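The flavor of the reduction can be sketched as follows (a toy illustration under an assumed lateness cost, not the paper's exact construction): since all processing times are one unit, jobs can be matched to (machine, time slot) pairs by a minimum-cost assignment. For simplicity the sketch enumerates n slots per machine, more than the O(n) slots that suffice in the paper.

# Toy sketch of the assignment-problem reduction: unit-time jobs with
# ready times r[j] and due times d[j] on m identical machines, where the
# assumed cost is the weighted "late" flag of each job.
import numpy as np
from scipy.optimize import linear_sum_assignment

r = [0, 0, 1, 2]        # ready times
d = [1, 2, 2, 3]        # due times
wgt = [3, 1, 2, 1]      # weights (penalty if the job finishes late)
m = 2                   # number of machines
n = len(r)
BIG = 10**9             # forbids starting a job before its ready time

# Slot (i, t): machine i processes some job during [t, t+1), t < n.
slots = [(i, t) for i in range(m) for t in range(n)]
cost = np.zeros((n, len(slots)))
for j in range(n):
    for k, (i, t) in enumerate(slots):
        if t < r[j]:
            cost[j, k] = BIG
        else:
            cost[j, k] = wgt[j] if t + 1 > d[j] else 0  # monotone in t

rows, cols = linear_sum_assignment(cost)
print("total weighted number of late jobs:", cost[rows, cols].sum())  # 0.0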
Consider a random digraph G=(V,A), where |V|=n and each arc (u,v) is present in A with probability p(n), independently of the other arcs. We discuss the expected number of vertices reachable from a vertex, the expected size of the transitive closure of G, and related topics, based on the properties of reachability, where the reachability from a vertex s to a vertex t is defined as the probability that t is reachable from s. Let γ_{n,p(n)} denote the reachability from s to t (s ≠ t) in the above random digraph G. (The case s=t requires a separate definition.) We first present a method of computing the exact value of γ_{n,p(n)} for given n and p(n). Since the computation of γ_{n,p(n)} by this method requires O(n^3) time, we then derive simple upper and lower bounds γ^U_{n,p(n)} and γ^L_{n,p(n)} on γ_{n,p(n)}, respectively, and in addition give an upper bound γ̂_{n,p(n)} on γ^U_{n,p(n)}, which is easier to analyze but is still rather accurate. Then, we discuss the asymptotic behavior of γ̂_{n,p(n)} and show that, if p(n)=α/(n-1), γ̂_{n,p(n)} converges, as n→∞, to one of the solutions of the equation 1-x-e^{-αx}=0. Furthermore, for the resulting upper bounds R̂(n) and T̂(n) on the expected number of reachable vertices and on the expected size of the transitive closure of G, respectively, it turns out that lim_{n→∞} R̂(n) = α/(1-α) if p(n)=α/(n-1) with 0<α<1 (otherwise the limit is either 0 or ∞), and lim_{n→∞} T̂(n) = α if p(n)=α/(n-1)^2 with α ≥ 0 (otherwise it is either 0 or ∞).
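As a quick numerical companion (our own check, not from the paper), one can solve the limit equation 1-x-e^{-αx}=0 for its nontrivial root: x = 0 is always a solution, and a second solution in (0,1) appears for α > 1.

# Find the nontrivial root of 1 - x - exp(-alpha*x) = 0 for alpha > 1.
from scipy.optimize import brentq
import math

for alpha in [1.5, 2.0, 3.0]:
    f = lambda x: 1 - x - math.exp(-alpha * x)
    # f > 0 near 0 and f < 0 near 1 when alpha > 1, so a sign change exists.
    root = brentq(f, 1e-9, 1 - 1e-9)
    print(f"alpha = {alpha}: nontrivial solution x = {root:.4f}")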
Toshihide IBARAKI Masafumi YAMASHITA
Kazuya HARAGUCHI Mutsunori YAGIURA Endre BOROS Toshihide IBARAKI
We consider a data set in which each example is an n-dimensional Boolean vector labeled as true or false. A pattern is a co-occurrence of a particular value combination on a given subset of the variables. If a pattern appears frequently in the true examples and infrequently in the false examples, we consider it a good pattern. In this paper, we discuss the problem of determining the data size needed for removing "deceptive" good patterns; in a data set of small size, many good patterns may appear superficially, simply by chance, independently of the underlying structure. Our hypothesis is that, in order to remove such deceptive good patterns, the data set should contain more examples than the size at which a random data set contains almost no good patterns. We justify this hypothesis by computational studies. We also derive a theoretical upper bound on the needed data size in view of our hypothesis.
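The notion of a good pattern can be made concrete by the following sketch (hypothetical helper and thresholds, not the paper's procedure): a pattern is good if it matches a large fraction of the true examples and a small fraction of the false ones.

# A pattern is a dict mapping variable indices to required 0/1 values.
def is_good_pattern(data, pattern, min_true=0.3, max_false=0.1):
    """data: list of (Boolean vector, label) pairs."""
    match = lambda x: all(x[i] == v for i, v in pattern.items())
    pos = [x for x, y in data if y]
    neg = [x for x, y in data if not y]
    freq_pos = sum(map(match, pos)) / max(len(pos), 1)
    freq_neg = sum(map(match, neg)) / max(len(neg), 1)
    return freq_pos >= min_true and freq_neg <= max_false

data = [((1, 0, 1), True), ((1, 1, 1), True),
        ((0, 0, 1), False), ((1, 0, 0), False)]
# Pattern x0=1, x2=1 matches both true examples and no false example.
print(is_good_pattern(data, {0: 1, 2: 1}))   # True

On a very small data set like this one, such a pattern may well be good "by chance"; the paper asks how large the data set must be before surviving good patterns reflect the underlying structure.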
Toshihide IBARAKI Tsunehiko KAMEDA Shunichi TOIDA
Various minimization problems associated with the diagnosis of systems represented by directed graphs are shown to be NP-complete. These problems include (1) finding the minimum number of test points, test connections, and blocking gates, respectively, on a SEC graph (single-entry single-exit connected acyclic graph) to make it distinguishable, and (2) finding a test set with the minimum number of tests to locate a faulty vertex on a SEC graph with test points, test connections, and blocking gates attached, respectively. The NP-completeness results indicate that these problems are all intractable, in the sense that it is most unlikely that any algorithm can solve them in time polynomial in the problem size.
Eishi CHIBA Hiroshi FUJIWARA Yoshiyuki SEKIGUCHI Toshihide IBARAKI
Flat Panel Displays (FPDs) are manufactured using many pieces of different processing equipment arranged sequentially in a line. Although the constant inter-arrival time (i.e., the tact time) of glass substrates in the line should be kept as short as possible, the collision probability between glass substrates increases as the tact time decreases. Since glass substrates are expensive and fragile, collisions should be avoided. In this paper, we derive a closed-form formula for the approximate collision probability in a model where the processing time on each piece of equipment is assumed to follow an Erlang distribution. We also compare numerical results of the closed-form formula with computer simulation results of the collision probability.
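As a rough illustration (our own simplified single-stage model, not the paper's closed-form formula), a Monte Carlo estimate of the collision probability can be obtained by checking how often an Erlang-distributed processing time exceeds the tact time, i.e., how often a substrate arrives before its predecessor has left the equipment.

# Simplified single-stage model: substrates enter one piece of equipment
# every `tact` seconds; processing times are Erlang(k, rate); a collision
# occurs when the predecessor is still being processed at the next arrival.
import numpy as np

rng = np.random.default_rng(0)

def collision_probability(tact, k=3, rate=1.0, trials=100_000):
    # Erlang(k, rate) is Gamma with shape k and scale 1/rate.
    prev = rng.gamma(shape=k, scale=1.0 / rate, size=trials)
    return np.mean(prev > tact)

for tact in [2.0, 3.0, 4.0, 5.0]:
    print(f"tact = {tact}: estimated collision prob = "
          f"{collision_probability(tact):.4f}")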
Shigeru MASUYAMA Toshihide IBARAKI Toshiharu HASEGAWA
The m-center problem asks to place m objects on the plane so that the distance from each client (a given point) to its closest object is at most a given number r. This problem is often encountered in locating facilities or resources of a geographically distributed system. This paper shows that the problem is NP-complete. The NP-completeness indicates its computational intractability, i.e., it is most unlikely that any algorithm can solve it in time polynomial in the problem size.
Hiroshi NAGAMOCHI Yukihiro NISHIDA Toshihide IBARAKI
Given an edge-weighted graph G, the minimum maximal matching problem asks to find a maximal matching of minimum weight. The problem is known to be NP-hard even if the graph is planar and unweighted. In this paper, we consider the problem in planar graphs. First, we prove a strong inapproximability result for the problem in weighted planar graphs. Second, in contrast with the first result, we show that a polynomial time approximation scheme (PTAS) for the problem in unweighted planar graphs can be obtained by a divide-and-conquer method based on the planar separator theorem. For a given ε > 0, our scheme delivers a solution of size at most (1 + ε) times the optimal value, in time polynomial in n for each fixed ε, where n is the number of vertices in G and the exponent of the running time depends on ε and a constant α.
Hiroshi NAGAMOCHI Katsuhiro SEKI Toshihide IBARAKI
We consider the problem of finding a minimum weight k-connected spanning subgraph of a given edge-weighted graph G for k=3. The problem is known to be NP-hard for k ≥ 2, and there are an O(n^2 m) time 3-approximation algorithm due to Nutov and Penn and an O(n^8) time 2-approximation algorithm due to Dinitz and Nutov, where n and m are the numbers of vertices and edges in G, respectively. In this paper, we present a 7/3-approximation algorithm which runs in O(n^2 m) time.
Hiroshi NAGAMOCHI Toshimasa ISHII Toshihide IBARAKI
For the correctness of the minimum cut algorithm proposed in [H. Nagamochi and T. Ibaraki, Computing edge-connectivity of multigraphs and capacitated graphs, SIAM J. Discrete Mathematics, 5, 1992, pp. 54-66], several simple proofs have been presented so far. This paper gives yet another simple proof. As a byproduct, the proof yields an O(m log n) time algorithm that outputs a maximum flow between the pair of vertices s and t selected by the algorithm, where n and m are the numbers of vertices and edges, respectively. This algorithm can be used to speed up the algorithm that computes the DAG D_{s,t} representing all minimum cuts separating vertices s and t in a graph G, and the algorithm that computes the cactus Γ(G) representing all minimum cuts in G.
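For background, the core of the minimum cut algorithm in question is a maximum-adjacency (MA) ordering: if t is the last vertex and s the second-to-last vertex of the ordering, then ({t}, V-{t}) is a minimum s-t cut. The following compact sketch (illustrative Python, not the O(m log n) implementation discussed above) performs one MA-ordering phase and returns this pair together with the cut value.

# One MA-ordering phase: repeatedly add the unordered vertex with maximum
# total edge weight to the already-ordered set.
import heapq

def ma_ordering_phase(adj):
    """adj: dict v -> dict u -> weight (undirected, connected). Returns (s, t, cut)."""
    start = next(iter(adj))
    order, seen = [start], {start}
    key = dict(adj[start])               # adjacency weight to the ordered set
    heap = [(-w, u) for u, w in key.items()]
    heapq.heapify(heap)
    while len(order) < len(adj):
        while True:                      # pop until a fresh, up-to-date entry
            w, v = heapq.heappop(heap)
            if v not in seen and key[v] == -w:
                break
        seen.add(v)
        order.append(v)
        for u, wt in adj[v].items():     # raise keys of v's unordered neighbors
            if u not in seen:
                key[u] = key.get(u, 0) + wt
                heapq.heappush(heap, (-key[u], u))
    s, t = order[-2], order[-1]
    return s, t, key[t]                  # key[t] = weight of the cut ({t}, V-{t})

adj = {1: {2: 3, 3: 1}, 2: {1: 3, 3: 1, 4: 2},
       3: {1: 1, 2: 1, 4: 4}, 4: {2: 2, 3: 4}}
print(ma_ordering_phase(adj))            # (3, 4, 6): a minimum 3-4 cut of value 6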
Kazuhisa MAKINO Takashi SUDA Hirotaka ONO Toshihide IBARAKI
Decision trees are used as a convenient means to explain given positive examples and negative examples, which is a form of data mining and knowledge discovery. Standard methods such as ID3 may produce non-monotonic decision trees, in the sense that data with larger values in all attributes are sometimes classified into a class with a smaller output value. (In the case of binary data, this is equivalent to saying that the discriminant Boolean function that the decision tree represents is not positive.) A motivation of this study comes from the observation that real-world data are often positive, and in such cases it is natural to build decision trees which represent positive (i.e., monotone) discriminant functions. For this purpose, we propose a way to modify existing procedures such as ID3 so that the resulting decision tree represents a positive discriminant function. In this procedure, we add some new data to recover the positivity which the original data had but was lost in the process of decomposing data sets by methods such as ID3. To compare the performance of our method with existing methods, we test (1) positive data, randomly generated from a hidden positive Boolean function after adding dummy attributes, and (2) breast cancer data as an example of real-world data. The experimental results on (1) show that, although positive decision trees are somewhat larger than those built without the positivity assumption, they exhibit higher accuracy and tend to choose correct attributes, i.e., those on which the hidden positive Boolean function is defined. For the breast cancer data set, we observe a similar tendency: positive decision trees are larger but give higher accuracy.
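The positivity (monotonicity) condition on binary data can be stated concisely: whenever x ≤ y componentwise, the label of x must not exceed the label of y. The following small sketch (our own illustration, not part of the proposed procedure) checks this condition on a labeled data set.

# Check whether a labeled binary data set is positive (monotone).
from itertools import combinations

def is_positive(data):
    """data: list of (binary tuple, 0/1 label) pairs."""
    leq = lambda x, y: all(a <= b for a, b in zip(x, y))
    for (x, fx), (y, fy) in combinations(data, 2):
        if leq(x, y) and fx > fy:
            return False            # a larger vector received a smaller label
        if leq(y, x) and fy > fx:
            return False
    return True

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 0), ((1, 1), 1)]
print(is_positive(data))            # True: labels agree with the partial order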
Liang ZHAO Hiroshi NAGAMOCHI Toshihide IBARAKI
We consider designing approximation algorithms for the survivable network design problem in hypergraphs (SNDPHG), based on algorithms developed for the survivable network design problem in graphs (SNDP) or the element connectivity problem in graphs (ECP). Given an instance of the SNDPHG, by replacing each hyperedge e={v1,
Toshihide IBARAKI Osamu WATANABE
Kazuya HARAGUCHI Toshihide IBARAKI
We consider the classification problem to construct a classifier c: {0,1}^n
Optimizing the computing process of relational databases is important in order to increase their applicability. The process consists of operations involving many relational tables. Among the basic operations, joins are the most important because they consume most of the computational time. In this paper, we consider executing such joins on many relational tables by the merge-scan method, and try to find the optimum join order that minimizes the total size of intermediate tables (including the final answer table). This cost is important in its own right, as it represents the memory space requirement of the entire computation; it can also be viewed as an approximate measure of computational time. However, it turns out that the problem is solvable in polynomial time only for very restricted special cases, and is NP-hard in general.
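To illustrate the optimization problem (with a hypothetical size model, not the merge-scan cost analyzed in the paper), the following sketch enumerates left-deep join orders of three tables and picks the one minimizing the total size of intermediate tables, estimating each intermediate size as the product of the input sizes times a per-pair selectivity.

# Exhaustive search over left-deep join orders under a toy size model.
from itertools import permutations

sizes = {"R": 100, "S": 50, "T": 20}
sel = {frozenset("RS"): 0.01, frozenset("RT"): 0.05, frozenset("ST"): 0.02}

def total_cost(order):
    inter, size, total = {order[0]}, sizes[order[0]], 0
    for t in order[1:]:
        # crude selectivity estimate for joining t with the current result
        f = min(sel[frozenset((t, u))] for u in inter)
        size = size * sizes[t] * f      # size of the new intermediate table
        total += size
        inter.add(t)
    return total

best = min(permutations(sizes), key=total_cost)
print(best, total_cost(best))           # ('S', 'T', 'R') 40.0

Even in this toy setting the cost varies widely across orders, and the paper shows that finding the optimum order is NP-hard in general, so exhaustive search does not scale.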
Shunji UMETANI Mutsunori YAGIURA Toshihide IBARAKI
The one-dimensional cutting stock problem (1D-CSP) is one of the representative combinatorial optimization problems, arising in many industries. As the setup costs of cutting patterns have become more dominant in the recent cutting industry, we consider a variant of 1D-CSP in which the total number of applications of cutting patterns is minimized under the constraint that the number of different cutting patterns is specified in advance. We propose a local search algorithm that uses the neighborhood obtained by perturbing one cutting pattern in the current set of patterns, where the perturbations are done by utilizing the dual solution of an auxiliary linear programming problem (LP). In this process, in order to solve a large number of LPs, we start the criss-cross variation of the simplex algorithm from the optimal simplex tableau of the previous solution, instead of starting it from scratch. Our computational experiments show that the proposed algorithm obtains a wide variety of good solutions comparable to those of existing heuristic approaches.
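The auxiliary LP for a fixed set of cutting patterns can be sketched as follows (toy data, solved here with a generic LP solver rather than the warm-started criss-cross method described above): minimize the total number of pattern applications subject to meeting all demands, and read off the dual values that guide the perturbation of a pattern.

# Auxiliary LP: min sum(x) s.t. patterns^T x >= demand, x >= 0.
import numpy as np
from scipy.optimize import linprog

# patterns[i][j] = number of pieces of product j cut by pattern i
patterns = np.array([[2, 1, 0],
                     [0, 2, 1],
                     [1, 0, 2]])
demand = np.array([44, 30, 20])

res = linprog(c=np.ones(3),                     # total pattern applications
              A_ub=-patterns.T, b_ub=-demand,   # negate to express ">="
              bounds=(0, None), method="highs")
print("applications per pattern:", res.x)
print("dual values (one per demand row):", res.ineqlin.marginals)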