Noriyuki MINEGISHI Ken-ichi ASANO Hirokazu SUZUKI Keisuke OKADA Takashi KAN
A debug system for heterogeneous multiple processors in a single chip has been developed. The system consists of a debug interface circuit integrated on the chip, an interface circuit board between the chip and a PC, and debug software implemented on the PC. The debug system has been designed for a multimedia communication processor, which includes an original video processor core, a RISC processor, and a DSP. The RISC processor controls the Video Processing Unit, which includes the original video processor and other hardware functions. While in debug mode, the external debugger can control the Video Processing Unit in the same manner as the RISC processor. The JTAG-based interface circuit contains registers that hold the command, address, and data of a bus transaction, together with a bus transaction sequencer, so the debugger can perform the same bus transaction control as the RISC processor. By applying the proposed debug system, the RISC Processing Unit and the Video Processing Unit can be debugged simultaneously. This allows problems to be investigated more quickly and reduces the total time required for debugging. Without this technology, an estimated 19 weeks would be required to debug the chip, whereas with it debugging was completed in 9 weeks.
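As an illustration of the interface described above, the following sketch shows how an external debugger might issue one bus write through JTAG-accessible debug registers. The register names (CMD, ADDR, WDATA, START, BUSY), the command encoding, and the target address are assumptions made for illustration, not the actual implementation.

    # Illustrative sketch (not the authors' implementation): one bus write
    # driven from the debugger through JTAG-style debug registers.
    # All register names and encodings below are hypothetical.

    class JtagDebugPort:
        """Minimal model of a debug interface holding bus-transaction registers."""
        def __init__(self):
            self.regs = {"CMD": 0, "ADDR": 0, "WDATA": 0, "START": 0, "BUSY": 0}

        def write_reg(self, name, value):   # shift a value into a debug register
            self.regs[name] = value

        def read_reg(self, name):           # shift a value out of a debug register
            return self.regs[name]

    def debugger_bus_write(port, addr, data):
        """Issue a bus write the way the RISC processor would, but from the debugger."""
        port.write_reg("CMD", 0x1)          # 0x1 = write transaction (assumed encoding)
        port.write_reg("ADDR", addr)        # target register inside the Video Processing Unit
        port.write_reg("WDATA", data)
        port.write_reg("START", 1)          # kick the on-chip bus transaction sequencer
        while port.read_reg("BUSY"):        # poll until the sequencer completes the transfer
            pass

    port = JtagDebugPort()
    debugger_bus_write(port, addr=0x40000010, data=0xCAFE)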
Chong-Hyung LEE Kyung-Hyun NAM Dong-Ho PARK
This paper considers a software reliability model which allows for two types of imperfect debugging at each failure of the software system. In the first type of imperfect debugging, the fault that causes the failure is imperfectly debugged without altering the fault content of the software system. In the second type, the fault is not only imperfectly debugged, but a new fault is also generated and introduced into the system. The probability of perfect debugging is assumed to be an increasing function of the number of debuggings performed prior to the current failure of the system. Based on the software reliability model presented, we consider three profit models to determine the optimal software release times which maximize the expected software profit. These models consider: (1) a constant life cycle, (2) a random life cycle, and (3) a random life cycle with a penalty cost imposed when the software is delivered late. The optimal release times are shown to be finite and unique. Numerical examples are provided for illustrative purposes.
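For intuition about how such a release-time decision works, the sketch below searches for a profit-maximizing release time under a deliberately simplified setting: an exponential mean value function, constant costs, and a constant life cycle. All parameters and the functional form are invented for illustration and are not the paper's models.

    # Minimal numeric sketch of choosing a release time that maximizes expected
    # profit. m(t) = a * (1 - exp(-b t)) and every constant below are assumed.

    import math

    a, b = 100.0, 0.05          # expected total faults, detection rate (assumed)
    c_test = 2.0                # testing cost per unit time
    c_fix_test = 1.0            # cost to fix a fault found before release
    c_fix_field = 10.0          # cost to fix a fault found after release
    revenue = 500.0             # fixed revenue from releasing the product
    T_life = 200.0              # constant life cycle length

    def m(t):                   # expected number of faults detected by time t
        return a * (1.0 - math.exp(-b * t))

    def expected_profit(T):
        faults_before = m(T)
        faults_after = m(T + T_life) - m(T)   # faults surfacing during the life cycle
        return (revenue - c_test * T
                - c_fix_test * faults_before
                - c_fix_field * faults_after)

    # Coarse grid search for the profit-maximizing release time.
    T_opt = max((t / 10.0 for t in range(0, 2001)), key=expected_profit)
    print(f"optimal release time ~ {T_opt:.1f}, profit ~ {expected_profit(T_opt):.1f}")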
In this paper, we construct a software availability model considering the number of restoration actions. We correlate the failure and restoration characteristics of the software system with the cumulative number of corrected faults. Furthermore, we consider an imperfect debugging environment in which detected faults are not always corrected and removed from the system. The time-dependent behavior of the system alternating between up and down states is described by a Markov process. From this model, we can derive quantitative measures for software availability assessment that take the number of restoration actions into account. Finally, we show numerical examples of software availability analysis.
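The sketch below computes the instantaneous availability of a textbook two-state up/down Markov process. Unlike the model summarized above, the failure and restoration rates here are constants that do not depend on the number of corrected faults or restoration actions, so it only illustrates the up/down alternation.

    # Simplified two-state Markov availability sketch with assumed constant rates.
    # A(t) = mu/(lam+mu) + lam/(lam+mu) * exp(-(lam+mu) t) for a system starting up.

    import math

    lam = 0.02   # failure rate (up -> down), assumed
    mu  = 0.10   # restoration rate (down -> up), assumed

    def availability(t):
        """Instantaneous availability of a two-state Markov process starting in the up state."""
        s = lam + mu
        return mu / s + (lam / s) * math.exp(-s * t)

    for t in (0, 10, 50, 1000):
        print(f"A({t}) = {availability(t):.4f}")   # converges to mu/(lam+mu) ~ 0.8333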
Exception handling is not only useful for increasing program readability, but also provides an effective means to check and locate errors, so it increases productivity in large-scale program development. Some typical and frequent program errors, such as out-of-range indexing, null dereferencing, and narrowing violations, cause exceptions that are otherwise unlikely to be caught. Moreover, the absence of a catcher for exceptions thrown by API procedures also causes uncaught exceptions. This paper discusses how the exception handling mechanism should be supported by the compiler together with the operating system and debugging facilities. The mechanism is implemented in the compiler by inserting inline check code and accompanying propagation code. One drawback of this approach is the runtime overhead imposed by the inline check code, which should therefore be optimized. However, there has been little discussion of appropriate optimization techniques and their efficiency in the literature. Therefore, a new solution is proposed that formulates the optimization problem as common assertion elimination (CAE). Assertions consist of check code and useful branch conditions; the latter are effective in removing redundant check code. The redundancy can be detected and removed precisely with a forward iterative data flow analysis. Even in performance-sensitive applications such as telecommunications software, figures obtained with a CHILL optimizing compiler indicate that CAE optimizes the code well enough to be competitive with check-suppressed code.
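To illustrate the flavor of CAE, the toy pass below removes a repeated inline check from a straight-line instruction list. The intermediate representation and helper names are invented, and a real implementation would run a forward iterative data flow analysis over the whole control flow graph rather than a single linear scan.

    # Toy sketch of common assertion elimination: a check that is already
    # available at a program point is redundant and can be dropped.

    def eliminate_common_assertions(instrs):
        available = set()          # assertions known to hold at this program point
        optimized = []
        for op, arg in instrs:
            if op == "check":
                if arg in available:
                    continue       # redundant inline check: already guaranteed
                available.add(arg)
            elif op == "branch_true":
                available.add(arg) # a taken branch condition also guarantees arg
            elif op == "assign":
                # a real pass would kill facts about the assigned variable here;
                # omitted for brevity
                pass
            optimized.append((op, arg))
        return optimized

    code = [
        ("check",  "0 <= i < len(a)"),   # inserted range check
        ("assign", "x = a[i]"),
        ("check",  "0 <= i < len(a)"),   # same check again: removed
        ("assign", "y = a[i] + 1"),
    ]
    for op, arg in eliminate_common_assertions(code):
        print(op, arg)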
Osamu MIZUNO Shinji KUSUMOTO Tohru KIKUNO Yasunari TAKAGI Keishi SAKAMOTO
In this paper, we consider a simple development process consisting of design and debug phases, which is derived from an actual concurrent development process for embedded software at a certain company. We then propose two-phase project control, which reviews the initial development plan at the end of the design phase, updates it to reflect the current status of the development process, and executes the debug phase under the new plan. To show its usefulness, we define three imaginary projects based on projects actually executed at the company: project A, which executes the debug phase under the initial plan; project B, which applies the proposed approach; and project C, which follows a uniform plan. To execute these projects, we use a project simulator that has already been developed based on a GSPN model. Judging from the number of residual faults in all products, we found that project B is the best among them.
Naohisa TAKAHASHI Takeshi MIEI
We present a general framework with which we can evaluate the flexibility and efficiency of various replay systems for parallel programs. In our approach, program monitoring is modeled by constructing a virtual dataflow program graph, referred to as a VDG, that includes all the instructions executed by the program. The behavior of program replay is modeled as the parallel interpretation of a VDG based on two basic parallel execution models for dataflow program graphs: a data-driven model and a demand-driven model. Previous attempts to replay parallel programs, known as Instant Replay and P-Sequence, are also modeled as variations of data-driven replay, i.e., the data-driven interpretation of a VDG. We show that demand-driven replay, i.e., the demand-driven interpretation of a VDG, is more flexible than data-driven replay since it allows better control of parallelism and more selective replay. We also show that we can implement a demand-driven replay that requires almost the same amount of data to be saved during program monitoring as data-driven replay does, and that eliminates any centralized bottleneck during program monitoring by optimizing the demand propagation and using an effective data structure.
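The contrast between the two interpretation styles can be sketched on a tiny invented dataflow graph: the demand-driven evaluator touches only the nodes needed for the requested result, whereas the data-driven evaluator fires every node whose inputs are ready. A real VDG records every executed instruction of the monitored parallel program; the graph below is purely illustrative.

    # graph: node name -> (operation, list of input node names)
    graph = {
        "a":   ("const", []),
        "b":   ("const", []),
        "sum": ("add",  ["a", "b"]),
        "dbg": ("neg",  ["a"]),          # only evaluated if someone demands it
    }
    consts = {"a": 2, "b": 3}
    ops = {"add": lambda x, y: x + y, "neg": lambda x: -x}

    def demand_driven(node, cache):
        """Pull-style replay: evaluate a node only when its value is demanded."""
        if node not in cache:
            op, inputs = graph[node]
            if op == "const":
                cache[node] = consts[node]
            else:
                cache[node] = ops[op](*(demand_driven(i, cache) for i in inputs))
        return cache[node]

    def data_driven():
        """Push-style replay: fire every node as soon as its inputs are ready."""
        values = dict(consts)
        pending = [n for n, (op, _) in graph.items() if op != "const"]
        while pending:
            n = pending.pop(0)
            op, inputs = graph[n]
            if all(i in values for i in inputs):
                values[n] = ops[op](*(values[i] for i in inputs))
            else:
                pending.append(n)
        return values

    print(demand_driven("sum", {}))   # selective replay: touches only a, b, sum
    print(data_driven())              # replays every recorded node, including dbg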
Jiro NAGANUMA Takeshi OGURA Tamio HOSHINO
This paper proposes a new environment for high-level VLSI design specification validation using "Algorithmic Debugging" and evaluates its benefits on three significant examples (a protocol processor, an 8-bit CPU, and a Prolog processor). A design is specified at a high level using the structured analysis (SA) method, which is useful for analyzing and understanding the functionality to be realized. The specification written in SA is transformed into a logic programming language and simulated in it. The errors in the three large examples (those that terminate with an incorrect output in the simulation) are efficiently located by answering just a few queries from the algorithmic debugger. The number of interactions between the designer and the debugger is reduced by a factor of ten to a hundred compared with conventional simulation-based validation methodologies. The correct SA specification can be automatically translated into a Register Transfer Level (RTL) specification suitable for logic synthesis. In this environment, a designer is freed from the tedious task of debugging an RTL specification and can concentrate on the design itself. This environment promises to be an important step towards efficient high-level VLSI design specification validation.
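A minimal sketch of the algorithmic-debugging interaction is given below, with an invented execution tree and an oracle standing in for the designer's yes/no answers; it only shows the general query-driven search, not the paper's SA-to-logic-program tooling.

    # Algorithmic debugging: walk the execution tree, ask whether each
    # sub-result is correct, and report the lowest incorrect node whose
    # children are all judged correct.

    class Node:
        def __init__(self, goal, result, children=()):
            self.goal, self.result, self.children = goal, result, list(children)

    def locate_fault(node, oracle):
        """Descend into the first child judged incorrect; if all children are
        correct but this node is not, the fault lies in this unit itself."""
        for child in node.children:
            if not oracle(child):           # query: "is this sub-result correct?"
                return locate_fault(child, oracle)
        return node

    # Invented execution tree of a buggy specification with a wrong top-level output.
    tree = Node("add(2,1)", 4, [
        Node("carry(2,1)", 0),              # judged correct by the designer
        Node("sum_bits(2,1)", 4),           # judged incorrect: fault lies here
    ])
    oracle = lambda n: n.goal != "sum_bits(2,1)"   # stands in for the designer's answers
    print("faulty unit:", locate_fault(tree, oracle).goal)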
Akio ANZAI Mikinori KAWAJI Takahiko TAKAHASHI
It has become increasingly important to shorten the development periods of high-performance computer systems and their LSIs. During debugging of computer prototypes, logic designers request very frequent LSI refabrication to change logic circuits and to add functions, in spite of extensive logic simulation on several-GFLOPS supercomputers. To meet these demands, an automated on-chip direct wiring modification system has been developed, which enables wire-cutting and via-digging by a precise focused ion beam machine, and via-filling and jumper-writing by a laser CVD machine, directly on pre-redesign (original) chips. This modification system was applied to LSI reworks during the development of the Hitachi large-scale computers M-880 and S-3800, and contributed to shortening the system debugging period by four to six months.
Existing algorithmic debugging methods which can locate faults under the guidance of a system have a number of shortcomings. For example, some cannot be applied to imperative languages with side effects; some can locate a faulty function but cannot locate a faulty statement; and some cannot detect faults related to missing statements. This paper presents an algorithmic critical slice-based fault-locating method for imperative languages. Program faults are first classified into two categories: wrong-value faults and missing-assignment faults. The critical slice with respect to a variable-value error is a set of statements such that (1) a wrong-value fault contained in any instruction in the critical slice may have caused that variable-value error, and (2) a wrong-value fault contained in any instruction outside the critical slice could never have caused that variable-value error. The paper also classifies errors found during program testing into three categories: wrong-output errors, missing-output errors, and infinite-loop errors with no output. It finally shows that it is possible to algorithmically locate any fault, including missing statements, for each type of error.
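The sketch below computes a plain dynamic backward slice over a recorded execution trace. It is only a simplified stand-in meant to convey the flavor of trace-based slicing; the critical slice defined above is a different, stronger notion, since it must contain exactly those instructions whose wrong-value fault may have caused the observed variable-value error.

    # Each trace entry: (statement id, variable defined, variables used);
    # the program and trace are invented for illustration.
    trace = [
        (1, "a", []),          # a = 2
        (2, "b", []),          # b = 3
        (3, "c", ["a", "b"]),  # c = a * b
        (4, "d", ["c"]),       # d = c + 1   <- wrong value of d observed here
    ]

    def backward_slice(trace, error_index):
        """Collect the statements that the erroneous value depends on via data flow."""
        wanted = set(trace[error_index][2]) | {trace[error_index][1]}
        slice_ids = {trace[error_index][0]}
        for stmt, defined, used in reversed(trace[:error_index]):
            if defined in wanted:
                slice_ids.add(stmt)
                wanted |= set(used)
        return sorted(slice_ids)

    print(backward_slice(trace, error_index=3))   # -> [1, 2, 3, 4]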
Koichi TOKUNOH Shigeru YAMADA Shunji OSAKI
Actual debugging actions during the testing phase of software development and during the operation phase are not always performed perfectly. In other words, not all detected software faults are corrected and removed with certainty. Generally, this is called imperfect debugging. In this paper, we discuss a software reliability growth model considering imperfect debugging, in which faults are not always corrected and removed when they are detected. Defining a random variable representing the cumulative number of faults corrected up to a specified testing time, this model is described by a semi-Markov process. We derive various quantitative measures for software reliability assessment and show their numerical examples.
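A toy simulation of the detected-versus-corrected distinction is sketched below, with an assumed perfect-debugging probability. It is not the paper's semi-Markov formulation; it only illustrates how the cumulative number of corrected faults lags behind the number of detections under imperfect debugging.

    # Each detected fault is actually corrected only with probability p (assumed).

    import random

    random.seed(1)
    p = 0.8            # probability that a detected fault is perfectly corrected
    detections = 50    # number of fault detections during testing

    corrected = 0
    history = []
    for n in range(1, detections + 1):
        if random.random() < p:     # perfect debugging: fault removed
            corrected += 1
        history.append((n, corrected))

    print(f"detected: {detections}, corrected: {corrected}")
    print("sample path (detections, corrected):", history[:5], "...")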