Serra, M. and Dervisoglu, B.I., "Testing," in The Electrical Engineering Handbook, Ed. Richard C. Dorf, Boca Raton: CRC Press LLC, 2000.
85 Testing

Micaela Serra, University of Victoria
Bulent I. Dervisoglu, Hewlett-Packard Company

85.1 Digital IC Testing
    Taxonomy of Testing • Fault Models • Test Pattern Generation • Output Response Analysis
85.2 Design for Test
    The Testability Problem • Design for Testability • Future for Design for Test

85.1 Digital IC Testing

Micaela Serra

In this section we give an overview of digital testing techniques, with appropriate references to material containing all details of the methodology and algorithms. First, we present a general introduction of terminology and a taxonomy of testing methods. Next, we present a definition of fault models, and finally we discuss the main approaches for test pattern generation and data compaction, respectively.

Taxonomy of Testing

The evaluation of the reliability and quality of a digital IC is commonly called testing, yet it comprises distinct phases that are mostly kept separate both in the research community and in industrial practice.

1. Verification is the initial phase in which the first prototype chips are "tested" to ensure that they match their functional specification, that is, to verify the correctness of the design. Verification checks that all design rules are adhered to, from layout to electrical parameters; more generally, this type of functional testing checks that the circuit (a) implements what it is supposed to do and (b) does not do what it is not supposed to do. Both conditions are necessary. This type of evaluation is done at the design stage and uses a variety of techniques, including logic verification with the use of hardware description languages, full functional simulation, and generation of functional test vectors. We do not discuss verification techniques here.

2. Testing correctly refers to the phase when one must ensure that only defect-free production chips are packaged and shipped, detecting faults arising from manufacturing and/or wear-out. Testing methods must (a) be fast enough to be applied to large numbers of chips during production, (b) take into consideration whether the industry concerned has access to large, expensive external tester machines, and (c) consider whether the implementation of built-in self-test (BIST) proves to be advantageous. In BIST, the circuit is designed to include its own self-testing extra circuitry and thus can signal directly, during testing, its possible failure status. Of course, this involves a certain amount of overhead in area, and trade-offs must be considered. The development of appropriate testing algorithms and their tool support can require a large amount of engineering effort, but one must note that it may need to be done only once per design. The speed of application of the algorithm (applied to many copies of the chips) can be of more importance.
3. Parametric testing is done to ensure that components meet design specifications for delays, voltages, power, etc. Lately much attention has been given to IDDq testing, a parametric technique for CMOS testing. IDDq testing monitors the current IDD that a circuit draws when it is in a quiescent state. It is used to detect faults such as bridging faults, transistor stuck-open faults, or gate oxide leaks, which increase the normally low IDD [Jacomino et al., 1989].

The density of circuitry continues to increase, while the number of I/O pins remains small. This causes a serious escalation of complexity, and testing is becoming one of the major costs to industry (estimated at up to 30%). ICs should be tested before and after packaging, after mounting on a board, and periodically during operation. Different methods may be necessary for each case. Thus by testing we imply the means by which some qualities or attributes are determined to be fault-free or faulty. The main purpose of testing is the detection of malfunctions (Go/NoGo test), and only subsequently may one be interested in the actual location of the malfunction; this is called fault diagnosis or fault location.

Most testing techniques are designed to be applied to combinational circuits only. While this may appear a strong restriction, in practice it is a realistic assumption based on the idea of designing a sequential circuit by partitioning the memory elements from the control functionality such that the circuit can be reconfigured as combinational at testing time. This general approach is one of the methods in design for testability (DFT) (see Section 85.2). DFT encompasses any design strategy aimed at enhancing the testability of a circuit. In particular, scan design is the best-known implementation for separating the latches from the combinational gates such that some of the latches can also be reconfigured and used as either tester units or as input generator units (essential for built-in testing).

Figure 85.1(a) shows the general division for algorithms in testing. Test pattern generation implies a fair amount of work in generating an appropriate subset of all input combinations, such that a desired percentage of faults is activated and observed at the outputs. Output response analysis encompasses methods which capture only the output stream, with appropriate transformations, under the assumption that the circuit is stimulated by either an exhaustive or a random set of input combinations. Both methodologies are introduced below. Moreover, a further division can be seen between on-line and off-line methods [see Fig. 85.1(b)]. In the former, each output word from the circuit is tested during normal operation. In the latter, the circuit must suspend normal operation and enter a "test mode," at which time the appropriate method of testing is applied. While off-line testing can be executed either through external testing (a tester machine external to the circuitry) or through the use of BIST, on-line testing (also called concurrent checking) usually implies that the circuit contains some coding scheme which has been previously embedded in the design of the circuitry.

If many defects are present during the manufacturing process, the manufacturing yield is lowered, and testing becomes of paramount importance. Some estimation can be given about the relationship between manufacturing yield, effectiveness of testing, and the defect level remaining after test [Williams, 1986].
Let Y denote the yield, where Y is some value between 1 (100% defect-free production) and 0 (all circuits faulty after testing). FIGURE 85.1 Taxonomy of testing methods. (a) Test pattern generation; (b) on-line and off-line methods
Let FC be the fault coverage, calculated as the percentage of detected faults over the total number of detectable modeled faults (see below for fault models). The value of FC ranges from 1 (all possible faults detected) to 0 (no testing done). We are interested in the final defect level (DL), after test, defined as the probability of shipping a defective product. It has been shown that tests with high fault coverage (for certain fault models, see below) also have high defect coverage. The empirical equation is

    DL = (1 - Y^(1-FC)) × 100%

Plotting this equation gives interesting and practical results. Table 85.1 shows only a few examples of some practical values of Y and FC. The main conclusion to be drawn is that a very high fault coverage must be achieved to obtain any acceptable defect level value, and manufacturing yield must be continually improved to maintain reliability of shipped products.

TABLE 85.1 Examples of Defect Levels

    Y      FC     DL
    0.15   0.90   0.18
    0.25   0.00   0.75
    0.25   0.90   0.15
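To make the relation concrete, the short Python sketch below (an illustration added here, not part of the original handbook text) evaluates the empirical equation for a few values of Y and FC; it shows the trend behind Table 85.1: only very high fault coverage brings the defect level down to acceptable values.

```python
# Sketch (not from the original text): defect level DL = 1 - Y**(1 - FC),
# following the empirical relation of Williams [1986] quoted above.
def defect_level(yield_y: float, fault_coverage: float) -> float:
    """Probability of shipping a defective part, as a fraction (0..1)."""
    return 1.0 - yield_y ** (1.0 - fault_coverage)

if __name__ == "__main__":
    for y in (0.15, 0.25, 0.50):
        for fc in (0.00, 0.90, 0.99, 0.999):
            print(f"Y={y:.2f}  FC={fc:.3f}  DL={defect_level(y, fc):.3f}")
    # Even at 99% fault coverage, a 50% yield still ships roughly 0.7% bad parts,
    # which is why very high coverage and improved yield are both required.
```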
Fault Models

At the defect level, an enormous number of different failures could be present, and it is totally infeasible to analyze them as such. Thus failures are grouped together with regard to their logical fault effect on the functionality of the circuit, and this leads to the construction of logical fault models as the basis for testing algorithms [Abramovici et al., 1992]. More precisely, a fault denotes the physical failure mechanism, the fault effect denotes the logical effect of a fault on a signal-carrying net, and an error is defined as the condition (or state) of a system containing a fault (deviation from the correct state).

Faults can be further divided into classes, as shown in Fig. 85.2. Here we discuss only permanent faults, that is, faults in existence long enough to be observed at test time, as opposed to temporary faults (transient or intermittent), which appear and disappear in short intervals of time, or delay faults, which affect the operating speed of the circuit. Moreover, we do not discuss sequential faults, which cause a combinational circuit to behave like a sequential one, as they are mainly restricted to certain technologies (e.g., CMOS).

FIGURE 85.2 Fault characteristics.

The most commonly used fault model is that of a stuck-at fault, which is modeled by having a line segment stuck at logic 0 or 1 (stuck-at 1 or stuck-at 0). One may consider single or multiple stuck-at faults, and Fig. 85.3 shows an example for a simple circuit. The fault-free function is shown as F, while the faulty functions, under the occurrence of the single stuck-at faults of either line 1 stuck-at 0 (1/0) or of line 2 stuck-at 1 (2/1), are shown as F*.

FIGURE 85.3 Single stuck-at fault example.

Bridging faults occur when two or more lines are shorted together. There are two main problems in the analysis of bridging faults: (1) the theoretical number of possible such faults is extremely high, and (2) the operational effect is of a wired logic AND or OR, depending on technology, and it can even have different effects in complex CMOS gates.

CMOS stuck-open faults have been examined recently, as they cannot be modeled with the more classical fault models and are restricted to the CMOS technology. They occur when the path through one of the p-channel or one of the n-channel transistors becomes an open circuit. The main difficulty in detecting this type of fault is that it changes the combinational behavior of a cell into a sequential one. Thus the logical effect is to retain, on a given line, the previous value, introducing a memory state. To detect such a fault, one must apply two stimuli: the first to set a line at a certain value and the second to try and change that value. This, of course, increases the complexity of fault detection.

Test Pattern Generation

Test pattern generation is the process of generating a (minimal) set of input patterns to stimulate the inputs of a circuit such that detectable faults can be exercised (if present) [Abramovici et al., 1992]. The process can be divided in two distinct phases: (1) derivation of a test and (2) application of a test. For (1), one must first select appropriate models for the circuit (gate or transistor level) and for faults; one must construct the test such that the output signal from a faulty circuit is different from that of a good circuit. This can be computationally very expensive, but one must remember that the process is done only once, at the end of the design stage. The generation of a test set can be obtained either by manual methods, by algorithmic methods (with or without heuristics), or by pseudo-random methods. On the other hand, for (2), a test is subsequently applied many times to each IC and thus must be efficient both in space (storage requirements for the patterns) and in time. Often such a set is not minimal, as near minimality may be sufficient. The main considerations in evaluating a test set are the time to construct a minimal test set; the size of the test pattern generator, i.e., the software or hardware module used to stimulate the circuit under test; the size of the test set itself; the time to load the test patterns; and the equipment required (if external) or the BIST overhead.

Most algorithmic test pattern generators are based on the concept of sensitized paths. Given a line in a circuit, one wants to find a sensitized path to take a possible error all the way to an observable output. For example, to sensitize a path that goes through one input of an AND gate, one must set all other inputs of the gate to logic 1 to permit the sensitized signal to carry through. Figure 85.4 summarizes the underlying principles of trying to construct a test set. Each column shows the expected output for each input combination of a NAND gate. Columns 3 to 8 show the output under the presence of a stuck-at fault as per label. The output bits that permit detection of the corresponding fault are shown in a square, and thus at the bottom the minimal test set is listed, comprising the minimal number of distinct patterns necessary to detect all single stuck-at faults.
FIGURE 85.4 Test set example for a two-input NAND gate (minimal test set T = {(0,1), (1,0), (1,1)}).
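As an added illustration (not part of the original text), the Python sketch below enumerates the six single stuck-at faults of a two-input NAND gate, determines which input patterns detect each one, and confirms that the three patterns of the minimal test set in Fig. 85.4 cover them all.

```python
from itertools import product

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def faulty_nand(a, b, site, value):
    """NAND with one single stuck-at fault injected.
    site is 'a', 'b' or 'out'; value is the stuck logic level (0 or 1)."""
    if site == "a":
        a = value
    if site == "b":
        b = value
    out = nand(a, b)
    return value if site == "out" else out

faults = [(s, v) for s in ("a", "b", "out") for v in (0, 1)]
patterns = list(product((0, 1), repeat=2))

# A pattern detects a fault when the faulty output differs from the good output.
detects = {p: {f for f in faults if faulty_nand(*p, *f) != nand(*p)} for p in patterns}
for p, covered in detects.items():
    print(p, sorted(covered))

# The three patterns listed in Fig. 85.4 cover every single stuck-at fault.
test_set = [(0, 1), (1, 0), (1, 1)]
assert set().union(*(detects[p] for p in test_set)) == set(faults)
```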
The best-known algorithms are the D-algorithm (precursor to all), PODEM, and FAN [Abramovici, 1992]. Three steps can be identified in most automatic test pattern generation (ATPG) programs: (1) listing the signals on the inputs of a gate controlling the line on which a fault should be detected, (2) determining the primary input conditions necessary to obtain these signals (back propagation) and sensitizing the path to the primary outputs such that the signals and fault can be observed, and (3) repeating this procedure until all detectable faults in a given fault set have been covered. PODEM and FAN introduce powerful heuristics to speed up the three steps by aiding in the sequential selection of faults to be examined and by cutting the amount of back and forward propagation necessary.

Notwithstanding heuristics, algorithmic test pattern generation is very computationally expensive and can encounter numerous difficulties, especially in certain types of networks. Newer alternatives are based on pseudo-random pattern generation [Bardell et al., 1987] and fault simulation. In this strategy, a large set of patterns is generated pseudo-randomly with the aid of an inexpensive (hardware or software) generator. Typical choices for these are linear feedback shift registers and linear cellular automata registers (see below). The pseudo-random set is used to stimulate a circuit, and, using a fault simulator, one can evaluate the number of faults that are covered by this set. An algorithmic test pattern generator is then applied to find coverage for the remaining faults (hopefully, a small number), and the pseudo-random set is thus augmented. The disadvantages are that the resulting set is very large and fault simulation is also computationally expensive. However, this method presents an alternative for circuits where the application of deterministic algorithms for all faults is infeasible.

Output Response Analysis

Especially when designing a circuit including some BIST, one must decide how to check the correctness of the circuit's responses [Bardell et al., 1987]. It is infeasible to store on-chip all expected responses, and thus a common solution is to reduce the circuit responses to relatively short sequences: this process is called data compaction, and the short, compacted resulting sequence is called a signature. The normal configuration for data compaction testing is shown in Fig. 85.5. The n-input circuit is stimulated by an input pattern generator (pseudo-random, or exhaustive if n < 20); the resulting output vector(s), of length up to 2^n, is compacted to a very short signature of length k << 2^n (usually k is around 16 to 32 bits). The signature is then compared to a known good value.

FIGURE 85.5 Data compaction testing.

The main advantages of this method are that (1) the testing can be done at circuit speed, (2) there is no need to generate test patterns, and (3) the testing circuitry involves a very small area, especially if the circuit has been designed using scan techniques (see Section 85.2). The issues revolve around designing very efficient input generators and compactors.

The main disadvantage of this method is the possibility of aliasing. When the short signature is formed, a loss of information occurs, and it can be the case that a faulty circuit produces the same signature as a fault-free circuit, thus remaining undetected. The design method for data compaction aims at minimizing the probability of aliasing.
Using the compactors explained below, the probability of aliasing has been theoretically proven to be 2^(-k), where k is the length of the compactor (and thus the length of the signature). It is important to note that (1) the result is asymptotically independent of the size and complexity of the circuit under test; (2) for k = 16, the probability of aliasing is only about 1.5 × 10^(-5) and thus quite acceptable; and (3) the empirical results show that in practice this method is even more effective. Most of all, this is the chosen methodology when BIST is required, for its effectiveness, speed, and small area overhead.

A secondary issue in data compaction is the determination of the expected "good" signature. The best way is to use fault-free simulation for both the circuit and the compactor, and then the appropriate comparator can be built as part of the testing circuitry [Bardell et al., 1987; Abramovici, 1992].
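For reference, a short computation (added here as an illustration, not from the original text) of the 2^(-k) bound for common signature lengths:

```python
# Sketch: aliasing probability bound 2**(-k) for common signature lengths.
for k in (16, 24, 32):
    print(f"k = {k:2d}  ->  2^-k = {2.0 ** -k:.3g}")
# k = 16 gives about 1.5e-05; k = 32 gives about 2.3e-10.
```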
The most important issue is in the choice of a compactor. Although no "perfect" compactor can be found, several have been shown to be very effective. Several compaction techniques have been researched: counting techniques, as in one's count, syndrome testing, transition count, and Walsh spectra coefficients; and signature analysis techniques based on linear feedback shift registers (LFSRs) or linear cellular automata registers (LCARs). Only the latter ones are discussed here. LFSRs and LCARs are also the preferred implementation for the input pattern generators.

LFSRs as Pseudo-Random Pattern Generators

An autonomous LFSR is a clocked synchronous shift register augmented with appropriate feedback taps and receiving no external input [Bardell et al., 1987; Abramovici, 1992]. It is an example of a general linear finite state machine, where the memory cells are simple D flip-flops and the next-state operations are implemented by EXOR gates only. Figure 85.6 shows an example of an autonomous LFSR of length k = 3. An LFSR of length k can be described by a polynomial with binary coefficients of degree k, where the nonzero coefficients of the polynomial denote the positions of the respective feedback taps. In Fig. 85.6, the high-order coefficient for x^3 is 1, and thus there is a feedback tap from the rightmost cell s2; the coefficient for x^2 is 0, and thus no feedback tap exists after cell s1; however, taps are present from cell s0 and to the leftmost stage, since x and x^0 have nonzero coefficients. Since this is an autonomous LFSR, there is no external input to the leftmost cell.

FIGURE 85.6 Autonomous LFSR.

The state of the LFSR is denoted by the binary state of its cells. In Fig. 85.6, the next state of each cell is determined by the implementation given by its polynomial and can be summarized as follows: s0+ = s2, s1+ = s0 ⊕ s2, s2+ = s1, where si+ denotes the next state of cell si at each clock cycle. If the LFSR is initialized in a nonzero state, it cycles through a sequence of states and eventually comes back to the initial state, following the functionality of the next-state rules implemented by its polynomial description. An LFSR that goes through all possible 2^k - 1 nonzero states is said to be described by a primitive polynomial (see the theory of Galois fields for the definition of primitive), and such polynomials can be found from tables [Bardell et al., 1987]. By connecting the output of each cell to an input of a circuit under test, the LFSR implements an ideal input generator, as it is inexpensive in its implementation and it provides the stimuli in pseudo-random order for either exhaustive or pseudo-exhaustive testing.
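The next-state rules are easy to simulate in software. The sketch below (an added Python illustration, not part of the original text) steps the k = 3 LFSR of Fig. 85.6 from a nonzero seed and confirms that it visits all 2^3 - 1 = 7 nonzero states before repeating, as expected for a primitive polynomial.

```python
def lfsr_step(state):
    """One clock of the k = 3 LFSR of Fig. 85.6 (polynomial x^3 + x + 1).
    state is a tuple (s0, s1, s2); the next state follows
    s0+ = s2, s1+ = s0 XOR s2, s2+ = s1."""
    s0, s1, s2 = state
    return (s2, s0 ^ s2, s1)

state = (1, 0, 0)          # any nonzero seed
seen = []
while state not in seen:
    seen.append(state)
    state = lfsr_step(state)

print(len(seen), "distinct nonzero states:", seen)
# Prints 7 distinct states, consistent with x^3 + x + 1 being primitive.
```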
LFSRs as Signature Analyzers

If the leftmost cell of an LFSR is connected to an external input, as shown in Fig. 85.7, the LFSR can be used as a compactor [Bardell et al., 1987; Abramovici, 1992]. In general, the underlying operation of the LFSR is to compute polynomial division over a finite field, and the theoretical analysis of the effectiveness of signature analysis is based on this functionality. The polynomial describing the LFSR implementation is seen to be the divisor polynomial. The binary input stream can be seen to represent the coefficients (high order first) of a dividend polynomial. For example, if the input stream is 1001011 (bits are input left to right in time), the dividend polynomial is x^6 + x^3 + x + 1. After seven clock cycles, when all the input bits have entered the LFSR, the binary output stream exiting from the right denotes the quotient polynomial, while the last state of the cells in the LFSR denotes the remainder polynomial.

FIGURE 85.7 LFSR for signature analysis.

In the process of computing a signature for testing the circuit, the input stream to the LFSR used as a compactor is the output stream from the circuit under test. At the end of the testing cycles, only the last state of the LFSR is examined and considered to be the compacted signature of the circuit.
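As an added illustration (not part of the original text), the following Python model performs the same bit-serial division over GF(2) for the divisor x^3 + x + 1 and the example stream 1001011; the register contents left at the end form the remainder x^2 + 1, i.e., the signature. The exact mapping of remainder bits onto the cells depends on the wiring of Fig. 85.7, which is not reproduced here.

```python
def lfsr_signature(bits, poly=0b1011, degree=3):
    """Bit-serial polynomial division over GF(2): a software model of an
    LFSR compactor for x^3 + x + 1 (0b1011). Returns the remainder,
    i.e. the signature left in the register after the stream is shifted in."""
    reg = 0
    for b in bits:
        reg = (reg << 1) | b          # shift the stream in, high-order bit first
        if reg & (1 << degree):       # an overflow out of the register ...
            reg ^= poly               # ... feeds back through the taps (x + 1)
    return reg                        # degree-bit remainder / signature

stream = [1, 0, 0, 1, 0, 1, 1]        # x^6 + x^3 + x + 1, as in the text
print(bin(lfsr_signature(stream)))    # 0b101, i.e. remainder x^2 + 1
```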
In most real cases, circuits have many outputs, and the LFSR is converted into a multiple-input shift register (MISR). A MISR is constructed by adding EXOR gates to the input of some or all the flip-flop cells; the outputs of the circuit are then fed through these gates into the compactor. The probability of aliasing for a MISR is the same as that of an LFSR; however, some errors are missed due to cancellation. This is the case when an error in one output at time t is canceled by the EXOR operation with the error in another output at time t + 1. Given an equally likely probability of errors occurring, the probability of error cancellation has been shown to be 2^(1-m-N), where m is the number of outputs compacted and N is the length of the output streams.

Given that the normal length of signatures used varies between k = 16 and k = 32, the probability of aliasing is minimal and considered acceptable in practice. In a MISR, the length of the compactor also depends on the number of outputs tested. If the number of outputs is greater than the length of the MISR, algorithms or heuristics exist for combining outputs with EXOR trees before feeding them to the compactor. If the number of outputs is much smaller, various choices can be evaluated. The amount of aliasing that actually occurs in a particular circuit can be computed by full fault simulation, that is, by injecting each possible fault into a simulated circuit and computing the resulting signature. Changes in aliasing can be achieved by changing the polynomial used to define the compactor. It has been shown that primitive polynomials, essential for the generation of exhaustive input generators (see above), also possess better aliasing characteristics.

Data Compaction with Linear Cellular Automata Registers

LCARs are one-dimensional arrays composed of two types of cells: rule 150 and rule 90 cells [Cattell et al., 1996]. Each cell is composed of a flip-flop that saves the current state of the cell and an EXOR gate used to compute the next state of the cell. A rule 150 cell computes its next state as the EXOR of its present state and of the states of its two (left and right) neighbors. A rule 90 cell computes its next state as the EXOR of the states of its two neighbors only. As can be seen in Fig. 85.8, all connections in an LCAR are near-neighbor connections, thus saving routing area and delays (common for long LFSRs).

FIGURE 85.8 LCAR for signature analysis.

Up to two inputs can be trivially connected to an LCAR, or it can be easily converted to accept multiple inputs fed through the cell rules. There are some advantages of using LCARs instead of LFSRs: first, the localization of all connections; second, and most importantly, it has been shown that LCARs are much "better" pseudo-random pattern generators when used in autonomous mode, as they do not show the correlation of bits due to the shifting of the LFSRs. Finally, the better pattern distribution provided by LCARs as input stimuli has been shown to provide better detection for delay faults and open faults, normally very difficult to test.

As for LFSRs, LCARs are fully described by a characteristic polynomial, and through it any linear finite state machine can be built either as an LFSR or as an LCAR. It is, however, more difficult, given a polynomial, to derive the corresponding LCAR, and tables are now used. The main disadvantage of LCARs is in the area overhead incurred by the extra EXOR gates necessary for the implementation of the cell rules. This is offset by their better performance. The corresponding multiple-output compactor is called a MICA.
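The two cell rules are simple to model. The sketch below (an added Python illustration; the register length and rule assignment are arbitrary, and null boundary cells are assumed) computes one clock of a hybrid rule 90/150 register; in practice, the rule assignment that realizes a given characteristic polynomial is taken from tables such as those of Cattell and Muzio [1996].

```python
def lcar_step(state, rules):
    """One clock of a hybrid linear cellular automaton register.
    state: list of cell bits; rules: 90 or 150 per cell.
    Null boundaries are assumed (cells beyond the ends read as 0)."""
    n = len(state)
    nxt = []
    for i, rule in enumerate(rules):
        left = state[i - 1] if i > 0 else 0
        right = state[i + 1] if i < n - 1 else 0
        centre = state[i] if rule == 150 else 0   # rule 90 ignores its own state
        nxt.append(left ^ centre ^ right)
    return nxt

# Example only: a 4-cell register with an arbitrary rule assignment.
state, rules = [1, 0, 0, 0], [90, 150, 90, 150]
for _ in range(5):
    state = lcar_step(state, rules)
    print(state)
```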
Summary

Accessibility to internal dense circuitry is becoming a greater problem, and thus it is essential that a designer consider how the IC will be tested and incorporate structures in the design. Formal DFT techniques are concerned with providing access points for testing (see controllability and observability in Section 85.2). As test pattern generation becomes even more prohibitive, probabilistic solutions based on compaction and using fault simulation are more widespread, especially if they are supported by DFT techniques and they can avoid the major expense of dedicated external testers. However, any technique chosen must be incorporated within the framework of a powerful CAD system providing semiautomatic analysis and feedback, such that the rule of ten can be kept under control: if one does not find a failure at a particular stage, then detection at the next stage will cost 10 times as much!

Defining Terms

Aliasing: Whenever the faulty output produces the same signature as a fault-free output.
Built-in self-test (BIST): The inclusion of on-chip circuitry to provide testing.
Fault coverage: The fraction of possible failures that the test technique can detect.
Fault simulation: An empirical method used to determine how faults affect the operation of the circuit and also how much testing is required to obtain the desired fault coverage.
IDDq testing: A parametric technique to monitor the current IDD that a circuit draws when it is in a quiescent state. It is used to detect faults which increase the normally low IDD.
LFSR: A shift register formed by D flip-flops and EXOR gates, chained together, with a synchronous clock, used either as an input pattern generator or as a signature analyzer.
MISR: Multiple-input LFSR.
Off-line testing: Testing process carried out while the tested circuit is not in use.
On-line testing: Concurrent testing to detect errors while the circuit is in operation.
Pseudo-random pattern generator: Generates a binary sequence of patterns where the bits appear to be random in the local sense (1 and 0 are equally likely), but they are repeatable (hence only pseudo-random).
Random testing: The process of testing using a set of pseudo-randomly generated patterns.
Sequential fault: A fault that causes a combinational circuit to behave like a sequential one.
Signature analysis: A test where the responses of a device over time are compacted into a characteristic value called a signature, which is then compared to a known good one.
Stuck-at fault: A fault model represented by a signal stuck at a fixed logic value (0 or 1).
Test pattern (test vector): An input vector such that the faulty output is different from the fault-free output (the fault is stimulated and detected).

Related Topic

23.2 Testing

References

M. Abramovici, M.A. Breuer, and A.D. Friedman, Digital Systems Testing and Testable Design, Rockville, Md.: IEEE Press, 1992.
P.H. Bardell, W.H. McAnney, and J. Savir, Built-In Test for VLSI: Pseudorandom Techniques, New York: John Wiley and Sons, 1987.
K. Cattell and J.C. Muzio, "Synthesis of one-dimensional linear hybrid cellular automata," IEEE Trans. Computer-Aided Design, vol. 15, no. 3, pp. 325-335, 1996.
N.H.E. Weste and K. Eshraghian, Principles of CMOS VLSI Design, Addison-Wesley, 1993.
T.W. Williams (Ed.), VLSI Testing, Amsterdam: North-Holland, 1986.

Further Information

The author would like to recommend reading the book by Abramovici et al. [1992] that, at the present time, gives the most comprehensive view of testing methods and design for testability. More information on deterministic pattern generation can also be found in Fault Tolerant Computing, edited by D.K. Pradhan, and for
85.2 Design for Test

Bulent I. Dervisoglu

Testing of electronic circuits, which has long been pursued as an activity that follows the design and manufacture of (at least) the prototype product, has currently become a topic of up-front investigation and commitment. Today, it is not uncommon to list the design for testability (DFT) features of a product among the so-called functional requirements in the definition of a new product to be developed. Just how such a major transformation has occurred can be understood by examining the testability problems faced by manufacturing organizations and considering their impact on time to market (TTM).

The Testability Problem

The primary objective of testing digital circuits at chip, board, or system level is to detect the presence of hardware failures induced by faults in the manufacturing processes or by operating stress or wearout mechanisms. Furthermore, during manufacturing, a secondary but equally important objective is to determine accurately which component or physical element (e.g., connecting wire) is faulty so that quick diagnosis/repair of the product becomes possible. These objectives are necessary due to imperfections in the manufacturing processes used in building digital electronic components/systems, and all digital circuits must undergo testing at the appropriate level to avoid shipping faulty components/systems to the customer.

Analog circuits may have minimum and maximum allowable input signal values (e.g., input voltage) as well as infinitely many values in between to which the component has to be able to respond. Testing of analog circuits is therefore often achieved by checking the circuit response at the specified upper and lower bounds and by observing/quantifying the change of the output response with varying input signal values. The behavior of a digital system, on the other hand, is characterized by discrete (as opposed to continuous) responses to discrete operating state/input signal permutations, so that testing of a digital circuit may be achieved by checking its behavior under every operating mode and input signal permutation. In principle this approach is valid; in practice, however, most digital circuits are too complex to be tested by such a brute-force technique. Instead, test methods have been developed that test digital circuits using only a fraction of all possible test conditions without sacrificing test coverage. Here, test coverage refers to the ratio of the faults that can be detected to all faults taken into consideration, expressed as a percentage.

At the present time the most popular fault model is the so-called stuck-at fault model, in which each individual net is considered either to be fault-free (i.e., a good network) or to be permanently stuck at one of the logic 1 or logic 0 values. For example, if the device under test (DUT) contains several components (or building blocks), and the sum of all input and output terminals (nodes) of the components is k, there are said to be 2k possible stuck-at faults, corresponding to each of the circuit nodes being permanently stuck at one of the two possible logic states. In general, a larger number of possible stuck-at faults leads to increased difficulty of testing the digital circuit.
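The fault counting and coverage arithmetic just described can be made concrete with a small, self-contained Python sketch. It is an illustration added here rather than an algorithm from the text: the two-gate circuit z = (a AND b) OR c, its node names, and the four-pattern subset are hypothetical choices. The sketch enumerates the 2k single stuck-at faults of the circuit, applies a set of test patterns, and counts a fault as detected whenever the faulty output differs from the fault-free output, which is also the detection condition used in the single stuck-at test generation discussed next.

```python
# Illustrative sketch only: single stuck-at fault enumeration and fault-coverage
# computation for a hypothetical two-gate circuit z = (a AND b) OR c.

from itertools import product

NODES = ["a", "b", "c", "n", "z"]           # k = 5 nodes -> 2k = 10 single stuck-at faults
FAULTS = [(node, value) for node in NODES for value in (0, 1)]

def simulate(a, b, c, fault=None):
    """Evaluate z = (a AND b) OR c, forcing one node to its stuck value if `fault` is given."""
    def force(name, val):
        return fault[1] if fault is not None and fault[0] == name else val
    a, b, c = force("a", a), force("b", b), force("c", c)
    n = force("n", a & b)                   # internal node n = a AND b
    return force("z", n | c)                # primary output z = n OR c

def fault_coverage(patterns):
    """Percentage of the 2k single stuck-at faults detected by the given pattern set."""
    detected = set()
    for fault in FAULTS:
        for a, b, c in patterns:
            if simulate(a, b, c) != simulate(a, b, c, fault):  # faulty output differs
                detected.add(fault)
                break
    return 100.0 * len(detected) / len(FAULTS)

if __name__ == "__main__":
    exhaustive = list(product((0, 1), repeat=3))               # all 8 input permutations
    subset = [(1, 1, 0), (0, 1, 0), (1, 0, 0), (0, 0, 1)]      # only 4 of the 8 patterns
    for name, pats in (("exhaustive set", exhaustive), ("4-pattern subset", subset)):
        print(f"{name}: {len(pats)} patterns, fault coverage = {fault_coverage(pats):.1f}%")
```

For this particular circuit the four-pattern subset detects all ten single stuck-at faults, matching the 100% coverage of the exhaustive eight-pattern set, which is exactly the economy referred to above: a fraction of all possible test conditions can be applied without sacrificing test coverage.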
For the purpose of test pattern (i.e., input stimulus) generation it is often assumed that the circuit under test (CUT) is either fault-free or contains only one node which is permanently stuck at a particular logic state. Thus, the most widely used fault model is the so-called single stuck-at fault model.

Using this model, each fault is tested by applying a specific test pattern that, in a good circuit, drives the particular node to the logic state opposite to the stuck value assumed to be present in the faulty circuit. For example, to test whether node v is stuck at logic state x (denoted by v/x or v-x), a test pattern must be used that would cause node v to be driven to the opposite of logic state x if the circuit is not faulty. Thus, the test pattern attempts to show that node v is not stuck at x by driving the node to a value other than x, which for a two-valued digital circuit must be the opposite of x (denoted by ~x). This leads to the requirement that, to detect any stuck-at fault v/x, it must be possible to control the logic value at node v so that it can be set to ~x. If the signal value at node v can be observed directly by connecting it to test equipment, the particular fault v/x can be detected readily. However, in most cases node v may be an internal node, which is inaccessible for direct