Stuck-at fault
A stuck-at fault is a widely used fault model in digital circuit testing that assumes a signal line or net is permanently fixed at a logic value of 0 (stuck-at-0) or 1 (stuck-at-1), regardless of the input or control signals driving it.[1] The model represents common manufacturing defects, such as opens, shorts, or contact failures, that cause the affected line to exhibit constant logical behavior at the gate level.[2] Stuck-at faults are categorized into single stuck-at faults, where only one line is affected, and multiple stuck-at faults, involving any combination of lines stuck at 0 or 1 across the circuit; since each of k potential sites may be stuck-at-0, stuck-at-1, or fault-free, there are 3^k − 1 possible multiple faults.[3] The model assumes faults occur at interconnections between Boolean gates such as AND, OR, NAND, NOR, and NOT, and that a single fanout branch may be independently faulty.[1]

The model's importance lies in enabling structural testing approaches that avoid exhaustive input enumeration, which is impractical for complex integrated circuits with billions of transistors, by instead applying targeted test patterns that propagate faulty values to observable outputs.[2] Stuck-at faults play a central role in automatic test pattern generation (ATPG) tools and fault simulation software, where test vectors are derived to detect faults by sensitizing paths, and coverage is measured against the fault list, often with a target above 99% for single faults.[1] Two properties enhance efficiency: fault equivalence groups indistinguishable faults (reducing the fault set to a collapse ratio of about 62.5% in simple example circuits), while fault dominance further reduces the set by identifying faults whose tests necessarily cover others (to a collapse ratio of about 47%).[3] The checkpoint theorem states that a test set detecting all single stuck-at faults at the checkpoints of a combinational circuit, namely its primary inputs and fanout branches, detects all single stuck-at faults in the circuit.[3]

Introduced by R. D. Eldred in his 1959 paper "Test Routines Based on Symbolic Logical Statements" as a practical alternative to exhaustive functional testing, the stuck-at model has become a cornerstone of design-for-testability (DFT) methodologies, though it has limitations in capturing timing-related or dynamic defects in modern technologies such as CMOS.[2][4] Despite these limitations, its simplicity and tool support make it the primary target for ensuring reliability in VLSI and ASIC designs.[3]

Fundamentals
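The structural-testing idea described above can be illustrated with a minimal simulation sketch. The Python snippet below is purely illustrative (the three-input circuit y = (a AND b) OR c and all net names are hypothetical, not drawn from any particular ATPG tool): it forces one net to a constant to model a single stuck-at fault, then exhaustively searches for test vectors whose output differs from the fault-free circuit.

```python
from itertools import product

def simulate(vec, fault=None):
    """Evaluate the example circuit y = (a AND b) OR c for input
    vector (a, b, c). `fault` is a (net, stuck_value) pair forcing
    that net to a constant, or None for the fault-free circuit."""
    def force(net, value):
        # A stuck-at fault overrides whatever drives the net.
        return fault[1] if fault and fault[0] == net else value
    a, b, c = (force(n, v) for n, v in zip(("a", "b", "c"), vec))
    n1 = force("n1", a & b)   # internal net: n1 = a AND b
    return force("y", n1 | c)  # primary output: y = n1 OR c

def detecting_vectors(fault):
    """Vectors that sensitize the fault site and propagate the error
    to the output, i.e. faulty output differs from good output."""
    return [v for v in product((0, 1), repeat=3)
            if simulate(v, fault) != simulate(v)]

print(detecting_vectors(("n1", 0)))  # [(1, 1, 0)] exposes n1 stuck-at-0
```

Only the vector (1, 1, 0) detects n1 stuck-at-0: a = b = 1 sensitizes the fault site (good value 1 vs faulty 0), and c = 0 propagates the difference through the OR gate to the output. In real ATPG the detecting vectors are derived by path sensitization rather than exhaustive search, but the detection criterion is the same.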
Definition and Basic Concepts
A stuck-at fault is a fault model used in digital circuit testing in which a signal line or node in a combinational or sequential circuit is permanently fixed at a constant logic value of either 0 or 1, irrespective of the inputs applied to the circuit.[5] This abstraction represents various physical manufacturing defects, such as open circuits, shorts between lines, or transistor failures, by capturing their effect at the logical level rather than modeling the precise physical mechanism. Unlike physical defect models, which operate at the analog or device level, the stuck-at model is a behavioral and logical abstraction designed to facilitate efficient simulation and automatic test pattern generation for verifying circuit functionality.[5]

The model employs binary logic values: a stuck-at-0 (s-a-0) fault indicates the node is fixed at logic low (ground, 0 V), and a stuck-at-1 (s-a-1) fault indicates it is fixed at logic high (Vdd, the supply voltage). In fault simulation tools, an undefined state (X) may be used to represent unknown or uninitialized values during propagation analysis, but the primary focus remains on the binary s-a-0 and s-a-1 faults, which model realistic defect behaviors. This terminology applies to nodes in digital circuits, which include primary inputs, primary outputs, internal wires connecting components, and storage elements in sequential logic.

Stuck-at faults are defined in the context of digital circuits composed of Boolean logic gates, such as AND, OR, and NAND, where signals propagate through interconnected nodes to compute outputs from input combinations.[5] For instance, consider a two-input AND gate with inputs A and B, which normally produces output 0 for every input pair other than (1,1). If the output node experiences an s-a-1 fault, the gate incorrectly outputs 1 for all of these inputs, mimicking a short to Vdd that overrides the gate's logic.

Historical Development
The stuck-at fault model originated in the late 1950s as a response to the growing complexity of digital circuit testing, predating the widespread adoption of integrated circuits. In 1959, R. D. Eldred introduced foundational concepts for structural testing in his paper on symbolic logic statements for verifying digital systems, emphasizing the need for efficient test routines to detect logical inconsistencies without exhaustive enumeration.[4] This work laid the groundwork for abstracting physical defects into logical faults, focusing on signal lines fixed at constant values (0 or 1), which addressed the impracticality of complete functional testing for increasingly large circuits. The model was formally defined and named in 1961 by J. M. Galey, R. E. Norby, and J. P. Roth.[5]

During the 1960s and 1970s, the model gained prominence with the rise of transistor-transistor logic (TTL) circuits, where its simplicity in representing manufacturing defects made it a standard for test generation. J. P. Roth's 1966 D-algorithm formalized path sensitization techniques specifically for stuck-at faults, enabling systematic automatic test pattern generation (ATPG) and integrating the model into early testing tools. By the 1970s and into the 1980s, TTL manufacturers routinely advertised integrated circuit reliability in terms of stuck-at fault coverage percentages, often exceeding 99%, which solidified the model's industry acceptance due to its correlation with observed defect detection rates.

The transition to complementary metal-oxide-semiconductor (CMOS) technology in the 1980s retained the stuck-at model's core utility but highlighted its limitations for emerging defect types, such as stuck-open faults, in which transistor paths fail to conduct without altering steady-state logic levels.
Despite these gaps, the model persisted into the 21st century for very-large-scale integration (VLSI) testing, valued for its abstraction from physical details and its effectiveness across design styles. In recent years, design-for-testability (DFT) methodologies have continued to emphasize stuck-at faults as a baseline, increasingly alongside hybrid approaches incorporating delay and bridging models to address nanoscale challenges.[6]

Types of Stuck-at Faults
Single Stuck-at Fault
The single stuck-at fault model assumes that exactly one line, whether an input, output, internal node, or interconnect, in the digital circuit is permanently fixed at a logic value of 0 (stuck-at-0) or 1 (stuck-at-1), while all other lines and components operate correctly under their normal logic functions.[7] This restriction to a single fault simplifies analysis and modeling, enabling efficient test generation without considering interactions among multiple defects.[3] The model abstracts physical defects such as opens, shorts, or transistor failures into logical behaviors, focusing on observability at primary outputs.

In a circuit with n lines (including inputs, internal nodes, and outputs), there are 2n possible single stuck-at faults, since each line can exhibit either a stuck-at-0 or a stuck-at-1 condition.[3] However, not all of these faults are distinct, owing to structural properties of logic gates. Fault equivalence arises when two or more faults produce identical output responses for every possible input vector, meaning the same set of tests detects all equivalent faults.[7] This reduces the effective number of faults to consider during testing; for instance, in a 2-input NAND gate, a stuck-at-0 on input A, a stuck-at-0 on input B, and a stuck-at-1 on the output are all equivalent, as each forces the output to logic 1 for all input combinations.[7]

| Fault Location | Stuck-at Value | Equivalent Behavior in 2-Input NAND Gate |
|---|---|---|
| Input A | 0 | Output fixed at 1 for all inputs |
| Input B | 0 | Output fixed at 1 for all inputs |
| Output | 1 | Output fixed at 1 for all inputs |
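The equivalence shown in the table can be verified by brute force. The sketch below (a minimal, illustrative Python snippet; the function names are hypothetical) computes the complete output response of a 2-input NAND gate under each fault and confirms that the three tabulated faults are indistinguishable at the output:

```python
from itertools import product

def nand(a, b, fault=None):
    """2-input NAND with an optional single stuck-at fault on
    net 'a', 'b', or 'out'. `fault` is (net, stuck_value) or None."""
    if fault:
        net, value = fault
        if net == "a":
            a = value
        elif net == "b":
            b = value
    out = 1 - (a & b)
    if fault and fault[0] == "out":
        out = fault[1]
    return out

def response(fault=None):
    # Output for every input vector, in truth-table order:
    # (0,0), (0,1), (1,0), (1,1).
    return tuple(nand(a, b, fault) for a, b in product((0, 1), repeat=2))

# The three faults from the table are equivalent: each yields the
# all-ones response, so no test vector can tell them apart.
assert response(("a", 0)) == response(("b", 0)) == response(("out", 1)) == (1, 1, 1, 1)

# Input A stuck-at-1, by contrast, is a distinct fault: it differs
# from the fault-free NAND only on the vector (0, 1).
print(response(None), response(("a", 1)))
```

Because the three equivalent faults can be collapsed into one representative, a fault simulator need only target one of them, which is the mechanism behind the collapse ratios cited above.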