Combinational logic
Combinational logic is a fundamental concept in digital electronics, referring to circuits where the output values are determined solely by the current combination of input values, without any dependence on previous inputs or internal memory elements.[1][2] These circuits operate based on Boolean algebra principles, implementing logical functions through interconnected logic gates such as AND, OR, NOT, NAND, NOR, XOR, and XNOR, which process binary signals (0 or 1, representing false or true).[2][3] Key characteristics of combinational logic include a finite number of inputs (n) and outputs (m), yielding 2^n possible input combinations, each producing an output defined by the m Boolean functions.[1]

Unlike sequential logic circuits, which incorporate memory elements such as flip-flops to retain state and allow outputs to depend on both current and past inputs, combinational circuits have no feedback loops or storage, ensuring deterministic responses limited only by the propagation delays of the gates.[1][2] Common examples of combinational logic circuits include arithmetic units such as half-adders and full-adders for binary addition, multiplexers and demultiplexers for data routing, encoders and decoders for binary-to-decimal conversion, comparators for equality checks, and subtractors for binary subtraction.[1] These building blocks are essential in digital system design, forming subsystems in processors, memory addressing, and signal processing, and are typically implemented using integrated circuits ranging from small-scale (SSI) to very large-scale (VLSI) integration for efficiency in cost, speed, and complexity.[1][2]

The design process for combinational logic begins with problem specification, followed by defining inputs and outputs, constructing truth tables to map all input combinations, simplifying expressions using Karnaugh maps or Boolean algebra theorems, and finally drawing the logic diagram or implementing the design in a hardware description language.[1] Optimization focuses on minimizing the number of gates, literals, and propagation delays to enhance performance, making combinational logic a cornerstone of reliable and scalable digital technologies.[1]
Fundamentals
Definition and Characteristics
Combinational logic refers to a class of digital circuits in which the output values are determined solely by the current input values, without any reliance on prior states or memory elements. These circuits implement Boolean functions, where each output is a direct logical combination of the inputs at any given moment, ensuring that identical inputs always produce identical outputs, a property known as determinism. Unlike systems with storage, combinational logic operates instantaneously in an ideal sense, though real implementations involve physical constraints that affect timing.[4]

Key characteristics of combinational logic include its memoryless nature: there are no feedback loops or storage components that retain information from previous input configurations, so outputs depend exclusively on the present inputs, making the behavior predictable and free from historical dependencies. Additionally, these circuits exhibit finite propagation delay, the time required for a change in input to propagate through the gates and stabilize the outputs, typically on the order of nanoseconds in modern implementations but varying with circuit complexity and technology. Once inputs settle, outputs reach a stable state, providing reliable operation for applications like arithmetic units or decoders, provided the delays are managed to avoid transient issues such as glitches.[4]

The concept of combinational logic builds on binary logic, where signals represent two states, logical 0 (low voltage, false) and 1 (high voltage, true), allowing circuits to process information as combinations of these values through interconnected gates. Historically, the foundations of combinational logic trace back to Claude Shannon's 1938 master's thesis at MIT, "A Symbolic Analysis of Relay and Switching Circuits," which demonstrated how Boolean algebra could model and simplify electrical switching networks, bridging mathematics and engineering. This work laid the groundwork for digital electronics, with practical combinational circuits emerging in the 1940s and 1950s as part of early computers and automated systems, revolutionizing circuit design by enabling efficient realization of Boolean functions without sequential dependencies.[5][6]
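As a concrete model, a combinational circuit behaves like a pure function of its current inputs. The short Python sketch below is purely illustrative (the function `f` is an arbitrary example, not taken from the cited sources); enumerating all 2^n input combinations produces the circuit's complete behavioral description:

```python
from itertools import product

# Model a combinational circuit as a pure function: output = f(inputs),
# with no internal state. Example function: F(A, B, C) = (A AND B) OR (NOT C).
def f(a: int, b: int, c: int) -> int:
    return (a & b) | (1 - c)

# n inputs -> 2**n rows; each row has one fixed output, so the behavior
# is fully described by its truth table.
for a, b, c in product((0, 1), repeat=3):
    print(f"{a}{b}{c} -> {f(a, b, c)}")
```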
Distinction from Sequential Logic
Combinational logic circuits produce outputs that depend solely on the current input values, without any mechanism to retain or recall previous states, whereas sequential logic circuits incorporate memory elements that allow outputs to depend on both current inputs and prior states stored over time.[7][8] This fundamental distinction arises because combinational circuits update their outputs immediately upon changes in inputs, functioning as stateless systems, while sequential circuits use feedback paths to maintain state information across multiple clock cycles.[9][10]

The structural implications of this difference are significant for circuit design and analysis: combinational circuits form acyclic networks, enabling straightforward static analysis where outputs can be verified against all possible input combinations without considering temporal dependencies.[11][12] In contrast, sequential circuits introduce cycles through feedback, necessitating dynamic timing analysis to account for clock synchronization, propagation delays, and potential race conditions that could alter state transitions.[13][14] This makes sequential designs more complex to verify, as they require simulation over time to ensure reliable behavior under varying clock rates and input sequences.[15]

Key sequential elements, such as latches and flip-flops, exemplify this memory capability by storing binary values that influence future outputs, distinguishing them from purely combinational components. Latches operate asynchronously, capturing inputs when enabled, while flip-flops are clock-synchronous, updating state only on clock edges to prevent timing hazards.[16][17] These elements enable sequential circuits to implement functions like state machines but introduce dependencies that combinational logic avoids.

In terms of design trade-offs, combinational logic is ideal for tasks requiring instantaneous, deterministic computation without history, such as arithmetic operations in adders, where outputs directly reflect input combinations.[18] Sequential logic, however, excels in applications needing persistence, like counters that increment based on accumulated states, allowing for more versatile but timing-sensitive systems.[19] Designers must balance these by using combinational blocks for efficiency in data processing while reserving sequential elements for control and memory-intensive roles.[20]
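The contrast can be made concrete in a few lines of Python. This is an illustrative model only (`TFlipFlop` and `xor_gate` are hypothetical names): a toggle flip-flop's output depends on its stored state, so identical present inputs yield different outputs over time, while a combinational gate always repeats itself:

```python
class TFlipFlop:
    """Sequential: with T=1 the output toggles on each clock edge, so identical
    current inputs produce different outputs depending on stored state."""
    def __init__(self) -> None:
        self.q = 0  # stored state

    def clock_edge(self, t: int) -> int:
        if t:
            self.q ^= 1  # state update occurs only on the clock edge
        return self.q

def xor_gate(a: int, b: int) -> int:
    """Combinational: identical inputs always produce identical outputs."""
    return a ^ b

tff = TFlipFlop()
print([tff.clock_edge(1) for _ in range(4)])  # [1, 0, 1, 0] -- state-dependent
print([xor_gate(1, 0) for _ in range(4)])     # [1, 1, 1, 1] -- stateless
```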
Basic Components
Logic Gates
Logic gates are the fundamental building blocks of combinational logic circuits, performing basic Boolean operations on binary inputs to produce a binary output.[21] The most basic gates include the AND, OR, and NOT gates, each defined by its specific logical function and represented using standardized symbols from ANSI/IEEE Std 91-1984, which employs rectangular outlines with internal notations for clarity.[22]

The AND gate produces an output of 1 only if all inputs are 1; otherwise, the output is 0.[21] Its ANSI/IEEE symbol is a rectangle containing the "&" notation at the output end.[22] The truth table for a two-input AND gate is as follows (a short simulation sketch reproducing these gate tables follows the NOR table below):

| Input A | Input B | Output Y = A · B |
|---|---|---|
| 0 | 0 | 0 |
| 0 | 1 | 0 |
| 1 | 0 | 0 |
| 1 | 1 | 1 |
The OR gate produces an output of 1 if at least one input is 1, and 0 only when both inputs are 0. The truth table for a two-input OR gate is:

| Input A | Input B | Output Y = A + B |
|---|---|---|
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 1 |
The NOT gate, or inverter, outputs the complement of its single input:

| Input A | Output Y = Ā |
|---|---|
| 0 | 1 |
| 1 | 0 |
The NAND gate produces the complement of the AND operation, outputting 0 only when all inputs are 1:

| Input A | Input B | Output Y = \overline{A \cdot B} |
|---|---|---|
| 0 | 0 | 1 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |
The NOR gate produces the complement of the OR operation, outputting 1 only when all inputs are 0:

| Input A | Input B | Output Y = \overline{A + B} |
|---|---|---|
| 0 | 0 | 1 |
| 0 | 1 | 0 |
| 1 | 0 | 0 |
| 1 | 1 | 0 |
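These five gate behaviors can be reproduced in a few lines. The Python sketch below is illustrative only (the uppercase names are ad hoc); it defines the gates as operations on 0/1 values and regenerates the truth tables above:

```python
from itertools import product

# 0/1 gate primitives corresponding to the tables above.
AND  = lambda a, b: a & b
OR   = lambda a, b: a | b
NOT  = lambda a: 1 - a
NAND = lambda a, b: NOT(AND(a, b))
NOR  = lambda a, b: NOT(OR(a, b))

# Regenerate the two-input tables in binary counting order.
print(" A B | AND OR NAND NOR")
for a, b in product((0, 1), repeat=2):
    print(f" {a} {b} |  {AND(a, b)}   {OR(a, b)}   {NAND(a, b)}    {NOR(a, b)}")
print(" A | NOT")
for a in (0, 1):
    print(f" {a} |  {NOT(a)}")
```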
Universal Gates
In combinational logic design, certain single gate types possess the property of universality, allowing them to implement any possible Boolean function when used in sufficient combinations. This capability stems from their ability to construct the fundamental operations of AND, OR, and NOT, which together form a functionally complete set capable of expressing all Boolean functions. The NAND and NOR gates are the primary examples of such universal gates, offering significant efficiency in circuit implementation by eliminating the need for multiple gate varieties.[29]

The NAND gate, defined as the negation of the AND operation (output is 1 unless both inputs are 1), demonstrates universality through explicit constructions of the basic gates. To realize a NOT gate, both inputs of a NAND gate are tied together to the same signal A, yielding \overline{A \wedge A} = \overline{A}. An AND gate is then constructed by applying a NOT to the output of a NAND gate: if C = A \wedge B is desired, compute D = A \text{ NAND } B = \overline{A \wedge B}, followed by C = D \text{ NAND } D = \overline{\overline{A \wedge B}} = A \wedge B. Similarly, an OR gate is built using two NOTs followed by a NAND: E = A \text{ NAND } A = \overline{A}, F = B \text{ NAND } B = \overline{B}, then A \text{ OR } B = E \text{ NAND } F = \overline{\overline{A} \wedge \overline{B}} = A \vee B. These constructions prove that any Boolean function, expressible in terms of AND, OR, and NOT, can be realized solely with NAND gates.[30][29]

The NOR gate, the negation of the OR operation (output is 1 only when both inputs are 0), exhibits analogous universality. A NOT gate is obtained by tying both inputs to A, resulting in \overline{A \vee A} = \overline{A}. An OR gate follows by negating the NOR output: G = A \text{ NOR } B = \overline{A \vee B}, then A \vee B = G \text{ NOR } G = \overline{\overline{A \vee B}} = A \vee B. An AND gate uses two NOTs into a NOR: H = A \text{ NOR } A = \overline{A}, I = B \text{ NOR } B = \overline{B}, then A \wedge B = H \text{ NOR } I = \overline{\overline{A} \vee \overline{B}} = A \wedge B. Thus, NOR gates alone suffice to implement all Boolean functions via similar substitutions.[30][29]

Functional completeness refers to a gate set's ability to generate every possible Boolean function of any number of variables, a property held by {NAND} or {NOR} because each can produce AND, OR, and NOT, which are known to be complete. Minimal universal sets like these are particularly valuable, as larger sets (e.g., {AND, OR, NOT}) require more diverse components without added expressive power. In practice, using only NAND or NOR simplifies manufacturing by reducing the inventory of gate types needed, which lowers production complexity and costs in integrated circuit fabrication. In CMOS technology, NAND gates are especially preferred because their pull-down network consists of faster NMOS transistors in series, enabling higher density and better performance than NOR gates, whose pull-up network places slower PMOS transistors in series.[29][30][31]
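The constructions above can be checked mechanically. Below is a minimal Python sketch (illustrative; the helper names `not_n`, `and_n`, and `or_n` are hypothetical) that builds NOT, AND, and OR from a single `nand` primitive and verifies them exhaustively:

```python
from itertools import product

def nand(a: int, b: int) -> int:
    return 1 - (a & b)

# NAND-only constructions from the text.
def not_n(a: int) -> int:
    return nand(a, a)                    # NOT A = A NAND A

def and_n(a: int, b: int) -> int:
    d = nand(a, b)
    return nand(d, d)                    # AND = NOT applied to NAND

def or_n(a: int, b: int) -> int:
    return nand(nand(a, a), nand(b, b))  # OR = NAND of the two complements

# Verify against the primitive operators for every input combination.
for a, b in product((0, 1), repeat=2):
    assert not_n(a) == 1 - a
    assert and_n(a, b) == (a & b)
    assert or_n(a, b) == (a | b)
print("NAND constructions match NOT, AND, OR on all inputs")
```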
Representation
Boolean Expressions
Boolean expressions form the foundational mathematical language for specifying the functionality of combinational logic circuits, where each expression defines the output as a function of binary input variables. These variables, typically denoted as uppercase letters like A, B, and C, assume values of 0 (false) or 1 (true), representing the two possible states of digital signals. The algebra underlying these expressions, known as Boolean algebra, was originally developed by George Boole to model logical reasoning and later adapted for electrical switching circuits.[32] In combinational logic, Boolean expressions enable precise description of how inputs combine to produce outputs without reliance on memory elements.

The core operations in Boolean algebra are conjunction (logical AND, denoted by a dot · or juxtaposition), disjunction (logical OR, denoted by a plus +), and negation (logical NOT, denoted by an overbar ¯ or prime '). For example, A \cdot B evaluates to 1 only if both A and B are 1, while A + B is 1 if at least one is 1, and \bar{A} inverts the value of A. These operations satisfy a set of fundamental axioms that define the structure of Boolean algebra, including commutativity (A + B = B + A, A \cdot B = B \cdot A), associativity (A + (B + C) = (A + B) + C, A \cdot (B \cdot C) = (A \cdot B) \cdot C), distributivity (A \cdot (B + C) = (A \cdot B) + (A \cdot C), A + (B \cdot C) = (A + B) \cdot (A + C)), identity elements (A + 0 = A, A \cdot 1 = A), complements (A + \bar{A} = 1, A \cdot \bar{A} = 0), idempotence (A + A = A, A \cdot A = A), and absorption (A + (A \cdot B) = A, A \cdot (A + B) = A). These postulates, formalized as independent sets for the algebra of logic, ensure consistent manipulation of expressions.[33]

Boolean functions in combinational logic are commonly expressed in sum-of-products (SOP) or product-of-sums (POS) forms. An SOP expression represents the output as a disjunction (OR) of conjunctions (ANDs) of literals, where a literal is a variable or its complement; for instance, F(A, B, C) = A B + \bar{A} C means the output is 1 if either A and B are both 1, or A is 0 and C is 1. This form directly corresponds to a two-level AND-OR circuit structure. Conversely, a POS expression is a conjunction (AND) of disjunctions (ORs), such as F(A, B, C) = (A + B)(\bar{A} + \bar{C}), which evaluates to 1 only if every sum term is simultaneously 1, aligning with an OR-AND circuit. These disjunctive and conjunctive normal forms were key to analyzing switching circuits as Boolean functions.

Canonical forms provide a standardized, exhaustive representation of Boolean functions using all possible input combinations. In the canonical SOP form, also called the minterm expansion, the function is a sum of minterms: product terms that include every variable exactly once (in true or complemented form) and evaluate to 1 for exactly one input combination. For three variables, the minterm for inputs 1, 0, 1 (decimal 5) is A \bar{B} C, and a function true for minterms 2 and 4 would be written as F = \sum m(2, 4) = \bar{A} B \bar{C} + A \bar{B} \bar{C}. Similarly, the canonical POS form uses maxterms, sum terms that evaluate to 0 for exactly one input combination, and is denoted as F = \prod M(0, 2), expanding to the product of those maxterms. These forms ensure uniqueness and completeness, facilitating systematic design.

De Morgan's theorems are essential identities for transforming Boolean expressions, particularly for converting between AND and OR operations via negation.
The first theorem states that the negation of a disjunction is the conjunction of the negations: \overline{A + B} = \bar{A} \cdot \bar{B}. The second states that the negation of a conjunction is the disjunction of the negations: \overline{A \cdot B} = \bar{A} + \bar{B}. These generalize to multiple variables, such as \overline{A + B + C} = \bar{A} \cdot \bar{B} \cdot \bar{C}. These laws were formally introduced in the context of syllogistic logic.[34]

To prove the first De Morgan theorem algebraically, show that \bar{A} \cdot \bar{B} is the complement of A + B, meaning (A + B) + (\bar{A} \cdot \bar{B}) = 1 and (A + B) \cdot (\bar{A} \cdot \bar{B}) = 0. First, (A + B) \cdot (\bar{A} \cdot \bar{B}) = (A \cdot \bar{A} \cdot \bar{B}) + (B \cdot \bar{A} \cdot \bar{B}) = (0 \cdot \bar{B}) + (\bar{A} \cdot B \cdot \bar{B}) = 0 + (\bar{A} \cdot 0) = 0. Second, (A + B) + \bar{A} \bar{B} = A + (B + \bar{A} \bar{B}). Now, B + \bar{A} \bar{B} = (A + \bar{A}) B + \bar{A} \bar{B} = A B + \bar{A} B + \bar{A} \bar{B} = A B + \bar{A} (B + \bar{B}) = A B + \bar{A}. Then, A + A B + \bar{A} = A (1 + B) + \bar{A} = A + \bar{A} = 1. Thus, \bar{A} \cdot \bar{B} = \overline{A + B}.[32] The second theorem follows by duality, interchanging + and · while preserving complementation. De Morgan's theorems enable efficient circuit inversion and equivalence checking in combinational design.[34]
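Both theorems, and the minterm notation, are easy to confirm by exhaustive enumeration. The following Python sketch (illustrative; `minterm_expansion` is a hypothetical helper) checks De Morgan's laws on all input pairs and recovers the minterm indices of F = \sum m(2, 4):

```python
from itertools import product

# De Morgan's theorems checked over all input pairs:
#   NOT(A + B) == NOT A . NOT B   and   NOT(A . B) == NOT A + NOT B
for a, b in product((0, 1), repeat=2):
    assert 1 - (a | b) == (1 - a) & (1 - b)
    assert 1 - (a & b) == (1 - a) | (1 - b)

def minterm_expansion(f, n):
    """Return the minterm indices where f evaluates to 1 (rows in binary order)."""
    return [i for i, bits in enumerate(product((0, 1), repeat=n)) if f(*bits)]

# F = A'BC' + AB'C' from the canonical-form example above.
F = lambda a, b, c: ((1 - a) & b & (1 - c)) | (a & (1 - b) & (1 - c))
print(minterm_expansion(F, 3))  # [2, 4], matching F = sum m(2, 4)
```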
Truth Tables and Diagrams
Truth tables provide a tabular representation of the behavior of combinational logic circuits by enumerating all possible input combinations and their corresponding outputs. For a circuit with n binary inputs, the table consists of 2^n rows, one for each possible input combination, with columns dedicated to the input variables and the output functions. The inputs are typically listed in binary counting order, from all zeros to all ones, ensuring exhaustive coverage of the function's domain.

To illustrate, consider a two-input exclusive-OR (XOR) gate, a basic combinational element that outputs 1 only when the inputs differ. The truth table for this function is constructed as follows (a small generator sketch follows the table):

| A | B | A XOR B |
|---|---|---|
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |
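Such tables can be generated mechanically for any Boolean function. A minimal Python sketch (illustrative; `print_truth_table` is a hypothetical helper) enumerates the 2^n rows in binary counting order and reproduces the XOR table above:

```python
from itertools import product

def print_truth_table(f, names):
    """Enumerate all 2**n input rows in binary counting order."""
    print(" ".join(names), "| F")
    for bits in product((0, 1), repeat=len(names)):
        print(" ".join(map(str, bits)), "|", f(*bits))

# The XOR table above, regenerated programmatically.
print_truth_table(lambda a, b: a ^ b, ["A", "B"])
```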
Design and Minimization
Synthesis Process
The synthesis process for combinational logic circuits begins with a clear problem statement specifying the inputs, outputs, and desired functionality. For instance, the design of a majority function for three inputs (A, B, C) requires an output M that is 1 when at least two inputs are 1, and 0 otherwise.[39] The first step involves identifying the number of input bits and deriving the output requirements, followed by constructing a truth table to enumerate all possible input combinations and corresponding outputs.[40]

From the truth table, a Boolean expression is obtained, typically in sum-of-products (SOP) or product-of-sums (POS) form, by identifying the minterms or maxterms where the output is 1. For the majority function, the truth table yields the SOP expression

M = AB + AC + BC

This expression is then implemented using logic gates, such as AND gates for the product terms and an OR gate to combine them, resulting in a gate-level circuit with three 2-input AND gates and one 3-input OR gate.[39][40]

A practical example is the synthesis of a 2-bit magnitude comparator, which takes two 2-bit inputs A = (A_1 A_0) and B = (B_1 B_0) and produces outputs EQ (A = B), GT (A > B), and LT (A < B). The process starts with a truth table listing all 16 input combinations and their outputs, from which SOP expressions are derived, such as

\text{EQ} = (A_1 \oplus B_1)' (A_0 \oplus B_0)'

\text{GT} = A_1 B_1' + A_1 A_0 B_0' + A_0 B_1' B_0'

These are realized using XOR, AND, and OR gates, often incorporating exclusive-NOR for the equality checks to minimize gate count. Gate selection prioritizes basic components like NAND or NOR for universality, ensuring the circuit directly maps the expressions without feedback.[41][40]

Modern synthesis leverages hardware description languages (HDLs) such as Verilog for automated design. A behavioral description, like an `assign` statement or `always` block defining the logic function (e.g., a multiplexer as `assign Y = S ? A : B`), is synthesized by tools into a gate-level netlist. This process translates high-level constructs into interconnected gates, optimizing for target technologies like FPGAs or ASICs.[42]
The overall flow is iterative, incorporating design constraints such as area (measured by gate count or literals) and power (via switching activity reduction). Refinement involves repeated optimization and mapping stages, where initial netlists are evaluated against constraints using tools like static timing analysis, and adjustments (e.g., algebraic decomposition) are applied until targets are met. This approach, as implemented in systems like Berkeley's MIS, achieves up to 20% area reduction on benchmark circuits while respecting timing and power limits.[43][44]
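The majority-function example above can be walked through end to end in code. This Python sketch is illustrative (`spec` and `sop` are ad hoc names): it treats the specification as "at least two inputs are 1", evaluates the derived SOP expression M = AB + AC + BC as the gate-level structure would, and confirms agreement on all eight input combinations:

```python
from itertools import product

# Steps 1-2: specification and truth table -- M = 1 when at least two of
# A, B, C are 1.
spec = lambda a, b, c: int(a + b + c >= 2)

# Step 3: SOP expression read off the truth table: M = AB + AC + BC.
sop = lambda a, b, c: (a & b) | (a & c) | (b & c)

# Step 4: the gate-level structure (three 2-input ANDs into one 3-input OR),
# checked against the specification for all 2**3 input combinations.
for a, b, c in product((0, 1), repeat=3):
    assert sop(a, b, c) == spec(a, b, c)
print("M = AB + AC + BC implements the 3-input majority function")
```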
Minimization Techniques
Minimization techniques in combinational logic aim to simplify Boolean expressions from their canonical sum-of-products (SOP) or product-of-sums (POS) forms into equivalent expressions with fewer terms and literals, thereby reducing the number of logic gates required for implementation.[6] These methods exploit the properties of Boolean algebra to eliminate redundancies while preserving the function's behavior.[45] Common approaches include algebraic manipulation, graphical tools like Karnaugh maps, and tabular algorithms such as the Quine-McCluskey method, each suited to different numbers of variables and complexity levels.[46][47]

Algebraic minimization applies Boolean theorems to simplify SOP or POS expressions systematically. Key theorems include the consensus rule, which states that for variables X, Y, and Z, the expression XY + X'Z + YZ reduces to XY + X'Z, as the term YZ is redundant under the consensus condition.[45] This theorem, derived from the idempotence and absorption properties of Boolean algebra, allows covering of minterms without explicit enumeration.[6] For example, starting with the SOP expression AB + \overline{A}C + BC, applying the consensus theorem identifies BC as the consensus of AB and \overline{A}C, yielding the simplified form AB + \overline{A}C.[45] Other techniques involve distributive laws for factoring or expansion to reveal common factors, iteratively reducing the literal count until no further simplification is possible.[6]

Karnaugh maps (K-maps) provide a graphical method for minimizing Boolean functions with up to four variables by visualizing adjacencies in a truth table.[46] Introduced by Maurice Karnaugh in 1953, the technique arranges minterms in a rectangular grid where cells differing in one variable are adjacent, including wrap-around edges that give the map a torus-like structure.[46] The procedure involves plotting 1s (for SOP) in the map from the truth table, then grouping adjacent 1s into the largest possible power-of-two rectangles (1, 2, 4, or 8 cells), where each group represents a product term whose literals correspond to the variables that do not change across the group.[46] Overlapping groups are permitted to cover all 1s with minimal terms. Don't care conditions (denoted X), which arise when certain input combinations are irrelevant, can be treated as 1s or 0s to enlarge groups and further simplify, but should not be covered where doing so complicates the expression.[46]

Consider a three-variable function with minterms m_0, m_2, m_3, m_5 and don't cares m_1, m_7, mapped with rows in Gray-code order (a verification sketch follows the map):

| | \overline{C} | C |
|---|---|---|
| \overline{A}\overline{B} | 1 | X |
| \overline{A}B | 1 | 1 |
| AB | 0 | X |
| A\overline{B} | 0 | 1 |
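Treating both don't cares as 1s permits two four-cell groups: the two \overline{A} rows (covering m_0 through m_3) and the C column (covering m_1, m_3, m_5, m_7), giving the minimal cover F = \overline{A} + C. A short Python sketch (illustrative; worked out from the map above rather than taken from the cited sources) confirms that this cover matches the specification on every required minterm and adds no stray 1s:

```python
from itertools import product

minterms   = {0, 2, 3, 5}
dont_cares = {1, 7}

simplified = lambda a, b, c: (1 - a) | c   # F = A' + C from the two groups

for i, (a, b, c) in enumerate(product((0, 1), repeat=3)):
    v = simplified(a, b, c)
    if i in minterms:
        assert v == 1      # every required minterm is covered
    elif i not in dont_cares:
        assert v == 0      # no stray 1s outside minterms and don't-cares
print("F = A' + C covers m(0,2,3,5) using don't-cares m(1,7)")
```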
Analysis and Verification
Functional Analysis
Functional analysis ensures that a combinational logic circuit produces outputs matching its specified Boolean function for all input combinations, confirming logical correctness before implementation. This verification is essential in digital design to detect discrepancies between intended and actual behavior, and is typically performed after synthesis but prior to physical realization. Techniques range from informal simulation-based checks to rigorous formal methods, enabling detection of design errors without exhaustive hardware testing.

Simulation forms the foundation of functional analysis, involving the application of input vectors to the circuit model and comparison of outputs against expected results. For small circuits, test cases are often derived from truth tables, providing a complete enumeration of input-output mappings. Software tools facilitate this process; gate-level simulators, such as those based on SPICE models for transistor gates, allow verification by mimicking circuit behavior under various inputs, though digital-specific simulators like event-driven logic analyzers are preferred for efficiency in combinational networks.[48][49]

Equivalence checking verifies that two circuit descriptions implement the same Boolean function, and is commonly used to compare an original design against a minimized version or a reference specification. This is achieved through Boolean matching, where circuit outputs are represented symbolically and compared for identity across all inputs; if the representations are equivalent, the designs are functionally identical. For instance, in logic synthesis flows, this confirms that optimization preserves functionality without altering the truth table.[50]

Coverage metrics quantify the thoroughness of verification test sets by assessing their ability to detect modeled faults, ensuring comprehensive functional validation. The stuck-at fault model is widely used, positing that a circuit node may be permanently fixed at logic 0 (stuck-at-0) or 1 (stuck-at-1), simulating manufacturing defects or design flaws. Fault coverage is the percentage of such faults detectable by the test vectors; for small combinational circuits with few inputs, achieving 100% coverage is feasible using complete input enumeration, providing confidence in fault-free operation.[51]

Formal methods offer exhaustive verification without simulation's scalability limits, particularly through Binary Decision Diagrams (BDDs). Introduced by Bryant, BDDs provide a canonical, directed acyclic graph representation of Boolean functions, enabling symbolic manipulation of circuit outputs. Verification proceeds by constructing BDDs for the circuit and the specification, then checking equivalence via graph isomorphism or simplification operations, avoiding the exponential 2^n enumeration required for truth tables in large designs. This approach excels for combinational circuits up to moderate complexity, where BDD size remains manageable under a good variable ordering.[52]
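For small circuits, equivalence checking reduces to comparing truth tables. The Python sketch below is an illustrative stand-in for BDD-based checking (usable only at small n); it compares the consensus example from the minimization section against its reduced form:

```python
from itertools import product

def equivalent(f, g, n):
    """Exhaustive Boolean equivalence check: compare outputs on all 2**n
    inputs (the truth-table analogue of BDD equivalence for small n)."""
    return all(f(*bits) == g(*bits) for bits in product((0, 1), repeat=n))

# Original two-level form versus its consensus-reduced version.
original  = lambda a, b, c: (a & b) | ((1 - a) & c) | (b & c)
minimized = lambda a, b, c: (a & b) | ((1 - a) & c)

print(equivalent(original, minimized, 3))  # True: minimization preserved the function
```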
Hazard Detection
In combinational logic circuits, hazards represent temporary deviations in the output due to propagation delays in gate networks, potentially leading to glitches that violate the intended steady-state behavior. Static hazards occur when the output is expected to remain constant at a logic 0 or 1 during a single input transition, but instead experiences an unintended pulse: a static-0 hazard manifests as a brief spike to 1, while a static-1 hazard shows a brief drop to 0.[53] These arise from unequal delays along different signal paths, where a changing input causes one path to deactivate before another activates, creating a momentary imbalance in the logic function. Dynamic hazards, in contrast, happen during intended output transitions (from 0 to 1 or 1 to 0), where the signal changes more than once, such as 0-1-0-1 or 1-0-1-0, due to interactions among three or more delayed paths in multi-level circuits.[54] Both types stem from the inherent asynchronous nature of combinational logic, which lacks a synchronizing clock to mask delay variations, unlike clocked sequential circuits where timing is controlled to prevent such anomalies from propagating.[53]

Detection of hazards typically involves analyzing the circuit's timing behavior through diagrams that model gate delays and input transitions, revealing race conditions where signals arrive out of sequence. For instance, in a gate network, one simulates the propagation of a single input change (e.g., from 000 to 001) across all paths to the output, identifying glitches if the output deviates from the expected value even briefly.[55] Advanced methods, such as path sensitization or ternary simulation (incorporating an indeterminate state for transitions), systematically trace potential hazard loci by sensitizing paths to the changing input while holding others constant.[55] This approach highlights race conditions in complex networks, ensuring verification before implementation.

To eliminate hazards, designers incorporate redundant logic terms during synthesis, particularly in sum-of-products (SOP) forms, by adding consensus implicants that bridge adjacent minterms and cover transition gaps without altering functionality. For static hazards, including all prime implicants from the Karnaugh map or Quine-McCluskey minimization ensures no uncovered edges exist between terms.[54] A classic example is the function F = \bar{A}B + AC, implemented with AND-OR gates; when A transitions with B = 1 and C = 1 (where the output should remain at 1, the static-1 case), one product term may turn off before the other turns on due to delay differences, causing a glitch to 0. Adding the redundant consensus term BC (which is 1 throughout the transition) overlaps the coverage and prevents the gap: F = \bar{A}B + AC + BC.[53][56] Dynamic hazards are mitigated indirectly by first eliminating static ones and restricting to two-level implementations, or by inserting delay elements to equalize paths, though the latter increases latency.[54] These strategies preserve the clockless responsiveness of combinational logic while ensuring reliable operation in asynchronous environments.
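The static-1 hazard and its consensus fix can be reproduced with a toy unit-delay simulation. The Python sketch below is an illustrative timing model, not a tool from the cited references; it holds B = C = 1 and drops A. With equal unit delays the gap appears on this falling transition, because the inverter delays the turn-on of \bar{A}B after AC has already turned off, and the added BC term holds the output high throughout:

```python
# Unit-delay simulation of F = A'B + AC with B = C = 1 while A falls at t = 1.
T = 8
A = [1] + [0] * (T - 1)

def simulate(with_consensus: bool):
    # Steady-state values at t = 0 for A = 1, B = C = 1.
    nA, g1, g2, g3, out = [0], [0], [1], [1], [1]
    for t in range(1, T):
        nA.append(1 - A[t - 1])     # inverter, one unit of delay
        g1.append(nA[t - 1] & 1)    # AND: A'B  (B = 1)
        g2.append(A[t - 1] & 1)     # AND: AC   (C = 1)
        g3.append(1)                # AND: BC   (consensus term, constant 1)
        terms = g1[t - 1] | g2[t - 1]
        out.append(terms | g3[t - 1] if with_consensus else terms)
    return out

print(simulate(False))  # [1, 1, 1, 0, 1, 1, 1, 1] -- momentary drop: static-1 hazard
print(simulate(True))   # [1, 1, 1, 1, 1, 1, 1, 1] -- BC holds the output high
```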
Applications and Examples
Common Circuits
Combinational logic circuits often incorporate standard building blocks such as adders for arithmetic operations, multiplexers for data selection, and decoders or encoders for address decoding and input prioritization. These circuits exemplify the principles of Boolean algebra and gate-level implementation, forming the foundation for more complex digital systems.[57]

Half-Adder
A half-adder is a basic combinational circuit that adds two single-bit binary inputs, A and B, to produce a sum bit (S) and a carry-out bit (C). It does not account for a carry-in from a previous stage, making it suitable only for the least significant bit of a multi-bit addition. The Boolean expressions are S = A \oplus B (equivalently S = A'B + AB') and C = A \cdot B.[57][58] The truth table for the half-adder is as follows:
| A | B | S | C |
|---|---|---|---|
| 0 | 0 | 0 | 0 |
| 0 | 1 | 1 | 0 |
| 1 | 0 | 1 | 0 |
| 1 | 1 | 0 | 1 |
Full-Adder
The full-adder extends the half-adder by incorporating a carry-in bit (C_{in}) from a prior stage, allowing it to add three single-bit inputs, A, B, and C_{in}, while producing a sum bit (S) and a carry-out bit (C_{out}). The Boolean expressions are S = A \oplus B \oplus C_{in} (in canonical SOP form, S = \bar{A}\bar{B}C_{in} + \bar{A}B\bar{C}_{in} + A\bar{B}\bar{C}_{in} + ABC_{in}) and C_{out} = AB + BC_{in} + AC_{in}.[58][59] The truth table for the full-adder is (a sketch composing a full adder from two half-adders follows the table):
| A | B | C_in | S | C_out |
|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 0 |
| 0 | 0 | 1 | 1 | 0 |
| 0 | 1 | 0 | 1 | 0 |
| 0 | 1 | 1 | 0 | 1 |
| 1 | 0 | 0 | 1 | 0 |
| 1 | 0 | 1 | 0 | 1 |
| 1 | 1 | 0 | 0 | 1 |
| 1 | 1 | 1 | 1 | 1 |
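A common construction builds the full adder from two half-adders plus an OR gate for the carries. The Python sketch below is illustrative (function names are ad hoc); it implements both blocks from the expressions above and verifies 2 \cdot C_{out} + S = A + B + C_{in} for all inputs:

```python
from itertools import product

def half_adder(a: int, b: int):
    """S = A XOR B, C = A AND B."""
    return a ^ b, a & b

def full_adder(a: int, b: int, cin: int):
    """Classic composition: two half-adders plus an OR for the carries."""
    s1, c1 = half_adder(a, b)
    s, c2 = half_adder(s1, cin)
    return s, c1 | c2

# Verify both outputs against ordinary integer addition for all inputs.
for a, b, cin in product((0, 1), repeat=3):
    s, cout = full_adder(a, b, cin)
    assert 2 * cout + s == a + b + cin
print("full adder matches a + b + cin on all 8 input combinations")
```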
Half-Subtractor
A half-subtractor is a combinational circuit that subtracts two single-bit binary inputs, A and B, to produce a difference bit (D) and a borrow-out bit (B_{out}). The Boolean expressions are D = A \oplus B and B_{out} = \bar{A} B. It can be implemented with an XOR gate for the difference and an AND gate with the A input inverted for the borrow. The truth table for the half-subtractor is:
| A | B | D | B_out |
|---|---|---|---|
| 0 | 0 | 0 | 0 |
| 0 | 1 | 1 | 1 |
| 1 | 0 | 1 | 0 |
| 1 | 1 | 0 | 0 |
Multiplexer
A multiplexer is a combinational circuit that selects one of several input signals and forwards it to a single output line, controlled by select inputs; it functions as a digitally controlled switch for data routing. For a 2:1 MUX with data inputs A and B, select input S, and output Y, the Boolean expression is Y = \bar{S} A + S B.[61][62] The truth table for the 2:1 MUX is:
| S | A | B | Y |
|---|---|---|---|
| 0 | 0 | 0 | 0 |
| 0 | 0 | 1 | 0 |
| 0 | 1 | 0 | 1 |
| 0 | 1 | 1 | 1 |
| 1 | 0 | 0 | 0 |
| 1 | 0 | 1 | 1 |
| 1 | 1 | 0 | 0 |
| 1 | 1 | 1 | 1 |
Demultiplexer
A demultiplexer is the reverse of a multiplexer, routing a single input signal to one of several output lines based on select inputs. For a 1:2 DEMUX with input D, select S, and outputs Y_0 and Y_1, the expressions are Y_0 = \bar{S} D and Y_1 = S D. It is commonly used for data distribution. The truth table for the 1:2 DEMUX is (a combined multiplexer/demultiplexer sketch follows the table):
| S | D | Y0 | Y1 |
|---|---|---|---|
| 0 | 0 | 0 | 0 |
| 0 | 1 | 1 | 0 |
| 1 | 0 | 0 | 0 |
| 1 | 1 | 0 | 1 |
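Both routing elements can be validated exhaustively. The Python sketch below is illustrative (`mux2` and `demux2` are hypothetical names); it checks the 2:1 MUX expression Y = \bar{S}A + SB against its selection behavior and the 1:2 DEMUX expressions against their routing behavior:

```python
from itertools import product

def mux2(s: int, a: int, b: int) -> int:
    """Y = S'A + SB."""
    return ((1 - s) & a) | (s & b)

def demux2(s: int, d: int):
    """Y0 = S'D, Y1 = SD."""
    return (1 - s) & d, s & d

for s, a, b in product((0, 1), repeat=3):
    assert mux2(s, a, b) == (a if s == 0 else b)      # selects the addressed input
for s, d in product((0, 1), repeat=2):
    y0, y1 = demux2(s, d)
    assert (y0, y1)[s] == d and (y0, y1)[1 - s] == 0  # routes D to exactly one output
print("mux/demux expressions match their routing behavior")
```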
Decoder
A decoder is a combinational circuit that converts binary information from n input lines to 2^n unique output lines, activating exactly one output corresponding to the input code in one-hot format. For a 2-to-4 decoder with inputs I_1 and I_0 and outputs O_3 to O_0, the outputs are generated as minterms: O_0 = \bar{I_1} \bar{I_0}, O_1 = \bar{I_1} I_0, O_2 = I_1 \bar{I_0}, O_3 = I_1 I_0, typically using AND gates with appropriate inverters.[64] The truth table for the 2-to-4 decoder is:
| I1 | I0 | O3 | O2 | O1 | O0 |
|---|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 0 | 1 |
| 0 | 1 | 0 | 0 | 1 | 0 |
| 1 | 0 | 0 | 1 | 0 | 0 |
| 1 | 1 | 1 | 0 | 0 | 0 |
Priority Encoder
A priority encoder performs the inverse of decoding: it produces the binary index of the active input with the highest priority. For a 4-to-2 priority encoder with inputs w_3 (highest priority) through w_0, the outputs y_1 y_0 give the index of the highest-numbered active input, and the output z indicates whether any input is active at all; X entries denote don't-care conditions. The truth table is (a sketch pairing the decoder and priority encoder follows the table):

| w3 | w2 | w1 | w0 | y1 | y0 | z |
|---|---|---|---|---|---|---|
| 0 | 0 | 0 | 0 | X | X | 0 |
| 0 | 0 | 0 | 1 | 0 | 0 | 1 |
| 0 | 0 | 1 | X | 0 | 1 | 1 |
| 0 | 1 | X | X | 1 | 0 | 1 |
| 1 | X | X | X | 1 | 1 | 1 |
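The decoder and priority encoder are near-inverses on one-hot codes, which makes for a compact sanity check. The Python sketch below is illustrative (both function names are hypothetical); it decodes every 2-bit index to one-hot form and confirms that the priority encoder recovers the index with z = 1:

```python
def decode2to4(i1: int, i0: int):
    """One-hot outputs (O3, O2, O1, O0), one minterm per output."""
    return (i1 & i0, i1 & (1 - i0), (1 - i1) & i0, (1 - i1) & (1 - i0))

def priority_encode4(w3: int, w2: int, w1: int, w0: int):
    """(y1, y0, z): index of the highest-numbered active input, valid flag z."""
    if w3: return 1, 1, 1
    if w2: return 1, 0, 1
    if w1: return 0, 1, 1
    if w0: return 0, 0, 1
    return 0, 0, 0          # y1, y0 are don't-cares when z = 0

# Round trip: decoding an index to one-hot and re-encoding recovers the index.
for n in range(4):
    i1, i0 = n >> 1, n & 1
    o3, o2, o1, o0 = decode2to4(i1, i0)
    assert priority_encode4(o3, o2, o1, o0) == (i1, i0, 1)
print("priority encoder inverts the 2-to-4 decoder on one-hot inputs")
```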
Comparator
A comparator is a combinational circuit that compares two binary numbers and outputs their relationship (equal, greater than, less than). For a 1-bit comparator with inputs A and B and outputs A=B, A>B, and A<B, the expressions are A{=}B = A \odot B (XNOR), A{>}B = A \bar{B}, and A{<}B = \bar{A} B, implemented with an XNOR gate and AND gates with inverters. The truth table for the 1-bit comparator is (a verification sketch follows the table):
| A | B | A=B | A>B | A<B |
|---|---|---|---|---|
| 0 | 0 | 1 | 0 | 0 |
| 0 | 1 | 0 | 0 | 1 |
| 1 | 0 | 0 | 1 | 0 |
| 1 | 1 | 1 | 0 | 0 |
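A quick Python sketch (illustrative; `compare1` is an ad hoc name) confirms that the three comparator expressions match the arithmetic relations and partition the input space, with exactly one output asserted for each input pair:

```python
from itertools import product

def compare1(a: int, b: int):
    """(A=B, A>B, A<B) from XNOR, A AND NOT B, NOT A AND B."""
    eq = 1 - (a ^ b)        # XNOR
    gt = a & (1 - b)
    lt = (1 - a) & b
    return eq, gt, lt

for a, b in product((0, 1), repeat=2):
    eq, gt, lt = compare1(a, b)
    assert (eq, gt, lt) == (int(a == b), int(a > b), int(a < b))
    assert eq + gt + lt == 1          # exactly one relation holds
print("1-bit comparator expressions match <, =, >")
```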