
Boolean circuit

A Boolean circuit is a mathematical model representing combinational digital logic circuits as a directed acyclic graph (DAG) composed of input nodes, Boolean logic gates (such as AND, OR, and NOT), wires connecting them, and output nodes; it computes a mapping from binary inputs in {0,1}^n to binary outputs in {0,1}^m. These circuits process binary signals taking discrete values of 0 or 1, where gates perform fundamental operations such as inversion (NOT), conjunction (AND, outputting 1 only if all inputs are 1), and disjunction (OR, outputting 1 if at least one input is 1), with fan-in typically limited to 1 for NOT and 2 for AND and OR. The set {AND, OR, NOT} forms a complete basis, meaning any Boolean function can be expressed using circuits built solely from these gates, often optimized via logic synthesis to minimize size (number of gates) or depth (longest path from input to output). In computer science, Boolean circuits serve as a nonuniform model of computation, allowing tailored designs for specific input sizes unlike uniform models such as Turing machines, and they underpin the complexity class P/poly, which captures problems solvable by polynomial-sized circuit families. Key complexity measures include circuit size, which quantifies the computational resources needed (arithmetic circuits generalize the model to fields beyond {0,1}), and depth, which is relevant for parallel computation in classes like NC. Boolean circuits model real-world hardware such as adders, multiplexers, and cryptographic primitives (e.g., AES S-boxes optimized to 32 AND gates), with applications in cryptography, error-correcting codes, and proving lower bounds, such as the exponential size required to compute the parity function in constant-depth circuits. Despite their simplicity, open problems persist, including whether NP-complete problems like 3-SAT require superpolynomial circuit sizes, highlighting the gap between theoretical models and practical chip design.

Fundamentals

Basic definition

A Boolean circuit is a mathematical model used in computational complexity theory to represent combinational digital logic, formalized as a directed acyclic graph (DAG) with three types of nodes: input nodes, gate nodes, and output nodes. The input nodes, of which there are n, have no incoming edges and correspond to the Boolean variables or constants fed into the circuit. Gate nodes compute basic Boolean operations, specifically AND (\wedge), OR (\vee), or NOT (\neg), while output nodes, of which there are m, have no outgoing edges and produce the final results. Directed edges connect these nodes, representing wires that propagate Boolean values (0 or 1) from inputs through gates to outputs in a unidirectional manner. The fan-in of a gate specifies the number of incoming edges: typically 2 for binary AND and OR gates, and 1 for unary NOT gates, though some definitions allow unbounded fan-in for generality. In contrast, fan-out refers to the number of outgoing edges from a node, which is unrestricted in standard Boolean circuits, enabling a single gate's output to drive multiple subsequent gates. This structure ensures the circuit is acyclic, distinguishing it as a combinational model with no feedback loops or memory elements; evaluation proceeds in topological order, where each node's value depends only on its predecessors, guaranteeing that outputs are determined solely by the current input configuration. Such circuits compute functions from \{0,1\}^n to \{0,1\}^m, providing a finite, explicit representation for evaluating any specific Boolean function. For example, the two-input XOR function, which outputs 1 if exactly one input is 1, can be realized with a small circuit: apply NOT gates to each input, compute the AND of each original input with the negation of the other, and finally OR those two results. This construction uses five gates (two NOT, two AND, one OR) and illustrates how interconnected gates build more complex operations from the basic set.
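The five-gate XOR construction above can be checked directly. The following is a minimal sketch in plain Python (illustrative only, not drawn from the cited sources), with one function standing in for each gate type:

def NOT(a: int) -> int:
    return 1 - a

def AND(a: int, b: int) -> int:
    return a & b

def OR(a: int, b: int) -> int:
    return a | b

def xor_circuit(x1: int, x2: int) -> int:
    # Five gates: two NOT, two AND, one OR, as in the construction above.
    n1 = NOT(x1)          # NOT gate on x1
    n2 = NOT(x2)          # NOT gate on x2
    a1 = AND(x1, n2)      # x1 AND (NOT x2)
    a2 = AND(n1, x2)      # (NOT x1) AND x2
    return OR(a1, a2)     # output node

# Truth-table check: XOR outputs 1 exactly when the inputs differ.
assert [xor_circuit(a, b) for a in (0, 1) for b in (0, 1)] == [0, 1, 1, 0]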

Components and gates

The basic building blocks of Boolean circuits are logic gates, which compute specific Boolean functions on one or more binary inputs to produce a single binary output. These gates interconnect via wires to form more complex computations. The fundamental gates are the AND, OR, and NOT gates, which together form a functionally complete set capable of expressing any Boolean function. The AND gate, representing logical conjunction, produces an output of 1 only if all its inputs are 1; otherwise, the output is 0. Its standard symbol consists of a semicircular shape with a straight vertical line on the left side for multiple inputs and a single output line on the right. The truth table for a two-input AND gate is as follows:
Input A | Input B | Output
0 | 0 | 0
0 | 1 | 0
1 | 0 | 0
1 | 1 | 1
The OR gate, representing logical disjunction, produces an output of 1 if at least one input is 1; the output is 0 only if all inputs are 0. Its standard symbol features a curved shape that narrows to a point at the output, with multiple input lines on the left and a single output line on the right. The truth table for a two-input OR gate is:
Input A | Input B | Output
0 | 0 | 0
0 | 1 | 1
1 | 0 | 1
1 | 1 | 1
The NOT gate, or inverter, performs logical negation by inverting its single input: an input of 0 yields 1, and an input of 1 yields 0. Its standard symbol is a triangle pointing right with a small circle at the output to indicate inversion. The truth table is:
Input A | Output
0 | 1
1 | 0
Extensions of these basic gates include the NAND (NOT-AND) and NOR (NOT-OR) gates, which are universal in the sense that any Boolean function can be implemented using only NAND gates or only NOR gates. For instance, a NOT gate can be constructed from a NAND gate by connecting both inputs to the same signal, and an AND gate can be built from NAND gates by inverting the output of one NAND with another; similar constructions realize OR from NAND, and dual constructions apply to NOR. This universality stems from the ability of NAND and NOR to simulate AND, OR, and NOT. Boolean circuits often incorporate constant inputs fixed at 0 or 1, treated as input nodes with no incoming wires that provide fixed values throughout the computation. Additionally, a circuit can have multiple outputs by designating several gates as sinks, each producing a distinct output bit.
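To make the NAND constructions concrete, here is a minimal sketch in plain Python (illustrative only; the gate functions are stand-ins for physical gates):

def NAND(a: int, b: int) -> int:
    return 1 - (a & b)

def NOT_from_nand(a: int) -> int:
    return NAND(a, a)                      # both inputs tied to the same signal

def AND_from_nand(a: int, b: int) -> int:
    return NOT_from_nand(NAND(a, b))       # invert the NAND output

def OR_from_nand(a: int, b: int) -> int:
    # De Morgan: a OR b = NOT((NOT a) AND (NOT b))
    return NAND(NOT_from_nand(a), NOT_from_nand(b))

for a in (0, 1):
    for b in (0, 1):
        assert AND_from_nand(a, b) == (a & b)
        assert OR_from_nand(a, b) == (a | b)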

Examples and constructions

Elementary circuits

Elementary circuits provide foundational examples of Boolean circuits, demonstrating how basic gates can compute specific functions through straightforward interconnections. One simple yet illustrative example is the majority function on three inputs, denoted MAJ(a, b, c), which outputs 1 if at least two of the inputs are 1 and 0 otherwise. This function can be expressed as (a ∧ b) ∨ (a ∧ c) ∨ (b ∧ c), implemented using three 2-input AND gates and two 2-input OR gates to combine the results. In a circuit diagram, the inputs a, b, and c connect to the AND gates such that one AND takes a and b, another a and c, and the third b and c; the outputs of these AND gates then feed into a chain of OR gates, where the first OR combines two AND outputs and the second OR incorporates the remaining AND output to produce the final majority value. To trace signal flow, begin at the inputs and follow the wires: for instance, if a=1, b=1, c=0, the first AND outputs 1 while the second and third output 0; the first OR outputs 1 (from the first AND), and the second OR propagates this 1 as the output.

Another fundamental example is the parity circuit, which computes whether the number of 1s among n inputs is even or odd, outputting 1 for odd and 0 for even. The parity function, denoted Parity_n(x_1, ..., x_n), can be realized recursively as Parity_n = Parity_{n-1} ⊕ x_n, where ⊕ denotes exclusive-or (XOR), forming a tree of XOR gates that pairs inputs level by level until a single output remains. Since XOR must itself be built from AND, OR, and NOT gates, each XOR requires two NOT gates, two AND gates, and one OR gate, with the recursive tree ensuring balanced depth for efficient computation; for n=4, the circuit might pair x_1 ⊕ x_2 and x_3 ⊕ x_4, then XOR those two results. Tracing the signal flow involves following the pairwise computations upward through the tree: each intermediate node computes the parity of its subtree, and the root aggregates the overall parity.

Half-adders and full-adders serve as essential building blocks for arithmetic circuits, performing addition on bits. A half-adder takes two inputs a and b, producing the sum S = a ⊕ b and the carry C = a ∧ b; in terms of basic gates, the carry uses one AND gate, while the sum requires two NOT gates, two AND gates (for a ∧ ¬b and ¬a ∧ b), and one OR gate to combine them, totaling three AND, one OR, and two NOT gates. A full-adder extends this to three inputs a, b, and c (where c is a carry-in), with sum S = (a ⊕ b) ⊕ c and carry-out C = MAJ(a, b, c); it can be constructed by cascading two half-adders: the first adds a and b to produce an intermediate sum and carry, the intermediate sum and c enter a second half-adder to produce the final sum, and an additional OR gate combines the two intermediate carries into the carry-out, resulting in six AND, three OR, and four NOT gates overall. In diagrams, the inputs connect to the first half-adder block, whose outputs branch to the second block and to the OR gate for the carry; signal flow traces from the inputs through the XOR-like sums and AND carries, merging at the outputs, and the half-adder alone cannot accept an incoming carry.

Boolean circuit diagrams typically represent inputs as wires on the left, gates as standard symbols (AND as a D-shaped curve, OR as a curved shape with a pointed tip, NOT as a triangle with a small circle), and outputs on the right, with wires branching to show fan-out. Tracing signal flow entails starting from the input values, propagating through each gate (conjunction for AND, disjunction for OR, negation for NOT), and verifying that the output matches the truth table, which aids in debugging and in understanding dependencies. Compact implementations of these circuits can also employ universal gates such as NAND, which alone suffice to realize any Boolean function.
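The majority, parity, and adder circuits described above can be expressed directly in code. The following minimal Python sketch (illustrative only, not drawn from the cited sources) builds each from the basic AND, OR, and NOT operations:

def NOT(a): return 1 - a
def AND(a, b): return a & b
def OR(a, b): return a | b

def XOR(a, b):
    # Two NOT, two AND, one OR, as in the parity discussion above.
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

def MAJ(a, b, c):
    # (a AND b) OR (a AND c) OR (b AND c): three ANDs feeding a chain of two ORs.
    return OR(OR(AND(a, b), AND(a, c)), AND(b, c))

def parity4(x1, x2, x3, x4):
    # Balanced XOR tree: pair (x1, x2) and (x3, x4), then combine the results.
    return XOR(XOR(x1, x2), XOR(x3, x4))

def half_adder(a, b):
    return XOR(a, b), AND(a, b)            # (sum, carry)

def full_adder(a, b, c):
    s1, c1 = half_adder(a, b)              # first half-adder on a and b
    s2, c2 = half_adder(s1, c)             # second half-adder takes the carry-in c
    return s2, OR(c1, c2)                  # (sum, carry-out); carry-out equals MAJ(a, b, c)

assert MAJ(1, 1, 0) == 1 and parity4(1, 0, 1, 1) == 1
assert full_adder(1, 1, 1) == (1, 1)       # 1 + 1 + 1 = binary 11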

Synthesis methods

One common approach to synthesizing Boolean circuits involves converting a given Boolean function into its disjunctive normal form (DNF), which represents the function as a disjunction (OR) of conjunctions (ANDs) of literals corresponding to the minterms where the function evaluates to true. The synthesis steps begin with the truth table of the function: for each input combination yielding output 1, form a minterm by ANDing the appropriate literals (positive or complemented variables); then OR all such minterms to obtain the DNF expression. This expression directly translates to a two-level circuit using AND gates for the minterms and a single OR gate at the output, though it may not be minimal in gate count. Similarly, for conjunctive normal form (CNF), the function is expressed as a conjunction (AND) of disjunctions (ORs) of literals based on the maxterms where the output is 0: identify the maxterms from the truth table, OR the literals within each, and AND the results to form the CNF, yielding a two-level circuit with OR gates feeding into a single AND gate. These normal forms ensure any Boolean function can be realized using only AND, OR, and NOT gates, leveraging the universality of such gates. To minimize the gate count in DNF or CNF circuits for functions with roughly 4 to 6 variables, Karnaugh maps provide a graphical method that visually identifies adjacent minterms (differing in one bit) for grouping into larger implicants, reducing the number of literals and terms. The process involves plotting the function's values on a grid whose rows and columns represent Gray-coded variable combinations, circling groups of 1s (of power-of-two size) so that every 1 is covered by some group, and deriving the minimized sum-of-products expression from the product term of each group. This technique, introduced by Maurice Karnaugh in 1953, excels for small-scale synthesis by avoiding algebraic manipulation while keeping minimization manageable within that variable limit. The Quine-McCluskey method, developed by Willard Quine in 1952 and extended tabularly by Edward McCluskey in 1956, offers an exact algorithmic alternative for minimization beyond the Karnaugh map's variable limit. It proceeds in two phases: first, generate all prime implicants by iteratively comparing and combining minterms in a tabular format based on the number of 1s in their binary representations, merging pairs that differ in one bit until no further combinations are possible; second, select a minimal cover of these implicants matching the function's on-set, using a covering table that identifies essential primes and solves the resulting set-cover problem. This tabular approach guarantees optimality for two-level circuits but scales exponentially with the number of variables, making it suitable for up to about 8 inputs in practice. For larger or multi-level circuits, the ESPRESSO algorithm provides a heuristic minimization technique that iteratively refines two-level representations toward near-optimal solutions. Developed at IBM and UC Berkeley in the 1980s by Robert Brayton, Richard Rudell, Alberto Sangiovanni-Vincentelli, and colleagues, it operates on cube-based representations of the function, performing expansion to enlarge implicants, irredundant-cover extraction to select minimal sets, and reduction to shrink cubes while preserving functionality, often achieving near-optimal results far faster than exact methods. Its complement and tautology operations further eliminate redundant terms, enabling efficient synthesis for programmable logic arrays and complex digital designs.
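As a small illustration of the truth-table-to-DNF step described above, the following Python sketch (illustrative only; the function and variable names are hypothetical) enumerates the minterms of the 3-input majority function:

from itertools import product

def synthesize_dnf(f, n):
    """Return the minterms (as tuples of literals) where f evaluates to 1."""
    minterms = []
    for bits in product((0, 1), repeat=n):
        if f(*bits):
            # Positive literal x_i if the bit is 1, complemented literal ~x_i if 0.
            minterms.append(tuple(f"x{i+1}" if b else f"~x{i+1}"
                                  for i, b in enumerate(bits)))
    return minterms

maj = lambda a, b, c: int(a + b + c >= 2)
for term in synthesize_dnf(maj, 3):
    print(" AND ".join(term))
# Each printed line is one AND gate; OR-ing the four lines gives the two-level DNF
# circuit, which Karnaugh-map grouping reduces to (a AND b) OR (a AND c) OR (b AND c).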

Formal properties

Size and depth metrics

In Boolean circuit theory, the size of a circuit C, denoted S(C), measures the total computational resources required and is defined as the number of gates in C. This metric captures the overall complexity in terms of the hardware or logical elements needed to implement the function. Formally, S(C) = |V|, where V is the set of gate nodes in the directed acyclic graph (DAG) representation of C. The depth of a circuit C, denoted D(C), quantifies the inherent sequential dependency of the computation and is defined as the length of the longest directed path from an input node to an output node in the DAG. This length, typically measured in the number of edges, reflects the minimum number of time steps required for evaluation in a parallel model where each gate computes in unit time. Formally, D(C) = \max \{ \ell(p) \mid p \text{ is a path from an input to an output} \}, where \ell(p) is the number of edges in p. Size and depth represent fundamental trade-offs in circuit design: circuits with small depth often require larger size to achieve high parallelism, while deeper circuits can be more compact by serializing computations. For instance, a function might be computed by a shallow circuit with exponentially many gates or by a deeper circuit with far fewer, highlighting how reducing depth can increase size exponentially in the worst case. These trade-offs are central to analyzing efficiency in both theoretical models and practical implementations.
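The following Python sketch (illustrative only; the dictionary-based circuit encoding is an assumption of this example) computes the two metrics for the five-gate XOR circuit from the basic-definition section:

def circuit_size(gates):
    return len(gates)                       # S(C) = number of gate nodes

def circuit_depth(inputs, gates):
    # D(C) = longest directed path (in edges) from an input to any node,
    # computed by dynamic programming: a node's depth is 1 + max over predecessors.
    depth = {name: 0 for name in inputs}
    remaining = dict(gates)
    while remaining:
        for name, (_, preds) in list(remaining.items()):
            if all(p in depth for p in preds):     # all predecessors resolved
                depth[name] = 1 + max(depth[p] for p in preds)
                del remaining[name]
    return max(depth.values())

# The five-gate XOR circuit: gate name -> (gate type, predecessor names).
xor_gates = {
    "n1": ("NOT", ["x1"]), "n2": ("NOT", ["x2"]),
    "a1": ("AND", ["x1", "n2"]), "a2": ("AND", ["n1", "x2"]),
    "out": ("OR", ["a1", "a2"]),
}
assert circuit_size(xor_gates) == 5
assert circuit_depth(["x1", "x2"], xor_gates) == 3   # x -> NOT -> AND -> OR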

Uniformity and non-uniformity

In the non-uniform circuit model, a circuit family \{C_n\}_{n \geq 1} is defined as a collection where each C_n is a fixed circuit with exactly n inputs and a single output, designed to compute a specific Boolean function on inputs of length n. There are no computational restrictions on how the circuits C_n are generated or "wired" for each n; this allows the model to incorporate arbitrary pre-computed information, such as hard-coded truth tables or lookup structures tailored to the input size, effectively providing "advice" that can be non-computable in the uniform sense. Uniform circuit families, by contrast, require that the circuits be constructible in a resource-bounded manner, ensuring a tight connection between circuit complexity and machine-based models of computation such as Turing machines. Formally, a family \{C_n\} is uniform if there exists a Turing machine M such that, given the unary input 1^n, M outputs a complete description of C_n (e.g., an adjacency list or other encoding representing the circuit's gates and connections). This uniformity condition prevents unrestricted advice, linking the circuit model's power to the efficiency of the constructor machine. Two standard notions of uniformity are P-uniformity, where the machine M runs in time polynomial in n, and logspace-uniformity, where M uses only logarithmic space in n. Logspace-uniformity is especially relevant for polynomial-sized circuits, as it establishes that languages accepted by such families are precisely those decidable by deterministic polynomial-time Turing machines, thereby equating uniform polynomial-size circuits with polynomial-time computation. P-uniformity provides a slightly broader but, for many purposes, equivalent characterization, since the circuit descriptions themselves are polynomial-length outputs. The implication of non-uniformity is a strictly more expressive model than uniformity, as the built-in advice allows circuits to exploit input-size-specific optimizations or hard-wired information that no single algorithm could generate efficiently across all sizes. For instance, non-uniform families can decide languages that are undecidable in the uniform setting, such as certain unary encodings of undecidable problems, highlighting how non-uniformity captures scenarios with pre-wired, size-dependent knowledge at the cost of losing algorithmic constructibility. In both models, size and depth serve as the key complexity parameters, but uniformity ensures these parameters align with provably efficient constructions.
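The distinction can be made concrete with a small sketch: a uniform family is one produced by a single generator algorithm. The Python code below (illustrative only; the representation and names are hypothetical) emits, for any n, a description of a balanced XOR-tree circuit for Parity_n, so the resulting family is uniform in the sense above because one polynomial-time procedure constructs C_n for every n:

def generate_parity_circuit(n):
    """Return (inputs, gates, output) describing an XOR-tree circuit for Parity_n."""
    inputs = [f"x{i}" for i in range(1, n + 1)]
    gates, level, counter = {}, list(inputs), 0
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):        # pair up the wires at this level
            name = f"g{counter}"
            counter += 1
            gates[name] = ("XOR", [level[i], level[i + 1]])  # each XOR expands into AND/OR/NOT
            nxt.append(name)
        if len(level) % 2 == 1:                       # an odd wire passes through unchanged
            nxt.append(level[-1])
        level = nxt
    return inputs, gates, level[0]

inputs, gates, out = generate_parity_circuit(4)
assert len(gates) == 3                                # three XOR gates for n = 4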

Evaluation and computation

Evaluation algorithms

Evaluating a Boolean circuit on a given input involves computing the output by propagating values through the circuit's directed acyclic graph (DAG) in a manner that respects the dependencies between gates. The standard sequential algorithm relies on a topological ordering of the gates, which ensures that each gate is evaluated only after all its input gates have been processed. This ordering can be obtained using algorithms such as depth-first search (DFS) or Kahn's algorithm, both of which run in linear time relative to the circuit size. The evaluation process begins by assigning the input values to the circuit's input nodes. Then, proceeding in topological order, each gate's output is computed by applying its Boolean operation (AND, OR, or NOT) to the values of its predecessor nodes. To handle fan-out, where a single gate's output feeds into multiple subsequent gates, the algorithm stores (memoizes) the computed value for each gate, avoiding redundant recomputation. Finally, the outputs are collected from the designated output nodes. This approach ensures that the entire circuit is traversed exactly once. The running time of this algorithm is O(S(C)), where S(C) denotes the size of the circuit (typically the number of gates or wires), as each gate and wire is processed a constant number of times. Space usage is also O(S(C)) to store the intermediate values. The circuit's depth influences the potential for parallel evaluation, but sequential evaluation remains linear regardless. Below is pseudocode for the evaluation procedure, assuming the circuit is represented as a DAG with nodes labeled by gate types and edges indicating dependencies:
function evaluate_circuit(C, x):
    // C: Boolean circuit as DAG
    // x: input vector of length n
    topo_order = topological_sort(C)  // Linear-time sort, e.g., Kahn's algorithm
    values = array of size |C.nodes|, initialized to undefined
    // Assign input values
    for i in 1 to n:
        values[input_node_i] = x[i]
    // Propagate through gates in topological order
    for node in topo_order[n+1 to |C.nodes|]:
        inputs = []
        for pred in predecessors(node):
            if values[pred] is undefined:
                error "Topological order invalid"
            inputs.append(values[pred])
        values[node] = apply_gate_function(gate_type[node], inputs)
    return values[output_nodes]
The apply_gate_function uses the gate's truth table: for binary gates like AND or OR, it combines two inputs; for unary NOT, it inverts one input. This procedure correctly computes the circuit's output for any valid input.
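A runnable version of this procedure, under the assumption that the circuit is given as a dictionary mapping each gate name to its type and predecessor list, might look as follows (a sketch, not a reference implementation):

from graphlib import TopologicalSorter   # Python standard library, 3.9+

GATE_FUNCS = {
    "AND": lambda ins: int(all(ins)),
    "OR":  lambda ins: int(any(ins)),
    "NOT": lambda ins: 1 - ins[0],
}

def evaluate_circuit(gates, outputs, x):
    # gates: dict name -> (gate_type, [predecessor names])
    # outputs: list of output-node names; x: dict input name -> 0/1
    values = dict(x)                                     # memoized node values
    deps = {name: set(preds) for name, (_, preds) in gates.items()}
    for node in TopologicalSorter(deps).static_order():  # predecessors come first
        if node in values:                               # input node, already assigned
            continue
        gate_type, preds = gates[node]
        values[node] = GATE_FUNCS[gate_type]([values[p] for p in preds])
    return [values[o] for o in outputs]

# Usage: the five-gate XOR circuit from earlier.
xor_gates = {
    "n1": ("NOT", ["x1"]), "n2": ("NOT", ["x2"]),
    "a1": ("AND", ["x1", "n2"]), "a2": ("AND", ["n1", "x2"]),
    "y":  ("OR", ["a1", "a2"]),
}
assert evaluate_circuit(xor_gates, ["y"], {"x1": 1, "x2": 0}) == [1]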

Parallel evaluation

The evaluation of a Boolean circuit can be performed in parallel time proportional to its depth D(C) using a number of processors equal to its size S(C). This approach exploits the layered structure of the circuit, where gates are organized by their distance from the inputs, allowing concurrent computation across independent gates at each level. In the CREW PRAM model, which permits concurrent reads from shared memory but only exclusive writes, the circuit value problem is solved by assigning one processor per gate. Processors at depth i read the values of their input gates from depth i-1, compute the gate function in constant time, and write the result to shared memory. Synchronization barriers ensure all processors at a given depth complete before the next level begins, yielding overall time O(D(C)) and work O(S(C)). This model suits circuit evaluation because multiple gates often share inputs, and concurrent reads handle this dependency without conflict. In processor-efficient variants, fewer than S(C) processors suffice on a PRAM at the cost of proportionally more time, by scheduling several gates per processor within each level. Logarithmic depth reductions enable polylogarithmic parallel time for circuits in classes like NC, often incorporating structures such as sorting networks to parallelize fan-in aggregation or dependency resolution in O(\log n) layers with constant fan-in comparators.
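The layer-by-layer schedule can be simulated sequentially; the sketch below (plain Python, illustrative only) fires every ready gate in each round, so the number of rounds equals the circuit's depth, mirroring the O(D(C))-time PRAM schedule described above:

def evaluate_by_layers(x, gates):
    # x: dict of input name -> 0/1; gates: dict name -> (gate_type, [predecessor names])
    ops = {"AND": lambda v: int(all(v)), "OR": lambda v: int(any(v)), "NOT": lambda v: 1 - v[0]}
    values, remaining, rounds = dict(x), dict(gates), 0
    while remaining:
        # One parallel round: every gate whose predecessors are already available fires now.
        layer = {n: g for n, g in remaining.items() if all(p in values for p in g[1])}
        results = {n: ops[t]([values[p] for p in preds]) for n, (t, preds) in layer.items()}
        values.update(results)              # "write to shared memory" only after the round
        for n in layer:
            del remaining[n]
        rounds += 1
    return values, rounds

xor_gates = {"n1": ("NOT", ["x1"]), "n2": ("NOT", ["x2"]),
             "a1": ("AND", ["x1", "n2"]), "a2": ("AND", ["n1", "x2"]),
             "y": ("OR", ["a1", "a2"])}
vals, rounds = evaluate_by_layers({"x1": 1, "x2": 1}, xor_gates)
assert vals["y"] == 0 and rounds == 3       # number of rounds equals the XOR circuit's depth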

Complexity aspects

Theoretical background

The theoretical foundations of Boolean circuits originated in the 1930s with Claude Shannon's work on switching theory, where he established a formal link between Boolean algebra and the design of relay-based electrical circuits. In his 1937 master's thesis, published in 1938, Shannon demonstrated that any switching function could be represented and simplified using Boolean algebraic expressions, enabling the systematic analysis and synthesis of circuits composed of relays that perform logical operations. This connection provided the mathematical basis for treating circuits as computational devices capable of realizing arbitrary functions through interconnected logic elements. Building on this groundwork, the 1960s saw the rise of quantitative complexity theory, particularly through the efforts of Juris Hartmanis and Richard E. Stearns, whose 1965 paper introduced resource-based measures of algorithmic difficulty; applied to circuits, the corresponding measures are size (the number of gates) and depth (the longest path from input to output). These metrics allowed for the classification of computational problems based on the minimal circuit resources required, marking a pivotal shift toward analyzing the inherent difficulty of Boolean computations. Boolean circuits represent a non-uniform model of computation, serving as a counterpart to the uniform Turing machine model, in that circuits are constructed specifically for functions over fixed input sizes without requiring a single, adaptable description for all instances. A fundamental observation in this framework is that every Boolean function, being finite in domain and range, can be computed by a finite Boolean circuit, as direct synthesis methods such as disjunctive normal form yield an explicit construction, in contrast to Turing machines, which handle inputs of unbounded length. This finiteness underscores the model's suitability for studying resource-bounded computation on discrete inputs.

Complexity classes

The class P/poly consists of all languages that can be decided by a family of circuits \{C_n\}_{n \geq 1} where each C_n has n inputs and size at most p(n) for some polynomial p. This class captures non-uniform computation, allowing circuit designs to depend arbitrarily on the input length n without any uniform generation procedure. Notably, P/poly properly contains the class P of languages decidable in polynomial time by Turing machines, since any polynomial-time computation can be simulated by a polynomial-size circuit family. However, P/poly also includes undecidable languages, such as a unary encoding of the halting problem in which the input length indexes a machine run on its own description, computable by circuits that hardcode the halting behavior for each length. The class NC (Nick's Class) comprises languages decidable by uniform families of Boolean circuits with polynomial size and O(\log^k n) depth for some constant k \geq 1, using bounded fan-in gates. Uniformity ensures the circuits can be generated in logarithmic space, modeling efficient parallel computation on machines like the PRAM. NC is contained in P, as a polylogarithmic-depth, polynomial-size circuit can be evaluated sequentially in polynomial time by processing it layer by layer. The class is stratified into levels NC^k for increasing k, with \bigcup_k \text{NC}^k = \text{NC}, and it includes problems such as integer arithmetic and matrix multiplication that admit efficient parallel algorithms. The class AC extends NC by allowing unbounded fan-in AND and OR gates in alternating layers, while maintaining polynomial size and O(\log^k n) depth for some constant k \geq 0. This alternation models more powerful parallel steps, such as computing a wide AND or OR in a single layer. Like NC, AC is contained in P, and AC contains NC. The subclasses AC^k define the hierarchy by depth level. Boolean formulas, a restricted model of tree-like circuits in which every gate has fan-out 1, define complexity classes based on their size and depth. A language belongs to the class of polynomial-size, logarithmic-depth formulas if there exists a uniform family \{F_n\}_{n \geq 1} of formulas computing it with size S(F_n) \leq p(n) and depth D(F_n) \leq O(\log n) for some polynomial p. This class is contained in NC^1, the lowest level of NC, since such formulas are themselves bounded fan-in circuits of logarithmic depth; conversely, any polynomial-size formula can be rebalanced to logarithmic depth with only a polynomial size blowup, so polynomial-size formulas characterize NC^1. Formula size measures are particularly useful for proving lower bounds, as they disallow the sharing of subcomputations present in general circuits.

Circuit completeness

In circuit complexity, complete problems for classes like AC^0 are those to which every language in the class reduces via suitably weak reductions, allowing the class to be characterized by the hardness of such problems. A canonical complete problem for depth-d AC^0 (under weak, depth-preserving reductions with appropriate uniformity conditions) is the st-connectivity problem in width-d grid graphs of polynomial length, where one must determine whether there is a path from the left side to the right side using only marked vertices. This problem captures the power of constant-depth circuits precisely, as the reductions preserve the depth parameter while simulating the alternating structure of AND and OR gates by path existence in layered graphs. For the classes AC^0[p], which augment constant-depth circuits with modular gates computing sums modulo a prime p, the modular counting problem MOD_p, which asks whether the number of 1s in the input is divisible by p, is a natural complete problem under such weak reductions. These reductions map instances of arbitrary AC^0[p] computations to evaluations of MOD_p by simulating the modular gates and alternating layers via counting paths or witnesses modulo p in auxiliary structures, preserving both polynomial size and constant depth. Similar completeness holds for higher levels of the AC hierarchy, where generalized counting problems capture the additional power of deeper alternating computations. Reductions in circuit complexity often take the form of logarithmic-space or NC^1 computations that preserve key parameters like circuit size and depth, ensuring that hardness transfers between problems without inflating resources. Logspace reductions, for instance, construct new instances using O(log n) workspace while mapping the input bits to a polynomial-size instance of the target problem, maintaining depth by layering the simulation to match the original circuit's depth. NC^1 reductions, computable by logarithmic-depth circuits of polynomial size, similarly preserve these metrics by using tree-like computations to generate the reduced instance, making them suitable even for non-uniform classes where uniformity is not required. Such reductions are crucial for establishing completeness, as they allow the full power of the class to be simulated within the complete problem. The Razborov-Smolensky method provides a high-level algebraic framework for proving lower bounds against constant-depth circuits by approximating their computations with low-degree polynomials over finite fields. Specifically, it shows that any polynomial-size, constant-depth circuit over AND, OR, NOT, and MOD_p gates can be approximated, in the sense of agreeing with the circuit on most inputs, by a low-degree polynomial over GF(p), whereas functions like MOD_q (for a prime q ≠ p) require high-degree representations, leading to exponential size lower bounds for computing MOD_q in such circuits. This technique relies on inductively approximating gate outputs (AND and OR gates as products and sums) and bounding the degree growth across layers, establishing that certain modular functions cannot be computed efficiently in these constant-depth classes.
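As a concrete illustration of the approximation step (a standard sketch, not tied to any single source cited here), consider a single unbounded fan-in OR gate over GF(p). For t independent random coefficient vectors a_1, \ldots, a_t \in GF(p)^n, define

q_j(x) = \left( \sum_{i=1}^{n} a_{j,i} \, x_i \right)^{p-1}, \qquad \widetilde{\mathrm{OR}}(x) = 1 - \prod_{j=1}^{t} \bigl(1 - q_j(x)\bigr).

By Fermat's little theorem, q_j(x) = 0 whenever all x_i = 0, and q_j(x) = 1 with probability at least 1 - 1/p whenever some x_i = 1, so \widetilde{\mathrm{OR}} is a polynomial of degree t(p-1) that disagrees with OR on any fixed input with probability at most p^{-t}. Applying this construction gate by gate and taking a union bound over the circuit's size yields the low-degree approximator for the entire constant-depth circuit used in the lower-bound argument.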

Lower bounds and challenges

A basic lower bound in Boolean circuit complexity states that any Boolean function on n inputs that depends on all n input bits requires a circuit with at least n - 1 fan-in-2 gates, since the gates and wires must connect every input to the output; a smaller circuit simply cannot read all the variables. Equivalently, the size must be at least on the order of the logarithm of the number of entries in the function's truth table, which has 2^n rows. Significant progress has been made on lower bounds for restricted circuit classes, particularly constant-depth circuits. A landmark result shows that the parity function, which computes the exclusive-or of n bits, cannot be computed by constant-depth circuits (the class AC^0) unless their size is exponential in n. This exponential separation was established using Håstad's switching lemma, which shows that under random restrictions the bottom levels of a constant-depth circuit collapse, with high probability, to shallow decision trees. Such bounds highlight the limitations of shallow circuits even for simple symmetric functions like parity. Despite these advances for restricted models, proving strong lower bounds for general Boolean circuits remains a central open challenge. A key unresolved question is whether polynomial-time computable functions (the class P) can be separated from those computable by polylogarithmic-depth, polynomial-size circuits (the class NC), i.e., whether P = NC or whether some P-complete problem requires superpolylogarithmic depth. Even more ambitiously, it is unknown whether problems in NP require superpolynomial-size circuits, a question tied to P vs. NP, since the non-uniform class P/poly contains P and potentially more. Progress here would resolve fundamental questions in parallel computation and derandomization. These difficulties are partly explained by theoretical barriers to proving circuit lower bounds. The natural proofs barrier, introduced by Razborov and Rudich, demonstrates that proof techniques based on properties that are constructive, large, and useful against polynomial-size circuits cannot establish superpolynomial lower bounds for general circuits unless strong pseudorandom function generators (and hence certain one-way functions) do not exist, which is considered unlikely. This framework captures a wide range of prior lower-bound proofs (e.g., for AC^0) as "natural" and shows they would fail against stronger circuit classes under standard cryptographic assumptions, underscoring the need for non-natural proof methods to overcome these obstacles.

References

  1. Circuit Complexity. NIST Computer Security Resource Center (CSRC).
  2. Lecture A3: Boolean Circuits. Princeton University (PDF).
  3. Week 1: An Overview of Circuit Complexity (PDF).
  4. Boolean Circuits (Chapter 6). In Computational Complexity.
  5. Computational Complexity: A Modern Approach. Princeton University, January 8, 2007 (PDF).
  6. Boolean Circuit - an Overview. ScienceDirect Topics.
  7. Sipser, M. Introduction to the Theory of Computation, 3rd ed. (PDF).
  8. Logic Gates and Combinational Logic. University of Oregon (PDF).
  9. Definitions and Truth Tables for Basic Boolean Logic Gates.
  10. NAND and NOR Are Universal Gates (PDF).
  11. Circuits. George Mason University, Department of Computer Science (PDF).
  12. 3-Input Majority Function: 2-Level AND-OR Implementation (PDF).
  13. CSCE 312, Fall 2023: Computer Organization, Lecture 2, Digital Logic.
  14. Saurabh, N. Boolean Function Complexity: Upper Bounds. April 24, 2019 (PDF).
  15. Lecture 10: Circuit Lower Bounds. October 11, 2011 (PDF).
  16. Half and Full Adders.
  17. Full Adder: Explanation, Truth Table, and Implementation Using Half Adders. CVBL (PDF).
  18. 6.1 Combinational Circuits. Princeton University (PDF).
  19. Logic Synthesis in a Nutshell. UCSD CSE, July 13, 2010 (PDF).
  20. Symbolically Synthesizing Small Circuits. UT Austin Computer Science (PDF).
  21. The Map Method for Synthesis of Combinational Logic Circuits (PDF).
  22. Guide to the K-Map (Karnaugh Map). CSUSM (PDF).
  23. Multiple-Valued Minimization for PLA Optimization. KFUPM (PDF).
  24. CMPT 407 - Complexity Theory, Lecture 4. May 19, 2017 (PDF).
  25. Boolean Circuits (lecture notes, PDF).
  26. The Size and Depth of Layered Boolean Circuits (PDF).
  27. Size-Depth Tradeoffs for Threshold Circuits. UCSD CSE (PDF).
  28. Allender, E. P-Uniform Circuit Complexity (PDF).
  29. Parallel Complexity of Random Boolean Circuits. February 16, 2011 (PDF).
  30. Limits to Parallel Computation: P-Completeness Theory (PDF).
  31. On the Expected Depth of Boolean Circuits. UPCommons, March 11, 1994 (PDF).
  32. Feasible Time-Optimal Algorithms for Boolean Functions on ... (cites Brent, R. P., The Parallel Evaluation of General Arithmetic Expressions).
  33. Complexity Theory, Lecture 20: Circuits and Parallel Computation. January 7, 2020 (PDF).
  34. Shannon, C. E. A Symbolic Analysis of Relay and Switching Circuits (PDF).
  35. Hartmanis, J., and Stearns, R. E. On the Computational Complexity of Algorithms. Transactions of the American Mathematical Society, Vol. 117.
  36. Fortnow, L. A Short History of Computational Complexity. November 14, 2002 (PDF).
  37. Computational Complexity: A Modern Approach. Princeton University (PDF).
  38. Barrington, D. A. M., Lu, C.-J., Miltersen, P. B., and Skyum, S. Searching Constant Width Mazes Captures the AC^0 Hierarchy. 1998.
  39. Computational Complexity: A Modern Approach. Princeton University, January 8, 2007 (PDF).
  40. Smolensky, R. Algebraic Methods in the Theory of Lower Bounds for Boolean Circuit Complexity. University of California, Berkeley.
  41. Circuit Lower Bounds. Princeton University (PDF).
  42. Boolean Function Complexity (PDF).
  43. Håstad, J. Almost Optimal Lower Bounds for Small Depth Circuits. STOC '86.
  44. The Complexity of Boolean Functions (PDF).
  45. Razborov, A. A., and Rudich, S. Natural Proofs (PDF).