
Circuit complexity

Circuit complexity is a subfield of computational complexity theory that investigates the minimal resources, such as size and depth, required to compute functions using circuits composed of basic logic gates like AND, OR, and NOT. These circuits are modeled as directed acyclic graphs where nodes represent gates with bounded or unbounded fan-in, inputs are variables, and the output computes a target function f: \{0,1\}^n \to \{0,1\}. The size of a circuit is defined as the total number of gates, reflecting the computational effort, while the depth measures the longest path from inputs to output, indicating parallel computation time. Originating in the 1940s with Claude Shannon's foundational work on the complexity of switching circuits, the field has evolved to address fundamental questions in computation, including the separation of complexity classes like P and NP. A key distinction is between non-uniform circuit families, which allow different circuits for each input length n and relate to the class P/poly, and uniform circuits, where the circuit for each n can be generated by a polynomial-time Turing machine, aligning more closely with standard complexity classes like P. Circuit complexity over different complete bases, such as those including NAND gates, is equivalent up to constant factors for functions computed by bounded-fan-in circuits. Notable results include lower bounds for constant-depth, polynomial-size circuits (the class AC^0), such as the parity function requiring superpolynomial size at any constant depth, proven by Furst, Saxe, and Sipser and independently by Ajtai in the early 1980s. Monotone circuit complexity, restricting circuits to AND and OR gates without negation, has yielded superpolynomial lower bounds for functions like the clique problem, with sizes at least 2^{\Omega(n^{1/3})}. These advances separate restricted classes like AC^0 from others, such as TC^0 or ACC^0, using methods like gate elimination, random restrictions, and approximation by low-degree polynomials.
Despite progress on restricted models, major open problems persist, including whether there exists a superpolynomial lower bound for general (unrestricted) circuits computing an NP-complete problem, which would imply P ≠ NP. The best known general lower bounds remain linear, around 4.5n for certain explicit functions, highlighting the field's difficulty in proving strong separations. Circuit complexity continues to influence areas like proof complexity and derandomization, providing non-uniform analogues of uniform models of computation.

Basic Concepts

Boolean Circuits

In computational complexity theory, a Boolean circuit is formally defined as a directed acyclic graph (DAG) consisting of vertices representing either input nodes, gate nodes, or output nodes. Input nodes correspond to the variables of the function being computed, with no incoming edges, while output nodes have no outgoing edges and represent the result. Gate nodes are labeled with Boolean operations, typically from the basis {∧ (AND), ∨ (OR), ¬ (NOT)}, and compute the labeled function applied to the values propagating from their predecessors. Each circuit computes a Boolean function by assigning truth values (0 or 1) to the inputs and evaluating the gates in topological order until reaching the output. The fan-in of a gate refers to the number of incoming edges: NOT gates have fan-in 1, while AND and OR gates conventionally have fan-in 2 in standard models, though extensions allow unbounded fan-in for generality. Fan-out, conversely, is the number of outgoing edges from a gate, which can be unbounded, enabling the reuse of results across multiple subsequent gates, a key feature distinguishing circuits from tree-like formulas. This structure allows circuits to model efficient computations by sharing subcomputations. For decision problems over languages, circuit complexity employs families of circuits {C_n}_{n ≥ 1}, where each C_n is a Boolean circuit with exactly n input nodes (corresponding to binary strings of length n) and a single output node that accepts (outputs 1) if and only if the input string belongs to the language. A language L ⊆ {0,1}^* is recognized by {C_n} if for every n and x ∈ {0,1}^n, C_n(x) = 1 precisely when x ∈ L. This framework captures non-uniform computation, where circuit designs can vary arbitrarily with input size. Simple examples illustrate the model: a NOT circuit consists of a single gate with one input and one output, inverting the input bit; an AND circuit has two input nodes feeding into a single AND gate outputting their conjunction; similarly, an OR circuit outputs the disjunction of two inputs.
More complex functions arise via composition; for instance, the parity function (outputting 1 if the number of 1s in the n-bit input is odd) can be computed by a balanced binary tree of XOR gates (where each XOR is realized as (a ∧ ¬b) ∨ (¬a ∧ b) using AND, OR, and NOT), with n-1 XOR gates in total. A fundamental computational task associated with Boolean circuits is the circuit value problem (CVP), which, given a circuit C, an input assignment to its variables, and a designated gate g, asks for the Boolean value computed at g under that assignment (or equivalently at the output for the standard variant). This problem is solvable in polynomial time by topological evaluation but is P-complete under log-space many-one reductions, making it a canonical hard problem for deterministic polynomial-time computation.
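As a concrete companion to the definitions above, here is a minimal sketch of topological circuit evaluation, the easy polynomial-time direction of CVP. The example circuit, its gate names, and the dict encoding are illustrative choices made for this sketch, not a standard format.

```python
# A Boolean circuit as a DAG, evaluated gate by gate in topological order.
from graphlib import TopologicalSorter

# Each gate: name -> (op, predecessors). Inputs have the pseudo-op "IN".
# Hypothetical example circuit computing (x0 AND x1) OR (NOT x2).
circuit = {
    "x0": ("IN", []), "x1": ("IN", []), "x2": ("IN", []),
    "g1": ("AND", ["x0", "x1"]),
    "g2": ("NOT", ["x2"]),
    "out": ("OR", ["g1", "g2"]),
}

def evaluate(circuit, assignment):
    """Evaluate every gate once; returns a dict of gate -> Boolean value."""
    ops = {"AND": all, "OR": any, "NOT": lambda vs: not vs[0]}
    # TopologicalSorter expects node -> predecessors, matching our encoding.
    order = TopologicalSorter({g: preds for g, (_, preds) in circuit.items()})
    values = {}
    for g in order.static_order():          # predecessors come first
        op, preds = circuit[g]
        if op == "IN":
            values[g] = assignment[g]
        else:
            values[g] = ops[op]([values[p] for p in preds])
    return values

vals = evaluate(circuit, {"x0": True, "x1": False, "x2": False})
print(vals["out"])  # (True AND False) OR (NOT False) -> True
```

Because every gate is visited once and reads only its predecessors, the running time is linear in the number of wires, which is why CVP sits inside polynomial time.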

Size and Depth

In Boolean circuit complexity, the size of a circuit C, denoted |C| or s(C), is the total number of gates in the circuit. This measure quantifies the overall computational resources required, analogous to the number of operations in a sequential algorithm. The depth of a circuit C, denoted d(C), is the length of the longest directed path from an input node to an output node. Depth captures the inherent parallelism of the computation, as it represents the minimum number of sequential steps needed when gates at the same level can be evaluated simultaneously in parallel. For a Boolean function f: \{0,1\}^n \to \{0,1\}, the circuit size s(f) is the minimum |C| over all circuits C that compute f, and the depth d(f) is defined analogously as the minimum d(C). These definitions extend to circuit families \{C_n\} for functions on n inputs, where the size s(f(n)) is the minimum over circuits C_n computing f on n bits, and similarly for depth. A family has size O(s(n)) if |C_n| \leq c \cdot s(n) for some constant c > 0 and all sufficiently large n. Size and depth exhibit fundamental trade-offs that reflect choices between gate economy and computational speed. A circuit with small size but large depth prioritizes minimal gates at the expense of longer sequential execution, as in computations where operations are chained linearly. Conversely, balancing size and depth enables greater parallelism, allowing multiple operations to proceed concurrently, though often requiring more gates overall. For instance, the elementary schoolbook method for adding two n-bit numbers yields a circuit of size O(n) but depth O(n), simulating a ripple-carry propagation, while carry-lookahead addition techniques achieve depth O(\log n) with size O(n \log n). In general, any circuit satisfies d(C) \leq |C|, since the longest path cannot pass through more gates than the circuit contains. These measures define efficient circuits in non-uniform models, where polynomial size, specifically s(f(n)) = O(n^k) for some constant k, indicates feasible computation for large inputs, even if depth is superlogarithmic. Depth further refines efficiency by bounding parallel time, with logarithmic depth enabling highly concurrent evaluations suitable for hardware implementations.
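The size/depth trade-off can be made concrete for parity. The sketch below (reusing an invented dict encoding of circuits) builds n-bit parity from 2-input XOR gates in two ways, as a left-to-right chain and as a balanced tree; both use exactly n-1 gates, but the depths are n-1 versus log2 n.

```python
# Size = number of gates; depth = longest input-to-output path in the DAG.
from functools import lru_cache

def parity_chain(n):
    """n-bit parity as a chain of XOR gates: size n-1, depth n-1."""
    c = {f"x{i}": ("IN", []) for i in range(n)}
    prev = "x0"
    for i in range(1, n):
        c[f"g{i}"] = ("XOR", [prev, f"x{i}"])
        prev = f"g{i}"
    return c, prev

def parity_tree(n):
    """n-bit parity as a balanced XOR tree (n a power of two): depth log2 n."""
    c = {f"x{i}": ("IN", []) for i in range(n)}
    layer, k = [f"x{i}" for i in range(n)], 0
    while len(layer) > 1:
        nxt = []
        for j in range(0, len(layer), 2):
            name = f"t{k}"; k += 1
            c[name] = ("XOR", [layer[j], layer[j + 1]])
            nxt.append(name)
        layer = nxt
    return c, layer[0]

def size(c):
    return sum(1 for op, _ in c.values() if op != "IN")

def depth(c, out):
    @lru_cache(maxsize=None)
    def d(g):
        _, preds = c[g]
        return 0 if not preds else 1 + max(d(p) for p in preds)
    return d(out)

for build in (parity_chain, parity_tree):
    c, out = build(16)
    print(build.__name__, "size:", size(c), "depth:", depth(c, out))
```

For n = 16 both variants report size 15, but the chain has depth 15 while the tree has depth 4, the same function at the same cost in gates yet exponentially less sequential delay.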

Uniformity Conditions

Polynomial-Time Uniformity

Polynomial-time uniformity, denoted U_P, imposes a computational constraint on circuit families to ensure their descriptions are constructible in a feasible manner. A family of circuits \{C_n\}_{n \geq 1} is U_P-uniform if there exists a deterministic Turing machine M that runs in time polynomial in n and, on input 1^n (the unary encoding of n), outputs the encoding E_n of the circuit C_n. Equivalently, M can output the i-th bit of E_n in polynomial time when given 1^n and an index i ranging from 1 to the length of E_n, which is itself polynomial in n since circuit sizes are assumed polynomial. This uniformity condition bridges non-uniform circuit models to uniform complexity classes like P by requiring that circuit constructions do not rely on arbitrary or non-constructive advice. In non-uniform settings, circuit families allow arbitrary descriptions for each n without any bound on how those descriptions are produced, potentially incorporating input-size-specific optimizations that cannot be algorithmically generated efficiently. In contrast, U_P-uniformity ensures the circuit family is generated by a single polynomial-time machine, making it suitable for modeling realistic computational resources where hardware designs must be computable. One key advantage of U_P-uniformity is its equivalence to polynomial-time Turing machines: any language in P can be decided by a U_P-uniform family of polynomial-size circuits, and conversely, such circuit families can be simulated in polynomial time by generating and then evaluating the circuit on the input. For example, the parity function (computing the XOR of n bits), which requires linear circuit size, admits a U_P-uniform family in which a polynomial-time machine constructs a balanced XOR tree by recursively defining connections based on the input size n. This construction highlights how U_P-uniformity captures problems solvable in polynomial time without extraneous non-computable elements.
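To illustrate U_P-uniformity, the following sketch plays the role of the machine M: given 1^n, it prints an encoding of a balanced XOR-tree circuit for parity in time polynomial in n. The line-based "id:op:inputs" encoding format is invented for this example.

```python
# A uniform generator: one fixed procedure producing C_n for every n.
def describe_parity_circuit(unary_input):
    """On input 1^n, return an encoding of a balanced XOR tree on n inputs."""
    assert set(unary_input) <= {"1"}
    n = len(unary_input)
    lines = [f"{i}:IN:" for i in range(n)]   # ids 0..n-1 are the inputs
    layer, k = list(range(n)), n             # fresh gate ids start at n
    while len(layer) > 1:
        nxt = []
        for j in range(0, len(layer) - 1, 2):
            lines.append(f"{k}:XOR:{layer[j]},{layer[j+1]}")
            nxt.append(k); k += 1
        if len(layer) % 2:                   # odd element passes through
            nxt.append(layer[-1])
        layer = nxt
    lines.append(f"OUT:{layer[0]}")
    return "\n".join(lines)

print(describe_parity_circuit("1" * 4))
```

The point is that a single algorithm produces the entire family \{C_n\}: nothing about any particular n is hard-coded, which is exactly the advice-free behavior U_P-uniformity demands.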

Logspace Uniformity

Logspace uniformity, also known as L-uniformity, imposes a stricter condition on circuit families to ensure they can be generated using limited computational resources, specifically logarithmic space. A family of circuits \{C_n\}_{n \geq 1} is logspace-uniform if there exists a deterministic Turing machine that, on input 1^n (the unary encoding of n) and an index i specifying a gate or wire, outputs the relevant description bit of C_n while using only O(\log n) space. Equivalently, auxiliary functions such as \mathsf{SIZE}(n) (number of gates), \mathsf{TYPE}(n, i) (gate type at position i), and \mathsf{INPUT}(n, i, j) (inputs to gate i) must all be computable in O(\log n) space. This condition, often denoted U_L, contrasts with less restrictive uniformities by bounding not just time but space during circuit construction. The primary motivation for logspace uniformity arises in modeling highly parallel computations where circuit descriptions must be efficiently producible without excessive resource demands, aligning with the needs of parallel architectures that prioritize low-depth circuits. It captures "uniform" families that simulate feasible parallel machines, avoiding the unrealistic assumption of arbitrary circuit designs in non-uniform models. This uniformity is central to defining complexity classes like NC, which consist of problems solvable by logspace-uniform families of polynomial-size, polylogarithmic-depth circuits, thereby linking circuit depth to parallel time hierarchies. Representative examples of logspace-uniform circuit families include those for parallel prefix computation, such as the Brent-Kung or Ladner-Fischer networks, which compute prefix sums or products in logarithmic depth and whose gate descriptions can be generated by a logspace machine that specifies wire connections recursively from the input size.
Similarly, Batcher's odd-even mergesort network provides a logspace-uniform construction for sorting n elements using a fixed, recursive merging pattern that a logspace transducer can output gate by gate. Logspace-uniform families form a proper subclass of P-uniform circuit families: any logspace-computable description is also polynomial-time computable, but the converse does not hold, as there exist P-uniform families whose construction requires more than logarithmic space, such as those encoding solutions to space-bounded problems during generation. This stricter space bound makes logspace uniformity particularly suited to capturing efficient parallelism, unlike the weaker polynomial-time uniformity, which permits polynomial space during construction.
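The direct-access flavor of logspace uniformity can be illustrated for a complete XOR tree. In the sketch below, \mathsf{TYPE} and \mathsf{INPUT} answer description queries purely by arithmetic on O(\log n)-bit indices; the heap numbering is an assumption of this sketch, and the recursive evaluator is included only to check that the description is consistent (evaluation itself is not a logspace computation).

```python
# Direct-access description of a complete XOR tree on n = 2^m inputs,
# using heap numbering: internal gates 1..n-1, leaves n..2n-1, and
# gate i has children 2i and 2i+1. Each query needs only O(log n) bits.
def TYPE(n, i):
    """Gate type at heap index i."""
    return "IN" if i >= n else "XOR"

def INPUT(n, i, j):
    """j-th input (j in {0,1}) of internal gate i."""
    return 2 * i + j

def evaluate(n, x, i=1):
    """Evaluate gate i on input bits x (length n) straight from the queries."""
    if TYPE(n, i) == "IN":
        return x[i - n]                      # leaf i reads input bit i-n
    return evaluate(n, x, INPUT(n, i, 0)) ^ evaluate(n, x, INPUT(n, i, 1))

bits = [1, 0, 1, 1, 0, 0, 1, 0]              # four 1s, so parity is 0
print(evaluate(8, bits))                     # prints 0
```

No table of gates is ever materialized: the circuit exists only through its query functions, which is precisely the form the \mathsf{SIZE}/\mathsf{TYPE}/\mathsf{INPUT} definition above asks to be logspace-computable.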

Complexity Classes

Non-Uniform Classes

Non-uniform complexity classes in circuit complexity dispense with uniformity conditions, allowing circuit families where each circuit C_n for input length n can be arbitrarily designed, as long as the size remains polynomial in n. This models computation with "advice" that depends on the input length but not on the specific input, enabling the capture of problems beyond those solvable in uniform polynomial time. The central non-uniform class is \mathbf{P}/\text{poly}, defined as the set of languages L \subseteq \{0,1\}^* such that there exists a polynomial p where for every n, there is a Boolean circuit C_n of size at most p(n) that decides L on all inputs of length n. Formally, L \in \mathbf{P}/\text{poly} \iff \exists \text{ polynomial } p \text{ s.t. } \forall n, \exists C_n \text{ with } |C_n| \leq p(n) \text{ deciding } L \cap \{0,1\}^n. This class includes all languages in \mathbf{P}, since any polynomial-time Turing machine can be simulated by a polynomial-size circuit family, but \mathbf{P}/\text{poly} is strictly larger, containing undecidable languages like the unary halting problem by hard-coding the answer for each length n. A key example of non-uniformity in \mathbf{P}/\text{poly} is the use of hard-coded lookup tables: for a unary language there is only one string of each length, so each C_n can simply output a precomputed bit, and more generally a circuit family may embed answers that no single algorithm is known to generate. Since \mathbf{P}/\text{poly} contains undecidable languages, \mathbf{P} \neq \mathbf{P}/\text{poly}; the central open question is instead whether \mathbf{NP} \subseteq \mathbf{P}/\text{poly}. By the Karp-Lipton theorem, if \mathbf{NP} \subseteq \mathbf{P}/\text{poly}, the polynomial hierarchy collapses to its second level \Sigma_2^p.
Derandomization implications arise because \mathbf{BPP} \subseteq \mathbf{P}/\text{poly} (Adleman's theorem), meaning any bounded-error probabilistic polynomial-time language has polynomial-size circuits; conversely, known approaches to derandomizing \mathbf{BPP} to \mathbf{P} require proving superpolynomial circuit lower bounds for some explicit problem, connecting non-uniform classes to barriers in randomized computation. Other notable non-uniform classes include \mathbf{AC}^0, comprising constant-depth, polynomial-size circuits with unbounded fan-in AND, OR, and NOT gates, which capture highly parallel but shallow computations and are properly contained in \mathbf{P}/\text{poly}. Similarly, \mathbf{TC}^0 consists of constant-depth, polynomial-size circuits built from majority gates (outputting 1 if at least half the inputs are 1) along with AND, OR, and NOT, offering a model for threshold-based decisions. These classes illustrate the hierarchy of non-uniform power, from shallow parallel circuits up to general polynomial-size resources.
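A minimal sketch of advice-based decision, with hypothetical advice strings: each input length n gets a hard-coded truth table (of size 2^n in general, which is why this works only non-uniformly), and the "circuit" C_n is just a table lookup.

```python
# Non-uniform advice as one lookup table per input length.
# The tables are made-up advice strings, not derived from a real problem;
# advice[n][v] is the answer for the length-n input whose value is v.
advice = {
    1: "01",        # on 1-bit inputs: accept exactly "1"
    2: "0110",      # on 2-bit inputs: accept "01" and "10" (i.e., XOR)
}

def decide(x, advice):
    """Decide membership of x using the table for |x| (the 'circuit' C_n)."""
    table = advice[len(x)]
    return table[int(x, 2)] == "1"

print(decide("10", advice))  # True: position 2 of "0110"
```

Nothing ties the tables for different lengths together, mirroring how a \mathbf{P}/\text{poly} family may decide each length by an unrelated, even uncomputable, rule; for a unary language the table shrinks to a single precomputed bit per length.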

Uniform Classes

Uniform circuit classes impose restrictions on the construction of circuit families to ensure that the circuits can be efficiently described by an algorithm, typically under logspace uniformity, where a logarithmic-space transducer can output the circuit description. The class NC^k, for a fixed k ≥ 1, consists of decision problems solvable by logspace-uniform families of circuits of polynomial size, depth O(\log^k n), and fan-in at most 2. The class NC is defined as the union over all k of NC^k, capturing problems that admit efficient parallel algorithms in the sense of polylogarithmic parallel time on a bounded-fan-in model. Related uniform circuit classes extend NC by incorporating more powerful gates to model different parallel architectures, with the inclusions NC^k ⊆ AC^k ⊆ TC^k ⊆ NC^{k+1}. The class AC^k comprises logspace-uniform families of polynomial-size circuits of depth O(\log^k n) built from unbounded fan-in AND and OR gates (with NOT gates only at the inputs). Similarly, TC^k allows threshold gates, which output 1 if at least half of the inputs are 1, in addition to AND, OR, and NOT, again with polynomial size and depth O(\log^k n) under logspace uniformity. These classes highlight trade-offs between gate power and depth, and the unions over all k of AC^k and of TC^k coincide with NC as a whole. Representative examples illustrate the power of these classes. Integer multiplication of two n-bit numbers can be performed by logspace-uniform circuits of polynomial size and depth O(\log^2 n), placing it in NC^2. Sorting n integers is achievable using the AKS sorting network, which yields logspace-uniform circuits of polynomial size and depth O(\log n), though simpler constructions such as Batcher's networks, with depth O(\log^2 n), are often preferred in practice. Formally, for inputs of length n, a language L is in NC^k if there exists a logspace-uniform family {C_n} such that |C_n| is polynomial in n, the depth d(n) = O(\log^k n), and C_n accepts exactly the strings in L of length n.
A key property is that NC \subseteq P, as any polynomial-size uniform circuit family can be simulated by a deterministic Turing machine in polynomial time by generating the circuit and evaluating it layer by layer. This underscores the parallel-sequential trade-off: problems in NC can be solved sequentially in polynomial time but offer the potential for massive parallelism with polylogarithmic depth, enabling efficient computation on models like the PRAM under logspace uniformity.
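As a small example of the NC style of computation, the sketch below computes prefix sums in O(\log n) parallel rounds (a Hillis-Steele scan, one of several standard schemes); a sequential machine simulates the whole computation in polynomial time simply by running the rounds one after another, exactly as in the layer-by-layer simulation.

```python
# Prefix sums in ceil(log2 n) rounds. Within a round, every position
# could be updated by its own processor; here the list comprehension
# computes all new values from the old ones before any write happens,
# mimicking synchronous parallel execution.
def prefix_sums(xs):
    a, shift = list(xs), 1
    while shift < len(a):
        a = [a[i] + (a[i - shift] if i >= shift else 0)
             for i in range(len(a))]
        shift *= 2                      # doubling stride: log2 n rounds
    return a

print(prefix_sums([1, 2, 3, 4]))        # [1, 3, 6, 10]
```

The depth of the corresponding circuit is the number of rounds, O(\log n), while its size is O(n \log n) gates, comfortably within the polynomial-size, polylogarithmic-depth regime that defines NC.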

Circuit Lower Bounds

General Techniques

One of the foundational techniques for establishing lower bounds on Boolean circuit size is the counting argument, which compares the total number of Boolean functions on n inputs, 2^{2^n}, to the number of distinct functions computable by circuits of a given size s. This approach shows that most functions require large circuits, as small circuits cannot cover all possibilities. Specifically, Shannon proved that almost every Boolean function requires circuit size \Omega(2^n / n), bounding the number of circuits of size s over n variables by at most ((n+3)s^2)^s, so that for s = o(2^n / n) the number of such circuits is far smaller than 2^{2^n}. Gate elimination is another key method, which simplifies a circuit by identifying gates that compute constant values or depend on a single input after partial assignments to the variables. The process iteratively eliminates such gates, reducing the circuit's size while preserving the computation on the remaining free variables; each elimination step typically removes a constant number of gates, yielding a linear lower bound when repeated sufficiently many times. This technique relies on the structure of the circuit and the function's sensitivity to input changes, often combined with restrictions that force many gates into eliminable states. For instance, in circuits over the basis of AND, OR, and NOT with fan-in 2, gate elimination establishes lower bounds of the form cn for explicit functions whose hardness survives partial fixings of the inputs. Random restrictions provide a probabilistic tool for proving lower bounds by randomly fixing a fraction of the input variables to 0 or 1, which simplifies the circuit while maintaining the hardness of the restricted function with high probability.
Under a random restriction leaving t free variables (where t \ll n), many gates become constant or depend on fewer inputs, effectively reducing the circuit's depth or width; this can be iterated to collapse the circuit to a simpler form, such as a shallow decision tree or constant-depth structure, against which the function's complexity can be argued. The method's power lies in concentration bounds ensuring that the restricted function remains non-trivial, often combined with union bounds over circuit topologies to show that small circuits fail to compute it. Distinguishing formula size from circuit size is crucial: formulas impose a tree-like structure without sharing of subcomputations, making them a stricter model that often admits stronger lower bounds via combinatorial restrictions. Lower bounds for formula size, such as the Riordan-Shannon bound of \Omega(2^n / \log n) for almost all functions, follow similar counting arguments but with tighter estimates on the number of tree-structured computations, bounded by roughly (4n)^s for size s. More advanced combinatorial methods, including applications of extremal set theory (such as the sunflower lemma) to analyze the structure of a function's subfunctions, provide refined bounds in specific cases. These techniques highlight that while circuits can reuse subcircuits to achieve smaller size, formulas may require substantially larger size for the same function because of their acyclic, non-sharing nature. Despite their generality, these techniques have limitations: counting arguments establish lower bounds only for random or most functions, not explicit ones of interest, while gate elimination and random restrictions typically yield only linear bounds for general circuits or superpolynomial bounds in restricted models, falling short of separating major classes.
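Shannon's counting bound can be checked numerically. The sketch below uses the circuit-count estimate ((n+3)s^2)^s quoted above with a slightly smaller budget s = 2^n/(3n); the constant 3 is slack chosen for this illustration so that the comparison already holds at n = 10, where the asymptotic behavior is visible with exact integer arithmetic.

```python
# Counting argument: too few small circuits to cover all Boolean functions.
def circuits_upper_bound(n, s):
    """Upper bound on the number of circuits of size s over n variables."""
    return ((n + 3) * s * s) ** s

n = 10
s = 2 ** n // (3 * n)                  # budget s = 34 gates
functions = 2 ** (2 ** n)              # 2^1024 distinct functions on 10 bits
print(circuits_upper_bound(n, s) < functions)  # True: circuits run out first
```

Since each circuit computes one function, fewer circuits than functions means some function on 10 bits needs more than 34 gates; letting n grow turns this into the \Omega(2^n / n) bound for almost all functions.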

Specific Results

One of the landmark results in circuit complexity is the separation of the parity function from the class AC^0. The parity function, which computes whether the number of 1s in an n-bit input is even or odd, cannot be computed by constant-depth, polynomial-size circuits. This was first shown using random restrictions by Furst, Saxe, and Sipser, who established a superpolynomial lower bound on the size of constant-depth circuits computing parity; Ajtai independently proved a superpolynomial lower bound in 1983 using related techniques. Håstad's switching lemma provides a quantitative tool for analyzing the effect of random restrictions on constant-depth circuits, enabling tighter bounds. The lemma states that a width-w DNF formula, hit by a random restriction that leaves each variable free with probability p, fails to simplify to a decision tree of depth at most t with probability at most (5pw)^t. Applying this iteratively to each layer of a depth-d circuit yields an almost-optimal lower bound for parity in AC^0: any depth-d circuit computing parity requires size 2^{\Omega(n^{1/(d-1)})}. Another key explicit separation involves the k-clique problem, which determines whether a graph on n vertices contains a clique of size k. Razborov proved in 1985 that monotone circuits computing k-clique require superpolynomial size, using the approximation method, which bounds the power of a circuit by combinatorial approximations of its monotone subcomputations; Alon and Boppana subsequently sharpened the bound to exponential for larger clique sizes. Going beyond the monotone setting, Rossman showed in 2008 that k-clique requires AC^0 circuits of size \Omega(n^{k/4}) for any constant k. A significant conditional result concerns non-uniform simulations of NEXP: if NEXP \subseteq \mathbf{P}/\text{poly}, strong collapses follow, extending the Karp-Lipton theorem via padding arguments.
Using the self-reducibility of SAT, Karp and Lipton showed in 1980 that if NP \subseteq \mathbf{P}/\text{poly}, then the polynomial hierarchy collapses to its second level; padding arguments transfer such collapses upward, so NEXP \subseteq \mathbf{P}/\text{poly} would have analogous consequences for exponential-time classes. For TC^0, which augments AC^0 with majority gates, unconditional separations remain modest: exponential lower bounds are known only in restricted settings, such as for small-depth threshold circuits, and no superpolynomial lower bound is known for general TC^0 against an explicit function. In 2024, new results established near-maximum circuit lower bounds of order 2^n / n for exponential-time Arthur-Merlin classes with subexponential advice. Despite progress on these Boolean separations, the algebraic analog, whether VP equals VNP, remains unresolved, with no polynomial-size arithmetic circuits known for the permanent despite Valiant's completeness result placing it in VNP.
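The quantitative heart of the parity bound is the parameter setting in the iterated switching-lemma argument; the following sketch is the standard calculation, with constants suppressed and stated only at the level of the exponent.

```latex
% One switching round: restrict with p = 1/(10w). Each bottom-level
% width-w CNF/DNF fails to become a depth-t decision tree with probability
\Pr[\text{no depth-}t\text{ tree}] \;\le\; (5pw)^t \;=\; 2^{-t},
% so a union bound over the s bottom gates succeeds whenever s < 2^t.
% Each successful round merges two adjacent levels (depth d -> d-1)
% while keeping about a p-fraction of the variables free; after d-2
% rounds roughly
n' \;\approx\; n \cdot p^{\,d-2}
% variables remain and the whole circuit has collapsed to one depth-t
% decision tree. A depth-t tree cannot compute parity on n' > t free
% variables, so choosing t \approx n^{1/(d-1)} forces
\text{size}(C) \;\ge\; 2^{\Omega(n^{1/(d-1)})}
% for every depth-d circuit computing parity.
```

This is why the bound degrades as depth grows: each extra level costs one restriction round, shrinking the surviving variable budget by another polynomial factor.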

Monotone Circuits

Monotone circuits are a restricted model of Boolean circuits that use only AND and OR gates, with no NOT gates. They compute monotone Boolean functions, which satisfy the property that if x \leq y (bitwise), then f(x) \leq f(y). These circuits are particularly relevant for studying combinatorial problems like the clique function, as many natural problems in graph theory are monotone. The study of monotone circuit complexity aims to determine the minimal size required to compute such functions. Unlike general circuits, where strong lower bounds remain elusive, monotone circuits have yielded significant results. In 1985, Alexander Razborov proved the first superpolynomial lower bound for monotone circuits, for the clique function; strengthenings by Andreev and by Alon and Boppana gave exponential lower bounds of the form 2^{\Omega(n^{1/3})} for detecting cliques of suitable size. Other notable lower bounds include \Omega(n \log n) for Boolean sorting and merging functions, as well as linear bounds with small explicit constants for simple functions. These results highlight the gap between monotone and general circuit complexity and have implications for separating complexity classes under monotone restrictions. Monotone circuit complexity also connects to proof complexity and communication complexity.
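The defining monotonicity property can be checked by brute force for small n; the sketch below tests two illustrative 3-bit functions, majority (monotone, hence computable with AND/OR alone) and parity (not monotone, so it necessarily needs negations).

```python
# Check f(x) <= f(y) for every bitwise-comparable pair x <= y.
from itertools import product

def is_monotone(f, n):
    """Brute-force monotonicity test over all 2^n * 2^n input pairs."""
    pts = list(product([0, 1], repeat=n))
    return all(f(x) <= f(y)
               for x in pts for y in pts
               if all(a <= b for a, b in zip(x, y)))

maj3 = lambda x: int(sum(x) >= 2)   # majority of 3 bits: monotone
par3 = lambda x: sum(x) % 2         # parity of 3 bits: flips on 100 -> 110

print(is_monotone(maj3, 3), is_monotone(par3, 3))  # True False
```

The failing pair for parity (for example x = 100 with f = 1 and y = 110 with f = 0) is exactly the kind of witness showing that no AND/OR-only circuit can compute it.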

Historical Development

Circuit complexity originated in the late 1940s with Claude Shannon's seminal 1949 work on the synthesis of switching circuits, where he demonstrated that almost all Boolean functions require circuits of exponential size, albeit through a non-constructive counting argument. This laid the foundation for analyzing computational resources in terms of circuit size and depth. In the 1970s, progress included work of Meyer and Stockmeyer establishing enormous lower bounds on the circuit complexity of certain logical decision problems, such as the weak monadic second-order theory of one successor. The 1980s marked a surge in results on restricted circuit classes. In 1983, Miklós Ajtai proved that the parity function cannot be computed by polynomial-size constant-depth circuits (AC^0), a breakthrough independently obtained by Merrick Furst, James Saxe, and Michael Sipser in 1984 using random restrictions. Alexander Razborov, in 1985, showed that the clique function requires superpolynomial-size monotone circuits, providing the first explicit superpolynomial lower bound for a natural combinatorial problem. Subsequent developments included Johan Håstad's near-optimal bounds for parity in AC^0 circuits via the switching lemma, and Razborov's 1987 result that majority requires exponential size for constant-depth circuits augmented with parity gates. In 1994, Razborov and Steven Rudich introduced the natural proofs barrier, explaining why common techniques for proving lower bounds might fail for general circuits due to implications for cryptographic pseudorandomness. These advances, while significant for restricted models, have not yet yielded superpolynomial lower bounds for unrestricted circuits, leaving major questions open as of 2023.

Recent Advances and Applications

Constant-Depth Circuits

In the 2010s, Ryan Williams made significant progress on lower bounds for constant-depth circuits with modular counting gates, showing that NEXP does not have non-uniform ACC^0 circuits of polynomial size. Notably, the result covers MOD-m gates for every constant modulus m, including composites, for which the earlier polynomial-approximation techniques (which require a prime modulus) do not apply. The approach circumvents the natural proofs barrier by leveraging faster-than-exhaustive algorithms for satisfiability and #SAT on ACC^0 circuits, demonstrating that even modest improvements over brute-force search for satisfiability imply superpolynomial lower bounds. In the 2020s, further progress has been made against low-depth threshold and modular classes: superlinear size lower bounds have been proven for restricted TC^0 circuits computing explicit hard functions, building on gate elimination methods, and lower bounds for E^NP against generalizations of AC^0 have been obtained via switching lemmas lifted from the classical setting without parameter loss. Key techniques in these advances include gate elimination, which iteratively removes gates while preserving functionality, and variants of the Razborov-Smolensky polynomial method, which bound the degree of polynomials over finite fields that approximate modular gates. These tools show that constant-depth circuits cannot be approximated by low-degree polynomials for certain explicit functions.
An important open question remains whether TC^0 contains L, the class of languages accepted by logarithmic-space Turing machines, as current lower bounds fall short of separating these classes.

Connections to Other Fields

Circuit complexity has profound implications for learning theory, particularly in establishing limitations on the learnability of functions. Lower bounds for constant-depth circuits, such as those for AC^0, show that functions like parity cannot be approximated by small AC^0 circuits, so learning algorithms whose hypothesis classes are restricted to AC^0 cannot learn such functions efficiently from random examples. This connection is highlighted in work showing that AC^0 lower bounds limit the expressive power of learnable circuit classes, with extensions beyond AC^0 requiring augmented models like majority gates at the output. Furthermore, techniques from circuit lower bounds, including natural proofs, have been adapted to learning: natural properties against a circuit class yield learning algorithms for that class, while their absence indicates barriers to learning more powerful circuit classes, including in the agnostic regime. In cryptography, circuit complexity underpins the construction of pseudorandom generators that fool non-uniform classes like \mathbf{P}/\text{poly}. The Nisan-Wigderson generator builds such PRGs from functions (for instance in E) that require large circuits, enabling derandomization of probabilistic algorithms while preserving security against polynomial-size circuits; in particular, the Impagliazzo-Wigderson theorem shows that if some language in E requires circuits of size 2^{\Omega(n)}, then BPP = P. One-way functions, by contrast, are not known to follow from circuit lower bounds alone, although their existence implies hardness against polynomial-size circuits and underlies primitives like encryption and signatures. Circuit complexity also intersects with proof complexity, where the simulation power of proof systems mirrors circuit classes: Frege systems, whose proof lines are polynomial-size formulas, correspond in power to formula-based computation (NC^1), while extended Frege systems, whose lines may be circuits, correspond to polynomial-size circuits.
For monotone variants, Razborov's superpolynomial lower bounds on monotone circuits for the clique function extend to proof systems via feasible interpolation, showing that systems such as resolution and cutting planes require superpolynomial-size proofs of unsatisfiable formulas encoding the non-existence of large cliques. Recent developments have also leveraged circuit lower bounds to establish separations in quantum learning: efficient quantum algorithms for tasks such as learning parities with noise imply classical hardness for related problems, aiding proofs of quantum-classical separations in learning models like shadow tomography. In fine-grained cryptography, hardness assumptions about computation beyond NC^1 (log-depth parallel circuits) have enabled constructions of primitives secure against low-depth adversaries, including one-way functions and garbled circuits with time-space trade-offs tied to depth restrictions.

  14. [14]
    [PDF] 17.1 Learning Constant Depth Circuits with MAJ gates
    Take a circuit in AC0 (the class of polynomial-sized constant depth circuits) and stick a single majority (“MAJ”) gate at the top. A MAJ gate returns true when ...
  15. [15]
    Constant Depth Reducibility | SIAM Journal on Computing
    Constant Depth Reducibility. Authors: Ashok K. Chandra, Larry Stockmeyer, and Uzi VishkinAuthors Info & Affiliations. https://doi.org/10.1137/0213028 · PDF.
  16. [16]
    [PDF] Circuit Lower Bounds: -Shannon's Counting Argument
    circuits and formulas: • Shannon's size s = Ω(2n/n) lower bound for circuits. • Shannon's size s = Ω(2n/log n) lower bound for formulas. – Other Examples of ...
  17. [17]
    [PDF] Lecture 2: Gate elimination and formula lower bounds
    Jan 17, 2019 · Shannon's lower bound: Almost every n-ary boolean function has circuit size > 2n/n. A corollary of the Lupanov and Shannon bounds is the ...
  18. [18]
    [PDF] Gate Elimination: Circuit Size Lower Bounds and #SAT Upper Bounds
    Jan 14, 2019 · Abstract. Most of the known lower bounds for binary Boolean circuits with unrestricted depth are proved by the gate elimination method.Missing: combinatorial | Show results with:combinatorial
  19. [19]
    [PDF] Circuit lowerbounds - cs.Princeton
    A Boolean circuit is monotone if it contains only AND and OR gates, and no. NOT gates. Such a circuit can only compute monotone functions, defined as follows.
  20. [20]
    [PDF] Lower bounds based on restrictions
    • Best known formula size lower bound n1.5 – o(1). 1961. [Subbotovskaya] n2 ... • More sophisticated gate elimination arguments give the best lower ...
  21. [21]
    [PDF] PARITY /∈ AC Contents 1 Introduction - People | MIT CSAIL
    Imagine performing the following “experiment” on the PARITY function, and separately on an. AC0 circuit. Pick an input variable xi at random, and fix all other ...
  22. [22]
    [PDF] Almost Optimal Lower Bounds for Small Depth Circuits Warning
    This paper improves lower bounds for small depth circuits, proving almost optimal bounds for parity circuits, and shows functions needing exponential size when ...
  23. [23]
    [PDF] lower bounds for тне monotone complexity of some boolean functions
    We now turn to the construction of con8tructive 8equences consi8ting of monotone. Boolean function8 of sufficiently great monotone complexity. The fir8t example ...
  24. [24]
    [PDF] On the Constant-Depth Complexity of k-Clique
    (Non-uniform) AC0 is the class of languages L⊆{0, 1}∗ decided by a sequence of circuits (one for each input size n) of constant depth and size polynomial in n.Missing: 2008 | Show results with:2008
  25. [25]
    [PDF] Boolean Circuits (cont.) 1 Recap 2 Turing Machines with Advice
    May 1, 2013 · Question 7 Is NP ⊆ P/poly? D. This is an open question. We believe the answer is no. 3 The Karp-Lipton Theorem.
  26. [26]
    [PDF] On TC Lower Bounds for the Permanent
    For our definition of circuit size (string length of the circuit's description), the maximal circuit complexity is at least 2n. Theorem 3. Let h(n) be a time- ...
  27. [27]
    [PDF] Boundaries of VP and VNP - arXiv
    May 10, 2016 · One fundamental question in the context of the geometric complexity theory approach to the. VP vs. VNP conjecture is whether VP = VP, where VP ...
  28. [28]
    Nonuniform ACC Circuit Lower Bounds | Journal of the ACM
    We prove the following. ---NEXP, the class of languages accepted in nondeterministic exponential time, does not have nonuniform ACC circuits of polynomial size.
  29. [29]
    Lower Bounds Against Sparse Symmetric Functions of ACC Circuits
    Jan 21, 2020 · Lower Bounds Against Sparse Symmetric Functions of ACC Circuits: Expanding the Reach of \#SAT Algorithms. Authors:Nikhil Vyas, Ryan Williams.Missing: post- 2016
  30. [30]
    Improved Circuit Lower Bounds and Quantum-Classical Separations
    Aug 29, 2024 · His main result was that switching-lemma lower bounds for AC^0 lift to GC^0 with no loss in parameters, even though GC^0 requires exponential- ...Missing: TC0 | Show results with:TC0
  31. [31]
    [1606.05050] Proof Complexity Lower Bounds from Algebraic Circuit ...
    Jun 16, 2016 · We give two general methods of converting certain algebraic lower bounds into proof complexity ones. Our methods require stronger notions of lower bounds.Missing: Kane 2019 TC0
  32. [32]
    Learnability beyond AC 0 - ACM Digital Library
    This is the first algorithm for learning a more expressive circuit class than the class AC0 of constant-depth polynomial-size circuits, a class which was shown ...
  33. [33]
    Agnostic Learning from Tolerant Natural Proofs - DROPS
    Aug 11, 2017 · We generalize the learning algorithms from natural properties framework of [CIKK16] to get agnostic learning algorithms from natural properties with extra ...
  34. [34]
    Hardness vs randomness - ScienceDirect.com
    We present a simple new construction of a pseudorandom bit generator. It stretches a short string of truly random bits into a long string that looks random ...
  35. [35]
    P = BPP if E requires exponential circuits - ACM Digital Library
    P = BPP if E requires exponential circuits: derandomizing the XOR lemma. Authors: Russell Impagliazzo.
  36. [36]
    [PDF] Proof Complexity - Cornell: Computer Science
    is computed by polynomial size circuits with structural property . In a similar manner, we define -Frege to be the % -equivalence class of Frege-style proof.
  37. [37]
    [2012.01920] Quantum learning algorithms imply circuit lower bounds
    Abstract:We establish the first general connection between the design of quantum algorithms and circuit lower bounds.
  38. [38]
    [PDF] Data Structures Meet Cryptography: 3SUM with Preprocessing
    These fine-grained reductions have the nice corollary that the 3SUM conjecture immediately implies a eΩ(N2) lower bound for all 3SUM- hard problems. In this ...