
Parity function

In Boolean function analysis and computational complexity, the parity function is a Boolean function that takes an input from \{0,1\}^n and outputs 1 if the number of 1s in the vector is odd, and 0 otherwise; equivalently, it computes the exclusive or (XOR) of all input bits. This function is symmetric, meaning its output depends solely on the Hamming weight (number of 1s) of the input rather than the positions of those 1s, and it generalizes to checking the parity of any subset of bits.

The parity function plays a pivotal role in computational complexity theory, serving as a canonical hard example for establishing lower bounds on the resources required to compute certain problems. It exhibits high sensitivity to input changes—flipping any single bit alters the output—and requires querying all n inputs in the worst case for deterministic algorithms, yielding a decision tree complexity of exactly n. In the Fourier analysis of Boolean functions over \{-1,1\}^n, the parity function has its entire Fourier mass concentrated on the highest-degree monomial \chi_{[n]}(x) = \prod_{i=1}^n x_i, making it the canonical example of a function of maximal Fourier degree.

A landmark result in circuit complexity is that the parity function cannot be computed by constant-depth, polynomial-size circuits (the class AC^0): any circuit of fixed depth d computing it requires size at least \exp(n^{\Omega(1/(d-1))}). This was independently proven by Ajtai (1983) using combinatorial methods on finite structures and by Furst, Saxe, and Sipser (1984) via random restrictions and connections to the polynomial-time hierarchy, kickstarting the field of AC^0 lower bounds and demonstrating the limitations of constant-depth parallel computation models. Beyond theory, parity functions underpin practical applications in error detection (e.g., parity bits in data transmission) and randomized algorithms, where they model unbiased coin flips or linear functions modulo 2.

Fundamentals

Definition

The parity function on n bits, denoted PARITY_n(x), is a Boolean function that takes a binary input vector x \in \{0,1\}^n and outputs 1 if the number of 1s in x is odd, and 0 if it is even. Equivalently, it computes the sum of the input bits modulo 2. Mathematically, PARITY_n(x) can be expressed as the XOR (exclusive or) operation over all bits: PARITY_n(x) = \bigoplus_{i=1}^n x_i, where \oplus denotes addition modulo 2; under the encoding x_i \mapsto 1 - 2x_i \in \{-1,1\}, the \pm 1-valued parity is the product \prod_{i=1}^n (1 - 2x_i). This formulation highlights its direct connection to modulo 2 arithmetic in the vector space \mathbb{F}_2^n. For illustration, consider n = 3. The truth table below lists all 2^3 = 8 possible inputs and their corresponding outputs:
Input (x_1, x_2, x_3) | Number of 1s | Output PARITY_3(x)
(0, 0, 0)             | 0            | 0
(1, 0, 0)             | 1            | 1
(0, 1, 0)             | 1            | 1
(0, 0, 1)             | 1            | 1
(1, 1, 0)             | 2            | 0
(1, 0, 1)             | 2            | 0
(0, 1, 1)             | 2            | 0
(1, 1, 1)             | 3            | 1
This example demonstrates how the function depends solely on the parity of the Hamming weight of x. Although linear over \mathbb{F}_2, the parity function is maximally non-linear over the reals in the sense that, for n \geq 2, it has algebraic degree n as a real polynomial and cannot be expressed as an affine combination of individual input variables, distinguishing it from degree-1 functions.
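
As a quick illustration of the definition, the following Python snippet (a minimal sketch for this article, not library code; the helper name parity is arbitrary) enumerates all inputs for n = 3 and confirms that the XOR of the bits agrees with the sum modulo 2, reproducing the truth table above.

from itertools import product

def parity(bits):
    # XOR-accumulate the bits; equivalent to sum(bits) % 2.
    result = 0
    for b in bits:
        result ^= b
    return result

for x in product([0, 1], repeat=3):
    assert parity(x) == sum(x) % 2
    print(x, sum(x), parity(x))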

Basic Properties

The parity function PARITY_n: \{0,1\}^n \to \{0,1\}, defined as PARITY_n(x) = \sum_{i=1}^n x_i \bmod 2, is linear over the field \mathbb{F}_2, meaning it satisfies additivity: PARITY_n(x + y) = PARITY_n(x) + PARITY_n(y) for all x, y \in \{0,1\}^n, where addition is componentwise modulo 2. This linearity positions PARITY_n as an element of the dual space of \{0,1\}^n viewed as a vector space over \mathbb{F}_2, and more generally, the parity functions on subsets of coordinates form an orthonormal basis (the Fourier basis) for the space of all real-valued functions on the cube.

In terms of sensitivity and influence, PARITY_n exhibits maximal values among n-variable functions: the influence of each coordinate i is 1, as flipping x_i always changes the output, leading to a total influence of n. This uniform influence underscores its role as a highly noise-sensitive function, where every input bit is pivotal. The Fourier (Walsh) expansion of PARITY_n, when viewed as a function to \{-1,1\} via \chi_{[n]}(x) = (-1)^{\sum_i x_i}, consists solely of the term corresponding to the all-1s character, with coefficient \hat{f}([n]) = 1 and all others zero, concentrating its spectrum at degree n. This pure high-degree structure distinguishes it in the Fourier analysis of Boolean functions.

PARITY_n is balanced, outputting 0 and 1 each on exactly 2^{n-1} inputs, and, up to negation, it is the only n-variable function whose Fourier spectrum lies entirely at degree n. Furthermore, it is invariant under any permutation of its inputs, as its value depends only on the Hamming weight of the input, rendering it fully symmetric. This symmetry, combined with its linearity, finds application as a parity check in error-correcting codes.
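
The Fourier claim above can be checked numerically by brute force. The sketch below (illustrative only; the helpers chi and f are named for this example) computes the Walsh–Fourier coefficients \hat{f}(S) = 2^{-n} \sum_x f(x) \chi_S(x) of the \pm 1-valued parity on n = 4 bits and confirms that the only nonzero coefficient sits at S = [n].

from itertools import product

n = 4

def chi(S, x):
    # Character chi_S(x) = (-1)^{sum of x_i over i in S}.
    return (-1) ** sum(x[i] for i in S)

def f(x):
    # Parity of x mapped to {-1, +1}: +1 for even weight, -1 for odd.
    return (-1) ** (sum(x) % 2)

inputs = list(product([0, 1], repeat=n))
for indicator in product([0, 1], repeat=n):      # indicator vectors of subsets S
    S = [i for i in range(n) if indicator[i]]
    coeff = sum(f(x) * chi(S, x) for x in inputs) / 2 ** n
    if coeff != 0:
        print("nonzero coefficient at S =", S, "value", coeff)
# Prints only S = [0, 1, 2, 3] with value 1.0.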

Computation and Algorithms

Finite Algorithms

The parity function of a finite sequence of n bits can be computed iteratively using the XOR operation, which is associative and commutative, allowing sequential accumulation of bits modulo 2. The algorithm initializes a result variable to 0 and iteratively XORs each input bit into the result. This approach runs in O(n) time and O(1) space, making it suitable for sequential processing on standard hardware. Pseudocode for the iterative XOR is as follows:
function parity(input_bits: array of bits) -> bit:
    result = 0
    for each bit in input_bits:
        result = result XOR bit
    return result
This method is widely implemented in programming languages supporting bitwise operations, such as C and C++, where large n (e.g., multi-word integers) can be handled by processing bits in chunks via shifts and masks to avoid explicit loops over individual bits. For instance, on 64-bit architectures, folding the word with successive XOR shifts (e.g., v \oplus (v \gg 32), then v \oplus (v \gg 16), etc.) reduces the effective width logarithmically before a final single-bit check.

In parallel models such as the PRAM, the parity can be computed using a tree-based reduction of XOR operations, where bits are paired and XORed level by level until a single result remains. This structure forms a binary tree of depth \log n, enabling O(\log n) time with O(n) processors, as each level halves the number of active operands. Such reductions are standard in parallel algorithms for associative operations like XOR.

Hardware implementations of the parity function often use XOR trees integrated into arithmetic logic units (ALUs) for generating parity bits or flags. In adders and ALUs, parity computation corresponds to carry-less addition modulo 2, where partial sums are reduced via a cascade of XOR gates without carry propagation, enabling efficient parity generation. This appears in designs such as ripple-carry or tree-based adders extended with parity logic.

For sparse inputs, where most bits are 0, optimizations focus on processing only the set bits (1s), as XORing with 0 has no effect. Population-count-style algorithms iterate over the set bits using bit tricks (e.g., v \& (v-1) to clear the least significant set bit), achieving time proportional to the Hamming weight rather than n. Early termination occurs naturally in such loops upon exhausting the set bits; for subset-based processing (e.g., dividing the input into words or blocks), any block whose parity is even (0) can be skipped in the final XOR accumulation, further reducing operations in highly sparse cases.
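
The word-folding and sparse-input tricks described above can be sketched in Python as follows (an illustrative translation of the bit-twiddling idea, assuming 64-bit unsigned values; the function names are chosen for this example, and production code would typically use a popcount instruction or compiler intrinsic instead).

def parity_fold64(v):
    # Fold a 64-bit word onto itself with XOR shifts, halving the
    # relevant width each time; the lowest bit then holds the parity.
    v &= 0xFFFFFFFFFFFFFFFF
    v ^= v >> 32
    v ^= v >> 16
    v ^= v >> 8
    v ^= v >> 4
    v ^= v >> 2
    v ^= v >> 1
    return v & 1

def parity_sparse(v):
    # Kernighan-style loop: each iteration clears the least significant
    # set bit, so the running time is proportional to the popcount.
    p = 0
    while v:
        p ^= 1
        v &= v - 1
    return p

assert parity_fold64(0b1011) == parity_sparse(0b1011) == 1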

Complexity Analysis

The parity function plays a central role in circuit complexity theory, serving as a canonical example for establishing lower bounds on the computational resources required to evaluate it. In the monotone circuit model, which computes functions using only AND and OR gates without negations, the parity function cannot be realized by any circuit at all, as it is inherently non-monotone—flipping a single input bit from 0 to 1 can change the output from 1 to 0. This impossibility is stronger than the trivial Ω(n) size bound required for any function depending on all n inputs in general circuits.

A landmark result in constant-depth circuit complexity is that the parity function cannot be computed by constant-depth, polynomial-size circuits, placing it outside the class AC⁰. In 1984, Furst, Saxe, and Sipser demonstrated this separation using random restrictions that simplify circuits, showing that any constant-depth circuit computing parity requires superpolynomial size. This work established parity as a hard function for AC⁰ and linked circuit lower bounds to separations in the polynomial-time hierarchy. Håstad's switching lemma, introduced in 1986, refined this analysis with a probabilistic tool bounding the probability that a DNF formula fails to collapse to a shallow decision tree under a random restriction, yielding nearly optimal size lower bounds of 2^{Ω(n^{1/(d-1)})} for parity in depth-d AC⁰ circuits. These results highlight parity's utility in proving unconditional lower bounds for restricted circuit classes.

In models emphasizing resource tradeoffs, such as branching programs, computing parity-like functions imposes notable lower bounds. Branching programs model time-space tradeoffs, where time corresponds to the length of the program and space to the logarithm of its size. For read-once branching programs, where each variable is queried at most once along any path, explicit constructions achieve linear size for parity itself, but related explicit functions exhibit exponential lower bounds; for instance, read-once branching programs require size 2^{\Omega(n)} to compute the parity of the number of triangles in a graph on \sqrt{n} vertices, a parity-like function. These bounds arise from counting arguments applied to the program's structure.

The query complexity of parity further underscores its hardness in adaptive models. In the deterministic decision tree model, where each query reveals a single input bit, computing parity requires all n queries in the worst case, as the function has maximal sensitivity—every input has n sensitive bits, necessitating inspection of every variable to determine the output with certainty. This contrasts with functions such as OR, whose value on many inputs is settled after reading only a few bits, even though their worst-case query complexity is also n. The high query complexity stems directly from parity's full-degree Fourier spectrum and the uniform influence of all its variables.

Parity also defines key completeness results in counting complexity classes. The problem ⊕SAT—determining whether a Boolean formula has an odd number of satisfying assignments—is complete for the class ⊕P under parsimonious reductions, meaning it captures the full power of nondeterministic polynomial-time machines whose acceptance criterion is the parity of the number of accepting paths. This completeness positions parity as a foundational hard problem, with implications for derandomization and interactive proofs, as parsimonious reductions preserve the parity of the number of solutions.
The counting framework originates with Valiant's seminal work on #P, and explicit completeness proofs for ⊕SAT follow from parsimonious transformations of #P-complete problems such as #SAT, which preserve the number of solutions and hence their parity.
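
To make the ⊕SAT definition concrete, the brute-force sketch below (purely illustrative; real ⊕SAT instances are of course not decided this way, and the helper name is chosen for this example) computes the parity of the number of satisfying assignments of a small CNF formula.

from itertools import product

def parity_of_sat_count(clauses, n):
    # clauses: list of clauses, each a list of nonzero ints; literal k refers
    # to variable |k| (1-indexed) and is negated when k < 0.
    parity = 0
    for assignment in product([False, True], repeat=n):
        satisfied = all(
            any(assignment[abs(k) - 1] == (k > 0) for k in clause)
            for clause in clauses
        )
        parity ^= satisfied
    return parity

# (x1 or x2) and (not x1 or x3): four of the eight assignments satisfy it,
# so the parity of the count is 0.
print(parity_of_sat_count([[1, 2], [-1, 3]], n=3))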

Extensions and Applications

Infinite Parity Function

The infinite parity function extends the notion of finite parity to infinite sequences x = (x_1, x_2, \dots) \in \{0,1\}^\mathbb{N} by defining it as \lim_{n \to \infty} P_n, where P_n = x_1 \oplus x_2 \oplus \dots \oplus x_n is the parity of the first n bits, provided the limit exists. This limit exists if and only if the sequence x has finitely many 1's (i.e., finite support), in which case the infinite parity equals the parity of the total number of 1's in x. If there are infinitely many 1's, the running parity flips infinitely often, and the limit does not exist.

In descriptive set theory, the set E of sequences with even infinite parity—those for which the limit exists and equals 0—sits at the second level of the Borel hierarchy: it is a countable union of closed sets (\mathbf{\Sigma}^0_2) and is complete at that level, meaning every \mathbf{\Sigma}^0_2 set admits a continuous reduction to E. This descriptive complexity underscores the challenges in classifying such sets topologically within Polish spaces like \{0,1\}^\mathbb{N}.

The infinite parity function connects to normal numbers, whose binary expansions are infinite sequences in which every finite string of length k appears with asymptotic frequency 2^{-k}. For a normal number, the number of 1's in the first n positions is \frac{n}{2} + o(n), so the running parity P_n oscillates between 0 and 1 infinitely often, ensuring the limit does not exist. This behavior contrasts with sequences of finite support, where the running parity stabilizes, and highlights how normality implies non-convergence of the running parity.

Determining the infinite parity of a sequence is not computable, as it requires deciding whether the support is finite and, if so, computing its parity—tasks that exceed the capabilities of recursive functions. This non-computability links to reductions from the halting problem: given a Turing machine M_e, construct a sequence whose 1's encode the steps of M_e on input e; the support is finite if and only if M_e halts, so an algorithm for infinite parity would decide the halting problem. Such reductions demonstrate that infinite parity is undecidable, with finiteness of the support lying at the second level of the arithmetical hierarchy.

Examples illustrate the behavior of infinite parity. The binary Champernowne sequence, obtained by concatenating the binary expansions of the natural numbers (1, 10, 11, 100, ...), contains every finite binary string as a substring, hence has infinitely many 1's and no limiting parity. In random sequences drawn from the uniform (Bernoulli(1/2)) measure on \{0,1\}^\mathbb{N}, the probability of having only finitely many 1's is 0, so the limit fails to exist almost surely, with P_n distributed like an unbiased coin flip at every n.
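
The contrast between finite-support and Champernowne-like sequences can be seen in their running parities. The short Python sketch below is illustrative only: it uses finite prefixes as stand-ins for infinite sequences, and the helper running_parity is defined for this example.

def running_parity(bits):
    # Return the list of prefix parities P_n = x_1 XOR ... XOR x_n.
    p = 0
    out = []
    for b in bits:
        p ^= b
        out.append(p)
    return out

# Finite support: three 1's, then all zeros -> the running parity stabilizes at 1.
finite_support = [1, 0, 1, 1] + [0] * 20
print(running_parity(finite_support)[-5:])   # constant tail

# Prefix of the binary Champernowne sequence 1 10 11 100 101 ...
champernowne = [int(b) for k in range(1, 50) for b in bin(k)[2:]]
print(running_parity(champernowne)[-10:])    # keeps flipping, no limit in sight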

Practical Uses

Parity functions play a crucial role in error detection mechanisms, where a parity bit is appended to data to indicate whether the number of 1s in the representation is even or odd, enabling the detection of single-bit errors. In memory systems, parity bits are commonly used to protect against transient faults by recomputing the parity upon reading and comparing it to the stored value, thus identifying corruption without correction. This approach is extended in storage systems, particularly RAID 5, where parity information is distributed across multiple disks so that single disk failures can be detected and recovered by XORing the corresponding blocks from the surviving drives. In communication protocols, more advanced methods such as the cyclic redundancy checks (CRCs) used in Ethernet provide robust multi-bit error detection; they fundamentally rely on polynomial arithmetic over GF(2)—built from the same XOR operations—to generate checksums that verify data integrity during transmission.

In cryptography, parity functions underpin the generation of pseudorandom sequences essential for stream ciphers, where linear feedback shift registers (LFSRs) employ XOR operations—equivalent to parity checks on selected register taps—to produce keystreams that mask plaintext bits. These sequences appear random yet are deterministic, allowing efficient implementation in resource-constrained environments such as wireless communications, with the parity-based feedback ensuring long periods before repetition. LFSRs are integral to pseudorandom number generation in several communication standards, enhancing resistance against pattern-based attacks.

Within coding theory, parity functions serve as the foundation for constructing even-weight codes, where an overall parity check ensures all codewords have an even number of 1s, thereby achieving a minimum Hamming distance of 2 for single-error detection. This is exemplified in Hamming codes, which extend parity checks across multiple positions to not only detect but also correct single errors; for instance, the (7,4) Hamming code uses three parity bits derived from linear combinations (XORs) of the data bits to identify and flip erroneous bits.

In hardware design, parity functions are implemented in CPU flag registers, such as the parity flag (PF) in x86 architectures, which records whether the least significant byte of an arithmetic result contains an even number of 1 bits, facilitating software-based error checking in critical computations. Beyond processors, parity checks enhance fault coverage in VLSI testing by verifying the parity of scan chains during built-in self-test (BIST) operations, allowing detection of stuck-at faults with minimal overhead.

Recent advancements in quantum computing have integrated parity measurements into quantum error correction schemes, particularly in surface codes, where repeated stabilizer measurements—computing the parity of qubit states around plaquettes and vertices—enable the decoding of Pauli errors to protect logical qubits. Post-2020 experiments have demonstrated surface codes operating below the roughly 1% error threshold on superconducting processors, with real-time decoders processing parity syndromes to sustain logical qubit lifetimes of approximately 0.3 milliseconds.
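
As a concrete illustration of XOR-based redundancy (a simplified sketch of the RAID 5 idea, not an implementation of any particular RAID controller; block contents and the helper xor_blocks are invented for this example), the snippet below computes a parity block from equal-length data blocks and reconstructs a lost block by XORing the survivors with the parity.

def xor_blocks(blocks):
    # Byte-wise XOR of equally sized blocks.
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

data = [b"disk0...", b"disk1...", b"disk2..."]   # equal-length data blocks
parity_block = xor_blocks(data)                  # stored on the parity disk

# Simulate losing disk 1 and recover its block from the survivors plus parity.
recovered = xor_blocks([data[0], data[2], parity_block])
assert recovered == data[1]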
