
Zhegalkin polynomial

A Zhegalkin polynomial, also known as the algebraic normal form (ANF) in some contexts, is a multilinear polynomial over the finite field GF(2) (the integers modulo 2) that uniquely represents any Boolean function of n variables as a sum (modulo 2) of square-free monomials, where each monomial is a product of distinct variables with no coefficients other than 1. Introduced by the Russian mathematician Ivan Ivanovich Zhegalkin in 1927, these polynomials provide an algebraic framework for Boolean algebra, realizing George Boole's original equations as identities in the ring of polynomials modulo 2. Zhegalkin polynomials differ from traditional polynomial representations by operating exclusively under modulo 2 arithmetic, where addition and subtraction coincide, multiplication corresponds to logical conjunction (AND), and negation is expressed as x + 1. Every Boolean function admits a unique Zhegalkin expansion, which can be derived from its truth table via Möbius inversion over the subset lattice, ensuring no redundant terms such as higher powers of variables (since x^2 \equiv x \pmod{2}). This uniqueness stems from the fact that the set of all n-variable Zhegalkin polynomials forms a vector space of dimension 2^n over GF(2), isomorphic to the space of all possible truth tables for n-variable functions. Key properties include closure under Boolean operations: for instance, disjunction x \lor y is x + y + xy, and the polynomials are particularly useful for analyzing symmetric Boolean functions, which remain invariant under variable permutations and correspond to polynomials depending only on the number of true variables. Historically, Zhegalkin's work built on Boole's 1847 foundations by emphasizing the polynomial structure, influencing developments in Reed-Muller codes and Boolean function theory outside Russia. In applications, they facilitate efficient computation in digital circuit design, cryptographic protocols, and complexity theory, where the degree of the polynomial measures the function's algebraic complexity.

Fundamentals of Boolean Representation

Boolean Functions and GF(2) Arithmetic

A Boolean function of n variables is a mapping f: \{0,1\}^n \to \{0,1\}, assigning to each binary input vector a binary output value. This captures the essence of logical operations in computation and switching theory, where inputs represent true (1) or false (0) states. For example, the binary AND function on two variables is defined as f(x,y) = 1 only if both x=1 and y=1, outputting 0 otherwise; the OR function outputs 1 if at least one input is 1; and the XOR function outputs 1 if the inputs differ. The arithmetic operations for evaluating such functions are performed in the finite field GF(2), which consists of the elements {0, 1} with addition defined modulo 2 (equivalent to the logical XOR operation: 0+0=0, 0+1=1, 1+0=1, 1+1=0) and multiplication defined as the logical AND operation (0 \times 0 = 0, 0 \times 1 = 0, 1 \times 0 = 0, 1 \times 1 = 1). These operations form a field, satisfying the axioms of a commutative ring with unity in which every nonzero element has a multiplicative inverse, enabling algebraic manipulation of Boolean expressions. Polynomial evaluation over GF(2) involves expressions that are linear combinations (using XOR for addition) of monomials (products via AND), with coefficients restricted to 0 or 1 since those are the only field elements. Due to the field's characteristic 2, higher powers of variables simplify dramatically: for any x \in \{0,1\}, x^2 = x because 0^2 = 0 and 1^2 = 1, and similarly x^k = x for any integer k \geq 1. This idempotence property, x \cdot x = x, eliminates the need for exponents greater than 1 and reduces polynomial degrees during evaluation. GF(2) arithmetic provides the algebraic foundation for Boolean algebra, initially developed by George Boole in his 1854 treatise An Investigation of the Laws of Thought, where he modeled logical deduction using symbolic arithmetic resembling modulo 2 operations. Marshall Stone later formalized Boolean algebras as abstract structures in the 1930s, establishing their duality with topological spaces via his representation theorem and solidifying the connection to finite fields such as GF(2).
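The following minimal Python sketch (not from the original sources) illustrates these field operations and the idempotence x \cdot x = x; the helper names gf2_add and gf2_mul are illustrative only.

```python
# Minimal sketch: GF(2) arithmetic via Python integers restricted to {0, 1}.
def gf2_add(a, b):
    """Addition modulo 2 (logical XOR)."""
    return a ^ b

def gf2_mul(a, b):
    """Multiplication modulo 2 (logical AND)."""
    return a & b

# Idempotence: x * x == x for both field elements, so x^k reduces to x.
assert all(gf2_mul(x, x) == x for x in (0, 1))

# XOR expressed as a GF(2) polynomial: f(x, y) = x + y.
xor = lambda x, y: gf2_add(x, y)
# AND is the monomial xy; OR is the polynomial x + y + xy.
or_ = lambda x, y: gf2_add(gf2_add(x, y), gf2_mul(x, y))
print([or_(x, y) for x in (0, 1) for y in (0, 1)])   # [0, 1, 1, 1]
```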

Definition of Zhegalkin Polynomials

A Zhegalkin polynomial is a multilinear polynomial representation of a Boolean function over the finite field GF(2), uniquely expressing the function's values on the domain {0,1}^n. This form arises from interpreting Boolean operations in terms of arithmetic modulo 2, where addition corresponds to exclusive-or (XOR) and multiplication to logical AND, ensuring the polynomial evaluates to the function's output at every point in the Boolean hypercube. Also known as the algebraic normal form (ANF) or modulo-2 canonical form, it provides an algebraic structure for analyzing Boolean functions without higher-degree terms beyond products of distinct variables. The general expression of a Zhegalkin polynomial for a Boolean function f(x_1, \dots, x_n) is given by f(x_1, \dots, x_n) = \bigoplus_{S \subseteq \{1, \dots, n\}} a_S \prod_{i \in S} x_i, where \bigoplus denotes addition modulo 2, each a_S \in \{0,1\}, and the sum ranges over all subsets S of the variable indices (including the empty set, which contributes the constant term a_\emptyset). Multilinearity ensures no squared variables appear, as x_i^2 = x_i holds in GF(2), simplifying the polynomial to a sum of monomials corresponding to the function's variable interactions. Zhegalkin polynomials were introduced by the Russian mathematician Ivan Ivanovich Zhegalkin in his 1927 paper "O tekhnike vychislenii predlozhenii v simvolicheskoi logike" (English: "On the technique of calculating propositions in symbolic logic"), published in Matematicheskii Sbornik, where he developed the approach within the Russian literature on Boolean algebra and symbolic logic. For instance, the Zhegalkin polynomial for the two-variable exclusive-or function f(x_1, x_2) = x_1 \oplus x_2 is simply x_1 \oplus x_2, reflecting the linear interaction without a constant or quadratic term.
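As an illustrative sketch of this definition, the Python fragment below evaluates a Zhegalkin polynomial given as a collection of monomials, each represented (purely for convenience here) as a frozenset of 0-based variable indices; eval_zhegalkin is a hypothetical helper name, not a standard routine.

```python
from itertools import product

def eval_zhegalkin(monomials, x):
    """Evaluate a Zhegalkin polynomial given as a collection of monomials.

    Each monomial is a frozenset of 0-based variable indices (the empty set
    is the constant 1); `x` is a tuple of 0/1 values.
    """
    result = 0
    for S in monomials:
        term = 1
        for i in S:
            term &= x[i]          # product of the variables in the monomial
        result ^= term            # sum modulo 2 over all monomials
    return result

# f(x1, x2) = x1 XOR x2 has ANF {x1, x2}: two linear monomials, no constant.
anf_xor = [frozenset({0}), frozenset({1})]
print([eval_zhegalkin(anf_xor, x) for x in product((0, 1), repeat=2)])
# [0, 1, 1, 0]
```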

Key Properties

Uniqueness and Canonical Form

Every Boolean function admits a unique representation as a Zhegalkin polynomial. For any Boolean function f: \{0,1\}^n \to \{0,1\}, there exists a unique multilinear polynomial F over the field \mathbb{F}_2 (with addition as XOR and multiplication as AND) such that F(x) = f(x) for all x \in \{0,1\}^n. This uniqueness arises because the 2^n monomials corresponding to all subsets of the n variables form a basis for the 2^n-dimensional vector space of all functions from \{0,1\}^n to \mathbb{F}_2, ensuring a one-to-one correspondence between functions and their polynomial representations. The result was first established by Ivan Zhegalkin in 1927 and is a foundational theorem in the algebraic study of Boolean functions. The Zhegalkin polynomial serves as the canonical algebraic normal form (ANF) of the Boolean function, uniquely determined by its coefficients, which are computed via Möbius inversion over the subset lattice. This form is independent of variable ordering, making it a standard reference representation in fields such as cryptography and coding theory. As such, it provides a fixed, polynomial-based encoding that captures the function's structure without redundancy. A key implication of this uniqueness is that two Boolean functions are identical if and only if their Zhegalkin polynomials coincide as elements of the polynomial ring over \mathbb{F}_2. This equivalence criterion simplifies verification and analysis, contrasting with non-polynomial forms like disjunctive normal form (DNF) or conjunctive normal form (CNF), which generally admit multiple equivalent expressions due to their reliance on literal clauses rather than monomial terms.

Degree and Structural Characteristics

The degree of a Zhegalkin polynomial, which represents a Boolean function f: \{0,1\}^n \to \{0,1\} in algebraic normal form over \mathbb{F}_2, is defined as the number of variables in the largest monomial with a non-zero coefficient. This measure, known as the algebraic degree of the function, quantifies the highest level of interaction among the input variables required to express f. Formally, if the polynomial is f(\mathbf{x}) = \bigoplus_{S \subseteq \{1, \dots, n\}} a_S \prod_{i \in S} x_i with a_S \in \{0,1\}, then \deg(f) = \max \{ |S| : a_S = 1 \}. For instance, constant functions, such as f(\mathbf{x}) = 0 or f(\mathbf{x}) = 1, consist of at most the empty monomial and thus have degree 0. Linear functions, including the parity function f(x_1, \dots, x_n) = x_1 \oplus \cdots \oplus x_n, involve only single-variable monomials and have degree 1, reflecting their affine nature over \mathbb{F}_2. In contrast, a monomial of full degree n, such as f(\mathbf{x}) = x_1 x_2 \cdots x_n, achieves \deg(f) = n and evaluates to 1 only when all inputs are 1, illustrating the maximal degree attainable for n variables. These examples highlight how the degree captures the function's structural complexity, with higher degrees indicating greater deviation from linearity. The support of a Zhegalkin polynomial refers to the collection of all subsets S \subseteq \{1, \dots, n\} for which a_S = 1, i.e., the monomials with non-zero coefficients. This support directly influences the function's nonlinearity: if the support is confined to monomials of degree at most 1, f is affine; otherwise, higher-degree terms in the support introduce nonlinear behavior. The size and distribution of the support thus provide insight into the polynomial's sparsity and the Boolean function's resistance to linear approximations. Zhegalkin polynomials possess key algebraic properties arising from their definition over \mathbb{F}_2. Additivity holds in the sense that the polynomial for f \oplus g is obtained by taking the coefficient-wise XOR (modulo 2 sum) of the polynomials for f and g, preserving the structure under composition with the XOR operation. Linear terms in the polynomial are homogeneous of degree 1 and, together with the constant term, form the affine part of f, while higher-degree terms capture interactions. A fundamental limitation is the binary nature of the coefficients (no fractional values are permitted), and all arithmetic, including addition and multiplication, is performed modulo 2, ensuring the representation remains within GF(2) without overflow or real-number extensions.
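A small sketch of how the algebraic degree can be read off such a representation, assuming the same monomial-as-index-set encoding used earlier; the degree value for the zero polynomial (here -1) is an arbitrary, illustrative convention.

```python
def algebraic_degree(monomials):
    """Degree of a Zhegalkin polynomial = size of its largest monomial.

    `monomials` lists the subsets S with coefficient a_S = 1; the zero
    polynomial (empty support) is given degree -1 here by convention.
    """
    return max((len(S) for S in monomials), default=-1)

print(algebraic_degree([frozenset()]))                      # constant 1 -> 0
print(algebraic_degree([frozenset({0}), frozenset({1})]))   # parity    -> 1
print(algebraic_degree([frozenset({0, 1, 2})]))             # x1 x2 x3  -> 3
```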

Computation Techniques

Indeterminate Coefficients Method

The indeterminate coefficients method treats the coefficients a_S of the Zhegalkin polynomial as unknowns over GF(2) and constructs a system of linear equations from the known values of the Boolean function f at all points in \{0,1\}^n. For each input vector \alpha \in \{0,1\}^n, the equation is f(\alpha) = \sum_{S \subseteq \{1, \dots, n\}} a_S \prod_{i \in S} \alpha_i, where the product is 1 if \alpha_i = 1 for all i \in S, and 0 otherwise. This yields a square system of 2^n equations in 2^n unknowns, with the right-hand side given by the truth table of f. The coefficient matrix A of the system has rows indexed by \alpha and columns by subsets S, with entries A_{\alpha, S} = \prod_{i \in S} \alpha_i. This matrix is invertible over GF(2) because the monomials \{\prod_{i \in S} x_i \mid S \subseteq \{1, \dots, n\}\} form a basis for the 2^n-dimensional vector space of all Boolean functions on n variables. The invertibility ensures a unique solution for the coefficients, corresponding to the canonical Zhegalkin representation. To obtain the coefficients, the system is solved by Gaussian elimination over GF(2). When the structured sparsity and subset-inclusion ordering of the matrix are exploited (the matrix becomes triangular), the cost drops well below that of generic elimination, making the method practical for small n (typically n \leq 10) in applications like cryptographic analysis. For larger n, more optimized transforms are preferred, but the indeterminate coefficients approach highlights the algebraic nature of Zhegalkin polynomials. Consider a detailed example for a 2-variable Boolean function f(x_1, x_2), where the subsets are \emptyset (the constant 1), \{1\} (x_1), \{2\} (x_2), and \{1,2\} (x_1 x_2), with coefficients a_\emptyset, a_{\{1\}}, a_{\{2\}}, a_{\{1,2\}}. The system of equations, ordered by inputs (0,0), (0,1), (1,0), (1,1), is:
\begin{align*} f(0,0) &= a_\emptyset, \\ f(0,1) &= a_\emptyset \oplus a_{\{2\}}, \\ f(1,0) &= a_\emptyset \oplus a_{\{1\}}, \\ f(1,1) &= a_\emptyset \oplus a_{\{1\}} \oplus a_{\{2\}} \oplus a_{\{1,2\}}. \end{align*}
Given specific truth table values for f, elimination proceeds row by row: solve the first equation for a_\emptyset = f(0,0), substitute into the second to obtain a_{\{2\}} = f(0,1) \oplus a_\emptyset, and similarly for the others. For instance, if f is the XOR function with truth table [0, 1, 1, 0], the solution yields a_\emptyset = 0, a_{\{1\}} = 1, a_{\{2\}} = 1, a_{\{1,2\}} = 0, confirming f(x_1, x_2) = x_1 \oplus x_2. The matrix is lower triangular in this input ordering, so the coefficients follow by straightforward forward substitution.
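The triangular structure can be exercised with a short sketch. Assuming inputs and subsets are both encoded as n-bit masks (bit i of a mask holding x_{i+1}), the matrix entry A_{\alpha, S} is 1 exactly when S is a sub-mask of \alpha, so forward substitution solves the system; anf_by_forward_substitution is an illustrative name, not a standard routine.

```python
def anf_by_forward_substitution(truth_table, n):
    """Solve the indeterminate-coefficient system A a = f over GF(2).

    Inputs and subsets are both indexed by n-bit masks; A[alpha][S] = 1
    exactly when S is a sub-mask of alpha, so A is lower triangular with a
    unit diagonal and forward substitution recovers the coefficients.
    truth_table[alpha] = f(alpha), with bit i of alpha holding x_{i+1}.
    """
    size = 1 << n
    coeffs = [0] * size
    for alpha in range(size):
        acc = 0
        for S in range(alpha):              # already-solved coefficients
            if S & alpha == S:              # monomial S evaluates to 1 at alpha
                acc ^= coeffs[S]
        coeffs[alpha] = truth_table[alpha] ^ acc
    return coeffs

# XOR of two variables, truth table indexed by the mask: [0, 1, 1, 0].
print(anf_by_forward_substitution([0, 1, 1, 0], 2))   # [0, 1, 1, 0]
```

The output [0, 1, 1, 0] reproduces the coefficients a_\emptyset = 0, a_{\{1\}} = 1, a_{\{2\}} = 1, a_{\{1,2\}} = 0 from the worked example.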

Conversion from Disjunctive Normal Form

The disjunctive normal form (DNF) of a Boolean function represents it as a disjunction (logical OR) of minterms, where each minterm is a conjunction (logical AND) of literals, a literal being either a variable x_i or its negation \neg x_i. To convert this to a Zhegalkin polynomial, interpret the DNF over the field GF(2): because the minterms of a canonical DNF are mutually exclusive, their disjunction can be replaced by addition modulo 2 (denoted +), logical AND corresponds to multiplication (denoted \cdot), and the constant 1 represents the true value. The negation \neg x_i is replaced by the polynomial 1 + x_i, since in GF(2), 1 + x_i evaluates to 1 when x_i = 0 and 0 when x_i = 1. The conversion proceeds by expanding each minterm into a polynomial over GF(2) and then summing these polynomials modulo 2. For a minterm consisting of positive literals x_{j_1} \cdot x_{j_2} \cdots x_{j_k} and negated literals \neg x_{m_1} \cdot \neg x_{m_2} \cdots \neg x_{m_l}, substitute to obtain \left( \prod_{r=1}^k x_{j_r} \right) \cdot \left( \prod_{s=1}^l (1 + x_{m_s}) \right). Expanding the product over the negated literals yields a sum of monomials, each corresponding to a subset of the negated variables (choosing either 1 or x_{m_s} for each). The full Zhegalkin polynomial is then the modulo 2 sum of these expansions over all minterms in the DNF, where like terms cancel if they appear an even number of times. This process requires no carrying, as all operations are in GF(2). A practical algorithm follows directly from this substitution: for each minterm in the DNF, generate all 2^l monomials from the product of (1 + x_{m_s}) over the l negated literals, multiply each by the product of the positive literals, and include the resulting monomials in a global sum modulo 2. Collect coefficients for each possible monomial, tracking only presence modulo 2; higher powers reduce via x_i^2 = x_i, so the result is multilinear. This yields the unique Zhegalkin polynomial, which is square-free and multilinear over GF(2). Consider the example of the function given in DNF as (x_1 \land x_2) \lor (x_1 \land \neg x_2). The first minterm expands to x_1 \cdot x_2. The second minterm expands to x_1 \cdot (1 + x_2) = x_1 + x_1 x_2. Summing modulo 2 gives x_1 x_2 + x_1 + x_1 x_2 = x_1, since the x_1 x_2 terms cancel. Thus, the Zhegalkin polynomial is x_1. This method is particularly efficient when the DNF is sparse (few minterms) and the minterms contain few negated literals, as the computational cost scales with the total number of terms generated during expansion (up to 2^l per minterm with l negations), avoiding the need to enumerate the full truth table of size 2^n. It directly exploits the structure of the DNF rather than the function's entire value table.
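A hedged Python sketch of this expansion, assuming the minterms are mutually exclusive (as in a canonical DNF) so that disjunction may be replaced by XOR; dnf_minterms_to_anf and its (positives, negatives) encoding are illustrative choices, not a standard API.

```python
from itertools import combinations

def dnf_minterms_to_anf(minterms):
    """Convert a DNF made of mutually exclusive minterms to ANF over GF(2).

    Each minterm is (positives, negatives): sets of variable indices that
    appear plain or negated.  Each negation is replaced by (1 + x), the
    product is expanded, and monomials appearing an even number of times
    cancel (set symmetric difference = coefficient-wise mod-2 sum).
    """
    anf = set()                                    # set of frozenset monomials
    for positives, negatives in minterms:
        negs = list(negatives)
        # Choose, for each negated literal, either the constant 1 or x_m.
        for r in range(len(negs) + 1):
            for chosen in combinations(negs, r):
                mono = frozenset(positives) | frozenset(chosen)
                anf ^= {mono}                      # toggle presence modulo 2
    return anf

# (x1 AND x2) OR (x1 AND NOT x2)  ->  x1
print(dnf_minterms_to_anf([({1, 2}, set()), ({1}, {2})]))
# {frozenset({1})}
```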

Tabular and Pascal-Based Methods

The tabular method for computing Zhegalkin polynomial coefficients constructs a difference table from the truth table of a Boolean function, where each level of differences is obtained by XORing adjacent entries over GF(2). This iterative process, analogous to finite differences in calculus but adapted to the Boolean domain, allows the coefficients to be read directly from specific positions in the table, namely the leading entries of the successive levels. The method was described by V. P. Suprun as an efficient tabular approach for polynomial decomposition, requiring on the order of 2^n XOR operations per level in its basic form, with the table structure enabling faster variants (Suprun, 1987). The process begins by arranging the truth table values in lexicographic (binary) order of the input vectors, then computing successive difference levels until a single entry remains. For example, consider a 2-variable function with values f(00) = 0, f(01) = 1, f(10) = 1, f(11) = 0 (the XOR function). The initial row (level 0) is [0, 1, 1, 0]. The first level is [0 ⊕ 1, 1 ⊕ 1, 1 ⊕ 0] = [1, 0, 1]. The second level is [1 ⊕ 0, 0 ⊕ 1] = [1, 1]. The third (and final) level is [1 ⊕ 1] = [0]. The coefficients are the leading entries of the levels, in the same order as the truth table: the constant term a_∅ = 0 (level 0), the coefficient of x_2 is 1 (level 1), the coefficient of x_1 is 1 (level 2), and the quadratic coefficient a_{12} = 0 (level 3). This confirms the Zhegalkin polynomial x_1 ⊕ x_2 and shows how the table propagates finite differences to reveal the unique ANF (Canteaut, 2023). The Pascal-based method computes each coefficient a_S by XORing the function values over all input vectors T ⊆ S; the weights are binomial coefficients modulo 2 from Pascal's triangle, which by Lucas' theorem equal 1 exactly when the bit support of T is a subset of that of S and 0 otherwise. This leverages the lower triangular structure of the transform matrix modulo 2, whose rows and columns are indexed by subsets and whose entries follow the subset-inclusion relation. The full transform matrix for n variables is the n-fold Kronecker product of the 2×2 basic Pascal block \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} over GF(2), enabling direct matrix-vector multiplication with the truth table to yield the coefficients (Stanković et al., 2017). For n=2, the matrix rows correspond to Pascal entries mod 2: the row for the constant term is [1, 0, 0, 0]; for x_2, [1, 1, 0, 0]; for x_1, [1, 0, 1, 0]; and for x_1 x_2, [1, 1, 1, 1]. Applying this matrix to the truth table [0, 1, 1, 0]^T gives the coefficients [0, 1, 1, 0], again yielding x_1 ⊕ x_2. Both methods fundamentally compute finite differences over GF(2): the tabular approach offers an iterative, space-efficient alternative to the explicit matrix of the Pascal method, while the latter provides clearer insight into the combinatorial structure via binomial parities (Suprun, 2013). The formula underlying both techniques is a_S = \sum_{T \subseteq S} f(T) \mod 2, which guarantees uniqueness of the Zhegalkin representation (Canteaut, 2023).
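The triangle variant of the tabular method can be sketched in a few lines, assuming the truth table is listed in lexicographic (binary) order and that coefficient i corresponds to the monomial over the variables whose bits are set in i; the function name is illustrative only.

```python
def zhegalkin_triangle(truth_table):
    """Tabular (triangle) method: repeatedly XOR adjacent entries.

    The leading entry of each successive difference level is a coefficient,
    ordered the same way as the truth table (constant first, full monomial
    last); by Lucas' theorem this equals the subset-parity formula a_S.
    """
    level = list(truth_table)
    coeffs = [level[0]]
    while len(level) > 1:
        level = [level[i] ^ level[i + 1] for i in range(len(level) - 1)]
        coeffs.append(level[0])
    return coeffs

print(zhegalkin_triangle([0, 1, 1, 0]))   # [0, 1, 1, 0] -> x1 XOR x2
```

Reading off the leading entries this way amounts to multiplying the truth table by the Kronecker-power Pascal matrix described above.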

Summation Method and Lattice Interpretation

The summation method for computing the coefficients of a Zhegalkin polynomial, also known as the algebraic normal form (ANF) of a Boolean function f: \mathbb{F}_2^n \to \mathbb{F}_2, relies on a direct formula derived from the structure of the subset lattice. For a subset S \subseteq \{1, 2, \dots, n\} corresponding to the monomial \prod_{i \in S} x_i, the coefficient a_S is given by a_S = \bigoplus_{T \subseteq S} f(\chi_T), where \chi_T denotes the characteristic vector of T (the input with 1s exactly in the positions indexed by T) and \oplus denotes addition modulo 2. This formula arises from the Möbius inversion theorem applied to the partially ordered set (poset) formed by the power set of \{1, 2, \dots, n\} under inclusion, where the ANF coefficients are the Möbius transform of the values of f. In lattice-theoretic terms, the Boolean lattice \mathcal{P}(\{1, 2, \dots, n\}) serves as the underlying poset, with subsets ordered by inclusion. The zeta transform on this lattice maps the ANF coefficients to the function evaluations via f(x) = \bigoplus_{S \subseteq \operatorname{supp}(x)} a_S, where \operatorname{supp}(x) is the support of the input x. Inverting this using the Möbius function \mu(U, V) = (-1)^{|V \setminus U|} for U \subseteq V yields the coefficients, and since all operations are modulo 2, the signs simplify to 1, leaving a plain subset summation. This interpretation exhibits the Zhegalkin polynomial as the unique representation in the monomial basis over \mathbb{F}_2, with coefficients encoding the inverted (Möbius-transformed) structure of the truth table. To compute a_S algorithmically, evaluate f over all 2^{|S|} inputs where the variables indexed outside S are fixed to 0 and the variables in S range freely over \{0, 1\}; then take the parity (sum modulo 2) of these values. For the constant term a_\emptyset, this reduces to the single evaluation a_\emptyset = f(0, 0, \dots, 0), the function value at the all-zero input. As an illustrative example, consider the conjunction f(x_1, x_2) = x_1 \land x_2 with values f(00) = 0, f(10) = 0, f(01) = 0, f(11) = 1. The coefficient a_{\{1,2\}} is the sum modulo 2 over all four inputs (no variables lie outside \{1,2\}): 0 \oplus 0 \oplus 0 \oplus 1 = 1. Similarly, a_{\{1\}} = f(00) \oplus f(10) = 0 \oplus 0 = 0, confirming the ANF f(x_1, x_2) = x_1 x_2. Although this direct summation requires O(3^n) total operations across all coefficients (summing 2^{|S|} terms for each of the 2^n subsets S), it admits an efficient O(n 2^n) implementation via the fast (binary) Möbius transform, which exploits the lattice structure for recursive computation along each variable dimension. This approach is inherently parallelizable, as computations for disjoint subsets can proceed independently, making it suitable for hardware implementations in cryptographic analysis where n is moderate (e.g., up to 32). The tabular methods for small n can be viewed as special cases of this lattice summation restricted to low dimensions.
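A sketch of the fast binary Möbius transform under the same mask-indexing assumption used earlier (bit i of the index holds x_{i+1}); it performs one butterfly pass per variable, for O(n 2^n) XORs in total. The function name is illustrative.

```python
def fast_mobius_transform(truth_table):
    """In-place fast (binary) Möbius transform over GF(2), O(n * 2^n).

    Maps a truth table (indexed by n-bit input masks) to its ANF
    coefficients; because signs vanish modulo 2, applying it twice
    returns the original values.
    """
    a = list(truth_table)
    n = len(a).bit_length() - 1
    for i in range(n):                       # butterfly along each variable
        step = 1 << i
        for mask in range(len(a)):
            if mask & step:
                a[mask] ^= a[mask ^ step]
    return a

# AND of two variables: truth table [0, 0, 0, 1] -> ANF has only x1 x2.
print(fast_mobius_transform([0, 0, 0, 1]))   # [0, 0, 0, 1]
print(fast_mobius_transform([0, 1, 1, 1]))   # OR -> [0, 1, 1, 1] = x1 + x2 + x1x2
```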

Karnaugh Map Approach

The Karnaugh map (K-map) serves as a graphical representation of a Boolean function's truth table, arranged in a grid where each cell corresponds to a minterm and adjacent cells differ in exactly one variable, facilitating visual identification of patterns. This structure, based on Gray code ordering, allows intuitive grouping of function values (0s and 1s) to simplify expressions. To compute a Zhegalkin polynomial using a K-map, one common method first groups the 1s to derive a disjunctive normal form (DNF), then converts it to Zhegalkin form via modulo-2 expansion of the product terms, treating disjunctions as exclusive-or operations in GF(2). Alternatively, coefficients can be determined directly by calculating the parity (sum modulo 2) of the function values in specific submaps: for the coefficient corresponding to a variable set S, the value is the parity of 1s in the subregion where variables not in S are fixed to 0 and variables in S vary freely. This submap selection leverages the K-map's layout to isolate and count entries efficiently for small numbers of variables. Consider the 3-variable majority function, which outputs 1 if at least two inputs are 1. Its truth table yields the following K-map (with rows labeled by x_1 x_2 in Gray code order and columns by x_3):
| x_1 x_2 | x_3 = 0 | x_3 = 1 |
|---------|---------|---------|
| 00      | 0       | 0       |
| 01      | 0       | 1       |
| 11      | 1       | 1       |
| 10      | 0       | 1       |
Grouping the 1s gives the DNF x_1 x_2 + x_1 x_3 + x_2 x_3. Expanding modulo 2 confirms the Zhegalkin polynomial x_1 x_2 \oplus x_1 x_3 \oplus x_2 x_3 (the three products overlap only at the all-ones input, where an odd number of them equal 1, so no correction terms are needed when replacing OR with XOR). Directly, the coefficient of x_1 x_2 (S = \{1,2\}) is the parity of the submap with x_3 = 0 (cells 000 = 0, 010 = 0, 110 = 1, 100 = 0), which sums to 1 modulo 2; by symmetry, the subset sums give coefficient 1 for each pairwise term. This approach is practical only for up to about 4 variables, beyond which the exponential growth in map size makes visualization unwieldy; it primarily aids conceptual understanding and manual verification rather than large-scale computation.
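The subset-sum rule used above can be checked programmatically. The sketch below, with 0-based variable indices and illustrative helper names, recomputes the majority-function coefficients by taking parities over the appropriate subcubes.

```python
from itertools import product

def majority(x1, x2, x3):
    """3-variable majority: 1 if at least two inputs are 1."""
    return int(x1 + x2 + x3 >= 2)

def coefficient(f, S, n):
    """a_S = parity of f over inputs whose support is contained in S."""
    acc = 0
    for x in product((0, 1), repeat=n):
        if all(x[i] == 0 for i in range(n) if i not in S):
            acc ^= f(*x)
    return acc

# Pairwise coefficients are all 1, matching x1 x2 + x1 x3 + x2 x3 ...
print(coefficient(majority, {0, 1}, 3),
      coefficient(majority, {0, 2}, 3),
      coefficient(majority, {1, 2}, 3))        # 1 1 1
# ... while the constant, linear, and cubic coefficients are 0.
print(coefficient(majority, set(), 3),
      coefficient(majority, {0}, 3),
      coefficient(majority, {0, 1, 2}, 3))     # 0 0 0
```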

Connections and Extensions

Möbius Inversion Relation

The connection between Zhegalkin polynomials and Möbius inversion stems from interpreting a Boolean function as a function on the Boolean lattice B_n = (2^{\{1, \dots, n\}}, \subseteq), where the coefficients of the polynomial arise as the inverse transform of the function values under the lattice order. The Möbius function on the Boolean lattice is given by \mu(S, T) = \begin{cases} (-1)^{|T \setminus S|} & \text{if } S \subseteq T, \\ 0 & \text{otherwise}. \end{cases} This function satisfies the key property \sum_{S \subseteq U \subseteq T} \mu(S, U) = 0 if S \subsetneq T, and equals 1 if S = T. Consider a Boolean function \phi: \{0,1\}^n \to \mathrm{GF}(2), identified with a function g: 2^{\{1, \dots, n\}} \to \mathrm{GF}(2) via g(S) = \phi(\chi_S), where \chi_S \in \{0,1\}^n is the characteristic vector of S \subseteq \{1, \dots, n\}. The Zhegalkin polynomial is \phi(x) = \bigoplus_{T \subseteq \{1, \dots, n\}} a_T \prod_{i \in T} x_i, and evaluation at \chi_S yields g(S) = \bigoplus_{T \subseteq S} a_T. This expresses g as the zeta transform of the coefficient function f: 2^{\{1, \dots, n\}} \to \mathrm{GF}(2) defined by f(T) = a_T, specifically g(S) = \sum_{T \subseteq S} f(T), with all sums over \mathrm{GF}(2). By the Möbius inversion theorem on posets, f(S) = \sum_{T \subseteq S} \mu(T, S) g(T). Over \mathrm{GF}(2), where (-1)^{|S \setminus T|} \equiv 1 \pmod{2} for all T \subseteq S, this simplifies to a_S = \bigoplus_{T \subseteq S} g(T) = \bigoplus_{T \subseteq S} \phi(\chi_T). Thus, the Zhegalkin coefficients are precisely the Möbius transform of the function values on the lattice, and the polynomial representation appears as an inverted cumulative parity over subcube supports. For n = 1, the lattice has elements \emptyset and \{1\}, with g(\emptyset) = \phi(0) and g(\{1\}) = \phi(1). The coefficients are a_\emptyset = g(\emptyset) = \phi(0) and a_{\{1\}} = g(\emptyset) \oplus g(\{1\}) = \phi(0) \oplus \phi(1), recovering the constant and linear Zhegalkin terms from the parities of cumulative function values along the chain. The Möbius inversion framework for posets, including the Boolean lattice, was formalized by Philip Hall in 1936, bridging Zhegalkin's 1927 polynomial representation of Boolean functions to broader combinatorial theory.
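Because the signs vanish over GF(2), the zeta and Möbius transforms coincide, so applying the subset-sum transform twice returns the original function values. The brief sketch below (illustrative names, naive O(4^n) loops) checks this inversion on a random truth table.

```python
from itertools import product
import random

def subset_transform(values, n):
    """Zeta/Möbius transform over GF(2): out[S] = XOR of values[T] for T subset of S."""
    out = []
    for S in range(1 << n):
        acc = 0
        for T in range(1 << n):
            if T & S == T:          # T is a sub-mask of S
                acc ^= values[T]
        out.append(acc)
    return out

n = 3
g = [random.randint(0, 1) for _ in range(1 << n)]   # random truth table
a = subset_transform(g, n)                           # Zhegalkin coefficients
# Over GF(2) the transform is an involution: recovering g confirms inversion.
assert subset_transform(a, n) == g
```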

Applications in Logic and Cryptography

In logic synthesis, Zhegalkin polynomials enable multi-level minimization of Boolean functions over GF(2) by representing them in Reed-Muller form, which facilitates the design of XOR-AND circuits with reduced gate counts compared to traditional sum-of-products forms. This approach, often via fixed-polarity dual Reed-Muller expressions derived from Zhegalkin polynomials, optimizes polarity selection to minimize the number of terms, achieving up to 30% reduction in literals for functions with up to 17 variables while maintaining computational efficiency. For instance, a four-variable function specified as a product of sums can be transformed into a Zhegalkin-based form with fewer terms, lowering area and power consumption in combinational circuits. In cryptography, Zhegalkin polynomials, as the algebraic normal form of Boolean functions, are essential for evaluating the nonlinearity and algebraic degree of S-boxes, which introduce the primary source of nonlinearity in block ciphers such as AES. A high algebraic degree in the Zhegalkin representation indicates resistance to algebraic attacks, since low-degree approximations can be exploited to recover keys; for example, the AES S-box achieves a degree of 7 and nonlinearity of 112, ensuring robustness against linear cryptanalysis by maximizing the distance from affine functions as measured via the Walsh transform. Decomposing a round function of a primitive such as AES into Zhegalkin form allows simulation of algebraic attacks, including fast algebraic methods that target low-degree terms to assess vulnerability. Bent functions, with optimal nonlinearity of 2^{n-1} - 2^{n/2 - 1} and degree at most n/2, exemplify designs resistant to linear attacks when expressed in this form. Zhegalkin polynomials also underpin error-correcting codes, particularly Reed-Muller codes, where codewords are encoded as evaluations of polynomials of degree at most r over GF(2), enabling efficient detection and correction of multiple errors in transmitted data. In Reed-Muller codes RM(r, m), the Zhegalkin form presents the code space as a vector space spanned by the monomials of degree up to r, facilitating majority-logic decoding schemes that correct multiple errors. This connection supports applications in reliable communication systems, where the structure allows compact representation and fast encoding of Boolean functions into codewords. For circuit testing, Zhegalkin-basis representations aid in synthesizing easily testable circuits, where constant faults of type 0 at the outputs of elements can be detected with complete tests of length 1 for arbitrary functions. In irredundant circuits realized over this basis, short single-fault tests for arbitrary stuck-at faults require lengths not exceeding 3 for detection or 5 for fault diagnosis, giving small bounds on test complexity. This enables efficient fault coverage in digital systems, reducing testing overhead while preserving functional integrity. Recent research since the late 2010s highlights the role of Zhegalkin polynomials in quantum computation, where their structure maps to reversible circuits, allowing unitary transformations of truth tables into block-diagonal matrices for n-bit functions via gates such as CNOT and CCNOT. In model interpretability, Zhegalkin polynomials provide transparent representations of decision rules, as in interpretable rule learning where Boolean functions of game states are exactly captured by low-degree polynomials, enabling human-understandable explanations of behavior in domains like card games. More recent work as of 2024 has extended their use to Boolean network models in systems biology, combining Zhegalkin polynomials with SAT solvers to identify context-specific regulatory interactions in biological networks, and to testing of multiaffine properties of Boolean functions.

References

  1. Aristotle, Boole, and Chu Duality since 350 BC. Stanford University, 2015.
  2. Symmetric Polynomial-Like Boolean Functions.
  3. Ryan O'Donnell. Analysis of Boolean Functions.
  4. Boolean Algebra - Basics. Electrical & Computer Engineering lecture notes.
  5. The Finite Field with 2 Elements.
  6. R01 - Groups and Finite Fields. CSAIL lecture notes, 2017.
  7. Polynomial Codes and Some Lore about Polynomials.
  8. Modern Algebra with Applications.
  9. Algebraic Normal Form - An Overview. ScienceDirect Topics.
  10. I. I. Zhegalkin. Mat. Sb. 34:1 (1927), pp. 9-28; see also "Zhegalkin algebra", Encyclopedia of Mathematics, 2020.
  11. Lecture Notes on Cryptographic Boolean Functions.
  12. Claude Carlet. Boolean Functions for Cryptography and Error Correcting Codes. LAGA, University of Paris 8.
  13. arXiv:0907.3465 [quant-ph], 2009.
  14. Algebraic Normal Form of a Bent Function: Properties and Restrictions.
  15. Claude Carlet. Boolean Functions for Cryptography and Coding Theory. LAGA.
  16. On the Computation of the Möbius Transform. ScienceDirect, 2020.
  17. Some Interesting Applications of the Karnaugh Map.
  18. Hacking of the AES with Boolean Functions. SciTePress.
  19. On the Applications of Mobius Inversion in Combinatorial Analysis.
  20. Combinational Logic Synthesis Based on the Dual Form of Reed-Muller Expressions.
  21. Zhegalkin Polynomial and Reed-Muller Expressions.
  22. Synthesis of Easily Testable Circuits over the Zhegalkin Basis in the Case of Constant Faults of Type 0 at Outputs of Elements.
  23. Short Single-Fault Tests for Circuits in the Zhegalkin Basis.
  24. Representation of Boolean Functions in Terms of Quantum Computation.
  25. Learning the Rules of the Game: An Interpretable AI Approach. IEEE Xplore, 2021.