
Communication complexity

Communication complexity is a subfield of theoretical computer science that quantifies the minimal amount of communication needed among distributed parties, each holding private inputs, to collectively compute a function of those inputs. It focuses on the efficiency of interactive protocols in scenarios where information is decentralized, providing insights into fundamental limits of distributed computation. The field was pioneered by Andrew Yao in his 1979 paper, which formalized the two-party model where Alice receives input x \in \{0,1\}^n and Bob receives y \in \{0,1\}^n, and they exchange messages to output f(x,y) for some f: \{0,1\}^n \times \{0,1\}^n \to \{0,1\}. The communication complexity of f is the minimum, over all protocols computing f, of the worst-case number of bits exchanged, with extensions to randomized protocols (allowing shared randomness and bounded error) and nondeterministic protocols (using existential certificates). Multi-party models, such as the number-on-the-forehead variant, generalize this to k players, each of whom sees every input except their own.

Central problems illustrate the field's depth: the equality function EQ, which checks whether x = y, has deterministic complexity \Theta(n) but public-coin randomized complexity O(1); set disjointness DISJ, which determines whether two subsets of [n] intersect, requires \Omega(n) communication even under randomization, a landmark result of Kalyanasundaram and Schnitger (1992), later simplified by Razborov (1992). Lower bound techniques include fooling sets, matrix rank (communication is at least \log_2 \operatorname{rank}(M_f) for the communication matrix M_f), discrepancy methods, and information complexity, often yielding tight bounds such as \Theta(n) for DISJ.

Beyond its own theory, communication complexity yields impossibility results in diverse areas, including polynomial space lower bounds for streaming algorithms estimating frequency moments, depth lower bounds for monotone circuits, and communication barriers in distributed algorithms such as triangle detection. It also connects to circuit complexity, extension complexity, and proof systems, underscoring its role in revealing inherent trade-offs in computational models.

Deterministic Communication Complexity

Formal Definition

In the deterministic two-party communication model, introduced by Yao, Alice receives an input x \in \{0,1\}^n and Bob receives an input y \in \{0,1\}^n, and their goal is to compute the value of a fixed Boolean function f: \{0,1\}^n \times \{0,1\}^n \to \{0,1\} by exchanging bits of information. A deterministic protocol \pi for f is represented as a binary protocol tree: each internal node belongs to one of the parties and is labeled with a function of that party's input (if the node is Alice's, the bit she sends there depends only on x and the node, and similarly for Bob), the two outgoing edges are labeled by the possible bit values (0 or 1), and each leaf is labeled with an output value. For any inputs (x,y), the execution follows the unique path from the root to a leaf determined by the bits sent along the way, and the protocol is correct if the label of the leaf reached always equals f(x,y). The communication cost of a protocol \pi is the worst-case number of bits exchanged over all inputs, which equals the depth of the protocol tree (the length of the longest root-to-leaf path). The deterministic communication complexity of f, denoted D(f), is the minimum cost over all deterministic protocols that correctly compute f. Writing \ell(f) for the minimum number of leaves in any protocol tree for f, one has D(f) \geq \lceil \log_2 \ell(f) \rceil, since a tree of depth d has at most 2^d leaves and each leaf corresponds to a unique transcript of the communication; conversely, a protocol with \ell leaves can be rebalanced into one of depth O(\log \ell), so the two measures agree up to constant factors. The function f can also be represented by its communication matrix M_f, a 2^n \times 2^n matrix whose rows are indexed by Alice's possible inputs x, whose columns are indexed by Bob's inputs y, and whose entries are (M_f)_{x,y} = f(x,y). In any deterministic protocol, the sets of input pairs leading to the same transcript form combinatorial rectangles in this matrix: a subset R \subseteq \{0,1\}^n \times \{0,1\}^n is a rectangle if it is a Cartesian product A \times B for some A \subseteq \{0,1\}^n and B \subseteq \{0,1\}^n. Moreover, each rectangle corresponding to a complete transcript must be monochromatic, meaning f(x,y) is constant for all (x,y) \in R. A protocol of cost D(f) thus partitions the entire matrix into at most 2^{D(f)} monochromatic rectangles. This rectangularity property is a cornerstone of the model, linking the combinatorial structure of M_f to the communication requirements.
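The rectangle property lends itself to direct experimentation. The following minimal sketch (an illustration, not taken from any source above; the bit-by-bit equality protocol and all parameters are our own choices) runs a protocol on every input pair and verifies that the inputs reaching each transcript form a monochromatic combinatorial rectangle:

```python
import itertools

# Toy deterministic protocol for EQ on n-bit inputs: Alice sends x bit by
# bit; after each bit Bob replies '1' (mismatch, halt with output 0) or '0'
# (continue). We check the rectangle property of transcripts exhaustively.

n = 3
inputs = [''.join(bits) for bits in itertools.product('01', repeat=n)]

def protocol(x, y):
    """Returns (transcript, output) of the bit-by-bit equality protocol."""
    transcript = []
    for i in range(n):
        transcript.append(x[i])                  # Alice's message
        reply = '1' if x[i] != y[i] else '0'
        transcript.append(reply)                 # Bob's message
        if reply == '1':
            return ''.join(transcript), 0
    return ''.join(transcript), 1                # all bits matched

leaves = {}
for x in inputs:
    for y in inputs:
        t, out = protocol(x, y)
        leaves.setdefault(t, []).append((x, y, out))

for t, reached in leaves.items():
    rows = {x for x, _, _ in reached}
    cols = {y for _, y, _ in reached}
    pairs = {(x, y) for x, y, _ in reached}
    assert pairs == {(x, y) for x in rows for y in cols}   # rectangle
    assert len({out for _, _, out in reached}) == 1        # monochromatic
print(f"{len(leaves)} transcripts, all monochromatic rectangles")
```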

Basic Examples

One canonical example in deterministic communication complexity is the equality function, denoted EQ_n, defined on inputs x, y ∈ {0,1}^n by EQ_n(x, y) = 1 if and only if x = y. A simple deterministic protocol has Alice send her entire input x (n bits), after which Bob compares it with y and announces the answer with one final bit, for a worst-case cost of n + 1 bits. This is optimal. To see the lower bound, consider the communication matrix M for EQ_n, a 2^n × 2^n matrix with rows indexed by the possible values of x, columns by the values of y, and M_{x,y} = 1 precisely when x = y (i.e., along the diagonal) and 0 otherwise. Any deterministic protocol induces a partition of this matrix into monochromatic rectangles, where each rectangle consists of rows R ⊆ {0,1}^n and columns C ⊆ {0,1}^n such that M_{x,y} is constant for all x ∈ R and y ∈ C. No two diagonal 1-entries can share a monochromatic rectangle: a rectangle containing (x,x) and (x',x') with x ≠ x' would also contain the 0-entry (x,x'). Thus at least 2^n rectangles are needed to cover the 1-entries alone, plus at least one more for the 0-entries, and since a protocol exchanging c bits partitions the matrix into at most 2^c rectangles, c ≥ log_2(2^n + 1) > n; hence D(EQ_n) = n + 1. A closely related example is the conjunction of coordinate equalities, with Alice holding x = (x_1, ..., x_n) ∈ {0,1}^n, Bob holding y = (y_1, ..., y_n) ∈ {0,1}^n, and output ∧_{i=1}^n (x_i = y_i); this function is identical to EQ_n, so its communication matrix and the resulting lower bound are the same. Another fundamental example is the disjointness function DISJ_n, where Alice holds a subset A ⊆ [n] represented as its characteristic vector x ∈ {0,1}^n, Bob holds B ⊆ [n] as y ∈ {0,1}^n, and DISJ_n(x, y) = 1 if and only if A ∩ B = ∅. The same trivial protocol, with Alice sending x and Bob announcing the answer, uses n + 1 bits in the worst case, and the deterministic communication complexity of DISJ_n is Θ(n). The lower bound follows from the structure of the 2^n × 2^n communication matrix M, where M_{x,y} = 1 if x and y have no overlapping 1-bits. Over the reals this matrix has full rank 2^n: it is the n-fold tensor power of the nonsingular 2 × 2 matrix [[1,1],[1,0]]. In any partition into monochromatic rectangles, M equals the sum of the indicator matrices of the 1-rectangles, each of which has rank at most one, so a protocol exchanging c bits implies rank(M) ≤ 2^c; hence c ≥ n. Randomized protocols, using hashing or fingerprinting, can reduce the complexity of EQ_n to O(1) bits with public randomness, but these fall outside the deterministic model.
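The rank argument for DISJ_n is easy to check numerically for small n. A minimal sketch, assuming NumPy is available (the helper name disj_matrix is ours):

```python
import itertools
import numpy as np

# Numerical check of the rank argument above: the communication matrix of
# DISJ_n has full rank 2^n over the reals, giving the log-rank lower bound
# D(DISJ_n) >= n.

def disj_matrix(n):
    vecs = list(itertools.product([0, 1], repeat=n))
    return np.array([[int(not any(a and b for a, b in zip(x, y)))
                      for y in vecs] for x in vecs])

for n in range(1, 6):
    M = disj_matrix(n)
    r = np.linalg.matrix_rank(M)
    assert r == 2 ** n            # full rank, as claimed
    print(f"n={n}: rank(DISJ matrix) = {r} = 2^{n}")
```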

Key Theorems and Bounds

One fundamental lower bound technique in deterministic communication complexity is the fooling set method. For a Boolean function f: \mathcal{X} \times \mathcal{Y} \to \{0,1\}, a fooling set F \subseteq \mathcal{X} \times \mathcal{Y} is a set of pairs such that f(x_i, y_i) = z for all (x_i, y_i) \in F (where z \in \{0,1\}), and for any distinct (x_i, y_i), (x_j, y_j) \in F, at least one of the cross pairs (x_i, y_j), (x_j, y_i) satisfies f = 1-z. This ensures that no two elements of F can lie in the same monochromatic rectangle of a deterministic protocol, so the protocol tree needs at least |F| leaves to separate them. The size of the largest fooling set, denoted \mathsf{fool}(f), therefore satisfies D(f) \geq \log_2 \mathsf{fool}(f). This method was introduced by Aho, Ullman, and Yannakakis in their work on information transfer in VLSI circuits. A canonical application is the equality function \mathsf{EQ}_n: \{0,1\}^n \times \{0,1\}^n \to \{0,1\}, defined by \mathsf{EQ}_n(x,y) = 1 if and only if x = y. The set F = \{(x,x) \mid x \in \{0,1\}^n\} forms a 1-fooling set of size 2^n, since \mathsf{EQ}_n(x,x) = 1 for all such pairs, but for x \neq x', \mathsf{EQ}_n(x,x') = 0. Thus D(\mathsf{EQ}_n) \geq n, matching the trivial protocol described above up to one bit (in fact, D(\mathsf{EQ}_n) = n + 1). Another key lower bound arises from the linear-algebra method using the communication matrix M_f, whose rows index Alice's inputs, whose columns index Bob's inputs, and whose entries are f(x,y). The deterministic communication complexity satisfies D(f) \geq \log_2 \operatorname{rank}(M_f), where the rank may be taken over the reals or any other field: a protocol exchanging c bits partitions M_f into at most 2^c monochromatic rectangles, each of rank at most one, so \operatorname{rank}(M_f) \leq 2^c. In the other direction, only much weaker upper bounds are known, such as D(f) \leq \operatorname{rank}(M_f) + 1, obtained via a protocol in which Alice sends her row's values on a fixed set of \operatorname{rank}(M_f) linearly independent columns (which determine her entire row) and Bob replies with the answer; whether D(f) is polynomial in \log \operatorname{rank}(M_f) is the log-rank conjecture, discussed later. The rank technique is particularly effective for functions whose matrices have provably high rank, such as those arising from geometric problems or combinatorial designs, and it originates in early linear-algebraic analyses of communication protocols. For additional lower bounds in the deterministic setting, the corruption (or rectangle) bound argues distributionally: fix a distribution \mu on inputs; if every rectangle that is nearly 1-monochromatic (carrying at most a small fraction of 0-mass under \mu) must have \mu-measure at most \delta, then any correct protocol requires \Omega(\log(1/\delta)) bits, since its 1-leaves must cover most of f^{-1}(1) with large, nearly monochromatic rectangles. Unlike the rank bound, corruption extends to bounded-error randomized protocols, and its relationship to the log-rank measure is studied in work bridging linear algebra and protocol analysis. Trivially, for any f: \{0,1\}^n \times \{0,1\}^n \to \{0,1\}, D(f) \leq n + 1: Alice sends her n-bit input to Bob, who computes f(x,y) and sends back the 1-bit output. This universal protocol establishes an absolute upper bound, though much tighter bounds exist for many specific functions.
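The fooling-set condition can be verified by brute force on small instances. A sketch, where the checker is_fooling_set is an illustrative helper of our own rather than a standard routine:

```python
import itertools
from math import ceil, log2

# Brute-force check of the fooling-set condition, applied to the diagonal
# fooling set for EQ_n described above.

def is_fooling_set(f, pairs, z):
    if any(f(x, y) != z for x, y in pairs):
        return False
    for (x1, y1), (x2, y2) in itertools.combinations(pairs, 2):
        # At least one cross pair must break monochromaticity.
        if f(x1, y2) != 1 - z and f(x2, y1) != 1 - z:
            return False
    return True

n = 4
strings = [''.join(b) for b in itertools.product('01', repeat=n)]
EQ = lambda x, y: int(x == y)
diagonal = [(x, x) for x in strings]

assert is_fooling_set(EQ, diagonal, z=1)
print("D(EQ_n) >=", ceil(log2(len(diagonal))), "for n =", n)
```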

Randomized Communication Complexity

Definitions and Models

In randomized communication complexity, protocols allow Alice and Bob to use random bits to compute a function f: X \times Y \to \{0,1\}, where the inputs are distributed between them, and the output must be correct with high probability over the randomness. A randomized protocol specifies the parties' messages as functions of their inputs and random bits, ensuring correctness with probability at least 2/3 for every input pair, where the probability is taken over the random bits used in the protocol. This model extends the deterministic setting, which can be viewed as the special case with no randomness. The bounded-error randomized communication complexity, denoted R(f), is the minimum communication cost over all randomized protocols that compute f with error at most 1/3 on every input. More generally, R^\epsilon(f) denotes the minimum cost for protocols with error bounded by \epsilon < 1/2. These definitions typically consider two-sided error, where the protocol may err on both yes and no instances of f. In contrast, the zero-error (Las Vegas) randomized complexity R_0(f) requires protocols that are always correct on every input, with randomness permitted only to reduce the worst-case communication; it satisfies R(f) \leq R_0(f) \leq D(f), where D(f) is the deterministic complexity. Randomized protocols can employ either public coins, where Alice and Bob share a common random string visible to both, or private coins, where each party generates independent random bits unknown to the other. The public-coin model often simplifies analysis, since a public-coin protocol is precisely a probability distribution over deterministic protocols. By Yao's minimax principle, the public-coin bounded-error complexity satisfies R^{pub}_\epsilon(f) = \max_\mu \min_\pi c(\pi), where the maximum is over input distributions \mu, the minimum is over deterministic protocols \pi that err with probability at most \epsilon when the inputs are drawn from \mu, and c(\pi) is the worst-case communication cost of \pi.

Examples and Protocols

One prominent example of how randomness can drastically reduce communication is the equality function EQ_n, where Alice holds an n-bit string x and Bob holds y, and they must determine whether x = y. In the deterministic setting the complexity is Θ(n), as Alice may essentially need to send her entire input. With randomness, a fingerprinting protocol achieves O(\log n) communication: the parties select a random prime p from a suitable range (e.g., [n^2, 2n^2]), Alice sends the O(\log n)-bit fingerprint x \bmod p (interpreting x as an integer), and Bob checks whether it matches y \bmod p. Unequal inputs collide only when p divides |x - y|, which happens with probability O(1/n) over the choice of p, and repetition reduces the error further if needed. With public coins the complexity drops to O(1): the parties compare a constant number of random inner products ⟨x, r⟩ mod 2, each of which differs with probability 1/2 when x ≠ y, so R^{pub}(EQ_n) = O(1). Another key example is the Gap-Hamming Distance (GHD_n) problem, a promise problem where Alice and Bob, holding x, y ∈ {0,1}^n, must distinguish whether the Hamming distance satisfies d(x,y) ≤ n/2 - √n (low case) or d(x,y) ≥ n/2 + √n (high case). The randomized communication complexity is Θ(n). A protocol matching the O(n) upper bound uses public randomness to generate s = Θ(n) sparse random vectors r_j ∈ {0,1}^n (each coordinate 1 with probability ≈ 1/n), for j = 1 to s. Alice sends the s-bit string of parities ⟨x, r_j⟩ mod 2 (one bit each). Bob computes ⟨y, r_j⟩ mod 2 locally and counts the positions where the parities disagree, which happens exactly when ⟨x ⊕ y, r_j⟩ mod 2 = 1; the probability of disagreement is a monotone function of d(x,y), so the observed fraction approximates d(x,y)/n accurately enough to resolve the gap, by concentration of the s independent parities. The total communication is O(n) bits with constant error probability. For set disjointness DISJ_n, where Alice and Bob hold subsets A, B ⊆ [n] (represented as characteristic vectors in {0,1}^n) and must decide whether A ∩ B = ∅, the deterministic complexity is Θ(n). A naive randomized sampling protocol uses public randomness to select O(√n) random indices of [n]; Alice sends the O(√n \log n)-bit list of sampled indices that lie in A (encoding each index in \log n bits), and Bob checks them against B. Birthday-paradox-style collisions detect large intersections, but an intersection consisting of a single element is missed with high probability, so the error is not bounded. In fact, randomness does not help asymptotically here: R(DISJ_n) = Θ(n), with the O(n) upper bound achieved trivially by Alice sending her entire set. The Ω(n) lower bound, due to Kalyanasundaram and Schnitger and simplified by Razborov, rests on corruption and information-complexity arguments over a carefully chosen non-product distribution; notably, the discrepancy method is too weak here, since DISJ has discrepancy 1/n^{O(1)} under every distribution, so the relation R(f) = Ω(\log(1/\mathrm{disc}(f))) certifies only an Ω(\log n) bound.
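The prime-fingerprinting protocol for EQ_n sketched above can be made concrete in a few lines. The following is a runnable illustration; the prime range [n^2, 2n^2], the trial-division primality test, and the trial count are arbitrary demo choices:

```python
import random

# Fingerprinting protocol for EQ: both parties use shared randomness to
# pick the same prime p in [n^2, 2n^2]; Alice sends x mod p (O(log n) bits).

n = 64

def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))

PRIMES = [m for m in range(n * n, 2 * n * n) if is_prime(m)]

def eq_protocol(x_bits, y_bits, shared_rng):
    p = shared_rng.choice(PRIMES)          # shared random prime
    fingerprint = int(x_bits, 2) % p       # Alice's O(log n)-bit message
    return int(fingerprint == int(y_bits, 2) % p)

rng = random.Random(0)
trials, errors = 2000, 0
for _ in range(trials):
    x = format(rng.getrandbits(n), f'0{n}b')
    y = format(rng.getrandbits(n), f'0{n}b')
    if x != y:
        errors += eq_protocol(x, y, rng)   # a false "equal" is an error
print("empirical error on unequal inputs:", errors / trials)
```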

Public vs Private Randomness

In the public-coin model of randomized communication complexity, Alice and Bob have access to a shared random string, which they can use jointly during the protocol execution; this shared randomness lets them coordinate random choices without additional communication. In contrast, the private-coin model provides each party with its own independent source of randomness, which the other party cannot access directly. Public coins are at least as powerful as private coins, since the parties can simulate private randomness by partitioning the shared string into separate portions for each. The public-coin model can achieve strictly lower communication in some cases: for the equality function on n-bit inputs, a public-coin protocol uses O(1) communication by having the parties compare short hashes derived from the shared random string, whereas private-coin protocols for equality, relying on independent hashing (such as Alice selecting and sending a random prime along with her input modulo that prime), require \Theta(\log n) communication. Despite such gaps, the models are equivalent up to a logarithmic additive term, as established by Newman's theorem: any public-coin protocol with communication cost C can be transformed into a private-coin protocol with cost at most C + O(\log n), where n is the input size. The proof replaces the shared random string with limited randomness: a probabilistic (Chernoff-bound) argument shows that there exists a fixed collection of O(n) random strings such that sampling uniformly from this collection changes the protocol's error on every input by only a small constant; one party can therefore pick an index into the collection with private coins and announce it using O(\log n) bits. This simulation shows that private coins can emulate public coins with only a small overhead. For most questions in communication complexity, one can therefore work in the simpler shared-randomness model without loss of generality up to logarithmic factors. Private coins, however, matter in information-theoretic settings, where the extra shared randomness of public-coin protocols could otherwise leak information about the inputs beyond the communicated bits.
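The sampling argument behind Newman's theorem can also be observed empirically. The toy experiment below (all parameters are illustrative assumptions of ours) replaces full public randomness in a parity-based equality protocol with a fixed pool of t = O(n) seed tuples and spot-checks that the per-input error stays close to the ideal:

```python
import random

# Newman-style derandomization demo: a fixed pool of t = O(n) shared seed
# tuples, indexable with O(log t) private-coin bits, nearly preserves the
# error of a public-coin protocol (k random parities for EQ on n bits).

rng = random.Random(1)
n, k = 16, 8                  # input length; parities per protocol run
t = 10 * n                    # pool size; the constant 10 is arbitrary
pool = [[rng.getrandbits(n) for _ in range(k)] for _ in range(t)]

def declares_equal(x, y, seeds):
    parity = lambda v, r: bin(v & r).count('1') % 2
    return all(parity(x, r) == parity(y, r) for r in seeds)

worst = 0.0
for _ in range(200):          # spot-check random unequal input pairs
    x, y = rng.getrandbits(n), rng.getrandbits(n)
    if x == y:
        continue
    err = sum(declares_equal(x, y, seeds) for seeds in pool) / t
    worst = max(worst, err)
print(f"worst error over the pool: {worst:.4f}  (ideal: {2 ** -k:.4f})")
```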

Advanced Concepts

In randomized communication complexity, a key equivalence between models comes from Newman's theorem, which shows that any public-coin protocol can be simulated by a private-coin protocol with only an additive O(\log n) increase in communication; lower bounds proved in one model therefore transfer to the other up to logarithmic terms. Distributional complexity provides the fundamental lower bound tool for randomized protocols. For a function f, a distribution \mu over inputs, and an error parameter \varepsilon > 0, the \varepsilon-distributional complexity {\rm DC}^\varepsilon(\mu, f) is the minimum cost of a deterministic protocol that errs with probability at most \varepsilon when the inputs are drawn from \mu. By Yao's minimax principle, the public-coin randomized complexity satisfies R^{\text{pub}}_\varepsilon(f) = \max_\mu {\rm DC}^{\varepsilon}(\mu, f), reducing the problem of proving randomized lower bounds to exhibiting a hard input distribution. Information complexity offers a finer-grained measure that captures the amount of information revealed during communication, rather than the number of bits sent, and yields direct lower bounds on randomized complexity. For a protocol \pi and input distribution \mu, the (external) information cost is the mutual information I(XY; \Pi), where X, Y are Alice's and Bob's inputs and \Pi is the protocol transcript; the \varepsilon-error information complexity {\rm IC}^\varepsilon(\mu, f) of f under \mu is the infimum of this quantity over all \varepsilon-error protocols, and it lower bounds the randomized communication complexity R(f), since each transmitted bit can reveal at most one bit of information. A significant property of information complexity is its exact direct-sum (additivity) behavior: for k independent copies of f, denoted f^k, one has {\rm IC}(f^k) = k \cdot {\rm IC}(f), so solving multiple instances requires proportionally more information, unlike raw communication, which can sometimes be shared across instances. Discrepancy-based arguments complement these tools: the discrepancy of f is the maximum imbalance between 0- and 1-entries over all combinatorial rectangles (weighted by a distribution), and low discrepancy forces every low-communication protocol, whose transcripts partition the inputs into large rectangles, to err nearly half the time, giving R(f) = \Omega(\log(1/\mathrm{disc}(f))). Hardness amplification via direct-product theorems and XOR lemmas can boost mildly hard distributions into strongly hard ones for such rectangle-based arguments; information-complexity direct-sum arguments, in turn, underlie the \Omega(n) randomized lower bound for set disjointness.
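For intuition, external and internal information cost can be computed exactly for tiny protocols. The following sketch (the helper functions are ad hoc illustrations of ours) evaluates both for the textbook protocol for AND on one-bit inputs—Alice sends x; Bob sends y only if x = 1—under the uniform distribution:

```python
from collections import Counter
from math import log2

def transcript(x, y):
    # Alice announces x; Bob replies with y only when x = 1.
    return ('0',) if x == 0 else ('1', str(y))

samples = [(x, y, transcript(x, y)) for x in (0, 1) for y in (0, 1)]
X = [s[0] for s in samples]
Y = [s[1] for s in samples]
T = [s[2] for s in samples]

def joint(*vars_):
    return list(zip(*vars_))

def H(var):
    """Empirical entropy of a variable listed once per (uniform) sample."""
    counts = Counter(var)
    total = len(var)
    return -sum(c / total * log2(c / total) for c in counts.values())

def I(a, b, cond=None):
    """Mutual information I(a; b), or conditional I(a; b | cond)."""
    if cond is None:
        return H(a) + H(b) - H(joint(a, b))
    return (H(joint(a, cond)) + H(joint(b, cond))
            - H(joint(a, b, cond)) - H(cond))

external = I(joint(X, Y), T)                    # I(XY ; transcript)
internal = I(X, T, cond=Y) + I(Y, T, cond=X)    # what each party learns
print("external IC:", external, " internal IC:", internal)
```

Both quantities come out to 1.5 bits for this protocol; in general the internal cost never exceeds the external cost.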

Quantum Communication Complexity

Basics and Protocols

In the quantum communication complexity model, two parties—Alice holding input x and Bob holding input y—aim to compute a Boolean function f(x, y) by exchanging qubits over a quantum channel. Protocols consist of alternating rounds in which each party performs local unitary operations on its qubits (including registers encoding the input and any communication qubits), then sends one or more qubits to the other party, with a final measurement producing the output. The quantum communication complexity Q(f) is defined as the minimum number of qubits exchanged in the worst case over all inputs, such that the protocol computes f(x, y) correctly with probability at least 2/3. Unlike classical bits, qubits enable superposition, allowing parties to process multiple possibilities in parallel within a single message, which can lead to more efficient encodings of input-dependent information. Protocols may also incorporate shared entanglement, such as EPR pairs, as a prior resource that does not count toward the communication cost but assists in the computation; this variant is often denoted Q^*(f). For instance, shared EPR pairs enable superdense coding, whereby two classical bits can be transmitted using one qubit. A foundational example is the equality function EQ_n(x, y) = 1 if x = y and 0 otherwise, for n-bit strings. The quantum fingerprinting protocol provides an efficient solution in the simultaneous message model (where both parties send a single message to a referee without interacting), requiring only O(\log n) qubits per party. Alice and Bob each prepare a fingerprint state |\phi_x\rangle = \frac{1}{\sqrt{m}} \sum_{i=1}^m |i\rangle |E(x)_i\rangle, where E(x) is an m-bit encoding of x under an error-correcting code with m = O(n), chosen so that fingerprints of unequal inputs have inner product bounded away from 1; they send these states to the referee, who applies a swap test (a controlled-SWAP gate followed by Hadamard operations) to estimate the inner product, distinguishing equal from unequal inputs with error that can be made exponentially small using multiple copies. This demonstrates superposition's role in parallelizing the comparison. Quantum state teleportation offers another basic primitive for protocols: to transmit an unknown qubit state, Alice performs a Bell measurement on the state and half of a shared EPR pair, sends the two classical outcome bits to Bob, and Bob applies a correction unitary to his half of the pair to recover the state. While teleportation uses classical communication atop shared entanglement, in pure quantum protocols direct qubit exchange simulates such transfers without the classical overhead, enabling tasks like distributed quantum gates. For analyzing multi-round protocols, the Choi-Jamiołkowski isomorphism represents each communication round as a channel corresponding to a bipartite state on an extended Hilbert space, aiding in bounding the overall complexity via channel properties like complete positivity and trace preservation.
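The acceptance statistics of the swap test at the heart of quantum fingerprinting can be simulated classically, since only inner products of fingerprint states matter. A sketch, where the random-parity fingerprint construction and the dimensions are illustrative stand-ins for a proper error-correcting code:

```python
import numpy as np

# SWAP-test statistics for quantum fingerprinting: fingerprints are unit
# vectors of random parities, and the test accepts with probability
# (1 + <f_x, f_y>^2) / 2 -- about 1 for equal inputs, about 1/2 otherwise.

rng = np.random.default_rng(0)
n, m = 64, 256                        # input bits; fingerprint dimension O(n)
R = rng.integers(0, 2, size=(m, n))   # shared random parity checks

def fingerprint(x_bits):
    x = np.array([int(b) for b in x_bits])
    signs = (-1.0) ** ((R @ x) % 2)   # entries (+/-1)^<r_i, x>
    return signs / np.sqrt(m)         # unit vector encoding x

def swap_test_accept(fx, fy):
    return 0.5 * (1.0 + np.dot(fx, fy) ** 2)

x = '10' * 32
y = '10' * 31 + '11'                  # differs from x in one bit
print("equal:  ", swap_test_accept(fingerprint(x), fingerprint(x)))
print("unequal:", swap_test_accept(fingerprint(x), fingerprint(y)))
```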

Comparisons with Classical

Quantum communication complexity offers potential advantages over classical models, but the extent of these benefits varies by function and by model. For any communication task defined by a Boolean function f, the bounded-error quantum complexity satisfies Q(f) \leq D(f), where D(f) is the deterministic complexity, and Q(f) = O(R(f) + \log n), where R(f) is the bounded-error public-coin randomized complexity (the additive \log n accounts for replacing public coins, as in Newman's theorem). Quantum protocols can nonetheless achieve separations from classical randomized protocols for certain functions, especially in restricted models such as simultaneous message passing. A key example illustrating a quadratic quantum advantage is the Gap-Hamming Distance problem, where Alice and Bob, holding n-bit strings x and y (or equivalently \pm 1 vectors), must determine whether the Hamming distance is at most n/2 - \sqrt{n} (output 0) or at least n/2 + \sqrt{n} (output 1), with arbitrary output allowed in the gap. The classical randomized complexity is R(\text{GHD}) = \Omega(n), while a quantum protocol achieves Q(\text{GHD}) = O(\sqrt{n} \log n) qubits, using quantum amplitude estimation to approximate the inner product \langle x, y \rangle, which determines the Hamming distance; Alice prepares superpositions based on her input and exchanges them with Bob, whose measurements estimate the overlap accurately enough to distinguish the promise cases. In contrast, for the equality (EQ) function, where the parties check whether their n-bit inputs are identical, quantum protocols do not provide an asymptotic improvement over classical randomized ones in the standard two-way model: R(\text{EQ}) = O(\log n) using classical fingerprinting with random hashing, and Q(\text{EQ}) = O(\log n) via analogous quantum fingerprinting schemes that send short quantum fingerprints for comparison. In the simultaneous message passing model without shared randomness, however, quantum fingerprinting achieves O(\log n) qubits, exponentially better than the \Theta(\sqrt{n}) required by classical private-coin simultaneous protocols. Overall, quantum communication offers no better asymptotic scaling than O(\log n) for equality in the two-way setting. The shared-entanglement model, denoted Q_E(f), allows parties to pre-share EPR pairs, potentially reducing communication further. In this model Q_E(f) \leq Q(f), and for certain relational problems shared entanglement is known to yield large savings over protocols without it. However, Holevo's theorem limits the power of unmeasured quantum communication: transmitting n qubits conveys at most n classical bits of accessible information, implying that quantum protocols cannot asymptotically outperform classical ones for tasks that inherently require transferring \Omega(n) bits of information.

Known Separations and Bounds

One of the earliest significant separations between quantum and classical communication complexity was established by Buhrman, Cleve, and Wigderson for the disjointness function: a distributed Grover search yields a quantum protocol using O(\sqrt{n} \log n) qubits, a quadratic improvement over the classical randomized \Omega(n) bound. Exponential quantum-classical separations are known for partial (promise) functions and relational tasks, whereas for total functions only polynomial separations have been proven. Lower bounds for the quantum communication complexity of the disjointness function DISJ, where two parties check if their sets intersect, have been proven using algebraic methods. The polynomial method, relying on the approximate degree of the function, establishes Q(DISJ) = \Omega(\sqrt{n}), since any quantum protocol induces a low-degree polynomial approximating the function. Complementary techniques based on the corruption bound and generalized discrepancy yield the same \Omega(\sqrt{n}) lower bound by showing that quantum protocols cannot efficiently separate the hard input pairs. These bounds confirm that even with quantum resources DISJ remains polynomially hard, matching the Grover-based upper bound up to logarithmic factors. An influential lower bound technique for quantum protocols, introduced by Ambainis, reformulates the adversary method using matrices to capture the progress of quantum states across protocol steps. This approach analyzes the spectral structure of a matrix encoding input relations, deriving bounds on how quickly quantum correlations can evolve, and it yields tight lower bounds for functions like element distinctness in query models that extend to communication settings. The method's strength lies in its ability to handle constraints on density matrices, providing a unified framework that complements polynomial-based bounds and has been applied to prove \Omega(\sqrt{n}) bounds for several total functions in quantum communication. In multi-party settings, recent advancements have improved lower bounds for quantum protocols. For the three-party disjointness problem, techniques combining information complexity with min-entropy arguments establish Q(DISJ) = \Omega(n^{1/3}) in the number-on-the-forehead model, improving prior polynomial bounds by exploiting the distributed nature of inputs across parties. These 2025 results underscore the scaling challenges for quantum multi-party computation, where communication grows sublinearly but remains superconstant. For entanglement-assisted multi-party protocols, a 2024 result demonstrates a strong separation for specific function computation tasks, where classical protocols require \Omega(n) communication to coordinate among n parties, while quantum protocols with shared entanglement achieve O(\log n) by leveraging pre-shared quantum states for efficient broadcasting and verification. This exponential gap arises in scenarios like the generalized inner product function.
It is conjectured that quantum communication complexity cannot simulate randomized protocols with only a constant factor advantage for all functions, implying potential superpolynomial separations beyond current bounds. While this remains open, partial results, such as exponential separations for one-way protocols on partial functions, support the conjecture by showing quantum advantages that exceed constant factors in restricted models.

Alternative Models

Nondeterministic Communication Complexity

In the nondeterministic model of communication complexity, a powerful external prover sends a certificate c to both parties, who hold inputs x and y, respectively. Alice and Bob then execute a deterministic protocol using their inputs and the common certificate to verify whether f(x, y) = 1. The parties accept only when c is a valid witness for a yes-instance (i.e., f(x, y) = 1): for every yes-instance there exists at least one certificate c that causes acceptance, ensuring completeness; for no-instances (f(x, y) = 0) the parties reject for all possible c, ensuring soundness. This setup exhibits one-sided error, as yes-instances are perfectly verified while no-instances are always rejected. The nondeterministic complexity N^1(f) is the minimum, over all certificate schemes and verification protocols satisfying these properties, of the certificate length plus the worst-case verification communication. The co-nondeterministic complexity N^0(f) is defined analogously for verifying f(x, y) = 0, with completeness for no-instances and soundness for yes-instances. The total nondeterministic complexity is then \mathrm{NT}(f) = \max( N^1(f), N^0(f) ). This model captures problems asymmetrically, allowing reduced communication for functions where one output value has a structured set of witnesses; seminal work formalized the resulting classes in a hierarchy analogous to NP and coNP in computational complexity. Combinatorially, N^1(f) relates to the 1-cover number \chi_1(f), the minimum number of 1-monochromatic rectangles needed to cover all 1-entries of the 2^n \times 2^n communication matrix of f, where a 1-monochromatic rectangle is a combinatorial rectangle A \times B containing only 1-entries. Specifically, N^1(f) = \lceil \log_2 \chi_1(f) \rceil + O(1), as each certificate corresponds to selecting one such rectangle, and verification reduces to local membership checks (Alice checks x \in A, Bob checks y \in B) followed by one bit of communication to combine the answers. Similarly, N^0(f) = \lceil \log_2 \chi_0(f) \rceil + O(1), using 0-monochromatic rectangle covers of the 0-entries. This equivalence highlights how nondeterminism leverages structured covers to bound communication. A representative example is the equality function \mathrm{EQ}_n : \{0,1\}^n \times \{0,1\}^n \to \{0,1\}, where f(x, y) = 1 if and only if x = y. To verify f(x, y) = 1, the prover sends a certificate c encoding the claimed common value; Alice checks x = c, Bob checks y = c, they combine their answers with one bit of communication, and accept if both match (which implies x = y). The verification costs O(1) bits, but there are 2^n possible certificates (one per possible input string), corresponding to \chi_1(\mathrm{EQ}_n) = 2^n singleton rectangles on the diagonal, yielding N^1(\mathrm{EQ}_n) = n + O(1). To verify f(x, y) = 0 (i.e., x \neq y), the prover sends an index i \in [n] where x_i \neq y_i; Alice sends her i-th bit to Bob (1 bit), who accepts if it differs from y_i. Only about 2n certificates (an index together with a bit value) are needed, giving \chi_0(\mathrm{EQ}_n) = \Theta(n) and N^0(\mathrm{EQ}_n) = \log_2 n + O(1). Thus \mathrm{NT}(\mathrm{EQ}_n) = \Theta(n).
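Both certificate schemes for EQ_n can be checked exhaustively for small n. A minimal sketch (the helper names are ours):

```python
import itertools
from math import ceil, log2

# Certificate protocols for EQ_n: completeness and soundness are checked
# exhaustively over all input pairs for a small n.

n = 4
strings = [''.join(b) for b in itertools.product('01', repeat=n)]

def verify_equal(x, y, c):
    # Certificate for f = 1: the claimed common value c. Alice checks
    # x == c, Bob checks y == c; one bit of communication combines them.
    return x == c and y == c

def verify_unequal(x, y, i):
    # Certificate for f = 0: an index i; Alice sends x[i], Bob compares.
    return x[i] != y[i]

for x in strings:
    for y in strings:
        has_eq_cert = any(verify_equal(x, y, c) for c in strings)
        has_neq_cert = any(verify_unequal(x, y, i) for i in range(n))
        assert has_eq_cert == (x == y)     # completeness and soundness
        assert has_neq_cert == (x != y)

print("N^1(EQ) ~", n, "bits;  N^0(EQ) ~", ceil(log2(n)), "+ O(1) bits")
```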

Unbounded-Error Communication Complexity

In the unbounded-error model of communication complexity, protocols must compute a function f: X \times Y \to \{0,1\} with success probability strictly greater than 1/2 on every input, allowing the error to approach but never equal 1/2. This model contrasts with bounded-error variants by permitting arbitrarily small advantages over random guessing, leading to potentially much lower communication costs. The complexity measure U(f) denotes the minimum communication cost of such a private-coin protocol. A key characterization, due to Paturi and Simon, links U(f) to the sign-rank of the sign matrix of f: writing \operatorname{rank}^\pm(f) for the minimum rank of a real matrix whose entry signs match those of the \pm 1 matrix of f, one has U(f) = \log \operatorname{rank}^\pm(f) \pm O(1). Unbounded-error protocols can be dramatically cheaper than deterministic or even bounded-error randomized ones: the equality matrix has constant sign-rank, so U(\mathrm{EQ}_n) = O(1). Proving lower bounds therefore requires sign-rank lower bounds; Forster's theorem shows that the inner product matrix has sign-rank 2^{\Omega(n)}, giving U(\mathrm{IP}_n) = \Omega(n). Promise problems of the Gap-Hamming type are similarly cheap in this model: if the inputs x, y \in \{-1,1\}^n are promised to satisfy \langle x, y \rangle \geq \sqrt{n} or \langle x, y \rangle \leq -\sqrt{n}, then Alice can sample a random coordinate i, send (i, x_i) in O(\log n) bits, and have Bob answer according to the sign of x_i y_i, achieving advantage \Theta(|\langle x, y \rangle|/n) > 0 over random guessing. The Merlin-Arthur (MA^\text{cc}) model extends randomized protocols with nondeterminism: Merlin sends a proof string to both parties, after which Alice and Bob engage in public-coin randomized communication to verify f(x,y) = 1 with completeness close to 1 and soundness error bounded below 1/2. The complexity MA(f) is the proof length plus the verification communication. Proofs can shorten verification dramatically for structured functions, but strong lower bounds are known: for instance, MA(\mathrm{DISJ}) = \Omega(\sqrt{n}). Placing the public randomness before Merlin's proof, followed by verification with success probability > 1/2 + \epsilon, yields the Arthur-Merlin (AM^\text{cc}) model, for which proving strong lower bounds remains a notable open problem.

Advanced Topics

Lifting Theorems

Lifting theorems in communication complexity transfer lower bounds from communication protocols to hardness results in circuit complexity and proof complexity by composing a "hard" outer function with small inner "gadget" functions that encode the target model's basic operations. The composition, typically of the form (f \circ g)(x, y) = f(g(x_1, y_1), \ldots, g(x_m, y_m)) for a two-party gadget g, lets the communication complexity of the composed function inherit the hardness of the outer function f. For example, the Gap-Hamming Distance (GHD) problem exhibits an Ω(n) randomized communication lower bound, which can serve as the hard outer problem in such compositions to establish lower bounds in other models. A seminal result in this area is the theorem of Edmonds, Impagliazzo, Rudich, and Sgall (2001), which shows that an Ω(n) communication gap lifts to an Ω(log n / log log n) lower bound on the depth of general circuits computing the composed function. This is achieved by showing that a shallow circuit for the composed function could be simulated layer by layer as a communication protocol, forcing the depth to grow logarithmically with the input size relative to the communication cost. The theorem relies on a careful correspondence between protocol trees and circuit structures, providing the first non-trivial connection between communication gaps and circuit depth beyond constant factors. Recent advances have strengthened these results for restricted circuit classes. Lifting theorems have been developed that transfer lower bounds to generalized AC^0 (GC^0) circuits, yielding improved separations. Additionally, lifting results for deterministic communication complexity to TC^0 circuits show that Ω(n) gaps imply superpolynomial size lower bounds in TC^0 via simulation arguments. These improvements leverage randomized variants of GHD and refined composition gadgets to handle the constant-depth and threshold-gate restrictions. Lifting theorems also extend to proof complexity, particularly for Frege-style systems, by associating tree-like proofs with tree-like protocol structures: tree-like proofs can be simulated by tree-like communication protocols for an associated search problem, allowing a communication lower bound of Ω(n) to lift to superpolynomial size lower bounds for tree-like refutations of suitable composed tautologies. This connection, explored in work relating Karchmer-Wigderson (KW) relations to proof lines, enables hardness amplification from weak proof systems to stronger ones via the composition paradigm.

Distributional and Information Complexity

Distributional communication complexity serves as a foundational tool for deriving lower bounds in randomized communication models by shifting focus from worst-case to average-case analysis over input distributions. For a function f: \mathcal{X} \times \mathcal{Y} \to \{0,1\} and a probability distribution \mu on \mathcal{X} \times \mathcal{Y}, the \varepsilon-distributional complexity D^\varepsilon_\mu(f) is the minimum worst-case cost of a deterministic protocol that errs with probability at most \varepsilon when the inputs are drawn from \mu. The distributional complexity over a class of distributions \Pi is defined as DC^\varepsilon(\Pi, f) = \max_{\mu \in \Pi} D^\varepsilon_\mu(f). By Yao's minimax principle, the public-coin randomized complexity R^\varepsilon(f) equals DC^\varepsilon(\Pi_\text{all}, f), where \Pi_\text{all} is the class of all distributions; thus, exhibiting a hard distribution yields a lower bound on randomized complexity (for equality, simple distributions suffice, while the optimal bound for disjointness requires a carefully correlated, non-product distribution). Information complexity extends this framework by measuring the reduction in uncertainty about the inputs induced by the protocol's transcript, rather than the bit count, enabling finer analysis across classical and quantum models. For a protocol \pi with transcript M (the sequence of messages) and public randomness R, under input distribution \mu on (X, Y), the external information cost is IC(\pi, \mu) = I(XY; M \mid R), where I denotes mutual information; this quantifies the total information the transcript reveals about the joint inputs to an external observer, conditioned on the shared randomness. In contrast, the internal information cost decomposes as IC_\text{int}(\pi, \mu) = I(X; M \mid Y R) + I(Y; M \mid X R), capturing the information each party gains about the other's private input. External information facilitates protocol compression and lower bounds via information-theoretic inequalities, while internal information models the revelation of private data during interaction. Key properties include the direct sum theorem, stating IC(f^{\otimes n}, \mu^{\otimes n}) = \Theta(n \cdot IC(f, \mu)) for the n-fold composition, which implies that solving multiple independent instances requires proportionally more information. Decomposition theorems further establish that information complexity equals amortized communication: \lim_{n \to \infty} \frac{1}{n} R^\varepsilon(f^{\otimes n}) = IC^\varepsilon(f, \mu) for suitable \mu. These measures interconnect through discrepancy, bridging distributional and randomized complexities. The discrepancy of f under \mu is \text{disc}_\mu(f) = \max_R \left| \mu(R \cap f^{-1}(1)) - \mu(R \cap f^{-1}(0)) \right|, the maximum over combinatorial rectangles R of the imbalance between 1-mass and 0-mass. A core relation is 1 - 2\varepsilon \leq 2^{D^\varepsilon_\mu(f)} \cdot \text{disc}_\mu(f), derived from the fact that a protocol of cost C partitions the input space into at most 2^C rectangles, on each of which a fixed answer can beat random guessing only by that rectangle's imbalance. Rearranged, D^\varepsilon_\mu(f) \geq \log_2 \frac{1 - 2\varepsilon}{\text{disc}_\mu(f)}, so low discrepancy forces high communication, making discrepancy a versatile primitive for proving separations.
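Discrepancy under the uniform distribution can be computed by brute force for very small matrices, which makes the contrast between hard and easy functions concrete. A sketch (exponential-time enumeration, for illustration only; function and parameter choices are ours):

```python
import itertools
import numpy as np

# Brute-force uniform-distribution discrepancy: the maximum over rectangles
# A x B of |#1-entries - #0-entries| / N^2, for tiny instances of inner
# product (IP) and equality (EQ).

def discrepancy(M):
    N = M.shape[0]
    signed = 2 * M - 1                    # entries +1 (f=1) and -1 (f=0)
    subsets = [s for k in range(1, N + 1)
               for s in itertools.combinations(range(N), k)]
    return max(abs(signed[np.ix_(A, B)].sum()) / N ** 2
               for A in subsets for B in subsets)

n = 2
vecs = list(itertools.product([0, 1], repeat=n))
IP = np.array([[sum(a * b for a, b in zip(x, y)) % 2 for y in vecs]
               for x in vecs])
EQ = np.eye(len(vecs), dtype=int)
print("disc(IP) =", discrepancy(IP), " disc(EQ) =", discrepancy(EQ))
```

Even on these 4 × 4 instances the inner product matrix exhibits smaller discrepancy than equality, consistent with inner product's role as the canonical low-discrepancy (communication-hard) function.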
Recent progress has tightened direct sum results in quantum settings, extending the classical theorems and narrowing the gaps in how communication scales across multiple instances with small error.

Applications and Connections

Theoretical Connections

Communication complexity exhibits deep connections to circuit complexity through lifting theorems, which translate lower bounds from communication protocols into restrictions on circuit size. Specifically, deterministic communication lower bounds for composed search problems, such as Search(C) composed with a high-rank gadget like equality, imply superpolynomial lower bounds on monotone Boolean formula size for explicit functions that nonetheless admit small monotone real formulas. For instance, the generation problem (GEN) has polynomial-size monotone Boolean circuits but requires monotone Boolean formulas of size 2^{\Omega(n / \mathrm{polylog}\, n)}. These results stem from generalized lifting theorems using simple gadgets with high matrix rank, establishing that communication hardness escalates to circuit hardness without relying on intricate constructions. In proof complexity, protocol trees in deterministic communication models serve as tree-like proof systems, where the depth of the tree corresponds to proof length and the leaves represent verified instances. Nondeterministic communication complexity, defined via the minimal size of a 1-cover of the function's matrix, relates directly to the width of resolution proofs; for example, the nondeterministic complexity of the negation of a function bounds the resolution width needed to refute it. Lifting techniques further bridge these areas: a communication lower bound for a search problem composed with a suitable gadget yields lower bounds on cutting planes (CP) proof length for pebbling formulas, showing that certain CNF formulas have short CP refutations of length \tilde{O}(N^2) but require CP* proofs with space-log-length product \tilde{\Omega}(N). These connections highlight how communication protocols model succinct proofs, amplifying hardness from interactive settings to static proof systems. Communication complexity also provides foundational lower bounds for streaming and sketching algorithms by simulating space-bounded computation through multi-round protocols. In particular, the \Omega(n) randomized communication complexity of the disjointness (DISJ) function, where two parties with n-bit sets determine if their intersection is empty, implies \Omega(n^{1-2/k}) space lower bounds for approximating the frequency moments F_k with k > 2, while the \Omega(n) bound for Gap-Hamming Distance yields the optimal \Omega(\varepsilon^{-2}) space bound for one-pass (1 \pm \varepsilon)-approximation of F_2. These reductions work by having one party simulate the stream updates while the other holds a hard communication instance, so that small space would force small communication; sketching algorithms, which compress data for approximate queries, inherit similar bounds since they embed into communication protocols via public randomness. These links underscore how two-party interaction captures the memory constraints of one-pass or multi-pass streaming computation. Algebraic connections arise through discrepancy methods, which lower-bound communication complexity and extend to property testing and probabilistically checkable proofs (PCPs). The discrepancy of a function's communication matrix lower-bounds its randomized complexity, and low-discrepancy functions like inner product resist efficient protocols while high-discrepancy ones can admit them; this mirrors the role of discrepancy in PCP verifiers, where small discrepancy ensures soundness against provers.
In property testing, communication lower bounds translate into query lower bounds: for example, lower bounds for testing properties of distributions, such as support size or uniformity over k elements, can be derived by simulating a two-party protocol in which one player holds a description of the distribution and the other holds samples, yielding query lower bounds on the order of \sqrt{k} and connecting interactive communication to non-adaptive testing. These ties via discrepancy facilitate algebraic proofs of inapproximability, as seen in reductions from communication-hard functions to testing low-degree polynomials or monotonicity. The multi-party number-on-the-forehead (NOF) model, where each of k players sees all inputs except their own, reveals collapses between multi-party and pairwise complexities for certain functions. In NOF, the disjointness problem requires \Omega(n^{1/(k+1)} / 2^{2^k}) randomized communication, but for Exactly-N (deciding whether the players' numbers sum to N), the complexity drops to O(\log n) for constant k via simple protocols exploiting the players' overlapping views, with sub-logarithmic protocols known for three players. More broadly, deterministic NOF communication for set disjointness is \Omega(n / 4^k), yet for some algebraic tasks like generalized inner product, multi-party NOF bounds can be derived from pairwise arguments via tensor decompositions, so hardness in pairwise models lifts to multi-party settings without exponential blowup. These phenomena demonstrate how NOF refines two-party insights, providing separations such as superpolynomial lower bounds for tree-like proofs derived from multi-party hardness.

Algorithmic and Practical Applications

Communication complexity provides essential lower bounds for streaming algorithms, particularly in tasks involving the estimation of frequency moments and identification of heavy hitters. The frequency moments problem requires approximating the k-th moment F_k = \sum_{i=1}^n f_i^k, where f_i is the frequency of the i-th element in a data stream, and communication complexity techniques demonstrate that no deterministic streaming algorithm can approximate F_k for k \neq 1 to within a constant factor using space sublinear in the input size. For randomized algorithms, reductions from two-party problems such as indexing and Gap-Hamming Distance establish that one-pass (1 \pm \varepsilon)-approximation of F_2 requires \Omega(\varepsilon^{-2}) bits of space, highlighting the hardness of even basic moment estimation in resource-constrained streaming settings. For heavy hitters, which aim to report elements with frequency exceeding a threshold \phi, lower bounds derived from the disjointness (DISJ) problem show that identifying \ell_1-heavy hitters requires \Omega(1/\phi) space, preventing efficient exact recovery without significant communication or memory overhead. These bounds, originally proven using multi-round communication protocols, explain why practical streaming algorithms like the Count-Min sketch rely on probabilistic approximation to balance accuracy and space (a minimal implementation sketch appears at the end of this section). In frameworks like MapReduce, communication complexity models the cost of data shuffling, where intermediate key-value pairs are exchanged across nodes during the reduce phase. The shuffle operation, which can dominate runtime at scale, is analyzed through communication complexity to derive lower bounds on the total bits transferred; certain functions provably require \Omega(n^2) total communication in the worst case, reflecting the inherent cost of redistributing data to reducers. Seminal work formalizes MapReduce as a restricted computation model in which the number of rounds and the replication rate limit expressiveness, showing that shuffle-heavy tasks, such as joins or aggregation, cannot be optimized below certain communication thresholds without increasing computation or rounds. This analysis has influenced optimizations in systems such as Hadoop, prioritizing algorithms that minimize cross-node traffic through careful key partitioning and thereby reducing latency in industrial applications like log analysis. For database query optimization in distributed environments, communication complexity informs the evaluation of join operations, where estimating join sizes or executing distributed joins incurs significant inter-site data transfer. In federated databases, testing whether a natural join between relations at different sites is nonempty reduces to set disjointness, yielding a communication lower bound linear in the relation size m and guiding query planners toward semijoin strategies that prune unnecessary tuples before full joins. Recent advancements extend this to join queries over web-accessible sources, where certificate complexity parameterizes the communication needed, proving that even constant-factor approximations require communication linear in the output size for certain join graphs. These insights enable query optimizers in federated and cloud databases to reorder operations and use Bloom filters, minimizing bandwidth while preserving correctness in multi-tenant setups. In cryptography, communication complexity sets fundamental limits on secure multi-party computation (MPC) protocols, where parties jointly compute a function on private inputs without revealing them.
Protocols for tasks like secure auctions or private set operations are constrained by the communication complexity of the underlying function; for example, the two-party disjointness problem requires \Omega(n) bits, implying that information-theoretically secure MPC for n-bit inputs cannot achieve sublinear communication even with shared randomness. This has motivated efficient constructions, such as garbled-circuit-based protocols with amortized O(n \log n) communication, balancing security and practicality in applications like privacy-preserving analytics. Bounds from multiparty communication complexity further show that optimally resilient MPC (tolerating up to n/3 corruptions) demands at least linear communication per party, influencing protocol designs deployed in practice. As of 2025, communication complexity has found direct applications in federated learning, where devices collaboratively train models by sharing updates while minimizing communication to preserve privacy and bandwidth. Lower bounds establish that personalized federated optimization requires \Omega(\sqrt{\kappa} \log(1/\epsilon)) communication rounds to reach \epsilon-accuracy, where \kappa is the condition number, motivating randomized protocols based on model sparsification and quantization. For instance, in federated reinforcement learning, results prove a sample-communication trade-off in which intermittent-communication protocols must exchange at least \Omega(|S||A|/(1-\gamma) \log N) total bits to converge, with |S| and |A| the state and action space sizes, \gamma the discount factor, and N the number of samples per state-action pair. Such techniques, deployed in mobile settings like on-device keyboard prediction, achieve significant reductions in upload costs through compression while adhering to differential privacy guarantees.
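To make the streaming discussion concrete, here is a minimal Count-Min sketch in the spirit of the heavy-hitters algorithms mentioned above (a simplified illustration; the width, depth, and use of Python's built-in hash are demo choices, not production parameters):

```python
import random

# Minimal Count-Min sketch: O(width * depth) counters replace one counter
# per distinct item, at the cost of one-sided overestimation.

class CountMin:
    def __init__(self, width=256, depth=4, seed=0):
        rng = random.Random(seed)
        self.width = width
        self.salts = [rng.getrandbits(64) for _ in range(depth)]
        self.table = [[0] * width for _ in range(depth)]

    def add(self, item, count=1):
        for row, salt in enumerate(self.salts):
            self.table[row][hash((salt, item)) % self.width] += count

    def estimate(self, item):
        # Collisions only inflate counters, so the minimum never undercounts.
        return min(self.table[row][hash((salt, item)) % self.width]
                   for row, salt in enumerate(self.salts))

cm = CountMin()
stream = ['a'] * 1000 + ['b'] * 10 + [f'x{i}' for i in range(500)]
random.shuffle(stream)
for token in stream:
    cm.add(token)
print("estimate for heavy hitter 'a':", cm.estimate('a'))
print("estimate for rare item 'b':  ", cm.estimate('b'))
```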

Open Problems

Major Unsolved Questions

One of the central open questions in communication complexity concerns the potential for exponential quantum advantage over classical randomized protocols for total Boolean functions. Specifically, it remains unresolved whether there exists a total function f: \{0,1\}^n \times \{0,1\}^n \to \{0,1\} such that the quantum communication complexity Q(f) satisfies Q(f) = o(R(f)/\mathrm{polylog}\, n), where R(f) is the bounded-error randomized communication complexity. The prevailing conjecture is that no such function exists and that quantum protocols offer at most polynomial speedup for total functions; only polynomial separations are known for total functions, while exponential separations exist for partial functions. Another fundamental unsolved problem is the direct sum question: whether computing k independent copies f^k of a function requires roughly k times the communication of a single copy in every communication model. Information complexity is known to be exactly additive across independent copies, but for randomized communication only weaker statements, such as R(f^k) = \tilde{\Omega}(\sqrt{k} \cdot R(f)), have been proven, and whether the scaling is truly linear remains open, with implications for amortized communication and parallel repetition. In the multiparty setting, a key open question concerns the precise relationship between the number-in-hand (NIH) model, where each party holds its own input privately, and the number-on-the-forehead (NOF) model, where each party sees all inputs except its own. Although separations between deterministic and randomized complexities are known in both models, and NOF often allows far more efficient protocols for certain functions like set disjointness, the exact polynomial or logarithmic relationships between NIH and NOF complexities for general functions are unresolved, hindering broader applications to circuit lower bounds and proof systems. The log-rank conjecture asserts that the deterministic communication complexity D(f) of any Boolean function f is bounded by a polynomial in \log \operatorname{rank}(M_f), where M_f is the communication matrix of f and the rank is taken over the reals. Proposed by Lovász and Saks in 1988, the conjecture implies tight bounds for many functions but remains open, with the best known upper bound being D(f) = O(\sqrt{\operatorname{rank}(M_f)} \log \operatorname{rank}(M_f)); resolving it would connect communication complexity directly to linear algebra, impacting streaming algorithms and data structures. Finally, in the interactive-proof setting, it is unknown whether multi-round Arthur-Merlin protocols (AM^cc) can achieve exponentially lower complexity than two-message protocols for some functions; indeed, proving any strong lower bound against AM^cc remains a major open challenge. While unbounded-error complexity is characterized by spectral properties such as sign-rank and discrepancy, and constant-round separations exist, the extent to which additional rounds amplify advantages over the two-message case—analogous to AM versus MA in computational complexity—remains unresolved.

Recent Directions

In 2024, significant progress was made in understanding the communication complexity of entanglement-assisted multi-party protocols, particularly in highlighting gaps between classical and quantum resources for distributed quantum tasks. Ruoyu Meng introduced a quantum protocol achieving (n-1) \log n bits of classical communication using entangled qudits for n players (with n prime) to compute a generalized inner product function, contrasting with a classical protocol's (n-1)^2 \log(n^2) bits. This demonstrates a quantum advantage over classical methods in multi-party settings. Additionally, an integer programming formulation was developed to establish lower bounds on classical communication complexity, providing a tool to quantify these separations in distributed quantum scenarios. Advancing connections between communication complexity and proof complexity, 2025 results extended lifting theorems to stronger proof systems, including bounded-depth Frege-style systems. A new technique transforms formulas with large resolution depth into ones requiring exponential-size Res(⊕) refutations via mixing and constant-size lifting, yielding an improved separation in which an n-variable formula with polynomial-size resolution refutations (of depth O(√n)) demands Res(⊕) refutations of size 2^{Ω(√n)}. This resolves open questions on bounded-depth Res(⊕) by proving exponential lower bounds on depth-(cn log log n) Res(⊕) refutations of Tseitin formulas lifted with the Maj_5 gadget, closing gaps in lifting from two-party communication to advanced proof systems such as extended Frege variants. Recent work has also strengthened streaming lower bounds using variants of the set-disjointness (DISJ) problem in dynamic streaming models. By extending connections between multi-party unique set-disjointness in communication complexity and streaming algorithms, new proofs establish space lower bounds of Ω(√n / log n) for estimating the number of distinct elements in multisets under adversarial updates. These DISJ-based reductions imply Ω(√n) space requirements for dynamic streaming variants of classic problems, enhancing robustness against input ordering and providing tighter barriers for sublinear-space algorithms in dynamic settings.

References

  1. [1]
    [PDF] Communication Complexity (for Algorithm Designers)
    Communication complexity offers a clean theory that is extremely useful for proving lower bounds for lots of different fundamental problems. Many of the most ...
  2. [2]
    Some complexity questions related to distributive computing ...
    Some complexity questions related to distributive computing(Preliminary Report). Author: Andrew Chi-Chih Yao.
  3. [3]
    [PDF] Communication Complexity - Amir Yehudayoff
    In this book, we explain some of the central results in the area of communication complexity, and show how they can be used to prove.
  4. [4]
    Communication Complexity
    Eyal Kushilevitz, Technion - Israel Institute of Technology, Haifa, Noam Nisan, Hebrew University of Jerusalem. Publisher: Cambridge University Press.
  5. [5]
    [PDF] Notes on Communication Complexity
    communication complexity of F respectively. In the 2 player setting, the single most important property of a protocol was the fact that it induced rectangles.
  6. [6]
    On notions of information transfer in VLSI circuits - ACM Digital Library
    Is the fooling set approach the most powerful way to get information-transfer-based lower bounds? We shall show it is not, and offer a candidate for the ...
  7. [7]
    [PDF] Computational Complexity: A Modern Approach - cs.Princeton
    In Section 13.2 we survey some of the techniques used to prove lower bounds for the communication complexity of various functions, using the equality function ...
  8. [8]
    The Corruption Bound, Log Rank, and Communication Complexity
    Sep 14, 2014 · We prove upper bounds on deterministic communication complexity in terms of log of the rank and simple versions of the corruption bound.
  9. [9]
    Gap-Hamming Distance (lecture notes)
    https://theory.stanford.edu/~tim/w15/l/l6.pdf
  11. [11]
    [PDF] Lecture Notes 3: Randomized Communication, Newman's Theorem
    Public coin protocols are, of course, more powerful than private coin protocols since Alice and Bob could simply partition r into rA,rB to simulate a private ...
  12. [12]
    [PDF] Communication Complexity (for Algorithm Designers) Lecture #4
    Jan 29, 2015 · This lecture covers the most important basic facts about deterministic and randomized communication protocols in the general two-party model ...
  13. [13]
    Private vs. common random bits in communication complexity
    Information Processing Letters (Elsevier).
  14. [14]
    [PDF] Public vs private coin in bounded-round information
    Jan 15, 2014 · Thus, up to an additive log n, private randomness replaces public randomness in communication complexity. Does a “reverse Newman theorem” hold ...
  15. [15]
    [PDF] Communication and information complexity
    Jul 3, 2022 · In general, a direct sum theorem quantifies the cost of solving a problem F_n consisting of n sub-problems in terms of n and the cost of each sub ...
  16. [16]
    [PDF] arXiv:quant-ph/0610085v2 19 Oct 2006
    Oct 19, 2006 · The quantum communication complexity of f is defined to be the minimum number of qubits required to be transmitted between two parties (Alice ...
  17. [17]
    [PDF] Lecture 1 Quantum Communication Complexity
    We define the communication complexity of a function f as the minimum number of bits needed to compute it, and denote it by R(f).
  18. [18]
    arXiv:quant-ph/0102001
    https://arxiv.org/pdf/quant-ph/0102001
  19. [19]
    Quantum vs. Classical Communication and Computation - arXiv
    We present a simple and general simulation technique that transforms any black-box quantum algorithm (a la Grover's database search algorithm) to a quantum ...
  20. [20]
    [PDF] The Communication Complexity of Gap Hamming Distance
    May 17, 2012 · If a randomized communication protocol solves GHDn with probability 2/3 on every valid input, then it has communication complexity Ω(n). The ...
  21. [21]
    Tight Bounds for the Randomized and Quantum Communication ...
    Jul 25, 2021 · We investigate the randomized and quantum communication complexities of the well-studied Equality function with small error probability \epsilon.
  22. [22]
    Improved Quantum Communication Complexity Bounds for ... - arXiv
    Sep 14, 2001 · Abstract: We prove new bounds on the quantum communication complexity of the disjointness and equality problems. For the case of exact and ...
  23. [23]
    [PDF] Lower Bounds in Communication Complexity
    This monograph surveys lower bounds in the field of communication complexity. Our focus is on lower bounds that work by first representing the communication ...
  24. [24]
    Quantum query complexity and semi-definite programming
    Abstract: We reformulate quantum query complexity in terms of inequalities and equations for a set of positive semidefinite matrices.
  25. [25]
    A new quantum lower bound method - ACM Digital Library
    We give a new version of the adversary method for proving lower bounds on quantum query algorithms. The new method is based on analyzing the eigenspace ...
  26. [26]
    A Min-Entropy Approach to Multi-Party Communication Lower Bounds
    Jul 29, 2025 · Abstract. Information complexity is one of the most powerful techniques to prove information-theoretical lower bounds, in which Shannon ...
  27. [27]
    Communication Complexity of Entanglement-Assisted Multi-Party ...
    The communication complexity of a given protocol is the total number of classical bits that need to be communicated.
  28. [28]
    Quantum versus Randomized Communication Complexity, with ...
    Nov 6, 2019 · Abstract:We study a new type of separation between quantum and classical communication complexity which is obtained using quantum protocols ...
  30. [30]
    [PDF] Nondeterminism - UCLA Computer Science Department
    Jan 25, 2012 · Definition 5.4. For a function f : X × Y → {0, 1}, the nondeterministic communication complexity of f is defined as N(f) = log2 C1(f). The co- ...
  31. [31]
    [PDF] Communication Complexity, Winter 2019 Unbounded-error protocols
    Mar 6, 2019 · We first prove that logrank±(f) ≤ U(f) + O(1). Assume that U(f) = c. That is, there is an unbounded-error protocol Π for f with cost c.
  32. [32]
    [PDF] The Unbounded-Error Communication Complexity of Symmetric ...
    The unbounded-error communication complexity of f, denoted U(f), is the least cost of a protocol that computes f. The unbounded-error model occupies a special ...
  33. [33]
    Arthur-Merlin games: A randomized proof system, and a hierarchy of ...
    Babai, P. Frankl, J. Simon. Complexity classes in communication complexity theory. Proceedings, 27th IEEE Symp. Found. of Comput. Sci. (1986), pp. 337-347.
  34. [34]
    [PDF] Improved Merlin–Arthur Protocols for Central Problems in Fine ...
    We present an improved Merlin–Arthur protocol for the harder #Zero-Weight k- Clique problem.
  36. [36]
    [2408.15570] Direct sum theorems beyond query complexity - arXiv
    Aug 28, 2024 · In this paper, we introduce a novel framework that extends to classical/quantum query complexity, PAC-learning for machine learning, statistical estimation ...
  38. [38]
    [PDF] Proof Complexity of Natural Formulas via Communication Arguments
    Jan 20, 2025 · This approach allows proving lower bounds for proof systems operating with proof lines having small communication complexity in the appropriate ...
  39. [39]
    Communication Complexity - ScienceDirect.com
    In this article we survey the theory of two-party communication complexity. This field of theoretical computer science aims at studying the following, ...
  40. [40]
    [PDF] Property Testing Lower Bounds Via Communication Complexity - MIT
    Communication complexity is one technique that has proven effective for proving lower bounds in other areas of computer science. In a typical setup, two parties ...
  41. [41]
    Disjointness is hard in the multi-party number on the forehead model
    Dec 27, 2007 · We show that disjointness requires randomized communication Omega(n^{1/(k+1)}/2^{2^k}) in the general k-party number-on-the-forehead model of complexity.
  42. [42]
    [PDF] Simplified Lower Bounds on the Multiparty Communication ...
    Abstract. We show that the deterministic number-on-forehead communication complexity of set disjointness for k parties on a universe of size n is Ω(n/4k).
  43. [43]
    [PDF] arXiv:0710.0095v4 [quant-ph] 13 Apr 2008
    Apr 13, 2008 · A major open problem in communication complexity is whether or not quantum protocols can be exponentially more efficient than classical ones ...
  44. [44]
    [PDF] Communication and information complexity - Mark Braverman
    We give one specific example of an exact communication complexity bound. Recall that the disjointness problem Disjn (X,Y) takes two n-bit vectors X ...
  45. [45]
    [PDF] The BNS-Chung Criterion for Multi-Party Communication Complexity
    A major open problem is to prove a super-poly-logarithmic lower bound, for the k-parties communication complexity of an explicit function f, where the number of ...
  46. [46]
    [PDF] Recent advances on the log-rank conjecture in communication ...
    Jan 5, 2014 · It speculates that the deterministic communication complexity of any two- party function is equal to the log of the rank of its associated ...
  47. [47]
    [PDF] Communication Complexity - Full-Time Faculty
    The first thing to do is to introduce an extremely useful concept of a history or a transcript: this is the whole sequence (a1,b1,...,at,bt) of messages.
  48. [48]
    Communication complexity of entanglement assisted multi-party ...
    May 8, 2023 · We consider a quantum and classical version multi-party function computation problem with n players, where players 2, \dots, n need to communicate appropriate ...
  49. [49]
    Lifting to Bounded-Depth and Regular Resolutions over Parities via ...
    Jun 15, 2025 · We develop a method that transforms any formula with large resolution depth into a formula requiring exponential-size regular Res(⊕) refutations.
  50. [50]
    Streaming Lower Bounds and Asymmetric Set-Disjointness - arXiv
    Jan 13, 2023 · Our proof builds and extends classical connections between streaming algorithms and communication complexity, concretely multi-party unique set-disjointness.