
Interactive proof system

An interactive proof system (IPS) is a two-party interactive protocol in which a computationally unbounded prover attempts to convince a probabilistic polynomial-time verifier of the validity of a given statement, typically regarding membership in a language L \subseteq \{0,1\}^*, through an exchange of messages over multiple rounds. The system is characterized by two core properties: completeness, ensuring that if the statement is true, the honest prover succeeds in convincing the verifier with high probability (at least 2/3); and soundness, guaranteeing that if the statement is false, no prover, not even a cheating one, can convince the verifier except with low probability (at most 1/3). These error probabilities can be driven arbitrarily close to zero by repeating the protocol. Introduced in the mid-1980s, interactive proof systems extend the classical notion of non-interactive proofs underlying the class NP by incorporating interaction and randomness, allowing the verifier to adaptively query the prover. The foundational work by Goldwasser, Micali, and Rackoff formalized IPS and introduced the concept of zero-knowledge proofs, where the verifier learns nothing beyond the statement's validity, enabling applications in cryptography such as secure identification protocols. Subsequent developments, including Arthur-Merlin protocols, bridged IPS to public-coin probabilistic verification models. In computational complexity theory, the class IP comprises all languages admitting an interactive proof system, marking a significant expansion of verifiable computation. A landmark result, building on work of Lund, Fortnow, Karloff, and Nisan and completed by Shamir in 1992, proves that IP = PSPACE, demonstrating that interactive proofs with polynomial-time verifiers capture the full power of polynomial-space computation. This equivalence underscores the robustness of IPS and has influenced areas like probabilistically checkable proofs (PCPs) and protocols for delegating computation.

Fundamentals

Definition and Model

An interactive proof system for a language L \subseteq \{0,1\}^* is formally defined as a pair (P, V), where P is a prover with unbounded computational power and V is a probabilistic verifier that runs in polynomial time. Both parties receive a common input x \in \{0,1\}^n, and they engage in an interactive protocol consisting of a number of message exchanges, where the verifier's running time and the total communication are bounded by a polynomial in n. The verifier has access to private random coins, which it uses throughout the interaction, while the prover computes messages based on the input, previous messages, and its unlimited resources. At the conclusion of the interaction, the verifier outputs 1 (accept) or 0 (reject) based solely on the transcript of the conversation and its random coins. The interaction structure is turn-based: the parties alternate sending messages, with the verifier typically initiating the protocol by sending the first message (though variations exist). Each message is computed deterministically by the prover or probabilistically by the verifier, depending on their role and the protocol design. The interaction tree, representing all possible conversation paths, has depth polynomial in n, ensuring the verifier's efficiency. This model allows the prover to provide evidence adaptively, responding to the verifier's challenges, which distinguishes it from non-interactive proofs like those in NP, where the prover sends a single certificate without further exchange. The probabilistic nature of the verifier introduces randomness into the decision process: acceptance or rejection is probabilistic, with success probabilities defined over the verifier's private coin tosses. Standard notations include the completeness probability c(n), the probability that V accepts when interacting with the honest prover P on input x \in L, and the soundness probability s(n), the maximum probability that V accepts on x \notin L against any (possibly malicious) prover P^*. 
For the system to be effective, these probabilities must satisfy c(n) - s(n) \geq 1/\mathrm{poly}(n), allowing error reduction via repetition. A canonical example illustrating the model is the protocol for graph non-isomorphism, where the input consists of two graphs G_0 and G_1 on n vertices that are promised not to be isomorphic. The verifier selects a random bit b \in \{0,1\} and a random permutation \pi of the vertices, computes the permuted graph H = \pi(G_b), and sends H to the prover. The prover, knowing G_0 and G_1 are non-isomorphic, identifies b' such that H is isomorphic to G_{b'} and returns b'; the verifier accepts if b' = b. This single-round protocol succeeds with probability 1 for completeness (the prover always identifies b correctly because exactly one of the graphs is isomorphic to H) and at most 1/2 for soundness (if the graphs are isomorphic, H is distributed identically regardless of b, so any prover must guess). Repetition amplifies the soundness gap.
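The round just described can be simulated directly for small graphs. The following sketch models the unbounded prover by brute-force isomorphism search; helper names such as permute and run_round are illustrative, not standard.

```python
import itertools
import random

def canon(edges):
    """Canonical form of an undirected edge list: a set of unordered pairs."""
    return frozenset(frozenset(e) for e in edges)

def permute(edges, pi):
    """Apply the vertex permutation pi to every edge."""
    return frozenset(frozenset((pi[u], pi[v])) for u, v in edges)

def is_isomorphic(g, h, n):
    """Brute-force isomorphism test: the 'unbounded prover' resource."""
    return any(permute(g, pi) == h for pi in itertools.permutations(range(n)))

def run_round(g0, g1, n):
    """One round of the graph non-isomorphism protocol; True = verifier accepts."""
    b = random.randrange(2)                  # verifier's secret bit
    pi = list(range(n)); random.shuffle(pi)  # verifier's secret permutation
    h = permute((g0, g1)[b], pi)             # challenge H = pi(G_b)
    b_prime = 0 if is_isomorphic(g0, h, n) else 1  # prover identifies the source
    return b_prime == b

g0 = canon([(0, 1), (1, 2)])            # path on 3 vertices
g1 = canon([(0, 1), (1, 2), (0, 2)])    # triangle: not isomorphic to g0
print(all(run_round(g0, g1, 3) for _ in range(20)))  # completeness: prints True
```

On non-isomorphic inputs the prover never errs, so every round accepts; if g0 and g1 were isomorphic, the challenge would carry no information about b and each round would accept with probability only 1/2.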

Completeness and Soundness

In interactive proof systems, the completeness property ensures that if the input x belongs to the language L, an honest prover P can convince the verifier V to accept with high probability. Formally, \Pr[\langle P, V \rangle(x) = 1 \mid x \in L] \geq \frac{2}{3}. This threshold is conventional and can be adjusted; more generally, the completeness error \epsilon satisfies \Pr[\langle P, V \rangle(x) = 1 \mid x \in L] \geq 1 - \epsilon, where \epsilon is typically a constant less than \frac{1}{2}. The soundness property protects against deception: if x \notin L, no prover, not even a malicious one P^*, can convince V except with low probability. Formally, for all P^*, \Pr[\langle P^*, V \rangle(x) = 1 \mid x \notin L] \leq \frac{1}{3}. In general terms, the soundness error \delta satisfies \Pr[\langle P^*, V \rangle(x) = 1 \mid x \notin L] \leq \delta, with \delta < \frac{1}{2}. These properties define the reliability of the system, balancing the verifier's probabilistic polynomial-time computation against the prover's potentially unbounded power. Perfect completeness, where \Pr[\langle P, V \rangle(x) = 1 \mid x \in L] = 1, is achievable for any language admitting an interactive proof, for example by transforming the protocol into an Arthur-Merlin (public-coin) system. In contrast, perfect soundness (zero acceptance probability for x \notin L) holds only for languages in NP, since an accepting transcript would then serve as a static certificate. Error probabilities can be reduced to enhance reliability. For a protocol with completeness probability c \geq \frac{2}{3} and soundness probability s \leq \frac{1}{3}, repeating it k times independently and accepting if the majority of runs accept reduces both the completeness and soundness errors to at most \exp(-\Omega(k)) (via Chernoff bounds). 
For multi-round protocols with perfect completeness, parallel repetition (running k independent copies simultaneously and accepting only if all instances accept) reduces the soundness error to at most s^k, and in public-coin cases the number of rounds remains unchanged. For general protocols without perfect completeness, repetition with a majority vote is used to reduce both errors. Choosing k = \Theta(\log n) makes the errors 1/\mathrm{poly}(n); larger k = \mathrm{poly}(n) yields negligible error, smaller than any inverse polynomial (i.e., smaller than 1/n^c for every constant c and all sufficiently large n). The distinction between 1/\mathrm{poly}(n) and negligible error matters in practice: in complexity-theoretic IP (unbounded provers), constant error suffices for class definitions, since amplification is always available. Negligible error, however, is essential in cryptographic applications, where the success probability of an adversary must remain too small to exploit even over polynomially many attempts.
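The effect of majority-vote repetition can be checked numerically. This sketch computes the exact probability that the majority of k independent runs errs when each run errs with a given rate; the helper name majority_error is illustrative.

```python
from math import comb

def majority_error(per_run_error, k):
    """Exact probability that more than half of k independent runs err,
    so that the majority vote gives the wrong answer; k should be odd."""
    return sum(comb(k, i) * per_run_error**i * (1 - per_run_error)**(k - i)
               for i in range(k // 2 + 1, k + 1))

# Starting from the standard 1/3 error, the amplified error decays
# exponentially in the number of repetitions k.
for k in (1, 11, 51, 101):
    print(k, majority_error(1/3, k))
```

The printed values shrink rapidly with k, matching the \exp(-\Omega(k)) bound from the Chernoff argument above.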

Historical Background

Origins and Early Concepts

The roots of interactive proof systems trace back to the development of complexity theory in the 1970s, particularly the formalization of the class NP by Stephen Cook in 1971, which captured decision problems verifiable in polynomial time given a non-interactive certificate provided by a prover. This model highlighted key limitations of non-interactive proofs: NP verifiers must examine the entire certificate upfront, restricting efficient verification to problems solvable nondeterministically in polynomial time, while leaving open the efficient verification of harder decidable problems, such as PSPACE-complete ones, that are not believed to admit short certificates. The need for interaction arose from these constraints, enabling a verifier to query a prover adaptively and thus handle computations beyond NP's static proof structure without requiring the prover to reveal full computational details. Influences from automata theory and oracle machines further shaped early conceptual foundations, extending the Turing machine model introduced by Alan Turing in 1936 to incorporate interactive elements. Oracle machines, central to the relativization results of Baker, Gill, and Solovay in 1975, demonstrated how access to an external "oracle" could alter separation results between complexity classes, inspiring views of the prover as a powerful oracle assisting a resource-limited verifier in decision processes. These extensions addressed limitations in standard Turing machine hierarchies by allowing dynamic computation paths, much like the branching interactions in later proof systems, and built on developments in probabilistic computation, such as Solovay and Strassen's 1977 work on randomized primality testing, which introduced error-bounded verification. 
Pre-1980s precursors also drew from game theory and two-player computation models, where adversarial interactions between rational agents—originating in John von Neumann's 1928 minimax theorem for zero-sum games—provided a framework for modeling verifier-prover dynamics as strategic exchanges. In complexity contexts, these ideas influenced early explorations of bounded rationality and verification in multi-agent settings, as surveyed in Halpern's 2007 overview of game theory's role in computer science since the 1950s. By the mid-1980s, these strands converged in informal notions of probabilistic verification; Shafi Goldwasser, Silvio Micali, and Charles Rackoff introduced interactive proof systems in 1985 as a probabilistic extension of NP, motivated by cryptographic needs for knowledge-efficient proofs. Independently, László Babai proposed similar systems that year to classify graph isomorphism problems using randomness in group theory contexts. Interactive proofs also related early on to circuit complexity and parallel computation, viewing the verifier's polynomial-time checks as analogous to parallel evaluation in circuit models, where interaction rounds mimic logarithmic parallel time for verification. This connection, emerging in the 1980s, positioned interactive proofs as efficient models for distributed verification, allowing a bounded verifier to leverage a computationally unbounded prover in settings akin to parallel architectures, without the full exponential cost of non-interactive alternatives.

Key Milestones

In 1985, Shafi Goldwasser, Silvio Micali, and Charles Rackoff formally introduced the model of interactive proof systems, defining a framework where a computationally unbounded prover interacts with a probabilistic polynomial-time verifier to convince it of the truth of a statement. They demonstrated the model's power by constructing an interactive proof for quadratic non-residuosity, a problem in co-NP not known to be in NP, showing how interaction and randomness enable proofs for languages beyond traditional NP. In 1985, László Babai developed the Arthur-Merlin protocol, a public-coin variant of interactive proofs where the verifier (Arthur) publicly announces random challenges to the prover (Merlin), simplifying the interaction model while preserving expressive power. Building on this, Babai and Shlomo Moran in 1988 established the Arthur-Merlin hierarchy, a sequence of complexity classes AM[k] for k rounds of interaction, and proved the speedup theorem showing that any constant number of rounds collapses to two, thus clarifying the role of interaction depth in probabilistic verification. The year 1987 also saw Oded Goldreich, Silvio Micali, and Avi Wigderson advance zero-knowledge proofs within interactive systems, constructing protocols that reveal no information beyond the statement's validity and showing, under cryptographic assumptions, that every language in NP has a zero-knowledge interactive proof. Their work demonstrated the practical applicability of interactive proofs to secure computation, enabling privacy-preserving verifications. In 1990, Carsten Lund, Lance Fortnow, Howard Karloff, and Noam Nisan introduced arithmetization techniques to convert decision problems into polynomial identity verifications, providing interactive proofs for #P-complete functions like the permanent. This breakthrough laid the groundwork for characterizing the full power of interactive proofs. 
By 1992, Adi Shamir proved that the class IP equals PSPACE, showing that any polynomial-space computation has a polynomial-round interactive proof verifiable in polynomial time, completing the characterization of single-prover interactive proofs. Extensions to multi-prover interactive proofs, introduced in 1988, include results such as one-round two-prover zero-knowledge protocols for NP (Lapidot and Shamir, 1995), demonstrating greater power than single-prover systems. These milestones bridged probabilistic verification with classical complexity classes, revealing that interaction amplifies verifier efficiency to match high-level resources like PSPACE, while enabling cryptographic applications such as zero-knowledge proofs in secure multi-party protocols.

Protocol Variants

Public-Coin versus Private-Coin Protocols

In interactive proof systems, protocols are classified based on how the verifier utilizes randomness. In public-coin protocols, the verifier's random bits are transmitted openly to the prover as part of the interaction, typically in the form of random challenges or strings that do not depend on the prover's previous responses beyond the randomness itself. This approach, also known as the Arthur-Merlin model, simplifies the verifier's message structure, where each message from the verifier consists solely of a random string generated afresh. In contrast, private-coin protocols require the verifier to keep its random bits secret, using them internally to compute messages that may depend on both the interaction history and the hidden randomness. This secrecy allows the verifier's decisions to incorporate private computations without revealing the underlying random choices to the prover, which can be essential in certain applications where information leakage must be controlled. Public-coin protocols offer advantages in theoretical analysis due to their structural simplicity, enabling techniques like round collapse (e.g., any constant number of rounds reducing to two) and facilitating proofs of containment in complexity classes such as PSPACE. Private-coin protocols, however, may provide enhanced security in cryptographic contexts by preventing the prover from gaining insights into the verifier's random choices, potentially reducing the risk of adaptive attacks. Despite these differences, public-coin and private-coin protocols recognize exactly the same class of languages, known as IP. Specifically, any private-coin interactive proof with polynomially many rounds can be transformed into a public-coin protocol with at most two additional rounds, preserving completeness and soundness. 
This equivalence is established through the Goldwasser-Sipser set lower bound technique: the verifier sends random universal hash functions, and the prover demonstrates that the set of private coin tosses consistent with a claimed verifier message is large, allowing a public-coin verifier to emulate the behavior of the original private-coin one. The conversion incurs a polynomial overhead in communication but maintains the overall efficiency of the protocol.

Arthur-Merlin Protocols

Arthur-Merlin (AM) protocols represent a specific class of interactive proof systems where the verifier, named Arthur, sends only random strings as messages, making it a public-coin model with a constant number of rounds. In this setup, the prover, named Merlin, responds to Arthur's random challenges, allowing Merlin to convince Arthur of the truth of a statement with high probability if it is true, while Arthur rejects false statements with high probability regardless of Merlin's strategy. Merlin-Arthur (MA) protocols, in contrast, are non-interactive from Merlin's perspective: Merlin first sends a single proof string to Arthur, after which Arthur uses randomness to verify it probabilistically, resembling the NP class but with a probabilistic verifier instead of a deterministic one. This structure limits interaction to one message from Merlin followed by Arthur's internal randomization, providing a bounded form of nondeterminism combined with randomness. The AM hierarchy is defined by the number of messages exchanged, denoted AM[k] for protocols with k messages alternating between Arthur and Merlin, starting with Arthur. Notably, any constant-round protocol collapses to AM = AM[2], meaning two messages suffice for the full power of constant-round public-coin interaction. This collapse highlights the efficiency of random challenges in AM systems. For polynomial-round public-coin protocols, the class AM[poly(n)] equals IP. A seminal result places the graph non-isomorphism problem in AM, obtained by applying the public-coin transformation to the private-coin protocol in which the verifier sends a random permuted copy of one input graph and the prover identifies which graph it came from, succeeding with certainty if the graphs are non-isomorphic and with bounded probability if they are isomorphic. This was among the first natural problems shown to be in AM but not known to be in NP, demonstrating the power of randomization in interactive proofs. 
The complexity classes satisfy MA ⊆ AM ⊆ IP, where MA captures non-interactive probabilistic verification, AM extends it with limited interaction via public coins, and IP encompasses full interactive proofs, whether public- or private-coin. Whether AM is closed under complement is an open question; what is known is that AM ⊆ Π₂ᵖ, situating the class within the polynomial hierarchy, and that coNP-type statements such as graph non-isomorphism can nonetheless lie in AM.

Core Complexity Classes

Relation to NP

Interactive proof systems generalize the nondeterministic polynomial-time class NP by incorporating interaction between the prover and verifier, as well as randomness in the verifier's decisions. In the NP model, a language has a short, non-interactive certificate that a polynomial-time deterministic verifier can check to confirm membership. This corresponds to a special case of an interactive proof where the protocol involves only one message from the prover (the certificate) and no subsequent interaction or randomness on the verifier's part. A key limitation of NP is its inability to efficiently verify languages believed to lie outside NP, such as those that are PSPACE-complete. Verifying a PSPACE-complete language using an NP-style non-interactive proof would place it in NP, implying NP = PSPACE and collapsing the polynomial hierarchy, a consequence widely considered unlikely. Interactive proof systems address this by enabling an unbounded prover to engage in a multi-round dialogue with a probabilistic polynomial-time verifier, allowing the prover to adaptively guide the verification toward confirming membership in harder languages. The class AM, defined by public-coin interactive proofs with two messages (verifier first, then prover), contains all of NP. For any language in NP, an AM protocol exists where the verifier sends a random string to the prover, who responds with a valid NP witness; the verifier then ignores the random string and deterministically checks the witness in polynomial time. For languages admitting random self-reducibility, such as certain number-theoretic problems like quadratic residuosity, this framework further enables efficient protocols with additional properties like zero-knowledge. In contrast to the non-interactive NP model, which focuses on existential proofs, interactive systems can also verify certain coNP-type statements via probabilistic interaction. 
For instance, graph non-isomorphism, a canonical coNP-type problem not known to be in NP, admits an interactive protocol: the verifier sends a random permuted copy of one of the two input graphs, and the prover responds by naming the graph it came from, which is possible with certainty exactly when the graphs are non-isomorphic. This highlights how interaction facilitates proofs for universal statements characteristic of coNP. These generalizations underscore the added power of interaction and probability: they expand verification capabilities beyond deterministic polynomial-time checks, ultimately equating the full class IP with PSPACE and enabling efficient proofs for a broad range of computational problems.

The IP Class

The class IP consists of all decision problems (languages) L for which there exists a probabilistic polynomial-time verifier V and a computationally unbounded prover P such that, for any input x of length n, the interaction proceeds in a polynomial number of rounds (at most \mathrm{poly}(n)), with V sending messages possibly depending on private randomness, and the following holds: if x \in L, then \Pr[V accepts after interacting with P] \geq 2/3; if x \notin L, then for any prover P^*, \Pr[V accepts after interacting with P^*] \leq 1/3. The error probabilities can be reduced to negligibly small values by repetition, without altering the class. A landmark result establishes that IP = PSPACE, meaning every language decidable in polynomial space admits an interactive proof system, and conversely, languages in IP are contained in PSPACE. The inclusion IP \subseteq PSPACE follows because the optimal prover strategy and the verifier's acceptance probability can be computed by recursion over the polynomial-depth interaction tree using only polynomial space. The converse, PSPACE \subseteq IP, was shown by reducing the PSPACE-complete problem TQBF (true quantified Boolean formulas) to an interactive protocol via arithmetization. In this technique, a quantified formula \Phi = \exists x_1 \forall y_1 \exists x_2 \dots Q z_k \, \phi(x_1, y_1, \dots, z_k) with Boolean formula \phi is transformed into an arithmetic expression over a finite field \mathbb{F} (of size polynomial in n in the TQBF protocol, large enough that a union bound over all rounds leaves only a small soundness error; counting protocols such as the one for the permanent instead require |\mathbb{F}| > n! to distinguish values). Boolean values are mapped to the field elements \{0,1\}; connectives are arithmetized as OR: 1 - (1 - a)(1 - b), AND: a \cdot b, NOT: 1 - a; existential quantifiers become sums over \{0,1\}, and universal quantifiers become products over \{0,1\}. This yields a polynomial g, with degree kept polynomially bounded via intermediate degree-reduction steps, such that \Phi is true if and only if the iterated sum-product expression evaluating g is nonzero. 
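The arithmetization rules can be applied mechanically. This sketch (with illustrative helper names) arithmetizes ¬, ∧, ∨ over a small prime field, checks agreement with the Boolean connectives on \{0,1\}, and evaluates one quantified sentence by the sum/product rule:

```python
P = 101  # a small prime field, chosen only for illustration

def NOT(a):    return (1 - a) % P
def AND(a, b): return (a * b) % P
def OR(a, b):  return (1 - (1 - a) * (1 - b)) % P

# On boolean inputs the arithmetized connectives match the logical ones.
for a in (0, 1):
    for b in (0, 1):
        assert AND(a, b) == (a and b)
        assert OR(a, b) == (a or b)
    assert NOT(a) == 1 - a

# Quantifiers: an existential becomes a sum, a universal a product over {0,1}.
phi = lambda x, y: OR(x, NOT(y))                     # example formula x ∨ ¬y
exists_x = lambda y: (phi(0, y) + phi(1, y)) % P     # nonzero iff ∃x φ(x,y)
forall_y_exists_x = (exists_x(0) * exists_x(1)) % P  # nonzero iff ∀y∃x φ
print(forall_y_exists_x != 0)  # prints True: ∀y∃x (x ∨ ¬y) holds
```

Truth corresponds to a nonzero field value, which is why the full protocol must keep intermediate degrees and values under control with degree-reduction steps.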
The core of the protocol is the sum-check subroutine, which allows the verifier to interactively verify claims about sums of multilinear polynomials without evaluating the entire expression. Formally, given a v-variate multilinear polynomial g: \mathbb{F}^v \to \mathbb{F} with claimed sum H = \sum_{b_1=0}^1 \cdots \sum_{b_v=0}^1 g(b_1, \dots, b_v), the protocol proceeds in v rounds as follows:
  1. The prover sends the univariate polynomial g_1(X_1) = \sum_{b_2=0}^1 \cdots \sum_{b_v=0}^1 g(X_1, b_2, \dots, b_v), which must be of degree at most 1 (since g is multilinear). The verifier checks that \deg(g_1) \leq 1 and g_1(0) + g_1(1) = H; otherwise, reject.
  2. For j = 2 to v: The verifier picks a random r_{j-1} \in \mathbb{F} and sends it to the prover. The prover responds with g_j(X_j) = \sum_{b_{j+1}=0}^1 \cdots \sum_{b_v=0}^1 g(r_1, \dots, r_{j-1}, X_j, b_{j+1}, \dots, b_v), of degree at most 1. The verifier checks \deg(g_j) \leq 1 and g_{j-1}(r_{j-1}) = g_j(0) + g_j(1); otherwise, reject.
  3. After the final round, the verifier picks a random r_v \in \mathbb{F} and verifies g_v(r_v) = g(r_1, \dots, r_v) using oracle access to g (in the TQBF context, this final evaluation reduces to a smaller subproblem that the protocol recurses on).
If all checks pass, the verifier accepts. The verifier's total work is polynomial, as each univariate polynomial is low-degree and evaluations are efficient. For soundness, if the initial claim H is incorrect, then with probability at least 1 - v/|\mathbb{F}| over the random choices, at least one consistency check fails, because two distinct degree-1 univariate polynomials agree at a random point with probability at most 1/|\mathbb{F}|. Since v \leq n and |\mathbb{F}| = \Theta(n^2), the overall error is O(n / n^2) = O(1/n), reducible by repetition. In the full TQBF protocol, sum-check-style rounds are invoked iteratively for each quantifier layer, reducing the outermost sum or product to inner evaluations that recurse on smaller formulas until the base polynomial \phi can be evaluated directly in polynomial time. This chain of protocols establishes TQBF \in IP. Extensions of IP include variants with perfect soundness (error 0 on no-instances), which coincide exactly with NP, as the interaction can then be simulated non-interactively by guessing the prover's responses along a random verifier path. The IP = PSPACE characterization demonstrates that probabilistic verification with an unbounded prover captures the full strength of deterministic polynomial-space computation, bridging randomized and space-bounded models of computation.
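The sum-check rounds above can be exercised end to end on a small example. This sketch simulates an honest prover over a prime field, representing each degree-1 message g_j by its values at 0 and 1; the names (bool_sum, sumcheck) are illustrative.

```python
import itertools
import random

P = 1_000_000_007  # prime modulus; an illustrative choice of field F_p
V = 3              # number of variables

def g(x1, x2, x3):
    """Example 3-variate multilinear polynomial over F_p."""
    return (x1 * x2 + 2 * x2 * x3 + 3 * x1) % P

def bool_sum(fixed):
    """Honest prover: sum g over all boolean settings of the unfixed variables."""
    k = V - len(fixed)
    return sum(g(*fixed, *bits)
               for bits in itertools.product((0, 1), repeat=k)) % P

def sumcheck(claimed_H):
    """Run the protocol against an honest prover; True = verifier accepts."""
    r, prev = [], claimed_H % P
    for _ in range(V):
        # Prover sends g_j (degree <= 1) as its values at 0 and 1.
        gj0, gj1 = bool_sum(r + [0]), bool_sum(r + [1])
        # Verifier consistency check: previous claim = g_j(0) + g_j(1).
        if (gj0 + gj1) % P != prev:
            return False
        rj = random.randrange(P)             # public random challenge
        prev = (gj0 + rj * (gj1 - gj0)) % P  # evaluate the degree-1 line at rj
        r.append(rj)
    # Final check uses one oracle evaluation of g at the random point.
    return g(*r) % P == prev

H = bool_sum([])        # the true sum of g over {0,1}^3
print(sumcheck(H))      # prints True: correct claim accepted
print(sumcheck(H + 1))  # prints False: the first-round check already fails
```

Against a cheating prover the rejection is probabilistic rather than certain, with error at most v/|F| as analyzed above; here the honest prover exposes a false claim in the very first consistency check.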

Zero-Knowledge Proofs

Zero-knowledge proofs represent a privacy-enhancing property of certain interactive proof systems, where the verifier gains no information beyond the assurance that the claimed statement is true. Formally, in a zero-knowledge proof for a language L, the prover convinces the verifier that an input x is in L such that there exists an efficient simulator that, on input x \in L, generates a view indistinguishable from the verifier's real interaction with an honest prover, even without interacting with the prover itself. This simulation paradigm ensures that the proof reveals nothing about the prover's witness beyond membership in L. The concept was introduced by Goldwasser, Micali, and Rackoff as a measure of "knowledge complexity" in interactive proofs. A key distinction arises between honest-verifier zero-knowledge (HVZK) and full (or malicious-verifier) zero-knowledge. In HVZK, the simulator generates transcripts indistinguishable from those produced when the verifier follows the protocol honestly, by simulating the verifier's random challenges and the prover's responses. Full zero-knowledge strengthens this by requiring indistinguishability even against arbitrary, possibly malicious verifiers; this is achieved through techniques like rewinding the verifier or using commitments to handle adaptive adversaries. HVZK is often easier to construct and sufficient for many applications, while full ZK provides stronger privacy guarantees. The power of zero-knowledge was dramatically expanded by Goldreich, Micali, and Wigderson in 1987, who constructed zero-knowledge interactive proofs for all languages in NP, assuming the existence of one-way functions. Their protocol uses graph 3-coloring together with bit-commitment schemes as a building block, combined with repeated random edge challenges to achieve computational zero-knowledge, demonstrating that privacy can be integrated into the expressive power of interactive proofs without sacrificing efficiency. 
Knowledge extraction is a complementary property in zero-knowledge proofs of knowledge, where an efficient extractor can recover a valid witness from any prover that convinces the verifier with noticeable probability. This is typically achieved via rewinding, a technique where the extractor runs the prover multiple times, "rewinding" the execution to a point after the prover's commitment but before the challenge, forking into parallel paths to obtain multiple accepting transcripts and solving for the witness (e.g., in discrete-log-based protocols). Parallel repetition can amplify soundness while preserving extractability. Soundness in zero-knowledge systems can be statistical (no prover, however powerful, can make the verifier accept a false statement except with negligible probability) or computational (false proofs succeed only for adversaries able to break hardness assumptions); practical argument systems often rely on the latter. The Fiat-Shamir heuristic transforms public-coin interactive zero-knowledge proofs into non-interactive versions suitable for signatures, by replacing the verifier's random challenges with a hash of the prover's commitment message. This yields non-interactive zero-knowledge arguments in the random oracle model, enabling efficient digital signatures like the Fiat-Shamir signature scheme, where the signer proves knowledge of a secret without interaction.
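The transform can be sketched on the Schnorr protocol for knowledge of a discrete logarithm. The group parameters below are tiny toy values (completely insecure, chosen only to make the arithmetic visible), and the helper names are illustrative; the hash of the commitment plays the verifier's role.

```python
import hashlib
import secrets

# Toy parameters: p = 2q + 1 with q prime; g = 4 generates the order-q
# subgroup of Z_p^*. Real deployments use large groups or elliptic curves.
p, q, g = 2039, 1019, 4

def H(*vals):
    """Fiat-Shamir challenge: hash the transcript instead of asking a verifier."""
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x, msg=b""):
    """Non-interactive proof of knowledge of x such that y = g^x mod p."""
    y = pow(g, x, p)
    k = secrets.randbelow(q)   # prover's random nonce
    t = pow(g, k, p)           # commitment: the first Schnorr message
    c = H(g, y, t, msg)        # challenge derived from the commitment
    s = (k + c * x) % q        # response
    return y, (t, s)

def verify(y, proof, msg=b""):
    t, s = proof
    c = H(g, y, t, msg)
    # Accept iff g^s = t * y^c (mod p), exactly as the interactive verifier would.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x = secrets.randbelow(q)       # the prover's secret
y, proof = prove(x)
print(verify(y, proof))        # prints True
```

Binding the challenge to a message msg turns the proof into a signature on that message, which is the idea behind Fiat-Shamir-style signature schemes.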

Extended Classes and Proofs

Multi-Prover Interactive Proofs

Multi-prover interactive proof systems extend the single-prover model by allowing a probabilistic polynomial-time verifier to interact with multiple computationally unbounded provers that cannot communicate with each other after receiving the common input. In this framework, known as MIP, the verifier sends queries to the provers either simultaneously or sequentially, alternating between them to prevent coordination, while the provers respond based solely on the input and their private strategies. The protocol ensures completeness, where honest provers convince the verifier to accept with high probability if the input is in the language, and soundness, where no provers can convince the verifier to accept with more than small probability if the input is not in the language. This non-communicating setup relies on separate channels for each prover, ensuring that responses are independent given the verifier's queries. The class MIP was introduced to capture languages verifiable by such systems, with the number of provers denoted as k in k-MIP and the number of rounds counting the total interactions between verifier and provers. Formally, for a language L, a k-prover r-round MIP protocol consists of a verifier V running in time polynomial in the input size n, interacting with k provers P_1, ..., P_k over r rounds, where in each round the verifier selects queries based on previous responses and its randomness. The provers share the input x but engage in no further communication, and acceptance is determined by V after all rounds. Seminal work established that even with two provers and one round, MIP encompasses nondeterministic exponential time: MIP = NEXP. 
This equality rests on the multi-prover model introduced by Ben-Or, Goldwasser, Kilian, and Wigderson in 1988 and was proved by Babai, Fortnow, and Lund in 1991, who demonstrated NEXP ⊆ MIP; the protocol was subsequently scaled down to two provers and one round, completing the characterization with a matching upper bound. MIP is strictly more powerful than the single-prover class IP, assuming PSPACE ≠ NEXP, since IP = PSPACE while MIP = NEXP. The additional provers enable the verifier to cross-check consistency across independent responses, amplifying expressive power beyond polynomial space. Soundness in MIP protocols can be amplified using parallel repetition, where multiple independent instances of the protocol are run simultaneously; for two-prover one-round systems, Raz proved in 1998 that this reduces the soundness error exponentially in the number of repetitions while preserving perfect completeness. Applications of MIP include zero-knowledge proofs for all NP languages without cryptographic assumptions, as shown by Ben-Or et al., where the multiple provers allow perfect zero-knowledge by statistically simulating honest behavior. For instance, graph isomorphism admits a constant-round two-prover zero-knowledge MIP, leveraging the model to verify isomorphism without revealing the mapping. MIP also connects to probabilistically checkable proofs through oracle models, where the verifier queries non-adaptive provers akin to accessing a static proof string, though detailed equivalences lie beyond the core MIP framework.

Probabilistically Checkable Proofs

Probabilistically checkable proofs (PCPs) provide a non-interactive framework for verifying membership in languages using limited access to a proof string. In this model, for an input x of length n, a prover supplies a proof \pi of length polynomial in n. A probabilistic verifier, running in polynomial time, uses O(\log n) random bits to select and query O(1) positions in \pi, accepting with probability 1 if x \in L (for some honest \pi) and rejecting with probability at least 1/2 if x \notin L (for all possible \pi). This setup extends interactive proof systems (IPS) by eliminating interaction: the proof encodes all necessary information upfront, and verification relies on random sampling rather than back-and-forth communication. The PCP theorem establishes that every language in NP admits such a proof system, formally NP = PCP(\log n, O(1)). This characterization was first approached by Arora and Safra, who showed that NP has PCPs with logarithmic randomness and sublogarithmic query complexity, introducing key techniques like proof composition to reduce query complexity. The full theorem, reducing queries to a constant, was completed by Arora, Lund, Motwani, Sudan, and Szegedy, who combined low-degree testing with earlier results to amplify soundness gaps while controlling proof length and randomness. A more accessible combinatorial proof was later provided by Dinur, relying on expander-graph-based gap amplification together with long-code encodings to transform weak verifiers into efficient ones without heavy algebraic machinery. PCPs relate to IPS, particularly multi-prover variants (MIP), as a non-interactive analogue: an interactive protocol can be repeated in parallel, the proof string encodes responses to all possible verifier message sequences, and the verifier randomly selects a small subset of positions to check for consistency and correctness, simulating oracle access to multiple non-communicating provers. 
This connection highlights PCPs' role in bridging interactive and static verification, with query complexity mirroring the number of prover responses read in the underlying protocol. The PCP theorem has profoundly impacted hardness of approximation by enabling strong inapproximability results for NP-hard optimization problems. For instance, it underlies the result that approximating MAX-3SAT (maximizing the number of satisfied clauses in a 3-CNF formula) within any constant factor better than 7/8 is NP-hard, as the verifier's acceptance probability translates directly into approximation ratios via gap-introducing reductions. These results, extended through label-cover problems and gadget constructions, establish that numerous problems, such as clique and set cover, cannot be approximated within certain factors unless P = NP.
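The query/gap mechanics behind these reductions can be seen in a simplified sketch: a verifier for 3-SAT whose "proof" is just an assignment and which samples a constant number of random clauses, reading only three proof positions per check. Real PCPs encode the proof so that constant queries remain sound against adversarial proof strings; this toy (all names and the tiny formula are illustrative) only shows how the acceptance probability tracks the fraction of satisfied clauses, which is exactly the MAX-3SAT gap.

```python
import random

def clause_satisfied(clause, assignment):
    # A clause is a tuple of nonzero signed literals, e.g. (1, -2, 3).
    return any((assignment[abs(lit)] == 1) == (lit > 0) for lit in clause)

def pcp_style_verifier(clauses, proof, queries=3):
    """Accept iff `queries` independently sampled random clauses are all
    satisfied; each check reads only three positions of the proof."""
    return all(clause_satisfied(random.choice(clauses), proof)
               for _ in range(queries))

clauses = [(1, 2, 3), (-1, 2, 3), (1, -2, 3), (1, 2, -3)]
good = {1: 1, 2: 1, 3: 1}    # satisfies every clause: completeness
print(all(pcp_style_verifier(clauses, good) for _ in range(500)))

# An assignment violating a 1/4 fraction of clauses is rejected with
# probability 1 - (3/4)**3 per run, so acceptance probability directly
# measures how well the proof approximates MAX-3SAT.
bad = {1: 0, 2: 0, 3: 0}     # violates the clause (1, 2, 3)
rejections = sum(not pcp_style_verifier(clauses, bad) for _ in range(5000))
print(rejections > 0)
```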

Quantum Interactive Proofs

Quantum interactive proof systems, denoted QIP, generalize classical interactive proofs to the quantum domain. In this model, the verifier is a polynomial-time quantum machine that interacts with a computationally unbounded prover, which may be classical or quantum. The interaction proceeds over quantum channels: the verifier sends qubits to the prover, and the prover responds with quantum states, potentially leveraging entanglement across multiple messages. The protocol consists of a polynomial number of rounds, with completeness requiring that for yes-instances there exists a prover strategy inducing the verifier to accept with probability at least \frac{2}{3}, and soundness ensuring that for no-instances no prover strategy yields acceptance probability exceeding \frac{1}{3}, where the probabilities arise from quantum measurements on the final state. These error probabilities can be amplified via repetition and parallelization, similar to classical IP. A key formal aspect of QIP is the role of quantum entanglement in prover responses, which enables the prover to correlate outputs in ways impossible classically, potentially exploiting superposition and interference for more concise proofs. Soundness in the quantum setting is rigorously captured using the trace distance between density operators: the verifier's acceptance test distinguishes accepting and rejecting behaviors when the trace distance between the overall quantum states exceeds a constant gap, ensuring negligible overlap between valid and invalid proof distributions. It has been established that QIP equals PSPACE, matching the power of classical IP, with PSPACE ⊆ QIP shown via constant-round protocols and QIP ⊆ PSPACE via polynomial-space approximations of optimal prover strategies. This equality shows that quantum interaction does not increase computational power beyond classical interaction for polynomial-time verifiers, though it introduces novel quantum mechanical features.
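The trace distance used in these soundness arguments is concrete and easy to compute in the single-qubit case. A minimal sketch, assuming 2x2 density matrices with entries representable as Python numbers and using the closed-form eigenvalues of a 2x2 Hermitian matrix; the function name is illustrative.

```python
import math

def trace_distance_2x2(rho, sigma):
    """(1/2) * tr|rho - sigma| for 2x2 density matrices, via the
    eigenvalues tr(M)/2 +/- sqrt((tr(M)/2)^2 - det(M)) of M = rho - sigma."""
    d = [[rho[i][j] - sigma[i][j] for j in range(2)] for i in range(2)]
    t = (d[0][0] + d[1][1]).real / 2                    # half the trace
    det = (d[0][0] * d[1][1] - d[0][1] * d[1][0]).real  # determinant
    r = math.sqrt(max(t * t - det, 0.0))
    return (abs(t + r) + abs(t - r)) / 2                # half sum of |eigenvalues|

rho = [[1, 0], [0, 0]]            # |0><0|
sigma = [[0.5, 0.5], [0.5, 0.5]]  # |+><+|
D = trace_distance_2x2(rho, sigma)
print(round(D, 4))                # 0.7071
print(round((1 + D) / 2, 4))      # 0.8536
```

The second printed value, (1 + D)/2, is the optimal probability of distinguishing the two states with a single measurement, which is exactly the quantity a verifier's acceptance test must separate by a constant gap.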
Recent advancements include space-bounded variants of QIP, such as QIPL and QIP_UL, which limit the verifier's working space to logarithmically many qubits while maintaining interactive quantum communication; these models, introduced in 2024, explore efficiency in quantum memory-constrained settings and relate to quantum logarithmic-space classes. As a non-interactive analogue, the class QMA (Quantum Merlin-Arthur) involves a single quantum proof state verified by a polynomial-time quantum verifier; it contains NP and serves as a quantum counterpart to Merlin-Arthur protocols. In the multi-prover setting, QMIP extends QIP to multiple non-communicating provers that may share entanglement; it contains the classical class MIP and equals MIP* (multi-prover interactive proofs with entangled provers). A landmark result shows that QMIP = MIP* = RE, the class of recursively enumerable languages, implying that deciding the value of such proof systems is undecidable, akin to the halting problem. These quantum extensions offer advantages in efficiency, such as reduced round complexity or communication for certain problems in quantum-secure protocols and delegated quantum computation.
