
PCP theorem

The PCP theorem, or Probabilistically Checkable Proofs theorem, is a foundational result in computational complexity theory asserting that the complexity class NP coincides with the class of languages having probabilistically checkable proofs verifiable by a probabilistic polynomial-time algorithm using O(log n) random bits to select O(1) positions in a polynomially long proof string, such that valid proofs are accepted with probability 1 while invalid proofs are rejected with probability at least 1/2. The theorem emerged from efforts to characterize NP in terms of proof verification systems, building on interactive proofs, introduced in the mid-1980s with motivations in cryptography and computational group theory. It was established in 1992 through two complementary results: the first by Sanjeev Arora and Shmuel Safra, who showed NP = PCP(log n, (log log n)^{O(1)}), and the second by Arora, Carsten Lund, Rajeev Motwani, Madhu Sudan, and Mario Szegedy, who sharpened the query complexity to a constant, giving NP = ∪_{c>0} PCP(c log n, O(1)). These works drew inspiration from prior advances, such as the 1991 results of Uriel Feige, Shafi Goldwasser, László Lovász, Safra, and Szegedy on the hardness of approximating clique, and László Babai, Lance Fortnow, and Carsten Lund's characterization of NEXP by multi-prover interactive proofs. A key innovation of the PCP theorem is its use of probabilistic verification to simulate deterministic proof checking with minimal access to the proof, enabling the construction of proofs that are locally testable via random sampling. This framework implies strong inapproximability results for NP-complete problems; for instance, it proves that approximating MAX-3SAT within a factor of 1 + ε is NP-hard for some fixed ε > 0, and similarly that approximating the maximum clique problem within a factor of n^ε is NP-hard for some ε > 0, assuming P ≠ NP. More broadly, the theorem revolutionized the study of approximation algorithms by establishing that no polynomial-time approximation scheme (PTAS) exists for MAX SNP-hard problems unless P = NP. In 2005, Irit Dinur provided an alternative, arguably more intuitive proof of the PCP theorem using a combinatorial "gap amplification" technique that iteratively expands and simplifies constraint systems without relying on heavy algebraic machinery. This perspective has influenced subsequent developments in property testing, derandomization, and connections to average-case complexity, underscoring the theorem's enduring role in bridging verification, randomness, and hardness in theoretical computer science.

Introduction

Overview of the Theorem

The PCP theorem states that every decision problem in the complexity class NP admits a probabilistically checkable proof system, in which a probabilistic polynomial-time verifier can determine membership in the language by querying only a constant number of bits from a polynomially long proof string, accepting valid proofs with certainty and rejecting invalid ones with high probability. This characterization implies that NP problems can be verified efficiently without examining the entire proof, relying instead on randomness to select positions for inspection. Intuitively, the theorem suggests that proofs for NP statements can be structured in a way that allows a verifier to "skim" the content, much like spot-checking random sections of a lengthy document to confirm its overall consistency and validity, rather than reading it cover to cover. This spot-checking mechanism ensures that any inconsistency in an invalid proof is likely to be detected with just a few probes, while valid proofs remain robust under such limited scrutiny. Key parameters of these proof systems are constant query complexity, polynomial proof length, and logarithmic randomness for the verifier; completeness can be made perfect, and the soundness error can be driven arbitrarily close to zero through repetition while the query count remains constant. Building on the definition of NP, in which certificates are verified deterministically in polynomial time by reading them in full, the PCP framework extends verification to a probabilistic model with only local access to the proof. The theorem's resolution of long-standing questions about the limits of approximation algorithms for NP-complete problems marked a pivotal advancement, demonstrating that certain optimization tasks cannot be approximated efficiently unless P = NP.

Significance in Complexity Theory

The PCP theorem has profoundly impacted the study of approximation hardness in complexity theory by demonstrating that for numerous NP-hard optimization problems, obtaining even constant-factor approximations is NP-hard. Specifically, it is NP-hard to approximate MAX-3SAT within a factor of 7/8 + ε for any ε > 0, vertex cover resists approximation ratios better than some constant, and set cover exhibits inapproximability up to logarithmic factors. These results, derived from the theorem's characterization of NP via probabilistically checkable proofs, reveal fundamental computational limits, showing that approximation does not make NP-complete optimization problems substantially easier. The theorem also bridges non-interactive proof systems with interactive ones, positioning PCPs as a special case of multi-prover interactive proofs (MIP) in which interaction is eliminated: the provers' answers are fixed in advance as a proof string queried non-adaptively in a single round. This connection underscores how PCPs extend the reach of interactive proofs, such as those equating IP to PSPACE, by enabling efficient local verification without ongoing communication between prover and verifier. Furthermore, the probabilistic nature of PCP verifiers connects to inquiries into average-case complexity and derandomization, providing frameworks to explore whether randomized algorithms in BPP can be derandomized using pseudorandom generators, thereby informing debates on whether BPP equals P. More broadly, the PCP theorem resolves longstanding open questions from the 1970s and 1980s regarding the limits of proof systems and computational hardness, particularly by confirming that NP languages admit proofs verifiable with constant queries and logarithmic randomness, thus characterizing NP in a way that highlights inherent intractability. This resolution, building on earlier doubts about the feasibility of efficient approximations for NP-complete problems, has solidified the foundation for modern hardness-of-approximation results and advanced the understanding of proof verification's role in complexity hierarchies.

Formal Definition

Probabilistic Verifiers

A probabilistic verifier in the context of probabilistically checkable proofs is a polynomial-time algorithm V that receives an input x \in \{0,1\}^n, a random string R \in \{0,1\}^{r(n)}, and oracle access to a proof \Pi: \mathbb{Z}^+ \to \{0,1\} representing the purported proof. The verifier queries the oracle at most a constant number of positions, determined non-adaptively from x and R, and outputs a Boolean decision of accept (1) or reject (0). The verifier satisfies two key properties: completeness and soundness. For completeness, if x belongs to the language L, there exists an oracle \Pi (the valid proof) such that the probability over random strings R that V^\Pi(x; R) = 1 is at least c > 1/2, often taken as 1 in ideal cases. For soundness, if x \notin L, then for every possible oracle \Pi, the probability that V^\Pi(x; R) = 1 is at most s < 1/2, ensuring that invalid proofs are rejected with high probability. The soundness error can be reduced to exponentially small values by running the verifier multiple times with independent randomness and accepting only if all runs accept; by independence of the repetitions, any invalid proof is then accepted with probability at most s^k after k runs. In the query model, the verifier interacts with the proof solely through a constant number of oracle queries, typically non-adaptive, meaning all query positions are chosen before any responses are read. This locality allows the verifier to check vast proofs efficiently without examining them in full, contrasting with traditional NP verifiers, which are deterministic (zero randomness) and read the entire proof. Repetition multiplies the randomness and query complexity only by the number of repetitions, so robustness is gained without giving up local verification.
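
To make the verifier model concrete, the following Python sketch spot-checks a purported 3-coloring of a graph by querying two positions of the proof chosen non-adaptively from the randomness, then amplifies soundness by independent repetition. The graph-coloring example, function names, and parameters are illustrative assumptions, not an actual PCP construction: a naive check of an unencoded witness has soundness error as large as 1 - 1/|E| per run, which is precisely the weakness that PCP encodings of the proof remove.

    import random

    def verify_once(graph_edges, coloring, rng):
        """One run of a naive spot-check verifier for graph 3-coloring.

        The 'proof' is a purported 3-coloring; the verifier uses its randomness
        to pick one edge (about log |E| random bits) and queries the proof at
        exactly two positions, the edge's endpoints: non-adaptive, constant
        query complexity.
        """
        u, v = rng.choice(graph_edges)   # query positions fixed before reading the proof
        return coloring[u] in (0, 1, 2) and coloring[v] in (0, 1, 2) and coloring[u] != coloring[v]

    def verify_repeated(graph_edges, coloring, k, seed=None):
        """Accept only if all k independent runs accept.

        Completeness: a valid coloring is always accepted.
        Soundness: a proof violating some edge is caught with probability at
        least 1/|E| per run, so the error decays as (1 - 1/|E|)^k.  The PCP
        theorem asserts that NP proofs can be re-encoded so that a constant
        number of queries already gives constant soundness error.
        """
        rng = random.Random(seed)
        return all(verify_once(graph_edges, coloring, rng) for _ in range(k))

    # Toy usage: a triangle has the valid 3-coloring {0, 1, 2};
    # the all-zero "proof" is rejected after a few repetitions.
    edges = [(0, 1), (1, 2), (0, 2)]
    print(verify_repeated(edges, {0: 0, 1: 1, 2: 2}, k=20, seed=1))  # True  (valid proof)
    print(verify_repeated(edges, {0: 0, 1: 0, 2: 0}, k=20, seed=1))  # False (invalid proof)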

PCP Classes and Parameters

The notation for probabilistically checkable proof (PCP) classes incorporates parameters that capture the verifier's completeness probability c, soundness probability s, randomness complexity r(n), and query complexity q(n). Specifically, \mathsf{PCP}_{c,s}[r(n), q(n)] denotes the class of languages L for which there exists a probabilistic verifier running in polynomial time, using at most r(n) random bits, and querying at most q(n) bits of a polynomial-length proof string, such that the verifier accepts with probability at least c if x \in L (for some proof) and accepts with probability at most s if x \notin L (for all proofs). A formal definition of such a verifier V for an input x of length n involves a proof string \pi of length polynomial in n and a random string R with |R| = r(n). The verifier V(x, \pi, R) computes a set of at most q(n) positions in \pi based on x and R, queries those positions, and outputs 1 (accept) or 0 (reject) accordingly. The completeness condition requires that for every x \in L, there exists a \pi such that \Pr_{R}[V(x, \pi, R) = 1] \geq c, while the soundness condition ensures that for every x \notin L and every \pi, \Pr_{R}[V(x, \pi, R) = 1] \leq s. The PCP theorem establishes that every language in NP admits a PCP verifier with perfect completeness c = 1, soundness s = 1/2, logarithmic randomness r(n) = O(\log n), constant queries q(n) = O(1), and polynomial-length proofs. More precisely, \mathsf{NP} = \bigcup_{c > 0} \mathsf{PCP}_{1,1/2}(c \log n, O(1)), indicating that logarithmic randomness and constant queries suffice. The soundness error can be reduced from s = 1/2 to 2^{-k} for any polynomial k by independently repeating the verifier k times and accepting only if all repetitions accept; this preserves completeness at c = 1 while exponentially reducing the probability that an invalid proof is accepted. Variants of PCP classes arise from other parameter settings; the theorem's \mathsf{PCP}(O(\log n), O(1)) form is notable for reducing the randomness to logarithmic while maintaining constant queries. Additionally, PCPs with perfect completeness (c = 1) can be constructed, and randomness-efficient error reduction, for example via expander random walks, lowers the soundness error without a proportional increase in random bits.
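
As a worked illustration of the amplification step just described, assuming k independent repetitions of a verifier with c = 1 and s = 1/2:

\begin{aligned}
\Pr[\text{all } k \text{ runs accept} \mid x \in L, \text{ honest } \pi] &= 1^{k} = 1, \\
\Pr[\text{all } k \text{ runs accept} \mid x \notin L, \text{ any } \pi] &\le (1/2)^{k} = 2^{-k},
\end{aligned}

at the cost of k \cdot r(n) random bits and k \cdot q(n) queries, so a constant number of repetitions keeps the query complexity constant, while any polynomial k drives the soundness error exponentially low.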

Implications

Hardness of Approximation

The PCP theorem establishes strong inapproximability results for NP-hard optimization problems by enabling reductions from probabilistically checkable proofs to constraint satisfaction problems (CSPs). Specifically, the checks of a PCP verifier can be encoded into CSPs such as 3SAT or 3-LIN over GF(2), where a valid proof corresponds to a high-value assignment in the CSP (completeness), while any purported proof of a false statement yields a low-value assignment (soundness). These reductions preserve the logarithmic randomness and constant query complexity of the PCP, implying that approximating the CSP beyond a certain ratio would allow deciding the original NP language in polynomial time, which is impossible unless P = NP. A key step in these reductions is gap amplification, which transforms the small gap inherent in weak verifiers—typically soundness on the order of 1 - 1/poly(n)—into a constant-factor gap suitable for hardness results. Techniques such as parallel repetition amplify the gap exponentially with the number of repetitions while keeping each test local, thereby yielding constant-factor inapproximability thresholds that match the guarantees of known algorithms up to arbitrary ε > 0; even modest improvements over simple baseline algorithms thereby become NP-hard. Concrete results include the inapproximability of MAX-3SAT, where no polynomial-time algorithm can approximate the maximum number of satisfiable clauses within a factor of 7/8 + ε unless P = NP; a random assignment satisfies a 7/8 fraction of the clauses in expectation, making this threshold optimal. Similarly, for MAX-E3-LIN(2)—maximizing the number of satisfied linear equations over GF(2) with three variables per equation—it is NP-hard to approximate within 1/2 + ε, again matching the random assignment baseline. These bounds arise from direct reductions from PCPs to gap versions of the problems, such as distinguishing cases where (almost) all clauses or equations are satisfiable from those where at most a 7/8 + ε (respectively 1/2 + ε) fraction are. Such techniques extend to other problems, including MAX-CUT, where it is NP-hard to approximate the maximum cut in a graph within 16/17 + ε, which still leaves a gap to the roughly 0.878 guarantee of the known semidefinite programming algorithm. Label Cover, an intermediate problem in these reductions, is NP-hard to approximate within any constant factor over sufficiently large alphabets, serving as a hub for hardness amplification to other CSPs. More generally, predicate satisfaction problems—CSPs defined by arbitrary predicates—inherit similar constant-factor hardness from PCP encodings, underscoring the theorem's broad impact on approximation complexity. For example, the gap version of MAX-3SAT requires distinguishing instances in which at least a 1 - ε fraction of clauses can be satisfied from instances in which at most a 7/8 + ε fraction can, and PCP-based reductions show that this gap is NP-hard to resolve. This formalizes the inapproximability as: \begin{cases} \text{Completeness: } \exists \text{ assignment satisfying } \geq 1 - \epsilon \text{ fraction of clauses} \\ \text{Soundness: } \forall \text{ assignments satisfy } \leq 7/8 + \epsilon \text{ fraction of clauses} \end{cases} Deciding this gap problem in polynomial time would imply P = NP via these reductions.
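
The random-assignment baselines cited above can be checked directly. The following sketch computes, by exhaustive enumeration over a toy instance, the expected fraction of satisfied constraints under a uniformly random assignment; the instance data and helper names are hypothetical and chosen only for illustration.

    import itertools
    import random

    def frac_satisfied_3sat(clauses, assignment):
        # Fraction of satisfied 3-CNF clauses; a literal is a pair (variable, is_positive).
        satisfied = sum(any(assignment[v] == want for v, want in clause) for clause in clauses)
        return satisfied / len(clauses)

    def frac_satisfied_3lin(equations, assignment):
        # Fraction of satisfied equations x_i XOR x_j XOR x_k = b over GF(2).
        satisfied = sum((assignment[i] ^ assignment[j] ^ assignment[k]) == b
                        for i, j, k, b in equations)
        return satisfied / len(equations)

    def expected_over_random_assignment(n_vars, score):
        # Exact expectation of `score` under a uniformly random 0/1 assignment.
        values = [score(dict(enumerate(bits)))
                  for bits in itertools.product([0, 1], repeat=n_vars)]
        return sum(values) / len(values)

    # Hypothetical random instances on 4 variables, three distinct variables per constraint.
    rng = random.Random(0)
    n = 4
    clauses = [tuple((v, rng.random() < 0.5) for v in rng.sample(range(n), 3)) for _ in range(40)]
    equations = [tuple(rng.sample(range(n), 3)) + (rng.randrange(2),) for _ in range(40)]

    print(expected_over_random_assignment(n, lambda a: frac_satisfied_3sat(clauses, a)))  # 0.875 = 7/8
    print(expected_over_random_assignment(n, lambda a: frac_satisfied_3lin(equations, a)))  # 0.5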

Connections to Other Results

The PCP theorem provides a precise characterization of the complexity class NP in terms of probabilistically checkable proofs with logarithmic randomness and constant query complexity. Specifically, it establishes that NP = PCP[O(log n), O(1)], meaning every language in NP admits a polynomial-length proof that a probabilistic verifier can check by reading only a constant number of bits using O(log n) random bits, with perfect completeness and soundness error at most 1/2. This equivalence, proved originally with algebraic techniques and later reproved via Dinur's combinatorial gap amplification applied iteratively to constraint satisfaction problems, frames NP as a class of languages with highly efficient, local verifiability. The PCP theorem also illuminates the relationship between non-interactive and interactive proof systems. A PCP can be viewed as a non-interactive analogue of a one-round multi-prover interactive proof (MIP) in which the provers' answers are committed in advance as a proof string that the verifier queries non-adaptively. This connection highlights how the non-interactivity of PCPs enables the characterization of NP without multiple rounds of communication, in contrast with the scaled-up characterization MIP = NEXP, which in PCP terms corresponds to verifiers using polynomially many random bits. Beyond proofs, the tools of the PCP theorem interact with derandomization through hardness-randomness tradeoffs: the same style of error-correcting encodings and low-degree machinery appears in constructions of pseudorandom generators, and, in particular, if some problem solvable in exponential time requires circuits of exponential size, then BPP collapses to P. In cryptography, the PCP theorem underpins constructions of succinct arguments and zero-knowledge protocols for NP: Kilian's argument system commits to a PCP with a hash tree and opens only the queried positions, and such compilations, later made non-interactive, support protocols with few rounds and sublinear verification, forming the conceptual basis for practical systems like zk-SNARKs and for delegating computation without trusting the prover. Finally, PCPs relate closely to property testing and sublinear algorithms by modeling local checkability of global objects. The verifier's ability to query a constant number of proof bits mirrors sublinear-time algorithms that test whether a function or combinatorial object satisfies a property by probing few positions, rejecting objects that are far from every satisfying instance. This analogy has inspired property testing techniques, such as linearity tests derived from PCP constructions, which efficiently distinguish functions close to linear from those far from linear, extending to broader classes of algebraic and combinatorial properties.
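
The linearity test mentioned above can be sketched concretely. The following code implements the three-query Blum-Luby-Rubinfeld (BLR) test, which accepts every GF(2)-linear function and rejects, with constant probability per trial, functions that are far from all linear functions; the example functions and names are illustrative assumptions rather than part of any particular PCP construction.

    import random

    def blr_linearity_test(f, n, trials, rng):
        """BLR linearity test: query f at x, y, and x XOR y, and check that
        f(x) XOR f(y) == f(x XOR y).  Linear functions always pass; functions
        far from every linear function are rejected with constant probability
        per trial (three queries per trial)."""
        for _ in range(trials):
            x = rng.getrandbits(n)
            y = rng.getrandbits(n)
            if (f(x) ^ f(y)) != f(x ^ y):
                return False
        return True

    def parity_of_mask(mask):
        # A linear function over GF(2): XOR of the bits selected by `mask`.
        return lambda x: bin(x & mask).count("1") % 2

    rng = random.Random(0)
    n = 16
    linear_f = parity_of_mask(0b1010110010101100)
    majority_f = lambda x: 1 if bin(x).count("1") >= n // 2 else 0  # far from linear

    print(blr_linearity_test(linear_f, n, trials=50, rng=rng))    # True
    print(blr_linearity_test(majority_f, n, trials=50, rng=rng))  # almost surely False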

Proof Techniques

Core Ideas and Components

The core ideas underlying the proof of the PCP theorem revolve around transforming powerful interactive proof systems into efficient non-interactive probabilistically checkable proofs. A fundamental insight is the reduction of interactive prover communication to non-interactive oracles: in multi-prover interactive proofs (MIP), the verifier interacts with multiple non-communicating provers, but the provers' strategies can be written down in advance as oracles that the verifier queries using its randomness to simulate the interaction, effectively collapsing the protocol into a PCP. The high-level flow of the original proof scales the techniques behind the characterization MIP = NEXP down to NP in order to establish NP = PCP(O(log n), O(1)): the polynomial-size certificates of NP languages are re-encoded so that a verifier with logarithmic randomness and constant queries can check them, matching the parameters in the formal PCP definition. Parallel repetition plays a crucial role in amplifying soundness while controlling query complexity. This technique repeats the base protocol multiple times in parallel, with the verifier generating independent random challenges for each repetition and sending all questions at once; for two-prover one-round protocols with soundness error bounded away from 1, the error decreases exponentially in the number of repetitions while the protocol remains one round. Gadget reductions enable the composition of PCPs designed for simple predicates or languages into PCPs for general NP-complete problems like 3SAT. These reductions encode variables and constraints using local gadgets—small sub-proofs or constraint systems—that preserve completeness and amplify gaps in soundness, allowing the verifier to test arbitrary NP instances by breaking them down into verifiable local checks on the composed proof. The long code test ensures local consistency of the proof oracle by leveraging a highly redundant error-correcting code. The long code encodes a k-bit string w as a table of length 2^{2^k} listing the value f(w) for every Boolean function f on k bits; long code tests query such tables at a few correlated random positions and reject, with constant probability, tables that are far from every valid codeword, thereby detecting inconsistencies with only a small number of queries.
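
A minimal sketch of the long code encoding described above, with hypothetical helper names: it enumerates every Boolean function on k bits and records its value at the encoded string, making the doubly exponential length explicit (which is why the encoding is applied only to constant-size pieces of a proof).

    from itertools import product

    def long_code(w_bits):
        """Long code of a k-bit string w: for every Boolean function f on k bits
        (represented by its 2^k-bit truth table), record f(w).  The codeword
        therefore has length 2^(2^k)."""
        k = len(w_bits)
        w_index = int("".join(map(str, w_bits)), 2)          # position of w among the 2^k inputs
        codeword = []
        for truth_table in product([0, 1], repeat=2 ** k):   # enumerate all functions f
            codeword.append(truth_table[w_index])             # entry indexed by f is f(w)
        return codeword

    # Usage: for k = 2 the codeword has length 2^(2^2) = 16.
    print(len(long_code([1, 0])))   # 16
    print(long_code([1, 0]))        # the values f(10) over all 16 functions f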

Key Constructions in the Proof

One pivotal construction in the proof of the PCP theorem is the Arora-Safra construction, which introduced recursive proof composition: an outer verifier with low randomness is combined with an inner verifier that checks the outer verifier's decision while reading only a small encoded portion of the proof. This yields PCPs for NP with logarithmic randomness and sub-logarithmic query complexity, NP = PCP(log n, (log log n)^{O(1)}), and, combined with the reduction of Feige, Goldwasser, Lovász, Safra, and Szegedy, implies early inapproximability results for MAX CLIQUE assuming P ≠ NP; the full PCP theorem strengthens such clique hardness to factors of n^ε for some ε > 0. Central to verifying the algebraic structure of proofs in these constructions are tests on algebraic encodings—such as linearity tests on Hadamard (inner-product) encodings—and low-degree tests, which ensure that purported low-degree polynomial proofs over finite fields satisfy the expected properties with high probability. Linearity-type tests compare a few random positions of an encoding (for instance, checking that the entries at x, y, and x + y are consistent); if the table is far from any valid encoding, the test rejects with probability bounded away from zero. Low-degree tests extend this by sampling random lines or planes and checking agreement with a degree-d polynomial restricted to them, with soundness analyses relying on the fact that distinct low-degree polynomials agree on few points (the Schwartz-Zippel lemma). These tests underpin the algebraic approach in the full theorem, enabling verifiers to check proof consistency without reading the entire proof. A key encoding mechanism in later PCP constructions is the long code, which represents a small piece of the original proof as a highly redundant truth table to facilitate local consistency checks. Formally, the long code of an element a ∈ [m] is the truth table of the dictator function f_a: {-1,1}^m → {-1,1} defined by f_a(x) = x_a; the full proof is a collection of such tables, one for each local view of the assignment, allowing tests to probe a few correlated positions of each table. This encoding ensures that consistent assignments yield codewords that pass the local tests, while tables far from every codeword fail with noticeable probability, as shown via Fourier analysis: functions that pass the tests must have an influential coordinate and hence be close to dictators. The long code's exponential length is acceptable because it is applied only to constant-size pieces of the proof, and it is crucial for amplifying gaps in approximation hardness. Håstad's 3-bit PCP construction achieves optimal inapproximability parameters by composing an outer two-prover protocol, obtained via parallel repetition, with long code encodings of the provers' answers, yielding a verifier that reads exactly three bits. Each check has the form of a 3-bit linear predicate x_1 ⊕ x_2 ⊕ x_3 = b, where the three bits are read from long code tables at correlated random positions; the verifier accepts satisfiable instances with probability at least 1 - ε and accepts any proof of an unsatisfiable instance with probability at most 1/2 + ε, for arbitrarily small ε > 0.
Because the verifier's acceptance probability equals the fraction of satisfied 3-bit linear checks, this construction directly yields NP-hardness for approximating MAX-E3-LIN(2) within 1/2 + ε and, via standard gadgets, MAX-3SAT within 7/8 + ε. Dinur's gap amplification technique takes a different, combinatorial route: it views a PCP as a constraint graph and iteratively amplifies its unsatisfiability gap. Starting from a constraint satisfaction instance whose gap is only 1/poly(n), each round first regularizes the constraint graph into a constant-degree expander, then applies graph powering—tests defined by constant-length random walks on the expander—to multiply the gap by a constant factor, and finally performs alphabet reduction by composing with a small inner PCP built from long-code-style assignment testers to bring the alphabet back to constant size. The expander's mixing properties ensure that inconsistencies in a purported proof propagate into many of the walk-based tests. After O(log n) rounds the gap becomes an absolute constant while the query complexity and alphabet remain constant, yielding a fully combinatorial proof of the PCP theorem and highlighting the role of expanders in bridging algebraic and graph-theoretic hardness.
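
The random-walk intuition behind graph powering can be illustrated with a toy sketch, under simplifying assumptions: a fixed set of violated edges stands in for unsatisfied constraints, the alphabet enlargement and alphabet reduction of the actual construction are omitted, and the graph and function names are hypothetical stand-ins.

    import random

    def walk_test(adj, violated, t, rng):
        """One amplified test: walk t steps on the constraint graph and accept
        only if no edge on the walk is violated.  Compared with checking a single
        random edge, one walk inspects about t edges, so a sparse set of violated
        edges is detected with roughly t times the probability (up to overlaps),
        which is the gap-amplification effect of graph powering."""
        v = rng.choice(list(adj))
        for _ in range(t):
            u = rng.choice(adj[v])
            if frozenset((v, u)) in violated:
                return False
            v = u
        return True

    def rejection_rate(adj, violated, t, samples, seed=0):
        rng = random.Random(seed)
        return sum(not walk_test(adj, violated, t, rng) for _ in range(samples)) / samples

    # A 4-regular circulant graph on 60 vertices (a simple stand-in for an expander)
    # with a single violated edge; longer walks detect the violation more often.
    n = 60
    adj = {i: [(i - 1) % n, (i + 1) % n, (i - 7) % n, (i + 7) % n] for i in range(n)}
    violated = {frozenset((0, 1))}
    print(rejection_rate(adj, violated, t=1, samples=20000))  # about 1/120 (one edge out of 120)
    print(rejection_rate(adj, violated, t=8, samples=20000))  # noticeably larger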

History and Developments

Origins of the Concept

The foundations of the PCP theorem trace back to early questions in computational complexity regarding the efficiency of proof verification. In the early 1970s, the work of Cook, Karp, and Levin formalized NP-completeness and characterized NP as the class of decision problems whose "yes" instances have short certificates that can be verified in polynomial time, highlighting the tension between proof existence and efficient checking. This work raised broader inquiries into whether more powerful verification models could extend beyond NP while maintaining computational tractability, motivating subsequent explorations of probabilistic and interactive frameworks. The 1980s saw significant advancements through the introduction of interactive proofs, which allowed verifiers to engage in back-and-forth communication with untrusted provers. In 1985, Shafi Goldwasser, Silvio Micali, and Charles Rackoff defined interactive proof systems, initially motivated by cryptographic applications such as zero-knowledge proofs, where the verifier learns nothing beyond the statement's validity. Independently in the same year, László Babai introduced Arthur-Merlin protocols—a public-coin variant of interactive proofs where the verifier (Arthur) sends random challenges and the prover (Merlin) responds—demonstrating their utility for group-theoretic problems such as verifying the order of a matrix group; soon afterward, constant-round interactive protocols were also found for coNP-type problems such as graph non-isomorphism, enabling efficient verification without full proof inspection. These developments shifted focus from static certificates to dynamic, probabilistically verifiable interactions, laying groundwork for probabilistic extensions of NP-style proof verification. The specific model behind probabilistically checkable proofs emerged in 1988 from efforts to characterize multi-prover interactive protocols: Lance Fortnow, John Rompel, and Michael Sipser characterized multi-prover systems in terms of proof strings verified by a randomized verifier with oracle access that reads only a few randomly chosen positions—the model later named probabilistically checkable proofs. This formulation captured the idea of reducing global proof verification to local checks, bridging interactive proofs with traditional certificate verification and setting the stage for later breakthroughs in approximation hardness.

The 1990s Breakthrough

In the early 1990s, significant progress toward the PCP theorem came from connecting proof checking to approximation. In 1991, Uriel Feige, Shafi Goldwasser, László Lovász, Shmuel Safra, and Mario Szegedy showed, via a connection between multi-prover interactive proof systems and the clique problem, that approximating the maximum clique within any constant factor would imply unexpectedly fast (quasi-polynomial-time) algorithms for all of NP; this provided the first strong inapproximability evidence for a natural optimization problem. Building on these ideas, Sanjeev Arora and Shmuel Safra introduced the PCP notation and recursive proof composition in 1992 and proved that every language in NP has a PCP verifier using O(log n) random bits and (log log n)^{O(1)} queries, establishing NP = PCP[log n, (log log n)^{O(1)}]. This work named the class and connected it directly to NP, highlighting the potential for low-query verification while leaving the query complexity non-constant. Concurrently, Arora, Carsten Lund, Rajeev Motwani, Madhu Sudan, and Mario Szegedy sharpened the result to NP = PCP[log n, 1], showing that a constant number of queries suffices; their proof combined algebraic encodings and low-degree tests with recursive proof composition, and yielded reductions establishing hardness of approximation for problems such as MAX-3SAT. These conference results from 1992 established the full PCP theorem NP = PCP[O(log n), O(1)], with journal versions appearing in 1998. The proofs were later simplified by Irit Dinur in 2005 using a gap-amplification approach based on expander graphs.

Recent Extensions and Quantum Analogs

In 2005, Irit Dinur provided a significantly simpler proof of the PCP theorem, relying on an elementary construction that amplifies gaps using expander graphs to achieve constant-query PCPs for NP without the intricate algebraic techniques of prior works. This approach demonstrates how expander-based tests preserve the acceptance of valid proofs while exposing inconsistencies in invalid ones through repeated local checks. Building on such foundations, advancements in PCP variants have addressed additional properties such as smoothness—meaning every position of the proof is queried with equal probability—and strength, meaning proofs far from any valid proof are rejected with probability proportional to their distance. In 2021, Paradise constructed polynomial-length, constant-query PCPs for NP that are simultaneously smooth and strong, enabling more robust applications in proof composition and delegation protocols; smoothness in particular ensures that no single proof bit is read disproportionately often. Recent developments have extended PCPs to zero-knowledge settings, where the verifier learns nothing beyond the statement's validity. In 2024, Gur, O'Connor, and Spooner introduced perfect zero-knowledge PCPs (PZK-PCPs) for #P, using masked sumcheck protocols to hide counting information while preserving constant queries and polynomial length; these results enable verifiable computation for counting problems without revealing solution details. This was further advanced in 2025 by the same authors at STOC, who established a zero-knowledge PCP theorem for NP, constructing non-adaptive, polynomial-size, constant-query PCPs that are perfect zero-knowledge against adaptive polynomial-time distinguishers, for any desired polynomial bound on the distinguisher's queries. The quantum PCP (QPCP) conjecture posits an analogous framework for quantum proofs, where a quantum verifier checks a polynomial-size quantum proof by measuring only a constant number of its qubits to decide languages in QMA with high probability. Progress toward this conjecture remains ongoing, and 2025 saw advances in handling adaptivity and multiple provers. In July 2025, Buhrman, Helsen, and Weggemans, in the journal Quantum, formalized adaptive multi-prover quantum PCPs, constructing unentangled-prover systems that reduce to local Hamiltonian problems while bounding the verifier's measurement complexity. Complementing this, at QIP 2025, Arad and Santha introduced quasi-quantum states—relaxed density operators without positivity constraints—and proved a quasi-quantum PCP theorem, yielding hardness-of-approximation gaps for k-local Hamiltonians over these states and bridging classical PCP techniques to quantum settings. These extensions have deepened connections to quantum complexity classes, particularly quantum Merlin-Arthur (QMA) and the local Hamiltonian problem, where QPCP constructions would imply constant-factor inapproximability results for local Hamiltonians if the conjecture holds. Similarly, quasi-quantum PCPs suggest a pathway between classical PCP analysis and quantum proof verification, with potential implications for approximating ground-state energies in quantum many-body systems.
