
Probabilistically checkable proof

A probabilistically checkable proof (PCP) is a type of proof system in computational complexity theory in which a probabilistic verifier can confirm the validity of a mathematical statement by accessing only a constant number of bits from a potentially long proof string, selected via random coin flips, while achieving high confidence in its decision: accepting valid proofs with probability at least 2/3 and rejecting invalid ones with probability at least 2/3. The foundational PCP theorem, announced in 1992 and published in full in 1998 by Sanjeev Arora, Carsten Lund, Rajeev Motwani, Madhu Sudan, and Mario Szegedy, states that the complexity class NP is exactly the class of languages having probabilistically checkable proofs verifiable using O(log n) random bits and a constant number of queries to the proof, denoted as NP = PCP[O(log n), O(1)]. This result built on earlier partial advances, including Arora and Safra's 1992 characterization of NP via PCPs with logarithmic randomness and sublogarithmic query complexity, and culminated a series of breakthroughs starting from the 1986 interactive proof for graph non-isomorphism by Goldreich, Micali, and Wigderson. A simpler proof of the theorem was later provided by Irit Dinur in 2007 using techniques such as gap amplification on expander graphs. PCPs have profound implications for hardness of approximation, particularly in establishing the inapproximability of NP-hard optimization problems; for instance, they imply that, assuming P ≠ NP, it is impossible to approximate the maximum number of satisfiable clauses of a k-SAT formula within a factor better than some constant depending on k. Johan Håstad's refinement, published in 2001, further strengthened these hardness results, showing that NP has three-query PCPs with O(log n) randomness, completeness 1 - ε, and soundness 1/2 + ε for arbitrary ε > 0, enabling tight inapproximability bounds for problems like MAX-3SAT. Beyond approximation algorithms, PCPs underpin modern areas such as property testing, sublinear algorithms, and the study of proof systems in cryptography and coding theory.

Core Concepts

Definition

A probabilistically checkable proof (PCP) for an input x of length n claiming membership in a language L is a proof \pi (of length at most q(n) \cdot 2^{r(n)}, which is polynomial in n when r(n) = O(\log n)) such that a probabilistic verifier V can check the claim with high probability using only a bounded amount of randomness and access to a small number of bits in \pi. The verifier V is a probabilistic polynomial-time algorithm that tosses r(n) random bits to select positions in \pi and makes q(n) non-adaptive queries to those positions, deciding to accept or reject based on the responses. Formally, the PCP has completeness at least 2/3: if x \in L, there exists a proof \pi such that \Pr[V^{\pi}(x) = 1] \geq 2/3, where the probability is over V's random coins. It has soundness at most 1/3: if x \notin L, then for every proof \pi, \Pr[V^{\pi}(x) = 1] \leq 1/3. More generally, the completeness can be any c(n) and the soundness any s(n) as long as c(n) - s(n) is at least an inverse polynomial, since the error probabilities can then be amplified by repetition. The class PCP[r(n), q(n)] consists of all languages L admitting such a verifier V with randomness complexity r(n) and query complexity q(n). Unlike standard NP verification, where a deterministic verifier must read the entire (potentially polynomial-length) proof, PCP verification is non-adaptive (all query positions are chosen before any response is seen), probabilistic, and bounded in both randomness and queries, allowing the verifier to "check" even exponentially long proofs efficiently. As a special case, NP coincides with PCP[0, poly(n)], where the verifier uses no randomness and makes polynomially many queries. A basic example arises in verifying satisfiability: a Boolean formula can be reduced to a low-degree polynomial representation over a finite field, yielding a PCP in which the verifier tests consistency and low-degree properties with a constant number of queries (q(n) = O(1)) to confirm satisfiability with high probability.
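For illustration, the following minimal Python sketch shows the shape of such a non-adaptive verifier: the random coins alone determine which proof positions are read, and a local predicate on those few bits determines acceptance. The helper names (run_verifier, query_positions, decide) and the toy equality check are hypothetical illustrations, not any real PCP construction.

```python
import random

def run_verifier(x, proof, r_bits, query_positions, decide):
    """One execution of a non-adaptive PCP verifier.

    x               -- the input instance
    proof           -- the proof string, indexed by position (list of bits)
    r_bits          -- number of random bits r(n) to toss
    query_positions -- function (x, rho) -> list of q(n) positions,
                       chosen before any proof bit is seen (non-adaptive)
    decide          -- local predicate (x, rho, answers) -> accept/reject
    """
    rho = [random.randrange(2) for _ in range(r_bits)]     # random coin tosses
    positions = query_positions(x, rho)                     # depend only on x and rho
    answers = [proof[p % len(proof)] for p in positions]    # read q(n) proof bits
    return decide(x, rho, answers)

# Toy usage: a two-query verifier that checks two proof positions for equality.
accept = run_verifier(
    x=None,
    proof=[1, 1, 1, 1],
    r_bits=4,
    query_positions=lambda x, rho: [rho[0] * 2 + rho[1], rho[2] * 2 + rho[3]],
    decide=lambda x, rho, answers: answers[0] == answers[1],
)
print(accept)
```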

Verifier Model

In the verifier model for probabilistically checkable proofs (PCPs), the verifier V is formalized as a probabilistic polynomial-time Turing machine that operates efficiently while accessing only a small portion of a potentially long proof oracle. Specifically, V runs in time polynomial in the input size n, uses a random string of length r(n) to generate its queries, and has oracle access to a proof \pi, which is a bit string of length typically polynomial in n. This setup emphasizes query efficiency: rather than reading the entire proof, V makes at most q(n) non-adaptive queries to specific positions in \pi, meaning the query locations are determined solely by the input x, the random string \rho, and V's internal state, without depending on the responses to previous queries. The verifier's decision process is defined by the function V(x, \pi, \rho) \to \{0,1\}, where x is the input instance, \pi is the proof, and \rho \in \{0,1\}^{r(n)} is the random string drawn uniformly. Upon querying \pi, V performs local checks on the retrieved bits to decide acceptance (1) or rejection (0). For completeness, if x is a yes-instance (i.e., x \in L), there exists a proof \pi such that the acceptance probability over the randomness satisfies \Pr_{\rho}[V(x, \pi, \rho) = 1] \geq c(n), where c(n) is a completeness parameter, often equal to or close to 1. For soundness, if x is a no-instance (i.e., x \notin L), then for every possible proof \pi, \Pr_{\rho}[V(x, \pi, \rho) = 1] \leq s(n), with s(n) < c(n) defining the soundness gap, ensuring that invalid proofs are rejected with high probability. These probabilities are taken over the uniform distribution on \rho, and the model guarantees that V cannot be fooled by malicious proofs beyond the soundness bound. The verifier model can be viewed as the non-interactive limit of interactive proof systems, where the prover commits to the entire proof upfront for the verifier to probe probabilistically. The focus on non-adaptive, low-query access distinguishes PCPs from traditional deterministic verification, enabling efficient probabilistic checking while retaining the power to capture complexity classes such as NP.
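Because r(n) is small in the regimes of interest, the acceptance probability \Pr_{\rho}[V(x, \pi, \rho) = 1] can, for toy examples, be computed exactly by enumerating every coin sequence. The sketch below does this for a hypothetical two-query verifier that checks whether a proof string is constant; the names and the toy property being checked are illustrative assumptions, not part of the formal model.

```python
from itertools import product

def acceptance_probability(x, proof, r_bits, verifier):
    """Exact Pr_rho[V(x, proof, rho) = 1] by enumerating all 2^r coin sequences.

    verifier(x, proof, rho) -> bool makes its non-adaptive queries internally;
    exhaustive enumeration is feasible here only because r_bits is tiny.
    """
    outcomes = [verifier(x, proof, rho) for rho in product((0, 1), repeat=r_bits)]
    return sum(outcomes) / len(outcomes)

# Toy verifier: the claim is "the proof is a constant string"; two positions
# derived from the coins are compared.  (Hypothetical example, not a real PCP.)
def toy_verifier(x, proof, rho):
    half = len(rho) // 2
    i = sum(b << k for k, b in enumerate(rho[:half])) % len(proof)
    j = sum(b << k for k, b in enumerate(rho[half:])) % len(proof)
    return proof[i] == proof[j]

print(acceptance_probability(None, [1, 1, 1, 1], 4, toy_verifier))  # completeness: 1.0
print(acceptance_probability(None, [0, 1, 0, 1], 4, toy_verifier))  # a bad proof: 0.5
```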

Historical Development

Early Foundations

The development of probabilistically checkable proofs (PCPs) was preceded by foundational work on interactive proof systems, which extended the traditional verification model by incorporating randomness and interaction between a verifier and a prover. In this interactive paradigm, a computationally bounded verifier engages in a conversation with an unbounded prover to ascertain membership in a language, accepting correct claims with high probability while rejecting incorrect ones. This model built upon NP verification, where checking is non-interactive and deterministic, but introduced probabilistic elements to handle more complex decision problems. László Babai introduced Arthur-Merlin games in 1985, in which the verifier (Arthur) sends random challenges and the prover (Merlin) responds; interactive proofs in this style admit protocols such as the graph non-isomorphism proof of Goldreich, Micali, and Wigderson. In 1988, Ben-Or, Goldwasser, Kilian, and Wigderson extended the framework to multi-prover interactive proofs (MIP), in which multiple non-communicating provers respond to the verifier's queries; the provers' inability to coordinate strengthens the system, and in particular they showed that all NP languages have perfect zero-knowledge proof systems in this model without cryptographic assumptions. The work of Shafi Goldwasser, Silvio Micali, and Charles Rackoff (journal version 1989) introduced the notion of knowledge complexity, quantifying how much information a verifier learns from an interactive proof beyond the mere validity of the statement, leading to the definition of zero-knowledge proofs in which the verifier gains no additional knowledge. This framework highlighted the security properties of interactive systems and motivated reductions in interaction while maintaining privacy. Building on these ideas, Babai, Lance Fortnow, and Carsten Lund showed in 1990 that MIP equals NEXP, establishing that the languages verifiable by multi-prover protocols with polynomial-time verifiers are exactly those decidable by nondeterministic exponential-time machines, thus linking bounded-randomness proof checking to exponential nondeterminism. A key conceptual bridge from interactive to non-interactive models emerged via the Fiat-Shamir heuristic, proposed by Amos Fiat and Adi Shamir in 1986, which transforms interactive identification protocols into non-interactive signatures by replacing verifier challenges with hash function outputs, assuming the hash behaves as a random oracle. This technique laid the groundwork for efficient, standalone proof systems without real-time interaction, influencing later non-interactive variants in complexity theory.

PCP Theorem

The PCP theorem states that every language in NP has a probabilistically checkable proof verifiable by a probabilistic polynomial-time algorithm using O(\log n) random bits and making O(1) queries to the proof, formally expressed as \mathrm{NP} = \mathrm{PCP}[O(\log n), O(1)]. This characterization implies that any NP language admits proofs of polynomial length that can be checked with high probability by reading only a constant number of bits from the proof. The theorem was established in 1992 through a series of results, including the partial characterization of NP by Sanjeev Arora and Shmuel Safra using logarithmic randomness and sublogarithmic query complexity, culminating in the work of Arora, Carsten Lund, Rajeev Motwani, Madhu Sudan, and Mario Szegedy; the complete details appeared in the full journal version in 1998. Their proof built on earlier interactive proof systems, including multi-prover variants that served as precursors. At a high level, the proof begins with arithmetization techniques in the style of Lund, Fortnow, Karloff, and Nisan, which transform NP computations into low-degree polynomial identities over finite fields that can be verified algebraically. It then uses proof composition to reduce the number of queries while preserving a gap between accepting and rejecting behavior, controlling both the error probabilities and the query complexity. Low-degree testing and self-correction mechanisms, which ensure that local consistency implies closeness to a globally correct encoding, complete the construction by enabling efficient recovery of proof values from a few queries. This result unifies the deterministic verification of NP with probabilistically checkable proofs, providing a foundational tool for establishing inapproximability results in optimization problems. An alternative combinatorial proof was later given by Irit Dinur, published in 2007, emphasizing gap amplification on expander graphs rather than algebraic methods.
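As a simplified illustration of the arithmetization step (the actual constructions work with low-degree extensions and sum-check-style protocols rather than this literal encoding), a 3-CNF clause can be mapped to a polynomial over a finite field that evaluates to 1 exactly on assignments satisfying the clause, so a formula is satisfiable exactly when the product of its clause polynomials equals 1 for some 0/1 assignment. The Python sketch below, with a hypothetical clause encoding, demonstrates this on a toy formula.

```python
P = 17  # a small prime field F_p; any prime large enough works for this toy example

def clause_poly(clause, assignment):
    """Evaluate the arithmetization of one 3-CNF clause over F_p.

    clause: tuple of signed variable indices, e.g. (1, -2, 3) means (x1 OR NOT x2 OR x3).
    assignment: dict mapping variable index -> 0 or 1.
    The clause is unsatisfied iff every literal is false, so it is encoded as
    1 - prod(1 - literal_value), which is 1 on satisfying assignments and 0 otherwise.
    """
    unsat = 1
    for lit in clause:
        v = assignment[abs(lit)]
        lit_val = v if lit > 0 else 1 - v
        unsat = (unsat * (1 - lit_val)) % P
    return (1 - unsat) % P

def formula_poly(clauses, assignment):
    """Product of clause polynomials: 1 iff the assignment satisfies every clause."""
    acc = 1
    for c in clauses:
        acc = (acc * clause_poly(c, assignment)) % P
    return acc

# (x1 OR NOT x2 OR x3) AND (NOT x1 OR x2 OR x3)
clauses = [(1, -2, 3), (-1, 2, 3)]
print(formula_poly(clauses, {1: 1, 2: 1, 3: 0}))  # 1: satisfying assignment
print(formula_poly(clauses, {1: 1, 2: 0, 3: 0}))  # 0: violates the second clause
```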

Theoretical Properties

Complexity Equivalences

The class of languages having probabilistically checkable proofs, denoted PCP[r(n), q(n)], captures decision problems verifiable in polynomial time using at most r(n) random bits and q(n) queries to the proof, conventionally with perfect completeness and soundness error at most 1/2. These parameters allow PCPs to characterize a hierarchy of complexity classes by interpolating between low- and high-resource regimes. Basic equivalences arise from the definitional boundaries of randomness and queries. Specifically, PCP[0, 0] = P, as the absence of both reduces verification to deterministic polynomial-time computation without any proof access. PCP[0, poly(n)] = NP, since zero randomness permits reading a polynomial-length nondeterministic witness in full, mimicking standard NP verification. PCP[poly(n), 0] = coRP, since with no proof queries the verifier is simply a randomized polynomial-time machine with perfect completeness and one-sided error, and the unused proof is irrelevant. PCP[poly(n), poly(n)] = NEXP: a nondeterministic exponential-time machine can guess an exponentially long proof and evaluate the verifier on every random string, and conversely every NEXP language has such a PCP, a scaled-up analogue of the PCP theorem following the work of Babai, Fortnow, and Lund. The PCP theorem refines this characterization for NP, establishing that PCP[O(log n), O(1)] = NP: every NP language has a proof verifiable using logarithmic randomness and constant queries. This equivalence highlights how modest randomness suffices to compress NP witnesses into locally checkable forms. More generally, PCP[O(log n), poly(n)] = NTIME(n^{O(1)}) = NP, since with O(log n) random bits the verifier consults only polynomially many proof positions in total across all coin sequences, which a nondeterministic polynomial-time machine can guess and check.
PCP[r(n), q(n)] | Equivalent class | Explanation
[0, 0] | P | Deterministic verification without proof or randomness.
[0, poly(n)] | NP | Full access to a nondeterministic polynomial-length proof without randomness.
[poly(n), 0] | coRP | Randomized verification ignoring the proof, with one-sided error.
[O(log n), O(1)] | NP | Logarithmic randomness and constant queries suffice for NP (PCP theorem).
[O(log n), poly(n)] | NTIME(n^{O(1)}) = NP | Polynomial nondeterministic time via bounded randomness and full proof access.
[poly(n), poly(n)] | NEXP | Full resources enable exponential nondeterministic simulation.
These equivalences demonstrate how PCPs delineate the complexity hierarchy: low randomness or queries confine the power to P or NP, while increasing either parameter elevates the class toward NEXP, providing a parameterized bridge between deterministic, nondeterministic, and exponential regimes. Post-2000 developments, including simplified proofs of the PCP theorem, have further refined these connections.

Amplification Techniques

Amplification techniques in probabilistically checkable proofs (PCPs) adjust key parameters such as soundness error, query complexity, and randomness usage while preserving the underlying complexity class, typically starting from the PCP theorem's parameters of constant queries, polynomial proof length, and constant soundness error such as 1/3. These methods are essential for tailoring PCPs to specific applications, such as hardness of approximation, without altering the NP-completeness of the verified language.

Soundness amplification reduces the probability of accepting invalid proofs by independently repeating the verifier multiple times. For a PCP with initial soundness error \epsilon < 1, running the verifier m times with fresh randomness and accepting only if all repetitions accept yields soundness error at most \epsilon^m, while completeness remains 1 since valid proofs pass every repetition. In the standard setting with \epsilon = 1/3, choosing m = k repetitions reduces the error to (1/3)^k, exponentially small in k, at the cost of a factor-of-k increase in queries and random bits. This technique, rooted in the probabilistic nature of PCP verifiers, preserves non-interactivity and is a foundational step in many PCP constructions. For PCPs derived from interactive precursors, such as those in the MIP framework, parallel repetition provides a related amplification strategy that preserves completeness while reducing soundness error exponentially, though it increases proof length multiplicatively; in the fully non-interactive single-oracle case, independent repetition suffices without the correlation issues that parallel repetition must address in multi-prover settings.

Query reduction transforms PCPs with large query complexity into constant-query versions through proof composition, combined in Dinur's approach with expander-based gap amplification that iteratively improves the soundness gap; together these reduce the effective query load to O(1) while maintaining polynomial proof length and constant soundness error. The construction relies on expansion properties to ensure that unsatisfiable instances remain detectable with high probability after each amplification step.

Randomness reduction further optimizes PCPs by replacing the verifier's truly random bits with outputs of pseudorandom generators (PRGs) that fool the verifier's decision process. Under sufficiently strong hardness assumptions, in the style of Nisan-Wigderson generators, a PRG can stretch an O(\log n)-bit seed into the polynomially many pseudorandom bits required by the original verifier, yielding an equivalent PCP with logarithmic randomness complexity. This derandomization preserves completeness and soundness, as the pseudorandom bits are indistinguishable from uniform ones to the efficient verifier.
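A minimal sketch of soundness amplification by independent repetition is shown below, assuming some single-run verifier verify_once (a hypothetical placeholder) that accepts valid proofs with probability 1 and invalid ones with probability at most \epsilon: running it k times with fresh coins and accepting only if every run accepts keeps completeness at 1 and drives the soundness error down to at most \epsilon^k.

```python
import random

def amplified_verifier(x, proof, verify_once, k):
    """Accept only if all k independent runs of the base verifier accept.

    If verify_once accepts valid proofs with probability 1 and invalid proofs
    with probability at most eps, this accepts invalid proofs with probability
    at most eps**k while keeping completeness 1.  The cost is a factor of k in
    queries and random bits.
    """
    return all(verify_once(x, proof) for _ in range(k))

# Toy base verifier with soundness error 1/3 on a "bad" proof.
def toy_verify_once(x, proof):
    return random.random() < (1.0 if proof == "good" else 1 / 3)

trials = 100_000
bad_accepts = sum(amplified_verifier(None, "bad", toy_verify_once, k=4) for _ in range(trials))
print(bad_accepts / trials)  # empirically close to (1/3)**4 ≈ 0.012
```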

Variants and Extensions

Linear PCP

A linear probabilistically checkable proof (linear PCP) is a variant of PCP in which the proof is represented as a vector \pi over a finite field \mathbb{F}, and the verifier accesses linear combinations of its entries of the form \sum_i \alpha_i \pi(x_i) for chosen coefficients \alpha_i \in \mathbb{F} and positions x_i. This structure ensures that the proof oracle behaves as a linear function, preserving algebraic properties under linear operations. The primary advantages of linear PCPs include support for algebraic operations directly on the proof, such as addition and scalar multiplication, which facilitate efficient composition of proofs. Additionally, their homomorphic structure allows integration with homomorphic encryption and commitment schemes, enabling proofs to be checked without the verifier reading the full proof, reducing the verifier's communication to a constant number of queries while keeping the prover's communication polynomial. Constructions of linear PCPs typically start from standard PCPs and incorporate low-degree tests on polynomials over finite fields, often using low-degree extensions (Reed-Muller-style encodings) to represent the proof as evaluations of low-degree polynomials. The verifier applies linearity checks and proximity tests to ensure the proof is close to a valid low-degree encoding, achieving soundness against non-linear or high-degree proofs with constant query complexity. In the 2010s, developments building on linear PCPs advanced delegation protocols for verifiable computation, incorporating interactive oracle proofs and folding techniques to achieve quasilinear prover costs and short proofs. Linear PCPs also serve as a foundational component of succinct non-interactive arguments of knowledge (SNARKs), where the linear structure enables efficient encoding of arithmetic circuit evaluations.
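The simplest example of a linear proof oracle is the Hadamard encoding of a witness w over \mathbb{F}_2, in which \pi(a) = \langle a, w \rangle for every a; the sketch below illustrates the Blum-Luby-Rubinfeld (BLR) linearity test, which checks with three queries per trial that an alleged oracle is close to such a linear function. This is a toy illustration of the linearity-checking idea, not a complete linear PCP.

```python
import random

M = 4  # witness length; the oracle is a table indexed by all of {0,1}^M

def all_points():
    return [tuple((i >> k) & 1 for k in range(M)) for i in range(2 ** M)]

def hadamard_oracle(w):
    """Honest linear proof oracle: pi(a) = <a, w> mod 2 for every a in {0,1}^M."""
    return {a: sum(ai * wi for ai, wi in zip(a, w)) % 2 for a in all_points()}

def blr_test(oracle, trials=1000):
    """BLR linearity test: on random a, b check pi(a) + pi(b) == pi(a + b) over F_2.

    Returns the fraction of passing trials; an honest Hadamard oracle always
    passes, while an oracle far from every linear function fails a constant
    fraction of the time.
    """
    points = all_points()
    passed = 0
    for _ in range(trials):
        a, b = random.choice(points), random.choice(points)
        a_plus_b = tuple((x + y) % 2 for x, y in zip(a, b))
        passed += (oracle[a] + oracle[b]) % 2 == oracle[a_plus_b]
    return passed / trials

honest = hadamard_oracle((1, 0, 1, 1))
cheating = dict(honest)
for a in random.sample(all_points(), 6):   # corrupt several table entries
    cheating[a] ^= 1

print(blr_test(honest))    # 1.0
print(blr_test(cheating))  # noticeably below 1.0
```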

Multiprover PCP

In multiprover probabilistically checkable proofs (PCPs), the verifier accesses multiple non-communicating proof oracles, each corresponding to a separate prover that provides a proof string without interacting with the others. The verifier, running in probabilistic polynomial time, selects positions to query across these oracles based on its random coins, aiming to verify membership in a language with high probability while reading only a small number of symbols in total. This model extends the single-prover PCP by leveraging the independence of the provers to enhance soundness, as dishonest provers must coordinate their responses without communication, making it harder to fool the verifier consistently. The concept builds on multi-prover interactive proofs (MIP), introduced in the late 1980s by Ben-Or, Goldwasser, Kilian, and Wigderson as a generalization of interactive proofs in which multiple provers respond to verifier queries in rounds without communicating. Babai, Fortnow, and Lund established that MIP equals NEXP, the class of languages verifiable in nondeterministic exponential time, showing that even two provers suffice for this power. Multiprover PCPs can be viewed as the non-interactive, one-round limit of MIP protocols, where the proofs are fixed oracles and the queries are non-adaptive. A key result is that for any constant number of provers m > 1, the class of languages with m-prover PCPs using O(\log n) random bits and O(1) queries per prover equals NP. Low-error versions of such systems were constructed by Raz and Safra, who obtained PCP verifiers of this kind for every NP language with sub-constant soundness error. In this setup, the verifier queries a constant number of symbols from each of the m proof oracles, each over a suitably large alphabet, achieving completeness close to 1 and soundness error well below 1/2. MIP protocols with polynomially many provers correspond to PCPs with polynomially many oracles, but these reduce to single-prover PCPs by encoding the multiple proofs into one oracle while preserving verifiability. To amplify soundness in the multiprover setting, parallel repetition is employed: repeating the basic protocol independently in parallel reduces the soundness error, as the provers cannot adaptively correlate their answers across repetitions. Raz's parallel repetition theorem guarantees that the soundness error of a two-prover one-round game decays exponentially in the number of repetitions, at a rate depending on the original game's value and answer alphabet size (rather than simply equaling the single-instance error raised to the number of repetitions), making it a foundational technique for constructing efficient low-error multiprover PCPs.
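To make the parallel repetition statement concrete, the brute-force sketch below computes the value of a small two-prover game (a standard XOR-style toy game; exhaustive enumeration is feasible only because the question and answer sets are tiny) over all deterministic prover strategies, and then the value of its two-fold parallel repetition. The repeated value lies between the square of the single-shot value and the single-shot value itself, which illustrates why the exact decay rate in Raz's theorem is subtle.

```python
from itertools import product

# Toy two-prover game: questions x, y are uniform bits; the verifier accepts
# iff a XOR b == x AND y.  Any small predicate could be substituted here.
QUESTIONS = [0, 1]
ANSWERS = [0, 1]

def predicate(x, y, a, b):
    return (a ^ b) == (x & y)

def game_value(questions, answers, pred):
    """Maximum acceptance probability over all deterministic prover strategies.

    A strategy for each prover is a table question -> answer; the provers
    cannot communicate, so each answer depends only on that prover's question.
    """
    best = 0.0
    pairs = list(product(questions, repeat=2))
    for alice in product(answers, repeat=len(questions)):
        for bob in product(answers, repeat=len(questions)):
            wins = sum(pred(x, y, alice[questions.index(x)], bob[questions.index(y)])
                       for x, y in pairs)
            best = max(best, wins / len(pairs))
    return best

def repeated(pred):
    """Two-fold parallel repetition: paired questions, paired answers, win both."""
    def pred2(xs, ys, as_, bs):
        return pred(xs[0], ys[0], as_[0], bs[0]) and pred(xs[1], ys[1], as_[1], bs[1])
    return pred2

q2 = list(product(QUESTIONS, repeat=2))
a2 = list(product(ANSWERS, repeat=2))
print(game_value(QUESTIONS, ANSWERS, predicate))   # 0.75 for this game
print(game_value(q2, a2, repeated(predicate)))     # lies between 0.75**2 and 0.75
```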

Applications

Hardness of Approximation

The PCP theorem serves as the foundation for proving inapproximability results in approximation algorithms by enabling reductions that introduce significant gaps between the optimal and near-optimal solutions of NP-hard optimization problems. These reductions transform decision problems in NP into optimization instances in which "yes" instances yield high-value solutions while "no" instances yield only low-value solutions, so that distinguishing the two cases efficiently would solve the original NP-hard problem. A core application is to MAX-3SAT, where the PCP theorem, combined with gap-introducing reductions and Håstad's tight analysis, demonstrates that for any ε > 0, it is NP-hard to approximate the maximum number of satisfiable clauses within a factor of 7/8 + ε. This result highlights the inherent difficulty of achieving even modest approximation guarantees for this fundamental problem. Håstad's results, announced in 1997 and published in 2001, established optimal inapproximability thresholds for several key problems using advanced PCP constructions that employ long code encodings of assignments over large domains, enabling tight soundness analysis. For MAX-3SAT, the hardness factor of 7/8 + ε is optimal given the trivial 7/8-approximation achieved by a random assignment. Similarly, for the maximum clique problem, Håstad proved that approximating the largest clique within a factor of n^{1-ε} is NP-hard for any ε > 0, essentially matching the known upper bounds. These techniques also yield strong hardness results for related problems such as vertex cover and maximum independent set, for which no polynomial-time approximation scheme exists unless P = NP. In a broader framework, PCP-based reductions show that many optimization problems admit no polynomial-time approximation algorithm within a multiplicative factor of 1 + ε for some problem-dependent constant ε > 0, ruling out polynomial-time approximation schemes for problems such as minimum vertex cover. A representative example is graph 3-coloring, where PCP-based gap reductions imply that for some ε > 0, no polynomial-time algorithm can distinguish 3-colorable graphs (chromatic number χ(G) ≤ 3) from graphs requiring more than 3(1 + ε) colors, unless P = NP: \begin{cases} \text{If the graph is 3-colorable:} & \chi(G) \leq 3 \\ \text{If the graph requires many colors:} & \chi(G) > 3(1 + \epsilon) \end{cases} This gap hardness extends to ruling out certain constant-factor approximations of the chromatic number in general. Post-2000 advancements in PCP constructions have strengthened these results, particularly through hardness analyses based on unique games and label cover problems, which serve as intermediate gadgets in reductions yielding near-optimal inapproximability for diverse optimization tasks such as sparsest cut and balanced separator.
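The 7/8 threshold matches the trivial randomized baseline: a uniformly random assignment falsifies a 3-literal clause only when all three literals are false, which happens with probability 1/8, so in expectation a 7/8 fraction of clauses is satisfied. The sketch below, using hypothetical helper names and random toy instances, checks this baseline empirically.

```python
import random

def random_3cnf(num_vars, num_clauses):
    """Random 3-CNF: each clause picks 3 distinct variables and random signs."""
    clauses = []
    for _ in range(num_clauses):
        vars_ = random.sample(range(1, num_vars + 1), 3)
        clauses.append(tuple(v if random.random() < 0.5 else -v for v in vars_))
    return clauses

def satisfied_fraction(clauses, assignment):
    sat = 0
    for clause in clauses:
        sat += any((assignment[abs(l)] if l > 0 else not assignment[abs(l)]) for l in clause)
    return sat / len(clauses)

random.seed(0)
formula = random_3cnf(num_vars=50, num_clauses=2000)
fractions = []
for _ in range(200):
    assignment = {v: random.random() < 0.5 for v in range(1, 51)}
    fractions.append(satisfied_fraction(formula, assignment))
print(sum(fractions) / len(fractions))  # close to 7/8 = 0.875
```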

Modern Proof Systems

Modern proof systems leverage probabilistically checkable proofs (PCPs) to enable efficient, scalable verification in cryptographic applications, particularly through zero-knowledge and computationally bounded variants that relax information-theoretic soundness for practicality. These developments address the limitations of classical PCPs by incorporating zero-knowledge, computational assumptions, and non-interactive transformations, facilitating deployment in real-world systems such as blockchains, where prover efficiency and proof brevity are paramount.

Zero-knowledge PCPs (zk-PCPs) extend PCPs with a zero-knowledge property, ensuring that the verifier learns nothing beyond the statement's validity while soundness continues to hold against arbitrary false proofs. A recent construction achieves polynomial-size, constant-query, non-adaptive zk-PCPs for all of NP that are perfect zero-knowledge against adaptive verifiers, under standard assumptions such as the hardness of learning with errors (LWE). This advances prior work on zk-PCPs, which characterized the languages admitting efficient zk-PCPs in terms of known complexity classes, by providing more efficient parameters suitable for cryptographic protocols.

Probabilistically checkable arguments (PCAs) further relax PCPs to computational soundness, where soundness is guaranteed only against polynomial-time provers generating false proofs, enabling succinct proofs for practical efficiency. A 2024 construction yields publicly verifiable succinct PCAs for all of NP with constant query complexity and adaptive security, based on assumptions such as LWE hardness or sub-exponential decisional Diffie-Hellman (DDH); the prover runs in time and space linear in the verification time of the NP relation. Such PCAs support efficient applications in verifiable computation, offering a proof paradigm that scales to large computations without the overhead of information-theoretic proofs.

Linear PCPs serve as a foundational building block for succinct non-interactive arguments of knowledge (SNARKs), where they are compiled into zero-knowledge SNARKs (zk-SNARKs) using cryptographic compilers (such as linear-only encodings, or Fiat-Shamir-style transformations of the resulting interactive arguments) to eliminate interaction, yielding constant-size proofs that are verifiable far faster than rerunning the computation. This approach underpins zk-SNARK deployments in blockchain systems for scalable privacy and correctness proofs in layer-2 rollups, such as zk-rollups, by encoding computations into linear interactive proofs transformed via commitments. Fully linear PCPs, in particular, allow proofs over data that is secret-shared across parties while preserving zero-knowledge, enhancing privacy in distributed settings.

Interactive oracle proofs (IOPs) generalize PCPs by permitting limited prover-verifier interaction while maintaining oracle access to the proof messages, reducing prover time to near-linear while preserving succinctness. IOPs form the core of STARKs (scalable transparent arguments of knowledge), introduced in 2018, which achieve transparent, plausibly post-quantum zero-knowledge proofs with fast verification for massive computations such as genomic database queries, using fast Reed-Solomon IOPs of proximity to test closeness to error-correcting codes. This interaction model improves scalability over purely non-interactive PCPs, enabling large verification speedups.

Post-2010 advances emphasize scalability in PCP-derived systems: Bulletproofs provide short non-interactive zero-knowledge arguments for range proofs and confidential transactions, achieving logarithmic proof sizes without trusted setups via inner-product arguments, and have been adopted in blockchains such as Monero for efficient range proofs. Similarly, the Halo system (2019) enables recursive proof composition without trusted setups, using cycles of elliptic curves to accumulate proofs scalably, supporting indefinite-depth recursive verification useful for succinct blockchain state proofs and improved system throughput.
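As a minimal illustration of how interaction is removed in such compilations (a simplified sketch; real SNARK and STARK compilers involve polynomial commitments and substantially more structure, and the function and variable names here are hypothetical), a Fiat-Shamir-style transformation replaces the verifier's random challenge with a hash of the statement and the prover's first message, so the prover can produce the whole transcript non-interactively and the verifier can recompute and check the same challenge.

```python
import hashlib

def fiat_shamir_challenge(statement: bytes, commitment: bytes, modulus: int) -> int:
    """Derive the 'verifier challenge' deterministically from the transcript so far."""
    digest = hashlib.sha256(statement + b"||" + commitment).digest()
    return int.from_bytes(digest, "big") % modulus

# Non-interactive flow: the prover hashes the transcript instead of waiting
# for a challenge from the verifier.
statement = b"claim: I know a witness for instance #42"   # placeholder statement
commitment = b"prover first message"                       # placeholder commitment
challenge = fiat_shamir_challenge(statement, commitment, modulus=2**128)

# The verifier recomputes the same challenge from the same transcript and then
# checks the prover's response against it (the response check is omitted here).
assert challenge == fiat_shamir_challenge(statement, commitment, modulus=2**128)
```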
