
Probabilistic Turing machine

A probabilistic Turing machine (PTM) is a theoretical model that extends the deterministic Turing machine by incorporating randomness into its transition function, where each possible move is selected with a specified probability, often modeled as outcomes of unbiased coin flips. This allows the machine to perform computations that may vary across multiple runs on the same input, with acceptance or rejection determined probabilistically rather than deterministically. The concept of probabilistic Turing machines was introduced by Eugene S. Santos in 1969 to explore computability in the presence of probabilistic elements, demonstrating that such machines can compute all recursive functions under certain probabilistic acceptance criteria, though with potential for non-halting behavior in some models. Santos's work built on Michael O. Rabin's earlier probabilistic automata from 1963, adapting the framework to include random choices while preserving the infinite tape and read-write head mechanics of the original model. In 1977, John Gill advanced the study by analyzing the computational complexity of PTMs, proving key results such as the equivalence between probabilistic polynomial-time computation and certain nondeterministic models, and establishing foundational hierarchies for time-bounded probabilistic computation.

Probabilistic Turing machines are central to randomized complexity theory, particularly in defining the class BPP (bounded-error probabilistic polynomial time), which includes decision problems solvable by a PTM in polynomial time such that, for any input, the probability of outputting the correct answer (accepting yes-instances and rejecting no-instances) is at least 2/3. This error bound can be amplified arbitrarily close to 1 through repetition, making BPP a robust model for efficient randomized algorithms that outperform deterministic ones in practice for problems like primality testing. PTMs also underpin related classes such as RP (randomized polynomial time, with one-sided error) and co-RP, highlighting the power of randomness in computation while raising open questions about whether BPP equals P, the class of problems solvable in deterministic polynomial time.

Overview and Background

Informal Description

A probabilistic Turing machine (PTM) is a Turing machine in which the choice among possible transitions at each step is made randomly according to specified probabilities, typically by flipping fair coins to select the next state or move with equal likelihood. This randomness allows the machine to follow different computational paths, each with an associated probability determined by the sequence of flips encountered during execution. In operation, the PTM begins by reading the input on its tape and proceeds through states, using random coin flips whenever a nondeterministic choice arises to branch into one of the possible transitions. The computation may halt in an accepting or rejecting state along various paths, and the overall acceptance probability for a given input is the sum of the probabilities of all accepting paths. The machine accepts the input if this probability exceeds a predefined threshold, such as 1/2.

A key feature of PTMs is bounded-error computation, in which the machine errs with probability at most 1/3 for any input: it accepts strings in the language with probability at least 2/3 and rejects strings not in the language with probability at least 2/3. This error bound can be amplified by repeating the computation multiple times and taking the majority vote, reducing the error exponentially while keeping the runtime polynomial.
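The amplification argument can be made concrete with a short simulation. The sketch below is a minimal illustration, assuming an underlying randomized decider that is correct with probability 2/3; the hypothetical `base_decider` stands in for a single PTM run, and the majority vote over k independent runs drives the error down exponentially.

```python
import random
from collections import Counter

def base_decider(x: bool) -> bool:
    """Stand-in for one PTM run: returns the correct answer x
    with probability 2/3 and the wrong answer otherwise."""
    return x if random.random() < 2 / 3 else not x

def amplified_decider(x: bool, k: int = 31) -> bool:
    """Run the base decider k times (k odd to avoid ties)
    and take the majority vote."""
    votes = Counter(base_decider(x) for _ in range(k))
    return votes[True] > votes[False]

# Empirically, the error of the majority vote drops exponentially in k.
trials = 10_000
errors = sum(amplified_decider(True) is not True for _ in range(trials))
print(f"majority-vote error rate over {trials} trials: {errors / trials:.4f}")
```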

Historical Context

The concept of probabilistic computation emerged in the mid-20th century amid efforts to model reliable systems using unreliable components. In 1956, John von Neumann explored probabilistic logics to synthesize reliable organisms from unreliable parts, laying foundational ideas for error-tolerant computing that influenced later randomized models. That same year, Kees de Leeuw, Edward F. Moore, Claude E. Shannon, and Norman Shapiro introduced probabilistic machines capable of generating random bits, demonstrating that such devices compute the same functions as deterministic Turing machines but with probabilistic acceptance criteria. During the 1950s and 1960s, these ideas extended to automata theory, where probabilistic elements were formalized for finite-state devices. Michael O. Rabin's 1963 work on probabilistic automata, which incorporated transition probabilities into nondeterministic models, provided a precursor framework for handling randomness in computational transitions, bridging nondeterminism and probability. By 1969, Eugene Santos formalized probabilistic Turing machines (PTMs) as nondeterministic Turing machines augmented with probability distributions over transitions, establishing a precise model for randomized computation and proving its equivalence to deterministic Turing machines in computational power.

The 1970s marked significant advancements in both algorithms and complexity theory for probabilistic models. Robert M. Solovay and Volker Strassen's 1977 primality test introduced an efficient probabilistic algorithm that ran in polynomial time with high success probability, highlighting the practical advantages of randomness and motivating formal analyses of algorithmic efficiency. Concurrently, John Gill's 1977 paper rigorously defined PTMs as Turing machines with coin-tossing capabilities and explored their computational complexity, introducing key classes such as PP (probabilistic polynomial time) to capture majority-based acceptance.

In the post-1980s evolution, PTMs integrated deeply with cryptography and interactive proof systems, enhancing randomized protocols. Shafi Goldwasser and Michael Sipser's 1986 distinction between private and public randomness in interactive proof systems utilized PTM-like models to show equivalences between probabilistic classes, influencing zero-knowledge proofs and secure computation. This period also saw growing interest in derandomization, with efforts to simulate probabilistic computations deterministically, as pursued in works by Adleman and others, underscoring PTMs' role in understanding the power of randomness.

Formal Model

Machine Components

A probabilistic Turing machine (PTM) extends the standard Turing machine by incorporating a source of randomness, while retaining the core structural elements that define its computational architecture. Like a deterministic Turing machine, a PTM consists of an infinite tape divided into cells, each capable of holding a symbol from a finite tape alphabet \Gamma, which includes a distinguished blank symbol \sqcup. The tape is initially blank except for the input written on it using the input alphabet \Sigma \subseteq \Gamma \setminus \{\sqcup\}. A read/write head scans the tape one cell at a time, allowing the machine to read the current symbol and write a new one from \Gamma, while moving left or right. The machine operates under the control of a finite set of states Q, with a designated start state q_0 \in Q, an accepting state q_{acc} \in Q, and a rejecting state q_{rej} \in Q, where q_{acc} and q_{rej} are absorbing states that halt computation upon entry.

To facilitate efficient computation, PTMs are often formalized as multi-tape variants, featuring a read-only input tape containing the input string and one or more read/write work tapes initialized to blanks. Each tape has its own independent read/write head, enabling parallel access to different data regions. This multi-tape configuration does not increase the computational power of the PTM beyond that of a single-tape version, as any multi-tape PTM can be simulated by a single-tape PTM with only a quadratic overhead in time complexity, preserving the probabilistic nature of the computation.

The randomness source is integrated via access to an infinite sequence of independent fair coin flips (random bits), typically modeled by providing the PTM with two deterministic transition functions \delta_0, \delta_1: (Q \setminus \{q_{acc}, q_{rej}\}) \times \Gamma \to Q \times \Gamma \times \{L, R, S\}, where at each step one is selected uniformly at random with probability 1/2, simulating the coin toss; the S direction allows the head to stay in place (generalized for multi-tape machines by applying the function to each head's scanned symbol).

A configuration of a PTM captures its instantaneous state of operation and is formally a tuple consisting of the current control state, the positions of all heads on their respective tapes, and the contents of all tapes. Starting from the initial configuration (input head at the first input symbol, work heads at blank cells, control in q_0), the machine evolves probabilistically through a sequence of configurations, with each transition determined by the random choice of \delta_0 or \delta_1 applied to the current state and scanned symbols. This probabilistic evolution distinguishes PTMs from deterministic models, enabling randomized decision-making while maintaining the tape-based storage and head-movement mechanics fundamental to Turing computation.
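The two-transition-function formulation lends itself to a direct simulation. The following sketch is a simplified single-tape model under assumed conventions (dict-based transition tables, a sparse tape, a bounded step budget); all names are illustrative rather than taken from the sources. At each step a fair coin chooses between \delta_0 and \delta_1.

```python
import random

# Transition tables map (state, symbol) -> (new_state, write_symbol, move),
# where move is -1 (L), +1 (R), or 0 (S); tables must cover reachable pairs.
def run_ptm(delta0, delta1, tape, q0="q0", accept="q_acc", reject="q_rej",
            max_steps=10_000):
    """One randomized run of a single-tape PTM; returns True on accept,
    False on reject or when the step budget is exhausted."""
    cells = dict(enumerate(tape))  # sparse tape, blank symbol is "_"
    head, state = 0, q0
    for _ in range(max_steps):
        if state == accept:
            return True
        if state == reject:
            return False
        symbol = cells.get(head, "_")
        delta = delta0 if random.random() < 0.5 else delta1  # fair coin flip
        state, write, move = delta[(state, symbol)]
        cells[head] = write
        head += move
    return False  # treat non-halting within the budget as rejection

# Toy machine: flip one coin and halt; accepts with probability 1/2.
d0 = {("q0", "_"): ("q_acc", "_", 0)}
d1 = {("q0", "_"): ("q_rej", "_", 0)}
runs = [run_ptm(d0, d1, "") for _ in range(10_000)]
print(f"empirical acceptance probability: {sum(runs) / len(runs):.3f}")
```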

Transition and Acceptance

The transition function of a probabilistic Turing machine (PTM) generalizes the deterministic case by incorporating randomness directly into the choice of next configurations. Formally, for a multi-tape PTM with k tapes, it is defined as a function \delta: Q \times \Gamma^k \to \Delta(Q \times \Gamma^k \times \{L, R, S\}^k), where Q is the finite set of states, \Gamma is the tape alphabet, \{L, R, S\} denotes head movements (left, right, or stay), and \Delta(S) denotes the set of probability distributions over a set S. For each current state q \in Q and tuple of read symbols (a_1, \dots, a_k) \in \Gamma^k, \delta(q, (a_1, \dots, a_k)) specifies a probability distribution over possible actions, each consisting of a next state q' \in Q, symbols (b_1, \dots, b_k) \in \Gamma^k to write on each tape, and head directions (D_1, \dots, D_k) \in \{L, R, S\}^k, with the probabilities over all such actions summing to 1. This setup makes the machine's behavior stochastic, allowing multiple possible evolutions from any configuration.

During execution on an input x, the PTM starts in the initial state with the head on the first symbol of x. At each step, it reads the current symbol a, samples an action (q', b, D) from \delta(q, a), transitions to q', writes b, and moves the head according to D. This process generates a probability distribution over computation paths, where each path is a sequence of configurations determined by the sequence of sampled actions. The probability of a specific path \pi = (c_0, c_1, \dots, c_k), with c_i denoting the configuration at step i, is the product of the transition probabilities along that path: \text{Pr}[\pi] = \prod_{i=0}^{k-1} \text{Pr}[\delta(c_i) \text{ selects the move to } c_{i+1}]. The overall computation is thus a random variable over these paths, with the total probability summing to 1 across all possible paths. PTMs are required to halt with probability 1 on every input, often achieved by imposing a runtime bound such as a polynomial in the input length |x| to prevent infinite loops with positive probability.

Acceptance is determined probabilistically: for an input x, the acceptance probability \text{Pr}[\text{accept}(x)] is the total probability mass over all paths that reach the accepting state q_{\text{acc}} and halt, while paths halting in the rejecting state q_{\text{rej}} contribute to rejection. If a path does not halt within the bound, it is typically treated as a rejection, though the halting assumption ensures this occurs with probability 0. In the context of decision problems, a PTM decides a language if its acceptance probability distinguishes membership reliably, with the error probability bounded away from 1/2. For instance, the machine accepts x as belonging to the language if \text{Pr}[\text{accept}(x)] \geq 2/3 and rejects if \text{Pr}[\text{accept}(x)] \leq 1/3, yielding an error probability of at most 1/3 that can be reduced by repetition. This threshold ensures the probabilistic output correlates strongly with the correct answer, distinguishing PTMs from deterministic machines.
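For the fair-coin model with transition functions \delta_0 and \delta_1, the acceptance probability can be computed exactly by recursing over both coin outcomes and weighting each branch by 1/2, mirroring the product formula for path probabilities above. This sketch reuses the illustrative dict-based conventions of the earlier `run_ptm` example and is an assumption-laden toy, not a general-purpose simulator.

```python
def acceptance_probability(delta0, delta1, cells, head, state,
                           steps_left, accept="q_acc", reject="q_rej"):
    """Sum of Pr[path] over all accepting paths of length <= steps_left.
    Each coin flip contributes a factor of 1/2 to the path probability."""
    if state == accept:
        return 1.0
    if state == reject or steps_left == 0:
        return 0.0  # non-halting within the bound counts as rejection
    symbol = cells.get(head, "_")
    total = 0.0
    for delta in (delta0, delta1):  # both outcomes of the fair coin
        new_state, write, move = delta[(state, symbol)]
        branch = dict(cells)        # copy the sparse tape for this branch
        branch[head] = write
        total += 0.5 * acceptance_probability(
            delta0, delta1, branch, head + move, new_state,
            steps_left - 1, accept, reject)
    return total

# The toy coin-flip machine accepts with probability exactly 1/2.
d0 = {("q0", "_"): ("q_acc", "_", 0)}
d1 = {("q0", "_"): ("q_rej", "_", 0)}
print(acceptance_probability(d0, d1, {}, 0, "q0", steps_left=5))  # 0.5
```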

Computational Properties

Equivalence to Deterministic Machines

A fundamental result in computability theory establishes that probabilistic Turing machines (PTMs) possess the same computational power as deterministic Turing machines (DTMs) in terms of the functions they can compute. Specifically, the class of functions computable by PTMs coincides exactly with the class of recursive functions, demonstrating that the introduction of randomness does not expand the set of effectively computable functions beyond what DTMs can achieve.

This equivalence arises because any PTM can be simulated by a DTM through exhaustive enumeration of possible random inputs. In the simulation, the DTM treats the PTM's random bits as a fixed input and systematically explores all potential sequences. For a PTM with time bound T on input x, the DTM enumerates all 2^k possible random bit strings of length k \leq T(|x|), executes the PTM deterministically on each (using the random string as the auxiliary input), and computes the acceptance probability as the fraction of paths that accept. The DTM then decides based on whether this probability exceeds a threshold, such as 1/2. This simulation incurs a significant time overhead: for a PTM running in time T(|x|), the simulation requires O(2^{T(|x|)} \cdot T(|x|)) time, as there are exponentially many paths to evaluate, each taking time linear in T. This blowup illustrates that while PTMs match DTMs in expressive power, they do not provide an inherent computational advantage in terms of decidability.

A parallel equivalence holds for space-bounded computations. A space-bounded PTM can be simulated by a DTM using space only modestly larger than the PTM's bound, by enumerating random bits and counting accepting configurations without storing all paths explicitly, leveraging techniques like recursive evaluation to manage the exponential number of possibilities with only logarithmic extra space for counters. This ensures that space-restricted PTMs also do not surpass the capabilities of space-bounded DTMs in computing recursive functions. Overall, these simulations confirm that PTMs do not exceed the recursive functions computable by DTMs, as the randomness can always be derandomized through complete enumeration, albeit at a resource cost.
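The exhaustive simulation can be sketched directly: fix the random string, run the machine deterministically, and tally the accepting fraction over all 2^k strings. The sketch below assumes a hypothetical `ptm_with_fixed_coins(x, r)` that runs the PTM deterministically on input x using the bit string r as its coin flips.

```python
from itertools import product

def deterministic_simulation(ptm_with_fixed_coins, x, k):
    """Decide as the PTM would, by enumerating all 2**k coin strings.
    Time grows as O(2**k * T), the exponential blowup noted above."""
    accepting = sum(
        ptm_with_fixed_coins(x, bits)
        for bits in product((0, 1), repeat=k)
    )
    return accepting / 2**k > 0.5  # majority threshold
```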

Role of Randomness

Randomness in probabilistic Turing machines (PTMs) provides a significant efficiency advantage by enabling the solution of certain decision problems in polynomial time with high probability, even where known deterministic algorithms are far slower. For instance, primality testing, long considered computationally intensive in the deterministic setting, can be performed efficiently using randomized algorithms like the Miller-Rabin test, which runs in polynomial time and errs with probability at most 1/4 per trial on composite numbers. This approach leverages random choices to witness compositeness probabilistically, avoiding exhaustive checks of divisors.

A key feature of PTMs is error amplification, which allows the reduction of the initial error probability through independent repetition. If a PTM accepts correct inputs with probability at least 2/3 and rejects incorrect ones similarly, running the machine k independent times and taking the majority vote over the outcomes yields an error probability that decreases exponentially in k. This technique ensures that the computational overhead remains polynomial while achieving arbitrarily high confidence in the result.

Despite these benefits, randomness in PTMs does not alter what is computable in the worst case, as PTMs can be simulated deterministically with exponential overhead, placing their power within exponential time (BPP \subseteq EXP), though it is unknown whether probabilistic polynomial-time computations can be performed deterministically in polynomial time (the BPP versus P question). Instead, randomness excels by averaging over the machine's own random bits on worst-case inputs, or on problems whose inputs satisfy certain guarantees, allowing probabilistic solutions where no efficient deterministic method is known. Derandomization explores whether pseudorandom generators (PRGs) can replace true randomness, potentially collapsing BPP to P. Under assumptions like the existence of functions in E requiring exponential-size circuits, efficient PRGs exist that fool PTMs, implying that probabilistic computations can be derandomized to deterministic polynomial-time algorithms.

Monte Carlo algorithms, implementable on PTMs, exemplify this trade-off by providing approximate solutions with bounded error probability in exchange for speed. These algorithms estimate quantities like volumes or integrals by random sampling, succeeding with high probability but occasionally erring, which is acceptable for applications prioritizing efficiency over exactness. While essential for foundational probabilistic computations, randomness in standalone PTMs does not fully capture advanced paradigms like interactive proofs and zero-knowledge protocols, which require interaction to achieve greater expressive power.
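As a minimal Monte Carlo illustration (a standard textbook example, not drawn from the sources above), the sketch below estimates \pi by sampling random points in the unit square and counting the fraction that lands inside the quarter circle; the estimate's error shrinks roughly as 1/\sqrt{n} in the sample count n.

```python
import random

def estimate_pi(samples: int = 1_000_000) -> float:
    """Monte Carlo estimate of pi: the fraction of uniform points in the
    unit square that fall inside the quarter circle, times 4."""
    inside = sum(
        random.random() ** 2 + random.random() ** 2 <= 1.0
        for _ in range(samples)
    )
    return 4 * inside / samples

print(estimate_pi())  # close to 3.1416 with high probability
```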

Complexity Implications

Key Probabilistic Classes

Probabilistic Turing machines (PTMs) give rise to several fundamental complexity classes that capture different error tolerances and randomness usages in polynomial-time computation. These classes, primarily defined in terms of acceptance probabilities over random bit strings, formalize the power of randomized algorithms for decision problems. The key classes include BPP for bounded two-sided error, RP and co-RP for one-sided error, PP for majority acceptance, and ZPP for zero error with expected polynomial time.

The class BPP (bounded-error probabilistic polynomial time) consists of languages L for which there exists a PTM M running in polynomial time such that, for every input x, \Pr_{r \sim \{0,1\}^{p(|x|)}} [M(x, r) \text{ accepts}] \geq \frac{2}{3} if x \in L, and \Pr_{r \sim \{0,1\}^{p(|x|)}} [M(x, r) \text{ accepts}] \leq \frac{1}{3} if x \notin L, where p is a polynomial and r is chosen uniformly at random. This two-sided error model allows mistakes on both yes and no instances but bounds the error probability away from 1/2. BPP is closed under complementation: if L \in BPP via machine M, then the complement \overline{L} is decided by running M and flipping the output, preserving the error bounds by symmetry.

The class RP (randomized polynomial time) captures one-sided error with no false positives: a language L \in RP if there is a polynomial-time PTM M such that, for input x, \Pr_{r \sim \{0,1\}^{p(|x|)}} [M(x, r) \text{ accepts}] \geq \frac{1}{2} if x \in L, and \Pr_{r \sim \{0,1\}^{p(|x|)}} [M(x, r) \text{ accepts}] = 0 if x \notin L. Thus, acceptance certifies membership in L, but rejection may err on yes instances. The complementary class co-RP flips the roles: L \in co-RP if \overline{L} \in RP, meaning no false negatives (rejection certifies non-membership) but possible errors on no instances, with \Pr_{r \sim \{0,1\}^{p(|x|)}} [M(x, r) \text{ accepts}] = 1 if x \in L, and \leq 1/2 if x \notin L.

The class PP (probabilistic polynomial time) allows unbounded error around a majority vote: L \in PP if there exists a polynomial-time PTM M where, for input x, \Pr_{r \sim \{0,1\}^{p(|x|)}} [M(x, r) \text{ accepts}] > \frac{1}{2} if x \in L, and \leq 1/2 if x \notin L. This threshold admits a much wider range of error probabilities, making PP more inclusive than the bounded-error classes. The inclusions RP \subseteq BPP \subseteq PP hold, as RP machines can be amplified to fit BPP error bounds by repetition, and BPP machines satisfy the looser PP threshold.

Finally, ZPP (zero-error probabilistic polynomial time) requires perfect accuracy with randomized running time: a language L \in ZPP if there is a PTM M that, on input x, always outputs the correct answer (accept if x \in L, reject otherwise) and halts in expected time polynomial in |x|, where the expectation is over the random bits. Equivalently, M may output "?" (don't know) but never errs, with the probability of "?" bounded so that the expected number of repetitions yields polynomial time. ZPP = RP \cap co-RP, as zero-error algorithms combine the one-sided guarantees from both, as sketched below.
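The characterization ZPP = RP \cap co-RP has a direct algorithmic reading, sketched here under assumed interfaces: `rp_test` never accepts a no-instance and `corp_test` never rejects a yes-instance (both hypothetical), so whenever either produces its certifying answer, that answer is guaranteed correct; the loop retries otherwise, giving a zero-error, expected-polynomial-time (Las Vegas) procedure.

```python
def zpp_decide(x, rp_test, corp_test):
    """Las Vegas decider built from two one-sided testers.
    rp_test(x):   True  => x is certainly in L  (no false positives)
    corp_test(x): False => x is certainly not in L (no false negatives)
    Each round resolves with probability >= 1/2, so the expected
    number of rounds is at most 2."""
    while True:
        if rp_test(x):
            return True   # certified yes
        if not corp_test(x):
            return False  # certified no
        # Both tests were inconclusive ("don't know"); retry the loop
        # with fresh random bits.
```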

Relations to Deterministic Classes

Probabilistic complexity classes exhibit several known inclusions within the deterministic hierarchy. Specifically, the class P is contained in ZPP, which in turn is contained in BPP, and BPP is contained in PSPACE. The containment of BPP in PSPACE follows from a deterministic simulation: a probabilistic polynomial-time machine using O(n^k) random bits can be simulated by enumerating all 2^{O(n^k)} possible random strings, computing the majority outcome, and using polynomial space to store a counter while reusing space for each run, though the time is exponential. Additionally, the class PP contains NP, as any nondeterministic acceptance path for an NP problem can be extended with dummy paths to ensure the probabilistic majority accepts with probability greater than 1/2.

A central open question is whether BPP equals P, known as the derandomization conjecture, which posits that randomness provides no asymptotic computational advantage in polynomial time. Resolving P = NP would have implications for probabilistic classes: if P = NP, then P = BPP, since membership in a BPP language can be expressed using existential and universal quantifiers over random bits (as shown below), and these quantifiers collapse to deterministic polynomial time under the assumption. However, tighter simulations remain unknown: the PSPACE simulation runs in exponential time, and it is not known how to achieve, say, polynomial space together with subexponential time unconditionally.

Relativization techniques demonstrate separations in relativized worlds. There exist oracles A such that P^A \neq BPP^A, constructed by diagonalization, where the oracle is chosen so that a probabilistic machine succeeds on specific inputs on which every deterministic polynomial-time machine errs. The existence of one-way functions implies pseudorandom generators that fool deterministic polynomial-time distinguishers, allowing BPP algorithms to leverage short pseudorandom seeds in place of long random strings.

Probabilistic classes also interact with hierarchies under assumptions. For instance, if NP \subseteq BPP, then the polynomial hierarchy PH collapses to its second level, \Sigma_2^p, due to the ability to simulate nondeterministic choices with bounded-error randomness propagating through hierarchy levels.
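The quantifier characterization alluded to above can be stated explicitly. By the Sipser-Gács-Lautemann theorem, BPP \subseteq \Sigma_2^p \cap \Pi_2^p: after amplifying the error of a BPP machine M for L below 2^{-n}, membership admits the following standard existential-universal expression over random strings of length m, where the shift vectors s_i act by bitwise XOR:

x \in L \iff \exists\, s_1, \dots, s_m \in \{0,1\}^m \;\; \forall\, r \in \{0,1\}^m \;\; \bigvee_{i=1}^{m} M(x,\, r \oplus s_i) \text{ accepts.}

Intuitively, when x \in L the accepting set of random strings is so large that a few shifted copies cover all of \{0,1\}^m, while when x \notin L it is too small for any choice of shifts to do so.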

Extensions and Applications

Variants of Probabilistic Machines

Probabilistic Turing machines can be modified to handle different error models, leading to variants that balance accuracy, efficiency, and computational power. A key distinction lies in the treatment of errors: one-sided error models, such as the class RP, ensure no false positives, meaning that if an input is not in the language, the machine rejects with probability 1, while accepting yes-instances with probability at least 1/2. In contrast, two-sided error models, exemplified by BPP, allow bounded errors on both acceptance and rejection, with the machine accepting yes-instances with probability at least 2/3 and rejecting no-instances with probability at least 2/3. These variants adapt the standard probabilistic acceptance criterion to prioritize specific error bounds, influencing their applicability in decision problems.

Another variant incorporates zero-error guarantees, known as Las Vegas algorithms and corresponding to the class ZPP. These machines always produce the correct output when they halt but may run for varying times, with polynomial expected runtime; they either accept or reject correctly without error, though they might occasionally output "don't know" and retry. This approach eliminates incorrect decisions at the cost of potentially longer execution, making it suitable for scenarios where accuracy is paramount over worst-case speed.

Interactive variants extend the model through Arthur-Merlin protocols, where a probabilistic verifier (Arthur) interacts with an all-powerful prover (Merlin) using public random coins. Introduced as a randomized proof system, these protocols allow Arthur to send random challenges to which Merlin responds, with acceptance based on the verifier's polynomial-time computation; they define complexity classes like AM, sitting between NP and higher interactive hierarchies (see the schematic round after this section's examples).

Nondeterministic probabilistic Turing machines combine nondeterminism with probabilistic choices, leading to the class PP, where acceptance occurs if the majority of computation paths (more than half) accept, effectively using probabilistic branching as a vote over nondeterministic options. This model captures problems where a slight probabilistic bias determines the outcome, relating closely to counting classes such as #P through majority acceptance criteria.

Quantum Turing machines represent a further extension of probabilistic models, incorporating superposition to allow states to evolve in parallel across multiple configurations, enabling computations that outperform classical probabilistic machines on certain tasks while remaining within the bounds of recursive functions. Details of their formal structure and advantages are addressed in the literature on quantum computation.
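To make the Arthur-Merlin protocol shape concrete, here is a schematic one-round interaction; it is an illustrative skeleton only, with assumed function names and interfaces rather than a protocol from the literature. The defining feature is that Arthur's coins are public, so the prover sees the challenge before responding.

```python
import secrets

def arthur_merlin_round(x, merlin_respond, verify, coin_bits=128):
    """One AM round: Arthur flips public coins, Merlin answers the
    challenge, and Arthur runs a deterministic polynomial-time check."""
    challenge = secrets.randbits(coin_bits)  # public randomness
    proof = merlin_respond(x, challenge)     # prover sees the coins
    return verify(x, challenge, proof)       # poly-time verdict
```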

Practical Uses in Algorithms

Probabilistic Turing machines (PTMs) enable efficient randomized algorithms that achieve significant speedups over deterministic counterparts in various domains, particularly where exact computation is intractable but approximation or high-confidence decisions suffice. These algorithms operate within complexity classes like BPP and RP, leveraging the PTM's random bits to sample outcomes probabilistically.

A key application lies in primality testing, essential for cryptographic key generation. The Solovay-Strassen test, developed in 1977, is a randomized algorithm that determines primality with high probability by verifying Euler's criterion using Jacobi symbols and modular exponentiations, placing it in BPP with polynomial running time. Similarly, the Miller-Rabin test from 1976 uses strong pseudoprime witnesses to check compositeness, also in BPP, and is widely implemented due to its simplicity and low error probability when iterated. These PTM-based tests generate large primes for keys by repeatedly testing random odd integers until a probable prime is found, ensuring negligible error rates (e.g., less than 2^{-80}) for 2048-bit keys used in secure communications. In cryptography, probabilistic checks extend to verifying key security properties, such as confirming that parameters are prime during Diffie-Hellman parameter generation. Although the Agrawal-Kayal-Saxena (AKS) algorithm derandomized primality testing in 2004, proving PRIMES ∈ P with a deterministic polynomial-time procedure, probabilistic tests like Miller-Rabin remain faster in practice for numbers up to thousands of bits due to lower constants and simpler operations, avoiding AKS's heavy polynomial evaluations.

In graph algorithms, PTMs facilitate randomized procedures in the class RP for problems like connectivity and matching. For undirected s-t connectivity, random walks starting from a source vertex explore the graph, accepting if the target is reached within a polynomial number of steps (the expected cover time is O(mn), where m is the number of edges and n the number of vertices), providing one-sided error and log-space efficiency. For bipartite matching, randomized algorithms using the isolation lemma assign random edge weights to isolate a unique minimum-weight perfect matching if one exists, verifiable in polynomial time, placing the problem in RP.

PTMs also support approximation algorithms for #P-hard counting problems, such as estimating the number of satisfying assignments in Boolean formulas or perfect matchings in graphs. The Jerrum-Valiant-Vazirani framework from 1986 reduces approximate counting to uniform random generation via Markov chains, achieving fully polynomial randomized approximation schemes (FPRAS) for self-reducible problems. In modern machine learning, PTM-style computation underpins methods for sampling from complex distributions, such as Markov chain Monte Carlo (MCMC) for Bayesian inference. These methods model probabilistic computations equivalent to PTMs, enabling efficient approximations of posterior probabilities or integrals in high-dimensional spaces, as seen in training generative models.
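A compact Miller-Rabin implementation illustrates the one-sided error structure: a "composite" verdict is always correct, while a "probable prime" verdict errs with probability at most 4^{-k} over k independent rounds. This is a standard textbook version, offered as a sketch rather than production cryptographic code.

```python
import random

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin: False is always correct ("composite");
    True is wrong with probability at most 4**-rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):  # quick trial division
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)  # random candidate witness
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses compositeness: certainly composite
    return True  # probably prime

print(is_probable_prime(2**61 - 1))  # True: a Mersenne prime
```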
