
NP-intermediate

In computational complexity theory, the class NP-intermediate, often denoted NPI, comprises decision problems that belong to NP but are neither solvable in polynomial time (i.e., not in P) nor NP-complete. This class captures problems whose solutions can be verified in polynomial time, yet finding those solutions appears to require superpolynomial resources, without the universal hardness of NP-complete problems. The existence of NP-intermediate problems is established by Ladner's theorem, which proves that if P ≠ NP, then there are infinitely many such problems, constructed via a diagonalization argument over polynomial-time Turing machines. Prominent candidates include the graph isomorphism problem, which asks whether two given graphs are structurally identical and which is in NP but cannot be NP-complete unless the polynomial hierarchy collapses. Another example is the decision version of integer factorization, such as determining whether an integer n has a nontrivial factor in a given interval [a, b], which is in NP ∩ coNP and widely conjectured to be neither in P nor NP-complete due to its cryptographic significance and the lack of known polynomial-time algorithms. These problems highlight the fine-grained structure within NP: natural problems rarely fall strictly between P and the NP-complete core, though artificial constructions abound under standard assumptions. The study of NP-intermediate problems underscores unresolved questions in complexity theory, such as the density of hardness degrees within NP and the implications for the P versus NP problem. While no natural problem has been proven NP-intermediate (such a proof would separate P from NP), advances like quasi-polynomial-time algorithms for graph isomorphism reinforce the intermediate status of candidates without resolving their membership in P. Understanding this class could reveal barriers to efficient computation and inform algorithm design for practical applications in cryptography and optimization.

Definition and Background

Formal Definition

In computational complexity theory, the class NP consists of all decision problems (languages) for which there exists a polynomial-time verifier that can confirm a "yes" instance given a polynomial-sized certificate. The class P consists of decision problems solvable in polynomial time by a deterministic Turing machine. A language L is NP-intermediate if L \in \mathrm{NP}, L \notin \mathrm{P}, and L is not NP-complete. Under the assumption that \mathrm{P} \neq \mathrm{NP}, the class of NP-intermediate problems is formally defined as \mathrm{NP} \setminus (\mathrm{P} \cup \mathrm{NPC}), where \mathrm{NPC} denotes the set of NP-complete problems. A problem L \in \mathrm{NP} is not NP-complete if there exists an NP-complete problem that does not reduce to L via a polynomial-time many-one reduction.
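To make the certificate-based definition concrete, here is a minimal sketch of a polynomial-time verifier for CNF satisfiability, a canonical NP problem. The DIMACS-style clause encoding is an illustrative choice, not part of the formal definition.

```python
# Minimal sketch: a polynomial-time verifier for CNF-SAT certificates.
# Encoding (illustrative): a clause is a list of signed variable indices,
# so [1, -2] means (x1 OR NOT x2); the certificate is a truth assignment.

def verify_sat(clauses: list[list[int]], assignment: dict[int, bool]) -> bool:
    """Accept iff the assignment satisfies every clause; runs in linear time."""
    for clause in clauses:
        # A clause is satisfied if at least one literal evaluates to True;
        # unassigned variables default to False.
        if not any(assignment.get(abs(lit), False) == (lit > 0) for lit in clause):
            return False
    return True

# (x1 OR NOT x2) AND (x2 OR x3), certificate x1=T, x2=T, x3=F: accepted.
print(verify_sat([[1, -2], [2, 3]], {1: True, 2: True, 3: False}))  # True
```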

Distinction from P and NP-Complete

NP-intermediate problems are distinguished from those in P by the absence of known efficient deterministic algorithms for their solution, despite belonging to NP, where solutions can be verified in polynomial time. In contrast, every problem in P admits a deterministic Turing machine that decides it in polynomial time. This structural difference highlights that NP-intermediate problems lie strictly outside P under the assumption P ≠ NP, occupying a position where the computational resources needed for solving exceed those needed for verification. Unlike NP-complete problems, which represent the hardest cases within NP, in the sense that every language in NP is polynomial-time many-one reducible to any NP-complete language, NP-intermediate problems do not possess this universal hardness property. No polynomial-time reduction exists from arbitrary NP problems to an NP-intermediate problem in a manner that would classify it as complete for the class. Consequently, NP-intermediate problems resist classification as the "hardest" in NP, lacking the reduction-based equivalence that characterizes NP-completeness. Within the broader landscape of NP, NP-intermediate problems illustrate a spectrum of hardness degrees, forming a middle ground neither as tractable as P nor as intractable as NP-complete sets. Ladner's theorem establishes the existence of such intermediate degrees, demonstrating that the polynomial-time reducibility structure in NP is dense and non-trivial, with infinitely many incomparable hardness levels assuming P ≠ NP. This positioning underscores a nuanced hierarchy inside NP, beyond a simple dichotomy of easy versus hardest problems. A key implication of these distinctions arises in the context of reductions and class collapses: demonstrating that an NP-intermediate problem belongs to P would not force the equality P = NP, as it does not imply efficient solvability for all of NP via reductions from NP-complete problems. In comparison, placing any NP-complete problem in P collapses all of NP into P. Thus, progress on NP-intermediate problems offers partial insights into NP's structure without resolving the overarching P versus NP question, as illustrated by the reduction sketch below.
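As a concrete, textbook-style illustration (not specific to this article) of the polynomial-time many-one reductions discussed above, the following sketch reduces INDEPENDENT-SET to CLIQUE by complementing the graph; the encoding of a graph as a vertex count plus an edge set is an illustrative assumption.

```python
# Sketch of a classic many-one reduction: (G, k) is a yes-instance of
# INDEPENDENT-SET iff (complement(G), k) is a yes-instance of CLIQUE,
# since a set is independent in G exactly when it is a clique in G's complement.

from itertools import combinations

def complement(n: int, edges: set[frozenset]) -> set[frozenset]:
    """Return the edge set of the complement graph on vertices 0..n-1."""
    all_pairs = {frozenset(p) for p in combinations(range(n), 2)}
    return all_pairs - edges

# Path 0-1-2: {0, 2} is independent; in the complement it forms a clique.
g = {frozenset((0, 1)), frozenset((1, 2))}
print(complement(3, g))  # {frozenset({0, 2})}
```

The map runs in O(n^2) time, which is what makes it a polynomial-time reduction.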

Historical Development

Early Concepts in Complexity Theory

The foundations of complexity theory in the early 1970s laid the groundwork for recognizing potential intermediate levels within NP, distinct from both polynomial-time solvable problems and those at the hardest end of the class. In 1971, Stephen Cook formalized the class NP as consisting of decision problems verifiable in polynomial time given a suitable certificate and introduced the concept of NP-completeness, demonstrating that the Boolean satisfiability problem (SAT) is complete for NP under polynomial-time reductions. This breakthrough implied that NP might contain a spectrum of problems, as Cook's reductions showed SAT's centrality but left open the possibility of problems in NP not reducible to it in a way that equated their hardness. Building on Cook's work, Richard Karp in 1972 demonstrated that 21 well-known combinatorial problems, including the traveling salesman problem and the clique problem, are NP-complete via reductions from SAT, solidifying NP-completeness as a tool for classifying intractability. These results amplified awareness that while many natural problems appeared to cluster at the complete end of NP, the structure of the class remained unclear, prompting early inquiries into whether all NP problems shared this maximal hardness or if some occupied intermediate positions. Suspicions of such intermediate complexity predated these formalizations, tracing back to logicians like Kurt Gödel, who in a 1956 letter to John von Neumann questioned whether theorem-proving, a problem now known to be NP-complete, could be accomplished in roughly quadratic time, anticipating the broader P versus NP question and hinting at varying degrees of difficulty within related classes. By the mid-1970s, as NP-completeness results proliferated, researchers began noting specific problems in NP, such as graph isomorphism, that seemed unlikely to be either in P or NP-complete based on their structural properties and lack of known reductions to or from canonical complete problems.

Ladner's Theorem and Existence Proofs

In 1975, Richard Ladner proved a foundational result in computational complexity theory, demonstrating the existence of problems in NP that are neither solvable in polynomial time nor NP-complete, assuming P ≠ NP. This theorem, known as Ladner's theorem, establishes that the structure of NP is rich and contains intermediate degrees of hardness under the standard conjecture that P ≠ NP. The formal statement is as follows: if \mathrm{P} \neq \mathrm{NP}, then there exists a language L \in \mathrm{NP} such that L \notin \mathrm{P} and L is not NP-complete. To prove this, Ladner employs a diagonalization-like construction starting from an NP-complete problem, such as the Boolean satisfiability problem SAT. The core idea involves creating a padded variant of SAT, denoted SAT_H, where instances are augmented with additional padding to control reducibility and computational hardness. Specifically, for a formula \psi of length n, the padded instance is \psi 0 1^{n^{H(n)}}, where H(n) is a slowly growing function defined recursively based on the behavior of potential polynomial-time Turing machines. (This presentation follows standard expositions; the original construction appears in Ladner 1975.) The function H(n) is defined as the smallest integer i < \log \log n such that the i-th polynomial-time Turing machine M_i decides membership in SAT_H for all inputs x with |x| \leq \log n within time i |x|^i; if no such i exists, H(n) = \log \log n. This ensures SAT_H is in NP, as membership can be verified by stripping the padding and checking a satisfying assignment for the remaining formula, with the padding length computable in polynomial time relative to the input size. If SAT_H \in \mathrm{P}, then H(n) is bounded by a constant, so SAT_H is just SAT with polynomially bounded padding, which would yield a polynomial-time algorithm for SAT and contradict P ≠ NP. Conversely, if SAT_H were NP-complete, a polynomial-time reduction from SAT to SAT_H would contradict the unbounded growth of H(n): since SAT_H \notin \mathrm{P} forces H(n) \to \infty, the images of the reduction consist almost entirely of padding, so the embedded formulas are much shorter than the original inputs, and applying the reduction recursively would again solve SAT in polynomial time. Thus, SAT_H is neither in P nor NP-complete, establishing the existence of an NP-intermediate problem. Subsequent extensions of Ladner's theorem have refined and expanded its implications, showing that if P ≠ NP, there are infinitely many distinct degrees of polynomial-time reducibility within NP below the NP-complete degree. For instance, using techniques involving sparse sets (sets with at most polynomially many elements up to any input length), researchers have constructed infinite hierarchies of NP problems with strictly increasing hardness, where each level is harder than the previous but easier than NP-complete problems. These proofs often build on Ladner's diagonalization by incorporating sparsity to ensure non-reducibility across levels, demonstrating a dense structure of intermediate complexities.
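The padding mechanism can be sketched schematically. The Python fragment below is expository only: the true H(n) requires simulating an enumeration of polynomial-time Turing machines, which the placeholder function here does not attempt, and the padding lengths explode quickly for realistic inputs.

```python
# Schematic sketch of the Ladner-style padded language SAT_H.
# H below is a placeholder: any unbounded, very slowly growing function
# conveys the idea; Ladner's actual H(n) is defined by diagonalizing
# against the i-th polynomial-time Turing machine, as described above.

import math

def H(n: int) -> int:
    # Stand-in for the machine-enumeration definition (roughly log log n).
    return max(1, int(math.log2(max(2.0, math.log2(max(2.0, float(n)))))))

def pad(formula: str) -> str:
    """Map a SAT instance psi of length n to psi 0 1^(n^H(n))."""
    n = len(formula)
    return formula + "0" + "1" * (n ** H(n))

def strip(padded: str) -> str | None:
    """Recover psi if the input has the exact form psi 0 1^(n^H(n))."""
    for i, ch in enumerate(padded):
        if ch == "0":
            psi, tail = padded[:i], padded[i + 1:]
            if set(tail) <= {"1"} and len(tail) == len(psi) ** H(len(psi)):
                return psi
    return None

print(strip(pad("p&q|r")))  # p&q|r -- padding round-trips correctly
```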

Theoretical Implications

Role in the P vs NP Question

If P equals NP, then the class of NP-intermediate problems would be empty, as every problem in NP would be solvable in polynomial time and thus belong to P. Assuming P does not equal NP, Ladner's theorem establishes the existence of NP-intermediate problems, demonstrating that NP possesses a rich internal structure with multiple degrees of hardness between polynomial-time solvability and NP-completeness. This theorem, proven via diagonalization over polynomial-time reductions, implies infinitely many such intermediate degrees under the assumption, highlighting that NP is not simply partitioned into P and the NP-complete problems. Consequently, resolving the P versus NP question in favor of inequality would reveal a hierarchy of complexity within NP, complicating efforts to fully map the landscape of decision problems. Oracle separations further underscore the challenges in resolving P versus NP and the role of intermediates. The Baker-Gill-Solovay theorem constructs an oracle A such that P^A = NP^A, while for another oracle B, P^B ≠ NP^B, showing that proof techniques that relativize, such as plain diagonalization, cannot settle the question either way. In relativized worlds where P ≠ NP, NP-intermediate problems exist, as per extensions of Ladner's result, but the separations indicate that non-relativizing methods are needed to settle the question, preserving the possibility of intermediates in the unrelativized setting. Even a proof that P ≠ NP would not fully classify problems in NP, leaving NP-intermediate problems as a persistent barrier to complete understanding. Such a resolution would confirm the existence of intermediates but require additional techniques, potentially non-relativizing ones, to identify and characterize specific examples, suggesting ongoing classification challenges long after the conjecture's settlement.

Connections to NP ∩ coNP

The class NP ∩ coNP consists of decision problems for which both affirmative and negative instances can be verified in polynomial time, meaning there exist polynomial-time verifiers for both membership (from NP) and non-membership (from coNP). Specifically, a language L is in NP ∩ coNP if there is a polynomial-time Turing machine that accepts L with short certificates for "yes" instances and another (or the same) that verifies "no" instances via short refutations. This class contains P, as problems solvable in polynomial time trivially have verifiable yes and no instances without certificates. NP ∩ coNP is particularly relevant to NP-intermediate problems because most natural candidates for NP-intermediate status, such as integer factoring, belong to this intersection, whereas NP-complete problems cannot reside here unless NP = coNP. The absence of NP-complete problems in NP ∩ coNP follows from the fact that if an NP-complete problem were in coNP, then every problem in NP would reduce to it and thus also lie in coNP, implying NP = coNP and a collapse of the polynomial hierarchy. Consequently, any problem proven to be in NP ∩ coNP but outside P would qualify as NP-intermediate (assuming NP ≠ coNP), providing a pathway to exhibit such problems without invoking diagonalization arguments like those in Ladner's theorem. Separating NP ∩ coNP from P would therefore establish the existence of NP-intermediate problems without forcing broader collapses, since it leaves open both P versus NP and the widely believed NP ≠ coNP. This reinforces the class's role in exploring the fine structure between P and the NP-complete problems.
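For concreteness, the collapse argument above can be written out in a few lines; this is standard textbook reasoning, stated here only to make the prose explicit.

```latex
\textbf{Claim.} If an NP-complete language $L$ satisfies $L \in \mathrm{coNP}$,
then $\mathrm{NP} = \mathrm{coNP}$.

\textbf{Proof sketch.} Let $A \in \mathrm{NP}$ be arbitrary. By NP-completeness
of $L$, there is a polynomial-time many-one reduction $A \le_p L$. Since
$L \in \mathrm{coNP}$ and $\mathrm{coNP}$ is closed under polynomial-time
many-one reductions, $A \in \mathrm{coNP}$; hence
$\mathrm{NP} \subseteq \mathrm{coNP}$. For the converse, take any
$B \in \mathrm{coNP}$: its complement $\overline{B}$ lies in
$\mathrm{NP} \subseteq \mathrm{coNP}$, so $B \in \mathrm{NP}$. Therefore
$\mathrm{NP} = \mathrm{coNP}$.
```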

Candidate Problems

Number Theory and Algebra

In number theory and algebra, several decision problems are prominent candidates for being NP-intermediate due to their membership in NP ∩ coNP, lack of known polynomial-time algorithms, and resistance to reduction from NP-complete problems. These problems underpin much of modern cryptography and have resisted efficient classical solutions despite extensive study. Their intermediate status is conjectured based on the absence of polynomial-time algorithms for general instances and the fact that proving them NP-complete would imply surprising collapses in complexity classes, such as NP = coNP.

The integer factoring decision problem asks, given a positive integer n > 1 and an integer k \geq 1, whether n has a prime factor at most k. This problem is in NP because a certificate consists of a prime factor p \leq k dividing n, which can be verified in polynomial time by checking the primality of p (via the AKS primality test) and the division. It is also in coNP, as a "no" instance can be certified by providing the complete prime factorization of n, with all factors exceeding k, verifiable similarly in polynomial time. No polynomial-time algorithm is known for this problem, and it is not believed to be NP-complete, as that would place it outside NP ∩ coNP under standard assumptions. The problem's hardness supports the security of cryptosystems like RSA, where finding small factors reveals private keys. As of 2025, the best classical algorithms, such as the general number field sieve, run in subexponential time L_n(1/3, c) for constant c \approx 1.923, far from polynomial, while Shor's algorithm solves the problem in polynomial time on a quantum computer.

The decision version of the discrete logarithm problem, in the context of multiplicative groups modulo a prime, is: given a prime p, a generator g of \mathbb{Z}_p^*, an element y \in \mathbb{Z}_p^*, and a bound B, does there exist an integer x \leq B such that g^x \equiv y \pmod{p}? This is in NP, as a certificate is the exponent x, verifiable by modular exponentiation in polynomial time using repeated squaring. Membership in coNP follows because the discrete logarithm of y to the base g is unique modulo p-1 when g is a generator: a "no" certificate supplies the factorization of p-1 (with which the generator property of g can be verified) together with the unique exponent x' satisfying g^{x'} \equiv y \pmod{p} and x' > B. Solving the problem efficiently would break Diffie-Hellman key exchange, a foundational cryptographic primitive. No general polynomial-time classical algorithm exists, and the problem is suspected to be NP-intermediate, as reductions from NP-complete problems are unknown and would contradict cryptographic assumptions. In 2025, the state-of-the-art classical methods, like the number field sieve variant for discrete logarithms, achieve subexponential time complexity similar to factoring, L_p(1/3, c) with c \approx 1.923, but Shor's algorithm provides a quantum polynomial-time solution.

Quadratic residuosity modulo a composite asks: given a composite integer n (a product of two distinct odd primes) and an integer a coprime to n, is there an x such that x^2 \equiv a \pmod{n}? The problem is in NP, certified by providing such an x, verifiable by squaring and modular reduction. It lies in coNP (and more precisely, residuosity problems belong to UP ∩ coUP) because a non-residue can be certified using properties of the Jacobi symbol and the Chinese remainder theorem, exhibiting that no solution exists modulo the prime factors of n, via unambiguous nondeterministic verification. This problem forms the basis for the Goldwasser-Micali cryptosystem, where distinguishing residues from non-residues hides messages. It is not known to be in P, and assuming it is NP-intermediate aligns with its use in zero-knowledge proofs without efficient decision procedures. As of 2025, no classical polynomial-time algorithm solves it in general; square-root algorithms such as Tonelli-Shanks apply only when n is prime or its factorization is known, and the best general approaches rely on factoring n first, inheriting subexponential complexity, while quantum methods via Shor's algorithm reduce the problem to factoring.
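The certificate checks described above are short enough to sketch directly. In the following illustrative fragment, the trial-division `is_prime` stands in for a genuine polynomial-time primality test such as AKS, which a real verifier would require.

```python
# Hedged sketch: certificate verifiers for two candidates discussed above.

def is_prime(p: int) -> bool:
    """Trial division: exponential in the bit length; illustration only."""
    if p < 2:
        return False
    d = 2
    while d * d <= p:
        if p % d == 0:
            return False
        d += 1
    return True

def verify_factor_yes(n: int, k: int, p: int) -> bool:
    """NP certificate for 'n has a prime factor <= k': the factor p itself."""
    return 2 <= p <= k and n % p == 0 and is_prime(p)

def verify_factor_no(n: int, k: int, factors: list[int]) -> bool:
    """coNP certificate: the full prime factorization, every factor > k."""
    prod = 1
    for q in factors:
        if q <= k or not is_prime(q):
            return False
        prod *= q
    return prod == n

def verify_dlog_yes(p: int, g: int, y: int, B: int, x: int) -> bool:
    """NP certificate for the bounded discrete-log decision: the exponent x."""
    return 1 <= x <= B and pow(g, x, p) == y % p

print(verify_factor_yes(91, 10, 7))      # True: 7 divides 91 and 7 <= 10
print(verify_factor_no(77, 5, [7, 11]))  # True: both prime factors exceed 5
print(verify_dlog_yes(11, 2, 9, 10, 6))  # True: 2^6 = 64 = 9 (mod 11)
```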

Graph Theory and Isomorphism

The graph isomorphism problem (GI) asks whether two given finite undirected graphs are isomorphic, meaning there exists a bijection between their vertex sets that preserves adjacency. This problem lies in NP, as a certificate consists of the proposed bijection, which can be verified in polynomial time by checking adjacency preservation across all vertex pairs. GI is not known to be NP-complete; in fact, if it were, the polynomial hierarchy would collapse to its second level, a consequence widely believed to be false. Graph non-isomorphism belongs to the Arthur-Merlin class AM (and thus GI to coAM), providing further evidence against NP-completeness, as NP-completeness of GI would imply coNP ⊆ AM and the same collapse. A major advance came in 2015 with László Babai's quasipolynomial-time algorithm for GI, running in time exp((log n)^{O(1)}), where n is the number of vertices; Helfgott's refined analysis bounds the running time by exp(O((log n)^3)). This places GI in quasipolynomial time, vastly improving prior subexponential bounds like exp(O(√(n log n))). Despite this progress, the algorithm falls short of polynomial time, leaving open whether GI ∈ P; if P ≠ NP and GI ∉ P, it serves as a natural candidate for NP-intermediate status. Variants of subgraph isomorphism also yield candidates for NP-intermediate problems under restrictions. The general subgraph isomorphism problem, which asks whether one graph contains a subgraph isomorphic to another, is NP-complete. However, the induced subgraph isomorphism problem, requiring the copy to match exactly (preserving non-adjacencies), remains NP-complete in broad settings but exhibits intermediate complexity in restricted cases. Specifically, if P ≠ NP, there exist polynomial-time decidable classes C of graphs such that induced subgraph isomorphism with patterns restricted to C is neither in P nor NP-complete, as shown via a Ladner-style diagonalization argument constructing a dense ordering of reductions between P and NP. For example, certain hereditary classes excluding specific substructures lead to such intermediate behaviors, highlighting the problem's sensitivity to structural constraints. Related problems like string isomorphism, which asks whether some permutation from a given group maps one string to another, generalize GI and are also resolved by Babai's quasipolynomial algorithm, reinforcing the intermediate candidacy of exact isomorphism decisions in combinatorial settings.
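Verifying a GI certificate is straightforward; the following minimal sketch (with adjacency given as neighbor sets, an illustrative encoding) checks a proposed bijection in O(n^2) time, matching the polynomial-time verification claim above.

```python
# Minimal sketch: polynomial-time verification of a graph isomorphism
# certificate. adj1, adj2 map each vertex to its set of neighbors; pi is the
# proposed bijection from the vertices of the first graph to the second.

def verify_isomorphism(adj1: dict, adj2: dict, pi: dict) -> bool:
    if set(pi) != set(adj1) or set(pi.values()) != set(adj2):
        return False  # pi must be a bijection between the vertex sets
    for u in adj1:
        for v in adj1:
            if u == v:
                continue
            # Adjacency must be preserved in both directions.
            if (v in adj1[u]) != (pi[v] in adj2[pi[u]]):
                return False
    return True

# Triangle relabeled by pi: 0 -> 'a', 1 -> 'b', 2 -> 'c'.
t1 = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
t2 = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b'}}
print(verify_isomorphism(t1, t2, {0: 'a', 1: 'b', 2: 'c'}))  # True
```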

Games and Logic

Parity games are two-player zero-sum games played on finite directed graphs whose vertices are labeled with integer priorities, and players alternate moves indefinitely. The winning condition is determined by the parity (even or odd) of the highest priority seen infinitely often along the play path, with even typically favoring one player (Even) and odd favoring the other (Odd). Determining the winning region for Even (or Odd), that is, the set of vertices from which that player wins under optimal play, is the core decision problem. This problem lies in NP ∩ coNP, as a nondeterministic polynomial-time verifier can guess a positional strategy for the claimed winner and confirm, by analyzing the cycles of the strategy-restricted graph, that the parity condition holds. The complexity of solving parity games remained unresolved for decades, with membership in NP ∩ coNP established early but no polynomial-time algorithm known. A breakthrough came in 2017 with a quasi-polynomial time algorithm running in n^{O(\log n)} time, where n is the number of vertices; subsequent quasi-polynomial approaches rely on succinct progress measures and on attractor computations that peel away determined regions of the graph. Despite this advance, it is unknown whether parity games can be solved in polynomial time, positioning them as a candidate for NP-intermediate status under the assumption P ≠ NP. Recent claims of polynomial-time solvability, such as an O(n^2 (n + m)) algorithm based on refined attractor constructions, remain unverified as of late 2025 and have not been independently confirmed. Simple stochastic games extend parity games by incorporating probabilistic transitions at random vertices, where successors are chosen uniformly at random. In these turn-based games, a maximizer and a minimizer seek to maximize or minimize the probability of reaching a 1-sink (a win for the maximizer) versus a 0-sink, and positional optimal strategies suffice. The decision problem of determining whether the value from a given vertex exceeds 1/2 is in NP ∩ coNP, via nondeterministic verification of strategy values using value iteration bounds. No polynomial-time algorithm is known, and the problem remains open; it is suspected to be NP-intermediate, since membership in NP ∩ coNP makes NP-completeness unlikely while no efficient algorithm has been found. Certain problems involving Boolean formulas and circuits also arise in logical contexts related to games, such as evaluating or learning non-monotone formulas under partial assignments or exact learning of formula representations. The minimum circuit size problem (MCSP), which asks whether a given Boolean function (specified by its truth table) can be computed by a circuit of size at most s, is in NP and suspected to be NP-intermediate; it is neither known to be in P nor known to be NP-complete, with evidence from derandomization barriers suggesting hardness. Variants like exact learning of non-monotone formulas from membership queries face similar suspected intermediate status, as proper learning requires solving hard search problems reducible to circuit minimization. These problems connect to game-theoretic evaluations in logic, where strategies correspond to satisfying assignments or evaluations in adversarial settings. As of 2025, the theoretical classification of these game and logic problems remains unresolved, with no proofs placing them in P or establishing NP-completeness. Practical advances include improved solver technologies, such as the Oink solver implementing quasi-polynomial algorithms with optimizations like symbolic representation and attractor computation, enabling efficient solutions for instances up to millions of vertices. These solvers are integral to tools for verifying properties in reactive systems, where parity conditions model liveness and fairness objectives in automata-theoretic verification. Despite algorithmic progress, the question of membership in P for these problems persists, highlighting their role in exploring complexity hierarchies.
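The attractor computation that underlies Zielonka-style solvers and several quasi-polynomial algorithms can be sketched compactly. In the following fragment the names and graph encoding are illustrative assumptions; it computes the set of vertices from which a player can force the play into a target set.

```python
# Sketch of the attractor computation used as a building block by classical
# parity-game solvers (e.g., Zielonka's recursive algorithm). owner[v] gives
# the player (0 or 1) who moves at v; succ[v] lists v's successors.

def attractor(vertices, owner, succ, player, target):
    """Vertices from which `player` can force the play to reach `target`."""
    attr = set(target)
    pred = {v: set() for v in vertices}
    for v in vertices:
        for w in succ[v]:
            pred[w].add(v)
    # For opponent vertices, count successors that still escape the attractor.
    escape = {v: len(succ[v]) for v in vertices}
    # Backward fixed-point: runs in time linear in the number of edges.
    frontier = list(attr)
    while frontier:
        w = frontier.pop()
        for v in pred[w]:
            if v in attr:
                continue
            if owner[v] == player:
                attr.add(v)            # player can choose the edge into attr
                frontier.append(v)
            else:
                escape[v] -= 1
                if escape[v] == 0:     # opponent has no edge avoiding attr
                    attr.add(v)
                    frontier.append(v)
    return attr

# Tiny example: player 0 owns vertex 'a' and can move into the target {'t'};
# vertex 'b' (owned by player 1) has no move avoiding the attractor.
V = ['a', 'b', 't']
print(attractor(V, {'a': 0, 'b': 1, 't': 0},
                {'a': ['t', 'b'], 'b': ['a'], 't': ['t']}, 0, {'t'}))
```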

Open Questions and Future Directions

Challenges in Classification

One of the primary challenges in classifying problems as NP-intermediate stems from the absence of proven natural examples. While Ladner's theorem establishes the existence of such problems under the assumption that P ≠ NP, its proof relies on techniques that construct highly artificial languages by modifying instances to create sparse sets neither in P nor NP-complete. These constructions, often described as "blowing holes" in NP-complete problems, produce contrived problems without intuitive combinatorial structure, leaving open whether any natural problem, such as those arising in number theory or graph theory, truly occupies this class. Proving a natural problem NP-intermediate would require demonstrating it is neither in P nor NP-complete, which implicitly resolves the P vs. NP question in the negative, a feat beyond current techniques. Further obstacles arise from barriers to proving key properties like non-membership in P or non-NP-completeness. The natural proofs barrier, introduced by Razborov and Rudich, demonstrates that most existing methods for establishing circuit lower bounds, such as those showing a problem requires super-polynomial resources, fail to separate P from NP unless one-way functions do not exist, a strong cryptographic assumption. This barrier particularly hampers efforts to prove problems are not in P, as natural proofs encompass the constructive, largeness-based techniques used in known circuit lower bounds. For non-NP-completeness, similar reduction barriers apply: showing no polynomial-time reduction exists from an NP-complete problem like SAT requires arguing against the hardness of the candidate, often circling back to unproven lower bounds on reduction complexities, which the natural proofs framework indirectly obstructs by limiting general proof strategies. Algorithmic advancements on candidate problems further complicate classification by blurring boundaries without resolving status. For instance, the graph isomorphism problem, long suspected to be NP-intermediate, was shown solvable in quasi-polynomial time, narrowing but not eliminating the gap to P. Similarly, a November 2025 preprint claims that parity games, another candidate in NP ∩ coNP, can be solved in polynomial time using attractor-based decomposition, which, if verified, would effectively place them in P and remove them from intermediate contention. These quasi-polynomial or polynomial breakthroughs highlight incremental progress but fail to confirm membership in P for core cases like integer factoring, where no polynomial-time general algorithm exists despite targeted improvements. Such developments suggest that even if P ≠ NP holds, fine-grained barriers may prevent full classification. As of November 2025, while algorithmic progress may have resolved some candidates like parity games (pending verification), no major breakthroughs have emerged to overcome the general proof barriers for classifying natural NP-intermediate problems, and the field suspects that many candidates will remain unclassified even if P ≠ NP is proven, due to the entrenched proof barriers and the artificiality of known intermediates.

Impact on Cryptography and Algorithms

The presumed NP-intermediate status of integer factorization underpins the security of the RSA cryptosystem, where the difficulty of factoring large semiprimes into their prime factors serves as the foundational hard problem for public-key encryption. Similarly, the discrete logarithm problem, believed to lie in NP-intermediate territory, forms the basis for the Diffie-Hellman key exchange protocol, enabling secure key agreement over public channels by relying on the computational infeasibility of extracting logarithms in finite fields. If these problems are indeed NP-intermediate, as is consistent with Ladner's theorem establishing the existence of such problems under the assumption P ≠ NP, then no classical polynomial-time algorithms exist to solve them, ensuring the robustness of these cryptographic primitives against classical attacks. The intermediate complexity of candidate problems like graph isomorphism (GI) drives the development of practical heuristics and exact solvers, as worst-case polynomial-time solutions remain elusive. For instance, vertex-invariant-based heuristics, which refine vertex representations through iterative labeling to detect non-isomorphisms, provide efficient empirical performance for many real-world instances despite lacking worst-case guarantees. This motivates hybrid approaches in algorithm design, such as canonical labeling techniques combined with backtracking search, which trade theoretical optimality for scalability in applications like network analysis and molecular matching, highlighting how NP-intermediate status encourages innovation in subexponential methods over futile searches for exact polynomial-time solvers. Quantum computing introduces significant implications through algorithms like Shor's, which solves both factoring and discrete logarithms in polynomial time on a quantum computer, placing them in BQP and underscoring their presumed classical hardness. This disparity reinforces the NP-intermediate conjecture for these problems on classical machines, as no analogous efficient classical algorithm has been found despite decades of research. The broader impact of NP-intermediate problems shapes the design of secure systems by necessitating transitions to post-quantum cryptography, which deliberately avoids reliance on factoring or discrete logarithms in favor of lattice-based or hash-based schemes resistant to quantum attacks. Standardization efforts, such as those by NIST, prioritize these alternatives to maintain long-term security for digital infrastructure, illustrating how the unresolved status of NP-intermediate problems informs proactive cryptographic evolution.
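A toy Diffie-Hellman exchange shows how directly the protocol rests on the discrete logarithm: the public transcript (p, g, g^a, g^b) determines the shared key, yet recovering it is believed to require solving a discrete log. The parameters below are deliberately tiny and illustrative; real deployments use standardized groups of 2048 bits or more.

```python
# Toy Diffie-Hellman sketch (illustrative only; parameters far too small
# for real security, and g is not checked to generate the whole group).

import secrets

p = 2**64 - 59          # a 64-bit prime, adequate for a toy demonstration
g = 2

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent

A = pow(g, a, p)        # Alice publishes g^a mod p
B = pow(g, b, p)        # Bob publishes g^b mod p

k_alice = pow(B, a, p)  # Alice computes (g^b)^a
k_bob = pow(A, b, p)    # Bob computes (g^a)^b
assert k_alice == k_bob # shared key; an eavesdropper who could take the
                        # discrete log of A or B would recover it immediately
```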
