
Ultrafinitism

Ultrafinitism is a radical position in the foundations of mathematics that asserts the existence only of finite and bounded entities, rejecting both actual and potential infinities as meaningless or nonexistent, and emphasizing that mathematical objects must be constructible within practical, resource-limited contexts. Emerging in the mid-20th century as an extreme form of finitism, ultrafinitism critiques traditional arithmetic and set theory for assuming unbounded natural numbers or infinite sets, proposing instead that there exists a largest natural number beyond which computations become infeasible due to physical or temporal constraints.

Key proponents include Dutch mathematician David van Dantzig, who in 1955 questioned whether numbers as large as $10^{10^{10}}$ can be meaningfully constructed, since they surpass any explicitly describable finite procedure; Russian mathematician Alexander Yessenin-Volpin, who in the 1960s and 1970s developed an "ultra-intuitionistic" program rejecting the uniqueness of the natural number series and advocating reasoning over finite segments of it; American mathematician Edward Nelson, who from the 1970s developed predicative arithmetic to formalize finitistic principles without unrestricted induction; and Indian-American logician Rohit Parikh, who in 1971 incorporated a feasibility predicate into Peano Arithmetic, demonstrating that statements like the non-feasibility of $2^{1000}$ can be consistently asserted. Paul Bernays in the 1930s had earlier identified ultrafinitism as a coherent philosophical position distinct from finitism.

Central to ultrafinitism is the principle of contextual feasibility, where the legitimacy of a mathematical entity depends on whether it can be effectively verified or constructed given finite resources like time, space, or energy, leading to revisions of classical results such as the infinitude of primes or the totality of exponentiation. Proponents argue that this approach aligns more closely with empirical reality, avoiding ontological commitments to abstract infinities, and they explore formal systems like pseudo-finite models or alternative set theories to support bounded reasoning. While often viewed as revisionist and challenging to mainstream mathematics, ultrafinitism highlights foundational questions about the limits of computation and the nature of mathematical truth.

Overview

Definition and Core Tenets

Ultrafinitism is a radical form of constructivism in the philosophy of mathematics that denies the existence of arbitrarily large finite numbers, positing instead that there is a finite upper bound on the sizes of numbers that can be meaningfully manipulated or verified through computation. This view holds that beyond a certain scale, even finite objects become practically inaccessible due to limitations in time, space, or physical resources, rendering them effectively nonexistent for mathematical purposes.

The core tenets of ultrafinitism revolve around a profound skepticism toward both actual and potential infinities, extending to a rejection of infinitary reasoning in logic, arithmetic, and set theory. Central to this is the denial of the meaningfulness of "ultrafinite" numbers—those exceedingly large finite quantities that exceed what can be enumerated, computed, or represented within feasible bounds imposed by the physical universe. Instead, ultrafinitism prioritizes verifiable finitude, insisting that mathematical objects and proofs must be grounded in explicit, resource-bounded constructions rather than abstract totalities. A key illustration of these principles is the claim that basic arithmetic operations, such as exponentiation (e.g., $2^n$), are not total functions across all natural numbers, as sufficiently large exponents render computation infeasible within any realistic timeframe or storage capacity. Ultrafinitism thus advocates restricting mathematics to "feasibly finite" domains, where every assertion can be checked through explicit, bounded processes, in contrast to the unlimited finitude accepted by milder finitist positions.

Ultrafinitism differs from finitism by imposing stricter boundaries on the natural numbers, rejecting not only infinite sets but also arbitrarily large finite numbers that surpass practical computational limits. While finitism, as articulated in Hilbert's program, accepts all finite numbers in principle as long as they avoid actual infinities, ultrafinitism maintains that numbers exceeding a certain feasibility threshold—such as those too vast to compute or verify within physical constraints—do not exist for mathematical purposes. This introduces a practical upper bound grounded in real-world resources, contrasting with finitism's allowance of potentially unbounded finite processes.

In comparison to intuitionism, ultrafinitism shares an emphasis on constructivity but extends it by incorporating physical and computational constraints, denying the existence of even mentally constructible large finite numbers if they cannot be verified through feasible means. Intuitionism, developed by Brouwer, permits potential infinities through ongoing mental constructions, viewing mathematics as a free creation of the mind without requiring empirical verification. Ultrafinitism's feasibility threshold serves as a boundary beyond which numbers become mathematically nonexistent, unlike intuitionism's reliance on abstract mental constructibility, which allows for indefinitely large but finite entities.

Ultrafinitism stands in complete opposition to infinitism, which fully embraces actual infinities and completed totalities as foundational to mathematics. Infinitism posits that infinite structures, such as the set of all natural numbers, exist independently and completely, enabling classical mathematics without revision. In contrast, ultrafinitism denies even potential infinities by enforcing finite bounds tied to computational reality, rendering infinitist assumptions incoherent within its framework.

Historical Development

Precursors and Early Ideas

The roots of ultrafinitism can be traced to ancient debates regarding the nature of infinity and large magnitudes, particularly in Aristotle's distinction between potential and actual infinity. In his Physics (Book III), Aristotle argued that infinity exists only potentially—as an ongoing process that can be extended indefinitely but never completed as a whole—while rejecting actual infinity as incoherent and as leading to paradoxes, such as those posed by Zeno. This view emphasized that mathematical concepts must align with finite, observable processes in the physical world, prefiguring ultrafinitism's insistence on empirical verifiability over abstract logical possibilities. Aristotle's framework limited infinity to potentiality in time or division, ensuring that all actual entities remain finite and bounded by reality.

In the 19th century, Leopold Kronecker's finitist inclinations provided further groundwork, advocating a strict reduction of mathematics to finite integers and constructive operations. Kronecker famously declared, "God made the integers; all else is the work of man," rejecting non-constructive existence proofs, irrational numbers, and transfinite concepts as unverifiable abstractions. From the 1870s onward, he opposed developments like Georg Cantor's set theory, insisting that mathematical entities must be finitely generatable and empirically grounded, without reliance on infinite processes. His program, outlined in works such as "Über den Zahlbegriff" (1887), prioritized arithmetic's finite foundations, influencing later critiques of totality in arithmetic and aligning with ultrafinitism's broader rejection of unfeasible infinities.

Émile Borel's ideas in the early 20th century extended these precursors by questioning the meaningfulness of extremely large finite numbers and transfinite cardinals, tying mathematical existence to physical and computational constraints. In the 1920s, amid his shift toward probability and measure theory, Borel argued that numbers exceeding practical limits—such as those far beyond the observable universe's scale (e.g., vastly larger than $10^{1000}$, akin to his "universe number")—lose empirical meaning due to the finite resources of time, space, and human cognition. This perspective, rooted in empirical skepticism, viewed mathematics as an extension of observable reality rather than a Platonistic realm, rendering such "inaccessible numbers" illusory or irrelevant. A key articulation came in his article "Sur les définitions analytiques et sur l'illusion du transfini," where Borel critiqued transfinite numbers as non-computable illusions, extending his skepticism to large finites by emphasizing verifiability through physical processes. These notions prefigure ultrafinitism's core tenet that mathematical existence demands feasible construction, not mere logical possibility.

Finitism, as a broader philosophical stance limiting mathematics to finite methods, served as an immediate precursor, bridging Kronecker's constructivism to Borel's practical bounds without delving into ultrafinitism's stricter rejection of infeasibly large finite quantities.

Emergence in the 20th Century

In the 1930s, Paul Bernays identified ultrafinitism as a coherent philosophical position in the foundations of mathematics, distinct from finitism, emphasizing the potential rejection of even very large finite numbers due to practical constraints. In the mid-20th century, ultrafinitism began to emerge as a distinct philosophical stance through critiques of intuitionism that questioned the realizability of even very large finite numbers. David van Dantzig articulated this position in his 1955 paper "Is 10^10^10 a Finite Number?", where he argued that numbers as large as $10^{10^{10}}$ exceed any practical or physical means of construction or verification, rendering them effectively unreal within intuitionistic frameworks that emphasize mental construction. This critique echoed earlier precursors like Émile Borel's remarks on the meaninglessness of numbers beyond astronomical scales but formalized a sharper boundary on finitary reasoning.

In the 1960s and 1970s, Russian mathematician Alexander Yessenin-Volpin developed an "ultra-intuitionistic" program, rejecting the uniqueness of the natural number series and advocating the use of finite segments of it to avoid assumptions of totality. In 1971, Indian-American logician Rohit Parikh incorporated a feasibility predicate into Peano Arithmetic, allowing the consistent assertion that certain numbers, such as $2^{1000}$, are not feasible to construct. Following these developments, in the 1970s and 1980s, American mathematician Edward Nelson developed predicative arithmetic, a "syntactic" approach that rejects impredicative definitions and limits induction to feasible computational bounds, evolving into a rigorous ultrafinitistic framework by denying the totality of exponentiation for sufficiently large bases. Complementing this, Yuri Matiyasevich's 1970 resolution of Hilbert's tenth problem demonstrated fundamental limits on Diophantine equations, underscoring that no algorithm can decide solvability for all inputs, which ultrafinitists interpret as evidence against assuming unlimited computational resources for all problems.

By the 1990s and 2000s, ultrafinitism gained broader advocacy, particularly through Doron Zeilberger's writings, which promoted it as a response to the impracticality of classical proofs in combinatorics. Zeilberger's 2011 paper provided a concise historical overview, positioning ultrafinitism as a viable alternative to infinitary assumptions and highlighting its roots in computational realism. Traction grew via experiments in areas like Ramsey theory, where algorithms for computing Ramsey numbers R(k,k) break down for modest k (e.g., R(5,5) remains uncomputable within feasible time, known only to lie between 43 and 48), illustrating how combinatorial growth rates render many "finite" results practically unattainable.

In recent years, ultrafinitism has intersected with computer science and foundational debates in philosophy, particularly amid discussions of AI's computational constraints. Works from 2023–2025, such as model-theoretic semantics for ultrafinitism using fuzzy initial segments and potentialist interpretations, link it to limits in formal proofs and large-scale simulations, suggesting that foundational mathematics should align with verifiable computational scales rather than idealized infinities.

Key Concepts

Rejection of Totality in Arithmetic

In standard arithmetic, operations such as addition, multiplication, and exponentiation are assumed to be total functions, meaning they are defined and computable for every pair of natural numbers. Ultrafinitism challenges this assumption, arguing that for sufficiently large inputs, these functions cease to be total due to practical limitations in computation time and storage. This rejection stems from the view that mathematical objects must be feasible to construct or verify, rendering operations on extremely large numbers meaningless or nonexistent.

A concrete example is exponentiation, particularly $2^n$, which ultrafinitists contend fails to be total when n exceeds feasible bounds, such as $n > 10^{100}$ (a googol). For instance, counting up to $2^{1000}$, or representing it in unary, would require more resources than are physically available in the observable universe, making such a construction unverifiable and thus not truly existent in a practical sense. Similarly, the Ackermann function A(m,n), defined recursively as
\begin{align*}
A(0,n) &= n+1, \\
A(m+1,0) &= A(m,1), \\
A(m+1,n+1) &= A(m, A(m+1,n)),
\end{align*}
grows so rapidly that it becomes uncomputable in practice beyond small arguments: $A(4,1) = 2^{2^{2^2}} - 3 = 65{,}536 - 3 = 65{,}533$ is still manageable, but $A(4,2) = 2^{65{,}536} - 3$ already lies far beyond feasible bounds, illustrating how even theoretically total functions break down under ultrafinitist scrutiny.

In ultrafinitism, the set of natural numbers is effectively finite, with a practical "cutoff" beyond which further numbers are dismissed as unrealizable. This perspective aligns with computational feasibility as a guiding principle, limiting arithmetic to domains where operations can be explicitly performed. Consequently, proofs relying on mathematical induction over unbounded natural numbers are rejected, as induction presupposes the totality of the entire infinite sequence, which ultrafinitists deny.
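A small computational sketch (a hypothetical illustration, not drawn from the cited works) makes the cliff concrete: the naive recursion straight from the defining equations runs instantly for tiny arguments, while the closed form for A(4,n) shows where feasibility ends.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def ackermann(m: int, n: int) -> int:
    """Naive recursion following the defining equations above."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

for m, n in [(1, 3), (2, 3), (3, 3), (4, 0)]:
    print(f"A({m},{n}) = {ackermann(m, n)}")   # 5, 9, 61, 13 -- instantaneous

# A(4,n) = 2^2^...^2 (a tower of n+3 twos) minus 3, so the closed form shows the
# blow-up without running the recursion:
print(2**16 - 3)                       # A(4,1) = 65,533 -- still easy to write down
print((2**65536 - 3).bit_length())     # A(4,2) needs 65,536 bits (~19,729 decimal digits)
# A(4,3) = 2^(2^65536) - 3 could not be stored even using every atom in the
# observable universe as a bit, which is the ultrafinitist objection to treating
# the Ackermann function as a total function.
```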

Role of Computational Feasibility

Ultrafinitism posits that mathematical truths must be anchored in computations that can be performed using finite physical resources, such as time, space, and energy, often modeled via Turing machines with bounded operations. This principle rejects abstract infinities or arbitrarily large finite structures that exceed practical realizability, emphasizing that only operations executable within realistic constraints qualify as valid mathematical entities. Central to this view is the concept of "feasible numbers," defined as those that can be generated or manipulated in polynomial time relative to their size or within the scale of the observable universe, such as the estimated $10^{80}$ atoms available for computation. For instance, a number such as $2^{1000}$, though compactly denotable, cannot be reached by counting or represented in unary by any physical process, and is therefore treated as infeasible despite being "finite" in the classical sense.

Ultrafinitism further employs measures like Kolmogorov complexity—the minimal length of a program needed to produce a given finite object—and bounds related to Chaitin's omega, the halting probability of a universal Turing machine, to demonstrate that certain "finite" numbers or proofs are practically uncomputable despite their theoretical finitude. High Kolmogorov complexity implies that describing or verifying such objects demands resources exceeding physical limits, rendering them inaccessible. An illustrative example is multiplying two numbers with $10^6$ digits each, which is feasible in polynomial time using efficient algorithms like the Karatsuba method, whereas computing $2^{2^{100}}$ is not, as it would require space and time far beyond feasible bounds.

This focus on computational feasibility extends to challenging specific finite instances of undecidable problems, such as the halting problem for Turing machines with enormous but finite inputs, if their resolution demands infeasible runtime or storage. By tying arithmetic operations to these bounds, ultrafinitism provides a computational rationale for rejecting the totality of even basic principles like induction over all natural numbers.
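The contrast can be made concrete with a short sketch (a hypothetical illustration using Python's built-in arbitrary-precision integers, which switch to Karatsuba-style multiplication for large operands):

```python
import random
import time

# Two random integers with one million decimal digits each.
a = random.randrange(10**999_999, 10**1_000_000)
b = random.randrange(10**999_999, 10**1_000_000)

start = time.perf_counter()
product = a * b          # CPython uses Karatsuba multiplication for integers this large
elapsed = time.perf_counter() - start
print(f"~{product.bit_length()} bits computed in {elapsed:.2f} s")  # well under a minute

# 2**(2**100), by contrast, would occupy about 2^100 bits (~1.6 * 10^29 bytes);
# no existing or plausible storage medium could hold it, so the line below is
# left commented out rather than attempted.
# infeasible = 2 ** (2 ** 100)
```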

Prominent Figures

Doron Zeilberger

Doron Zeilberger is an Israeli-American mathematician born in 1950, renowned for his contributions to combinatorics and experimental mathematics, including the development of algorithmic proof methods for hypergeometric identities and the Wilf-Zeilberger theory. As a Board of Governors Professor at Rutgers University, his work emphasizes algorithmic and discrete methods, which underpin his advocacy for finitistic approaches in mathematics.

Beginning in the late 1990s, Zeilberger promoted ultrafinitism through essays and opinion pieces that highlighted the superiority of discrete, computable structures over infinite ones, arguing that finite methods yield more natural and beautiful results. A pivotal example is his 1999 opinion piece (Opinion 39), which celebrated the rise of experimental mathematics driven by computers, positioning it as a "quasi-empirical" enterprise in which mathematical facts are discovered and verified through computation rather than abstract reasoning. This advocacy extended to viewing uncomputable proofs as inherently suspect, as they often rely on unfeasible infinite processes that evade practical verification.

Zeilberger's critique of continuous mathematics crystallized in his paper ""Real" Analysis Is a Degenerate Case of Discrete Analysis," where he asserted that real analysis is merely a degenerate case of discrete analysis, reducing continuous concepts to limits of finite approximations and dismissing idealized infinities as unnecessary abstractions. He explicitly identified this stance as ultrafinitism, contending that the infinite lacks concrete meaning and that calculus's reliance on it represents a historical fluke stemming from 17th-century computational limitations, rather than an essential mathematical truth. His 2019 YouTube talk, "Math Heresy: Ultrafinitism," further popularized these ideas by linking ultrafinitism to a rejection of Kurt Gödel's incompleteness theorems, arguing that the theorems' reliance on infinite sets renders them paradoxical and irrelevant in a finite universe. Through such public advocacy, Zeilberger positioned ultrafinitism as a viable alternative to mainstream infinitary mathematics, influencing discussions in experimental mathematics and the philosophy of mathematics.

Other Contributors

Edward Nelson advanced ultrafinitist ideas through his work on predicative arithmetic, beginning in the 1970s and particularly following his 1976 epiphany, culminating in his 1986 book Predicative Arithmetic. His approach emphasizes a syntactic finitism that restricts mathematical reasoning to predicative definitions and avoids impredicative assumptions about infinite totalities, thereby aligning with bounds on computational feasibility. In this formal system, functions are defined predicatively, limiting the scope to what can be explicitly constructed within finite resources, which supports ultrafinitist critiques of unrestricted arithmetic.

Russian mathematician Alexander Yessenin-Volpin, active from the 1960s to 1970s, developed an "ultra-intuitionistic" program that rejected the uniqueness of the natural number series and advocated for finite segments of set theory, pioneering formal ultrafinitism through set-theoretic investigations limited to bounded finite sets. Indian-American logician Rohit Parikh, in his 1971 paper "Existence and Feasibility in Arithmetic," incorporated a feasibility predicate into Peano Arithmetic, demonstrating that statements like the non-feasibility of $2^{1000}$ can be consistently asserted, thus formalizing practical bounds on arithmetic operations central to ultrafinitism.

Yuri Matiyasevich contributed indirectly to ultrafinitism via his 1970 resolution of Hilbert's tenth problem, proving that there is no general algorithm to determine whether Diophantine equations have integer solutions, which highlights undecidability within finite domains and implies inherent limits on algorithmic solvability even for finite but large inputs. This result underscores that certain finite mathematical questions are unresolvable in principle, reinforcing ultrafinitist arguments that not all "finite" entities are practically accessible or verifiable.

Vladimir Voevodsky engaged briefly with ultrafinitist themes in the 2010s through his development of univalent foundations, a constructive approach to mathematics based on homotopy type theory that expresses skepticism toward impredicative set-theoretic constructions and large infinite cardinals by prioritizing verifiable, computational structures. His 2010 lecture "What if Current Foundations of Mathematics are Inconsistent?" further echoed this by questioning the reliability of classical foundations, advocating for machine-checked proofs that align with finite computational constraints.

Earlier, David van Dantzig critiqued intuitionism in his 1955 paper for overlooking the physical finitude of mathematical practice, arguing that extremely large finite numbers like $10^{10^{10}}$ may not be meaningfully constructible given real-world resource limits, thus prefiguring ultrafinitist concerns about the boundaries of finitude. These contributors, spanning logic, mathematics, and foundational philosophy, offered interdisciplinary reinforcement to ultrafinitism, with figures like Doron Zeilberger later building on their ideas to advocate for its broader acceptance.

Implications and Applications

Connections to Computational Complexity

Ultrafinitism aligns closely with computational complexity theory by leveraging resource bounds to delineate feasible from infeasible computations, emphasizing physical and temporal limitations on computation. Hierarchy theorems provide a foundational link, demonstrating strict separations between complexity classes that preclude universal algorithms for large inputs. Specifically, the time hierarchy theorem establishes that for suitable time-constructible functions f and g with $f(n) \log f(n) = o(g(n))$, the class DTIME(f(n)) is properly contained in DTIME(g(n)), as proven by Hennie and Stearns. This separation, such as that between TIME(n) and TIME(n^2), underscores the ultrafinitist perspective that computational power increases non-uniformly with input size, rendering certain operations on large numbers practically nonexistent due to escalating time requirements.

Problems in exponential time (EXP) or beyond illustrate this intersection vividly, where theoretical solvability clashes with practical impossibility. For instance, integer factorization of large semiprimes—known to be in NP and believed to lie outside P—exemplifies practical non-existence: factoring a 2048-bit number exceeds the computational capacity of current or foreseeable hardware, with the best algorithms, such as the general number field sieve, requiring subexponential but superpolynomial time $O\big(\exp\big(c (\log n)^{1/3} (\log \log n)^{2/3}\big)\big)$, far beyond feasible bounds for enormous n. Such examples support ultrafinitism by treating these scales as boundaries where "solutions" cease to be mathematically meaningful, echoing Rohit Parikh's introduction of a feasibility predicate F(n) to mark computationally attainable numbers, as in the consistent theory PA + ¬F($2^{1000}$).

The unresolved P versus NP question further ties ultrafinitism to complexity, positing that if P ≠ NP, then large instances of NP-complete problems lack feasible solutions or verifications, rendering them "non-mathematical" in the ultrafinitist framework. Bounded arithmetic, pioneered by Samuel R. Buss, formalizes this connection by correlating levels of the polynomial hierarchy with fragments of arithmetic that capture feasible proofs, motivated by feasibility-oriented aims to restrict reasoning to polynomial-time verifiable statements. In this setting, the exponential lower bounds on proof systems like resolution—established by Haken—demonstrate that even proving certain tautologies for large n demands infeasible resources, aligning with ultrafinitist skepticism toward unbounded reasoning.

A concrete example arises in circuit complexity lower bounds, where even elementary functions demand exponential resources for sufficiently large inputs. The parity function, which computes the sum of n bits modulo 2, cannot be realized by constant-depth, polynomial-size circuits (AC^0); Håstad's switching lemma proves that depth-d circuits for parity require size at least $2^{\Omega(n^{1/(d-1)})}$, establishing exponential lower bounds. This result formalizes ultrafinitist "feasibility" through asymptotic notations like $O(n^k)$, where polynomial growth defines the tractable regime, while superpolynomial demands—common in complexity separations—justify excluding vast swaths of purportedly "finite" but unattainable computations.
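As a back-of-the-envelope illustration (a hypothetical sketch, not an implementation of the sieve itself), one can plug a 2048-bit modulus into the heuristic GNFS running-time formula and see how quickly the operation count leaves the feasible regime:

```python
import math

def gnfs_work_estimate(bits: int, c: float = (64 / 9) ** (1 / 3)) -> float:
    """Heuristic GNFS cost L_n[1/3, c] = exp(c * (ln n)^(1/3) * (ln ln n)^(2/3)),
    ignoring the o(1) term, for an n of the given bit length."""
    ln_n = bits * math.log(2)
    return math.exp(c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

ops = gnfs_work_estimate(2048)
years_at_exascale = ops / 1e18 / (3600 * 24 * 365)
print(f"~{ops:.1e} operations (~2^{math.log2(ops):.0f})")
print(f"~{years_at_exascale:.1e} years at 10^18 operations per second")
# Roughly 10^35 operations, i.e. on the order of 10^10 years even at exascale,
# which is why an ultrafinitist treats such a factorization as practically nonexistent.
```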

Effects on Mathematical Proofs

Ultrafinitism fundamentally alters the nature of mathematical proofs by rejecting non-constructive methods, particularly those relying on the law of excluded middle applied to unbounded domains or infinite structures, and instead demanding explicit, feasible algorithms that can be executed within practical computational bounds. In this view, proofs must provide constructive content that aligns with resource-limited computation, emphasizing finite sequences of symbols that can be checked mechanically without assuming the totality of functions like exponentiation. This shift prioritizes proofs that are not only logically valid but also practically realizable, transforming traditional deduction into a form of bounded verification.

A concrete illustration of this impact appears in the treatment of computer-assisted proofs, such as the four-color theorem, which ultrafinitists accept provided the verification process remains feasible through explicit algorithmic checking of finite cases, avoiding reliance on uncomputable infinities. In contrast, proofs involving large Ramsey numbers, like R(100,100), are dismissed if their computation exceeds feasible bounds, as the existence claims cannot be explicitly verified without assuming a non-existent large-scale totality. This leads to a preference for "executable" proofs that function as finite programs, fostering experimental validation—such as running computations on bounded inputs, as in the sketch below—over abstract deduction that presumes infinite accessibility.

Ultrafinitism also implies a revision of foundational results like Gödel's incompleteness theorems by confining analysis to feasible formal systems, where self-referential paradoxes are avoided due to the lack of sufficient expressive power in bounded systems. Proofs in such systems remain complete relative to their limited scope, as the theorems' assumptions of full Peano Arithmetic do not hold. Furthermore, ultrafinitism favors predicative definitions over impredicative ones, restricting sets and functions to explicitly constructible entities to evade assumptions of totality that could lead to infeasible or illusory infinities. This predicative stance, as developed in frameworks like Edward Nelson's Predicative Arithmetic, ensures proofs build hierarchically from bounded primitives, maintaining consistency within computational limits.
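For instance, the statement R(3,3) ≤ 6 admits a fully executable proof by exhaustive search, the kind of bounded verification ultrafinitism endorses; the following sketch (a hypothetical illustration, not drawn from the cited works) checks every 2-coloring of the edges of K_6:

```python
from itertools import combinations, product

# Exhaustive, fully finite verification that R(3,3) <= 6: every 2-coloring of the
# 15 edges of K_6 contains a monochromatic triangle.
vertices = range(6)
edges = list(combinations(vertices, 2))        # 15 edges
triangles = list(combinations(vertices, 3))    # 20 vertex triples

def has_mono_triangle(coloring):
    color = dict(zip(edges, coloring))
    return any(color[(a, b)] == color[(a, c)] == color[(b, c)]
               for a, b, c in triangles)

# 2^15 = 32,768 colorings -- trivially checkable, hence an acceptable "executable proof".
assert all(has_mono_triangle(c) for c in product((0, 1), repeat=len(edges)))
print("Every 2-coloring of K_6 contains a monochromatic triangle, so R(3,3) <= 6.")

# The analogous exhaustive check near R(100,100) would involve on the order of
# 2^4950 colorings of K_100 alone -- a "finite" computation that ultrafinitists
# regard as having no practical existence.
```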

Recent Developments in Physics and Philosophy

In recent years, ultrafinitism has gained attention in physics and philosophy, particularly following the "Ultrafinitism: Physics, Mathematics, and Philosophy" conference held from April 11–13, 2025. The event highlighted potential applications to cosmology, with physicist Sean Carroll proposing models of an ultrafinite universe grounded in quantum principles, suggesting that bounded mathematical structures could resolve issues in infinite cosmological models. Philosophers and mathematicians explored potentialist conceptions of ultrafinitism, viewing the universe of sets as a potentialist hierarchy that unfolds without actual infinities. These discussions underscore ultrafinitism's growing relevance beyond mathematics, challenging infinite assumptions in physics and prompting interdisciplinary debates on the nature of reality.

Criticisms and Debates

Philosophical Objections

One major philosophical objection to ultrafinitism concerns its definition of what constitutes a "feasible" or realizable number, which critics argue makes mathematical truth dependent on contingent factors like technological advancement—tomorrow's supercomputers could render today's "unfeasible" numbers accessible, shifting the boundary of mathematical existence with every hardware advance. Platonists and other realists in the philosophy of mathematics often dismiss ultrafinitism as anti-mathematical, viewing it as an abdication of mathematics' abstract nature in favor of empirical constraints. By denying the existence of sufficiently large finite numbers, ultrafinitism is seen to undermine established theorems; critics argue that rejecting large instances effectively erodes the universality and permanence of mathematical proofs, reducing the field to a parochial exercise tied to current human capabilities.

A key critique is that ultrafinitism conflates epistemology—what we can know or compute—with ontology—what mathematical objects actually exist. This blurring suggests that the non-existence of large numbers or infinities stems from practical limitations rather than fundamental reality, a position that mainstream mathematicians reject as it improperly imports epistemic barriers into ontological claims about the natural numbers. Ultrafinitism further challenges mathematical realism by questioning the objective existence of large finite entities, implying that mathematics lacks an independent, mind-external structure if its truths are bounded by feasibility. If numbers beyond a certain magnitude "do not exist," this erodes the notion of mathematical objectivity, positioning the discipline as subjective or observer-dependent rather than a discoverer of timeless truths.

Ultrafinitism also creates tension with the natural sciences, as physical theories like quantum mechanics routinely invoke infinities in their formulations, such as infinite-dimensional Hilbert spaces or continuous symmetries. Critics argue that confining mathematics to finite, feasible bounds would challenge its foundational role in physics, rendering ultrafinitist models potentially incompatible with empirically successful infinite structures in the natural sciences, though techniques like renormalization address some infinities.

Ultrafinitist Responses

Ultrafinitists counter objections regarding feasibility by emphasizing that the bounds they impose on mathematical objects are rooted in the physical universe and empirical reality, rather than being arbitrary or observer-dependent. For instance, computational feasibility is limited by fundamental physical constraints, such as the finite speed of information propagation and the universe's scale, which preclude the realization of arbitrarily large numbers or processes. This perspective aligns ultrafinitism with empirical reality, arguing that mathematics must remain tethered to what is physically verifiable.

A prominent response comes from Doron Zeilberger, who characterizes mainstream mathematics' reliance on infinities as engaging with "fictions" that are heuristically useful but ontologically unreal, comparable to literary devices that enrich discourse without claiming literal truth. Zeilberger contends that the infinite serves as a useful fiction, appealing in its abstraction yet disconnected from concrete computation, much like enjoying a poem about nonexistent stars while preferring the tangible beauty of finite structures such as combinatorial theorems. He advocates discarding these illusions in favor of explicit, finite methods, viewing infinite analysis as a degenerate case of more robust finite alternatives that better reflect computational practice.

Ultrafinitism positions itself as an evolution of constructivist traditions, particularly L.E.J. Brouwer's intuitionism, rather than an outright rejection of them; it extends intuitionism's emphasis on mental constructions by insisting on physical realizability, thereby refining the rejection of non-constructive proofs to exclude even potential infinities beyond feasible bounds. This alignment underscores ultrafinitism's continuity with Brouwer's program, adapting it to modern computational constraints where proofs must be executable within physical limits.

Proponents also highlight practical benefits in averting paradoxes that arise in large-scale computing applications, such as cryptography and large-scale simulation, where assumptions of unbounded resources lead to theoretical vulnerabilities unresolvable in finite implementations. By "domesticating" infinity—confining it to verifiable, bounded realities—ultrafinitism fosters more reliable frameworks for these fields. While it remains a minority position in mainstream mathematics, recent discussions, including a 2025 conference, have explored its links to physics and philosophy.

References

  1. "A Very Short History of Ultrafinitism" (PDF). Mathematics Department.
  2. "Ultrafinitist Foundations" (PDF). DiVA portal.
  3. "Model Theory of Ultrafinitism I" (PDF). arXiv, November 21, 2006.
  4. "Philosophy of Mathematics." September 25, 2007.
  5. "The Infinite." Internet Encyclopedia of Philosophy.
  6. "Leopold Kronecker (1823–1891) – Biography." MacTutor History of Mathematics Archive.
  7. van Dantzig, David (1955). "Is 10^10^10 a Finite Number?" Dialectica 9(35/36), pp. 272–277. DOI: 10.1111/j.1746-8361.1955.tb01332.x.
  8. "Model Theory of Ultrafinitism II" (PDF). arXiv, November 26, 2023.
  9. Hamkins, Joel David. "A Potentialist Conception of Ultrafinitism" (PDF). April 13, 2025.
  10. "Is there any formal foundation to ultrafinitism?" MathOverflow, October 30, 2010.
  11. Parikh, Rohit. "Existence and Feasibility in Arithmetic." The Journal of Symbolic Logic.
  12. "Zeilberger, Doron." Rutgers Mathematics Department faculty page.
  13. Doron Zeilberger's homepage.
  14. "Opinions of Doron Zeilberger" (Opinion 51, on non-computer proofs of the Four-Color Theorem). Mathematics Department.
  15. "Doron Zeilberger's 39th Opinion." September 10, 1999.
  16. "Doron Zeilberger's 94th Opinion." January 4, 2009.
  17. Zeilberger, Doron. "'Real' Analysis Is a Degenerate Case of Discrete Analysis" (PDF).
  18. "Ep. 97 – Math Heresy: Ultrafinitism | Dr. Doron Zeilberger." YouTube, April 21, 2019.
  19. "Doron Zeilberger's 108th Opinion."
  20. Nelson, Edward. "Warning Signs of a Possible Collapse of Contemporary Mathematics" (PDF).
  21. "Hilbert's 10th Problem." MIT Press, October 13, 1993.
  22. "Univalent Foundations Project" (PDF). Institute for Advanced Study, October 1, 2010.
  23. "The Origins and Motivations of Univalent Foundations." Institute for Advanced Study.
  24. "Strict Finitism's Unrequited Love for Computational Complexity" (PDF). 2021.
  25. Buss, Samuel R. "Bounded Arithmetic" (PDF). UCSD Mathematics.
  26. "Why mathematicians want to destroy infinity – and may succeed." August 4, 2025.
  27. "Intuitionism in the Philosophy of Mathematics." September 4, 2008.