Resolution of singularities is a fundamental technique in algebraic geometry that addresses the challenge of transforming a singular algebraic variety into a smooth (non-singular) one through a proper birational morphism, preserving the variety's essential geometric properties while eliminating singularities. This process typically involves a sequence of blow-ups along smooth subvarieties, introducing exceptional divisors that replace singular points with projective spaces, thereby yielding a resolution that is birationally equivalent to the original variety.[1]

The origins of resolution of singularities trace back to the work of Isaac Newton and Bernhard Riemann in the 17th and 19th centuries, respectively, who grappled with desingularizing curves and surfaces, but systematic progress began with Oscar Zariski's foundational contributions on surfaces in the early 20th century. A major breakthrough came in 1964 when Heisuke Hironaka proved the existence of resolutions for algebraic varieties of any dimension over fields of characteristic zero, using iterative blow-ups guided by local invariants like multiplicity and hypersurfaces of maximal contact.[1][2] In positive characteristic, resolutions have been known for curves and surfaces since Shreeram Abhyankar's work in 1956, and for threefolds via results of Abhyankar (1966) and later Cossart-Piltant (2008), but the general case remains an open problem in higher dimensions due to obstacles like the absence of maximal contact hypersurfaces.[1]

This resolution process is indispensable in algebraic geometry, as it enables the application of powerful tools from smooth manifold theory—such as cohomology and intersection theory—to singular varieties by pulling back structures along the birational map. It also underpins advancements in commutative algebra, number theory, and deformation theory, with algorithmic approaches like the Idealistic Filtration Program providing constructive methods even in challenging characteristics.[1]
Fundamentals
Definition of singularities
An algebraic variety over a field k is a geometric object defined by the zero locus of a collection of polynomials. Specifically, an affine variety is an irreducible algebraic set in affine space \mathbb{A}^n_k, where irreducibility means the defining ideal I(V) in the polynomial ring k[x_1, \dots, x_n] is prime.[3] A projective variety is similarly an irreducible algebraic set in projective space \mathbb{P}^n_k, defined by homogeneous polynomials, ensuring the structure is compact and well-behaved under scaling.[3]

A point on an algebraic variety is regular (or smooth) if the dimension of the tangent space at that point equals the dimension of the variety at the point, allowing local parameterization by regular coordinates.[4] Conversely, a point is singular if the tangent space dimension exceeds the dimension of the variety at the point, indicating a failure of smoothness.[4] For hypersurfaces defined by a single equation f = 0, the Jacobian criterion identifies singular points as those points of the hypersurface where all first partial derivatives \partial f / \partial x_i vanish; more generally, the rank of the Jacobian matrix of the defining equations must drop below the expected value.[4] In classical algebraic geometry, such singularities obstruct local parameterization, as they prevent the implicit function theorem from applying to express the variety locally as a graph over a smooth subspace.[1]

Simple examples illustrate these concepts on curves and surfaces. On curves, a node (ordinary double point) occurs at the origin of y^2 = x^2(x + 1), where two branches cross transversally with distinct tangents.[5] A cusp appears at the origin of y^2 = x^3, where the curve touches itself with a single tangent but higher-order contact, failing to separate locally.[6] On surfaces, an ordinary double point is exemplified by the quadric hypersurface \sum_{i=1}^{3} x_i^2 = 0 at the origin, where the quadratic form is non-degenerate but the first derivatives vanish.[5]

The multiplicity of a singularity at a point p on a hypersurface defined by f = 0 quantifies its severity as the minimal integer k such that some partial derivative of f of order k does not vanish at p, while all partial derivatives of lower order do vanish; formally, \mathrm{mult}_p(f) = \min\{\, k \mid \text{some partial derivative of } f \text{ of order } k \text{ does not vanish at } p \,\}.[7] This measures the order of contact with the tangent space and motivates resolution to reduce multiplicities iteratively.[1]
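The Jacobian criterion and the multiplicity can be checked mechanically. The following is a minimal sketch in Python using the SymPy library (the helper names are ad hoc, and computing the multiplicity through iterated derivatives assumes characteristic zero and a polynomial vanishing at the point), applied to the node, cusp, and quadric cone above.

```python
# Sketch: Jacobian criterion and multiplicity for hypersurface singularities,
# illustrated on the node, cusp, and quadric cone discussed above (SymPy).
import sympy as sp
from itertools import combinations_with_replacement

x, y, z = sp.symbols('x y z')

def is_singular_at(f, variables, point):
    """Jacobian criterion: a point of f = 0 is singular iff all first partials vanish there."""
    subs = dict(zip(variables, point))
    vals = [f.subs(subs)] + [sp.diff(f, v).subs(subs) for v in variables]
    return all(sp.simplify(val) == 0 for val in vals)

def multiplicity(f, variables, point):
    """Smallest k such that some k-th order partial derivative of f is nonzero at the point
    (assumes f is a nonzero polynomial vanishing at the point, characteristic zero)."""
    subs = dict(zip(variables, point))
    k = 0
    while True:
        k += 1
        for combo in combinations_with_replacement(variables, k):
            d = f
            for v in combo:
                d = sp.diff(d, v)
            if sp.simplify(d.subs(subs)) != 0:
                return k

node = y**2 - x**2*(x + 1)   # node at the origin
cusp = y**2 - x**3           # cusp at the origin
cone = x**2 + y**2 + z**2    # ordinary double point of a surface

print(is_singular_at(node, (x, y), (0, 0)), multiplicity(node, (x, y), (0, 0)))          # True 2
print(is_singular_at(cusp, (x, y), (0, 0)), multiplicity(cusp, (x, y), (0, 0)))          # True 2
print(is_singular_at(cone, (x, y, z), (0, 0, 0)), multiplicity(cone, (x, y, z), (0, 0, 0)))  # True 2
```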
Resolution process
In algebraic geometry, a resolution of singularities for a variety X is a proper birational morphism \pi: Y \to X from a smooth variety Y to X such that \pi is an isomorphism over the regular locus X_{\mathrm{reg}} of X.[8] This morphism ensures that the preimage under \pi of the singular locus \operatorname{Sing}(X) is a closed subset of Y, often consisting of exceptional divisors, while preserving the birational equivalence between Y and X.[9]

Key properties of such a resolution include the existence of a birationally inverse rational map \rho: X \dashrightarrow Y, which is defined on a dense open subset and composes appropriately with \pi. The exceptional divisors are the irreducible components of \pi^{-1}(\operatorname{Sing}(X)), which are contracted by \pi onto the singular locus of X. The strict transform of a subvariety Z \subset X under \pi is the closure in Y of \pi^{-1}(Z \setminus \operatorname{Sing}(X)), providing a way to track how subschemes behave under the resolution. The primary goal is to obtain a smooth total space Y, even though this introduces an exceptional locus whose components record the singularities that were resolved.[10]

In the embedded setting, where X is a singular subvariety of a smooth variety, resolutions are distinguished as weak or strong. A weak resolution requires the strict transform of X to be smooth, while a strong resolution additionally demands that the exceptional divisors intersect the strict transform with simple normal crossings, ensuring a more refined geometric structure suitable for applications like intersection theory.[10]

A representative example occurs when resolving a nodal singularity on a curve, such as the curve defined by y^2 = x^3 + x^2 in the affine plane, which has a node at the origin. Blowing up the singular point replaces it with an exceptional \mathbb{P}^1, and the strict transform becomes a smooth curve isomorphic to the original away from the node, thus yielding a resolution.[9]
Basic operations: Normalization and blowing up
Normalization is a fundamental operation in algebraic geometry that addresses certain types of singularities by replacing a variety with its normal model. For an integral scheme X, the normalization \nu: X^\nu \to X is defined as the relative spectrum of the integral closure of the structure sheaf \mathcal{O}_X in the sheaf of total quotient rings, which for a reduced scheme coincides with the sheaf of meromorphic functions \mathcal{K}_X.[11] In the affine case, if X = \operatorname{Spec}(A) where A is an integral domain, the normalization is \operatorname{Spec}(A'), with A' the integral closure of A in its fraction field K(A).[12] This morphism is integral, birational, and universal among morphisms from normal schemes to X that are dominant on each irreducible component.[11]

Normalization reduces singularities to normal ones, where every local ring is normal (integrally closed in its fraction field), often simplifying the structure for further resolution steps.[12]

A representative example is the cuspidal curve defined by y^2 = x^3 in the affine plane over an algebraically closed field k of characteristic not 2 or 3. The coordinate ring is A = k[x, y]/(y^2 - x^3), which is not integrally closed since the element t = y/x in the fraction field satisfies the monic polynomial T^2 - x = 0 over A. The integral closure A' = k[t] is obtained by adjoining t, with the normalization map given by x \mapsto t^2, y \mapsto t^3, yielding an isomorphism to the affine line \mathbb{A}^1_k, which is normal and smooth.[13] This process separates the singular point at the origin into a smooth parametrization, illustrating how normalization resolves non-normal singularities while preserving the function field.[11]

Blowing up is another essential operation that modifies a variety by replacing a singular subvariety (the center) with its projectivized normal cone, thereby separating tangent directions and potentially reducing singularity complexity. For a scheme X and a closed subscheme defined by an ideal sheaf \mathcal{I}, the blow-up \pi: \operatorname{Bl}_\mathcal{I} X \to X is constructed as the relative Proj over X of the Rees algebra \bigoplus_{n \geq 0} \mathcal{I}^n, a graded \mathcal{O}_X-algebra.[14] In the affine setting, if X = \operatorname{Spec}(A) and \mathcal{I} corresponds to an ideal I \subset A, then \pi^{-1}(\operatorname{Spec}(A)) = \operatorname{Proj}_A(\bigoplus_{d \geq 0} I^d).[15] The blow-up enjoys a universal property: it is the terminal object among X-schemes Y \to X such that the scheme-theoretic inverse image of the center is an effective Cartier divisor.[14]

For a centered blow-up along a closed subscheme C \subset X with ideal sheaf \mathcal{I}_C, the morphism is \pi: \operatorname{Bl}_C X \to X, where \operatorname{Bl}_C X = \operatorname{Proj}_X(\bigoplus_{n \geq 0} \mathcal{I}_C^n).
The exceptional divisor E = \pi^{-1}(C) is an effective Cartier divisor on \operatorname{Bl}_C X, and the inverse image ideal sheaf satisfies \mathcal{I}_C \cdot \mathcal{O}_{\operatorname{Bl}_C X} = \mathcal{O}_{\operatorname{Bl}_C X}(-E) \cong \mathcal{O}_{\operatorname{Bl}_C X}(1), where \mathcal{O}(1) is the tautological line bundle on the Proj; hence [E] = -c_1(\mathcal{O}_{\operatorname{Bl}_C X}(1)) in the Chow or Picard group.[14] Blowing up reduces the multiplicity of singularities at points in C by replacing them with the exceptional divisor, which encodes directions in the normal space; for instance, at a point of multiplicity m > 1, the blow-up can decrease the multiplicity along the strict transforms.[15] It also separates intersecting branches of singular loci, aiding in the isolation of components for iterative resolution.[14]

These operations provide prerequisites for advanced resolution techniques, as the exceptional divisors arising from blow-ups correspond to discrete valuations on the function field of X. Specifically, each prime divisor in the exceptional locus corresponds to a divisorial valuation v_E, defined by the order of vanishing along E, which measures the "depth" of functions relative to the singularity and guides subsequent blow-ups in algorithms like Hironaka's.[14]
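The cusp normalization described above can be verified symbolically. The following is a small sketch with SymPy (illustrative only, with ad hoc variable names), checking that t = y/x satisfies a monic equation modulo the curve and that t \mapsto (t^2, t^3) parametrizes it.

```python
# Sketch: normalization of the cuspidal curve y^2 = x^3 via t = y/x (SymPy).
import sympy as sp

x, y, t = sp.symbols('x y t')
f = y**2 - x**3

# t = y/x is integral over A = k[x, y]/(f): it satisfies the monic equation T^2 - x = 0,
# because (y/x)^2 - x equals (y^2 - x^3)/x^2, which vanishes modulo f.
print(sp.cancel((y/x)**2 - x - (y**2 - x**3)/x**2))   # 0

# The normalization map A^1 -> C, t |-> (t^2, t^3), lands on the curve:
print(sp.expand(f.subs({x: t**2, y: t**3})))          # 0

# The affine line with coordinate t is smooth, and x = t^2, y = t^3, t = y/x
# show that the two function fields agree, so the map is birational.
```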
Historical methods for low-dimensional varieties
Resolution of curve singularities
Resolution of curve singularities focuses on transforming singular algebraic curves into smooth ones through birational maps, primarily using techniques tailored to dimension one. These methods, developed in the classical era, exploit the low-dimensional structure of curves to parameterize branches, compute invariants like the genus, and iteratively simplify singularities via projections or blowups. Unlike higher-dimensional cases, curve resolution is fully achievable algebraically in any characteristic, relying on normalization and local parameterization rather than global minimal models.[16]

One of the earliest approaches is Newton's method from the 1670s, which parameterizes singular branches using Puiseux series expansions. For a plane curve defined by f(x, y) = 0, Newton showed that near a singular point the branches admit expansions of the form y = \sum a_k x^{k/n} for some integer n, allowing explicit computation of intersection multiplicities and separation of branches. This analytic tool resolves the singularity by providing a smooth parameterization, though it requires solving for coefficients iteratively via Newton's polygon. The method applies to both plane and space curves by projecting to a plane.[16]

In the 1850s, Riemann extended resolution to analytic curves using complex methods, leveraging the Riemann-Roch theorem to compute the genus of the desingularization. For a singular curve, Riemann constructs the associated Riemann surface by resolving branch points via linear systems on a smooth model, ensuring the map to the original curve is birational and proper. This yields the normalization; for a plane curve of degree d with only ordinary singularities, the genus of the normalization satisfies g = \frac{(d-1)(d-2)}{2} - \sum_p \frac{m_p(m_p - 1)}{2}, with m_p the multiplicity at the point p, a formula that can be derived by applying Riemann-Roch on the smooth model.[17]

Albanese's method from the 1920s provides a purely algebraic approach by working with a curve that spans a projective space of sufficiently large dimension and repeatedly projecting from its singular points. Each projection lowers the degree of the model, and general position arguments in the spirit of Bertini's theorem keep the new singularities under control, so the process terminates in a birational model whose remaining singularities are mild enough to be removed by normalization.[16]

Noether's approach in the 1870s uses successive blowups at singular points and their infinitely near successors to track and reduce singularity complexity. Starting with a plane curve, blow up the singular point P; the strict transform inherits singularities at points infinitely near P, ordered by proximity graphs. The multiplicity does not increase and eventually drops, so the process terminates after finitely many blowups, yielding an embedded resolution where the total transform is normal crossings. Proximity measures, like the number of blowups needed, quantify the singularity type.[16]

Bertini's method, developed in the early 1900s, builds on Noether's blowups but incorporates general position via quadratic transformations. For a singular plane curve, apply a degree-2 map \mathbb{P}^2 \dashrightarrow \mathbb{P}^2 to achieve transversality, transforming cusps or tacnodes into ordinary double points (nodes).
Iterating this with linear projections ensures the final model has only nodes, resolvable by normalization, and leverages Bertini's theorem to guarantee that the generic members of the linear systems used intersect transversally.[16]

A key result is that every algebraic curve admits a resolution in any characteristic, achievable in finitely many blowups; the number of blowups needed at a point is controlled by local invariants such as the delta invariant \delta_p, since each blowup at a point of multiplicity m decreases \delta by m(m-1)/2.[16]

A representative example is the cusp singularity given by y^2 = x^3 in the plane, with multiplicity 2 at the origin. The blowup at the origin, in the chart with coordinates x = u, y = u v, yields the strict transform equation (u v)^2 = u^3, or v^2 = u after simplification (dividing by u^2). This is a smooth parabola, resolving the singularity after one step, with the strict transform meeting the exceptional divisor at a single point (with intersection multiplicity two, along the cuspidal tangent direction).[16][18]
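The chart computation for the cusp can likewise be checked symbolically. The following SymPy sketch (illustrative, using the same chart coordinates as in the example above) confirms that the strict transform is v^2 - u and that it is smooth.

```python
# Sketch: one blowup chart for the cusp y^2 = x^3 (SymPy).
# In the chart x = u, y = u*v of the blowup of the origin, the total transform
# factors as u^2 times the strict transform, and the strict transform is smooth.
import sympy as sp

u, v = sp.symbols('u v')
total = (u*v)**2 - u**3              # total transform of y^2 - x^3 under x = u, y = u*v
strict = sp.cancel(total / u**2)     # divide out the exceptional factor u^2
print(strict)                        # v**2 - u

# Smoothness: the partial derivatives (-1, 2v) never vanish simultaneously.
print([sp.diff(strict, w) for w in (u, v)])

# The exceptional divisor is u = 0; it meets v^2 = u only at u = v = 0, where the
# two curves share the tangent line u = 0 (intersection multiplicity 2), reflecting
# the cuspidal tangent direction.
```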
Resolution of surface singularities
Resolution of surface singularities, a cornerstone of algebraic geometry in dimension two, involves transforming a singular surface into a smooth one via a birational morphism, typically achieved through a sequence of blow-ups at singular points or curves. Unlike curve singularities, which can often be resolved by parameterization techniques developed in the nineteenth century, surface resolution requires handling intersections and exceptional loci, leading to the emergence of intersection theory and graph-theoretic descriptions of the process. Seminal early methods focused on achieving normal crossings decompositions, where the total inverse image of a singularity consists of smooth components meeting transversally.

One of the foundational approaches is Zariski's method from the 1930s, which iteratively applies normalization to eliminate non-normal points and blow-ups to separate components, guided by the fundamental cycle—a minimal positive cycle in the exceptional divisor that captures the multiplicity structure at the singularity. This process decomposes the singularity into normal crossings, ensuring the strict transform and exceptional divisors intersect properly. Zariski demonstrated that this yields a resolution for algebraic surfaces over fields of characteristic zero, with the fundamental cycle providing a computable invariant for tracking progress.[19]

Heinrich Jung's method, introduced in 1908, targets rational singularities—those where the geometric genus remains unchanged upon resolution—and employs continued fraction expansions to construct resolution graphs that encode the configuration of exceptional curves. For cyclic quotient singularities, typical of rational types, the Hirzebruch-Jung continued fraction expansion attached to the quotient data determines the chain of exceptional divisors, each a rational curve with prescribed self-intersection, facilitating a minimal resolution. This approach prefigures modern toric resolutions and highlights the combinatorial nature of surface desingularization.[20]

The Albanese method, originally for curves but adapted for surfaces in the 1920s, relies on birational projections from singular loci to lower-dimensional spaces, iteratively reducing the embedding dimension until smoothness is achieved. For surfaces, this involves projecting from exceptional curves to resolve embedded singularities, though it often requires supplementation with normalization to handle non-invertible sheaves. This geometric projection technique underscores the role of linear systems in birational geometry for dimension two.[17]

In positive characteristic, Abhyankar's method from the 1950s and 1960s addresses challenges like wild ramification by combining valuation theory with ramified covers, proving local uniformization for surfaces over fields of characteristic p > 0. By resolving the base and lifting ramifications carefully, even wild ones where the ramification index is divisible by p, Abhyankar constructs a resolution morphism that is an isomorphism away from the singular locus, extending Zariski's framework to mixed characteristics.[21][22]

Joseph Lipman's contributions in the 1960s introduced partial resolutions, particularly for quotient singularities arising from finite group actions, where a common resolution exists for the singularity and its deformations.
Lipman's strategy involves computing the conductor ideal and using contraction criteria to contract exceptional curves, yielding a minimal model; for quotient surfaces, this often results in graphs embeddable in the resolution of the orbifold. This method not only resolves but also preserves key invariants like the dualizing sheaf.[23]

Central to these methods are resolution graphs, which depict the dual graph of the exceptional curves in the minimal resolution: vertices represent irreducible components (typically \mathbb{P}^1's), edges indicate intersections, and labels denote self-intersection numbers. For rational double points (ADE singularities), these graphs coincide with the Dynkin diagrams of types A_n, D_n, and E_6, E_7, E_8, featuring chains or trees of curves with self-intersection -2, reflecting the Kleinian singularity structure arising from quotients by finite subgroups of \mathrm{SL}(2, \mathbb{C}).[24]

A key theorem states that every surface singularity admits a resolution in finitely many blow-ups, and among all resolutions, the minimal one—lacking exceptional curves of the first kind (rational curves with self-intersection -1)—is unique up to isomorphism over the singular point. This minimality ensures a canonical model for studying invariants like the Todd genus or arithmetic genus.[25]

As an illustrative example, consider the A_n singularity, the quotient \mathbb{C}^2 / \mu_{n+1} where \mu_{n+1} acts by (x,y) \mapsto (\epsilon x, \epsilon^{-1} y) with \epsilon a primitive (n+1)-th root of unity. Its minimal resolution consists of a chain of n exceptional \mathbb{P}^1's, each with self-intersection -2, intersecting sequentially; the dual graph is the A_n Dynkin diagram, and the resolution map contracts this chain to the origin.[20]
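Jung's continued-fraction description of cyclic quotient singularities can be made concrete. The following sketch computes the Hirzebruch-Jung expansion of r/a, whose entries are the negatives of the self-intersection numbers of the exceptional chain for a cyclic quotient singularity of type (1/r)(1, a); the function name is ad hoc, and the convention for the quotient type is the one matching the A_n example above.

```python
# Sketch: Hirzebruch-Jung continued fraction r/a = b1 - 1/(b2 - 1/(...)).
# Its entries b_i are the negatives of the self-intersection numbers of the
# exceptional curves in the minimal resolution of the cyclic quotient
# singularity of type (1/r)(1, a), with gcd(r, a) = 1 and 0 < a < r.
def hirzebruch_jung(r, a):
    entries = []
    while a > 0:
        b = -(-r // a)           # ceiling of r/a
        entries.append(b)
        r, a = a, b * a - r
    return entries

# A_n singularity = quotient of type (1/(n+1))(1, n): a chain of n (-2)-curves.
for n in (1, 2, 3, 4):
    print(n, hirzebruch_jung(n + 1, n))   # [2], [2, 2], [2, 2, 2], [2, 2, 2, 2]

# A non-A_n example: the quotient of type (1/7)(1, 3).
print(hirzebruch_jung(7, 3))              # [3, 2, 2]
```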
General resolution in higher dimensions
Approaches for threefolds and beyond
In the mid-1940s, Oscar Zariski extended his earlier methods for resolving singularities of curves and surfaces to threefolds over fields of characteristic zero, employing embedded resolution techniques that embed the singular variety into a smooth ambient space and resolve via successive blowups along subvarieties.[1] His approach relied on local uniformization, ensuring that singularities could be resolved locally around each point, and culminated in his 1944 proof of resolution for algebraic three-dimensional varieties in characteristic zero; the method's heavy reliance on valuation theory and case-by-case multiplicity computations, however, highlighted the difficulty of handling the complex interactions of singular loci and did not yield a uniform algorithm extendable to higher dimensions.

Building on these foundations, Shreeram Abhyankar developed specialized methods in the 1970s for resolving singularities of toric and quasi-homogeneous threefolds, utilizing toric blowups that preserve the combinatorial structure of toric varieties.[1] For toric threefolds, his techniques involved resolving singularities by blowing up along toric ideals, leading to a smooth toric model after finitely many steps, as toric varieties admit explicit fans that guide the process. In the quasi-homogeneous case, Abhyankar extended hypersurface maximal contact methods to threefolds, achieving resolution for singularities invariant under weighted homogeneous scalings, though these were restricted to specific classes and did not generalize to arbitrary threefolds.

Heisuke Hironaka's early work in the 1960s, particularly his 1960 Harvard thesis, introduced initial blowup strategies for threefolds that emphasized birational transformations and the control of exceptional loci, laying groundwork for global resolution.[26] These attempts incorporated concepts like order functions and approximate manifolds to track singularity strata, with preliminary ideas on log discrepancies emerging to measure the "severity" of exceptional divisors relative to the canonical divisor during blowups.[1] Hironaka's strategies for threefolds involved iterative blowups along nonsingular centers chosen to reduce invariants like multiplicity, but they revealed the need for more refined tools to ensure termination in higher dimensions.

Approaches for threefolds and beyond face escalating challenges, including the explosion of exceptional divisors that can proliferate uncontrollably with each blowup, complicating the geometry and computation in dimensions greater than two.[1] Non-uniqueness of resolution paths arises, as different sequences of blowups may yield non-isomorphic smooth models, and computational demands grow exponentially with dimension due to the intricate intersections of singular strata.[1] Prior to Hironaka's general theorem, no uniform bound existed on the number of blowup steps required for resolution, with threefolds necessitating handling of pinch-point singularities like Whitney umbrellas and more severe types that resist simple normalization.[1]

A classic example of the failure of naive blowup is the pinch point of the Whitney umbrella, the hypersurface in \mathbb{A}^3 defined by x^2 - y^2 z = 0, whose singular locus is the z-axis.[1] Blowing up the origin introduces an exceptional divisor but does not improve the singularity: in a suitable chart the strict transform is again a Whitney umbrella, so the pinch point reappears and the order of the singularity does not decrease; resolving it requires a further, targeted blowup along the singular line rather than at the point.[1]
Surface resolution methods, such as those for rational double points, serve as building blocks here but lose finiteness guarantees in threefolds, shifting focus to algorithmic control of exceptional configurations.[1]
Resolution for schemes
In the scheme-theoretic framework, resolution of singularities extends the classical notion from varieties to more general objects, accommodating non-reduced structures and potentially infinite-type or non-Noetherian settings. A resolution of singularities for a scheme X is defined as a proper birational morphism \pi: Y \to X where Y is a regular scheme. A scheme is regular if every local ring \mathcal{O}_{Y,y} at a point y \in Y is a regular local ring, meaning the minimal number of generators of its maximal ideal equals the Krull dimension of the ring. Over a perfect field regularity agrees with smoothness, but in general regularity is the intrinsic local condition; in particular, a regular scheme is automatically reduced, since regular local rings are integral domains.[27]

The existence of resolutions is established for excellent schemes in characteristic zero, building on Hironaka's foundational work for varieties, which was later extended to quasi-excellent Noetherian schemes, confirming a conjecture of Grothendieck from EGA IV, via inductive arguments on dimension and embedding into smooth schemes. Excellent schemes, characterized by properties like finite generation of integral closures and regular formal fibers, form a broad class including those of finite type over fields or DVRs. However, the problem remains open for general schemes in positive characteristic, with partial results available for schemes of finite type over perfect fields, such as resolutions in dimensions up to three or for specific classes like surfaces. In mixed characteristic, further restrictions apply, often requiring quasi-excellence or local complete intersections.[28]

De Jong's approach in the 1990s provides a weaker but more accessible alternative using alterations: proper, surjective, generically finite morphisms of finite type from a regular scheme, which exist in any characteristic for reduced schemes of finite type over fields and serve as proxies for full resolutions in arithmetic applications like cohomology computations. This method constructs alterations by fibering the variety in curves, applying semistable reduction, and passing to quotients by finite group actions, bypassing direct blowups and enabling applications beyond strict resolution. Complementing this, the valuation-theoretic approach employs valuation rings to achieve local uniformization, a process that resolves singularities locally along a given valuation, often combined with ramification theory to control orders of ideals. This technique is particularly useful in positive characteristic, where it informs inductive strategies on valuation ranks but has not yet yielded a full global resolution for higher dimensions.[29][17]

A key consequence is that resolution implies weak normalization: since regular schemes are normal (regular local rings are unique factorization domains, hence integrally closed), a birational regular model weakly normalizes the original scheme by separating components in the total ring of fractions. However, counterexamples arise in infinite-dimensional or non-Noetherian settings, such as certain infinite unions of curves where no proper birational regular model exists due to ascending chains of primes preventing finite presentation or uniform control of singularities. For an illustrative example of handling non-reduced structures, consider the scheme X = \operatorname{Spec} k[\epsilon]/(\epsilon^2) over a field k, a 0-dimensional non-reduced point.
Blowing up X along its maximal ideal (\epsilon) does not help here: since (\epsilon)^2 = 0, the Rees algebra has no nonzero pieces in degrees two and higher, and the resulting Proj is empty. More fundamentally, a regular scheme is necessarily reduced, so no proper birational morphism from a regular scheme can remember the infinitesimal structure of X; in practice one passes to the reduction X_{\mathrm{red}} = \operatorname{Spec} k, which is already regular. This illustrates why resolution statements for general schemes are usually formulated for reduced (and excellent) schemes, with nilpotent structure treated separately.[14]
Proofs and techniques in characteristic zero
Hironaka's theorem
Hironaka's theorem states that every reduced excellent scheme of finite type over a field of characteristic zero admits a resolution of singularities, meaning there exists a proper birational morphism from a regular scheme to the given scheme such that the preimage of the singular locus is a normal crossings divisor. The hypotheses require the scheme to be reduced (no nilpotent elements in the structure sheaf), of finite type over the base field (ensuring quasi-compactness and separatedness), and excellent, meaning it is Noetherian, universally catenary, its formal fibers are geometrically regular, and the regular locus is open in every finite-type algebra over it (a condition automatic for schemes of finite type over a field). These conditions ensure the scheme behaves well under localization and completion, allowing the resolution process to be controlled effectively.

In 1964, Heisuke Hironaka proved this theorem, resolving Oscar Zariski's long-standing conjecture that singularities of algebraic varieties over fields of characteristic zero could always be resolved by a finite sequence of blow-ups. The full proof appeared in two parts in the Annals of Mathematics, marking a landmark achievement in algebraic geometry after decades of partial results for low dimensions.

The theorem has profound implications for several areas of mathematics. It enables the computation of étale cohomology groups by replacing singular varieties with smooth models, facilitating the study of topological invariants in algebraic geometry. In motivic integration, it allows the definition of measures on singular spaces via pullback to resolved varieties, with applications to counting rational points and volumes in arithmetic geometry. Furthermore, it forms the foundation of birational geometry, enabling classifications of varieties up to birational equivalence and the study of minimal models.

Initially, Hironaka's result applied to projective varieties over characteristic zero fields, providing a non-embedded resolution. It was soon extended to quasi-projective varieties by covering them with affine open sets and patching local resolutions. Embedded resolution, where the ambient space is also resolved, was incorporated in subsequent refinements of the theorem.
Key ideas in the proof
Hironaka's proof proceeds by induction on the dimension of the variety, reducing the general case to the resolution of hypersurface singularities through a series of permissible blowups along smooth centers chosen to control the singularity invariants.[30] In each step, the centers are selected inside the stratum where the order of the ideal is maximal, using contact loci with hypersurfaces of maximal contact; permissible blowups along such centers do not increase the order, ensuring that the blowup separates components of different multiplicities without introducing new singularities worse than the original.[31] This stratification leverages local coordinates adapted to the singularity, often involving equiconstant points, where invariants such as the order or the Hilbert-Samuel function remain equal to their previous maximal value after blowup, to identify non-singular subvarieties that intersect the singular locus transversely.[30]

A central tool is the principalization of ideals, where successive blowups transform a given ideal sheaf into an invertible sheaf on the blowup, achieved by making its total transform a principal monomial ideal times the weak transform.[32] This process simplifies the equations defining the variety, allowing embedded resolution in ambient smooth space, and is facilitated in characteristic zero by the commutativity properties of coefficient ideals under blowup.[30]

In logarithmic refinements of the theorem, log discrepancies measure progress toward a log resolution: for an exceptional divisor E over a pair (X, D), one sets a(E, X, D) = k_E + 1 - \mathrm{val}_E(D), where k_E is the coefficient of E in the relative canonical divisor and \mathrm{val}_E(D) is the multiplicity of the divisor D along E. These discrepancies ensure that the blowups produce a log terminal pair after finitely many steps, with the minimal discrepancy guiding the choice of centers to increase overall log smoothness.[32]

Restriction to a hypersurface of maximal contact, implemented through coefficient ideals, drives the induction: the resolution problem for the ambient ideal descends to an equivalent problem on the hypersurface, one dimension lower, and the solution there lifts back to the ambient space.[30]

The algorithm terminates because each blowup strictly decreases a local invariant, such as the multiplicity or the Hilbert-Samuel function refined by secondary invariants, compared in a well-founded (for example lexicographic) order. This decrease is guaranteed for permissible centers in characteristic zero, preventing infinite loops and ensuring that a finite sequence of blowups suffices.[31]
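As a small worked illustration of the discrepancy bookkeeping (a standard textbook computation, included here as a sketch rather than as a step of Hironaka's original argument), consider the blowup \pi: Y \to X of a point p on a smooth surface X at which the divisor D has multiplicity m, with exceptional curve E:

```latex
\begin{align*}
K_Y &= \pi^* K_X + E \quad\Longrightarrow\quad k_E = 1, \\
\pi^* D &= \widetilde{D} + m\,E \quad\Longrightarrow\quad \mathrm{val}_E(D) = m, \\
a(E, X, D) &= k_E + 1 - \mathrm{val}_E(D) = 2 - m .
\end{align*}
```

Thus the log discrepancy of the exceptional curve drops as the multiplicity of D at the chosen center grows, which is how the invariant registers whether a given blowup improves the pair.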
Status in positive and mixed characteristic
Challenges and partial results
In positive and mixed characteristic, the resolution of singularities faces significant challenges due to the rigid behavior of tangent bundles, which limits the flexibility in selecting blowup centers that effectively reduce singularity complexity, unlike the more adaptable situation in characteristic zero. Inseparability of field extensions and morphisms introduces wild singularities where standard tangent cone analysis fails, as polynomials modulo p-th powers do not behave as expected. Additionally, wild ramification in extensions leads to "kangaroo points" and oblique polynomials that cause resolution invariants to increase rather than decrease under blowups, obstructing inductive proofs.[33][34]

Partial results exist for low-dimensional cases. Resolution of singularities for curves holds in any characteristic, achieved through normalization followed by blowups at singular points using local invariants like order and multiplicity. For surfaces, Lipman established in the 1970s a non-embedded resolution for arbitrary excellent two-dimensional schemes, in particular surfaces in positive and mixed characteristic, employing cohomology, duality, and reduction to rational singularities via normalization and successive blowups. In 2024, Hauser and Perlega provided a systematic proof for the embedded resolution of two-dimensional hypersurface singularities over fields of arbitrary characteristic.[35] Toric varieties admit resolutions in any characteristic through combinatorial toric blowups, which refine the fan without relying on field-specific properties.[25]

However, higher dimensions reveal obstructions. Moh's example from the 1970s provides a threefold singularity in characteristic p, defined by hybrid polynomials such as f = x^2 + y^7 + t y^4 z^2 + t^2 y z^4 in characteristic 2, where the resolution invariant increases under blowup (for example from (2,2) to (2,3)), preventing the multiplicity or order from decreasing and breaking induction on dimension.[34] Local uniformization—a weaker condition requiring only that local rings become regular after birational extension—has been proven for curves and surfaces in any characteristic but remains open in full generality in higher dimensions. In characteristic p, particular blowup procedures may fail to terminate after finitely many steps, as illustrated by examples in which the order repeatedly climbs back to its previous value, such as those involving repeated inseparable extensions.

A notable pathology occurs with supersingular elliptic curve singularities, where uniform resolution across families fails due to the inseparable nature of the Frobenius, preventing a single sequence of blowups from resolving all fibers simultaneously in positive characteristic.
Recent advances
In the 2020s, significant progress has been made in positive and mixed characteristic using advanced techniques. Algorithmic and machine learning applications have emerged as a recent frontier: Bérczi et al. (2023) applied reinforcement learning to the Hironaka game, training agents to identify optimal blowup centers for resolving singularities, achieving efficient resolutions for high-dimensional examples that outperform traditional heuristics.[36] As of November 2025, resolution of singularities is fully established for schemes in characteristic zero, with effective algorithms available; in positive characteristic, results remain partial, including complete resolutions for threefolds (proved for characteristic p > 5 by Abhyankar and extended to arbitrary characteristic by Cossart and Piltant). The ongoing SINGULAR2025 project explores connections between singularity resolution and cryptography, leveraging partial results in positive characteristic for secure computational schemes.[37]

In July 2025, Yi Hu announced a preprint claiming a universal characteristic-free resolution of singularities for singular integral affine varieties of finite presentation over perfect fields defined over \mathbb{Z}, via a smooth morphism to a smooth scheme followed by a projective birational morphism.[38]

A notable example is the resolution of toric singularities in positive characteristic using weighted fans, where subdividing the fan with appropriate weights yields a smooth toric variety birational to the original, bypassing characteristic-specific obstructions through combinatorial adjustments.[39]
Examples and pathologies
Illustrative resolutions
One simple example of resolution by blowup occurs for the plane nodal cubic curve defined by the equation y^2 = x^2(x + 1) in \mathbb{A}^2, which has a node (ordinary double point) at the origin where the two branches cross transversely with distinct tangent lines y = x and y = -x. Blowing up \mathbb{A}^2 at the origin replaces the point with the exceptional divisor \mathbb{P}^1, parametrized by the projectivized tangent directions. In the chart with coordinates (x, v) where y = v x, the total inverse image of the curve is given by x^2 v^2 = x^2 (x + 1), or v^2 = x + 1 after canceling x^2 (valid away from x = 0); the strict transform is thus the smooth curve v^2 - x - 1 = 0. In the other chart (u, y) where x = u y, the equation becomes y^2 = u^2 y^2 (u y + 1), simplifying to 1 = u^2 (u y + 1), which has no solution at the origin of this chart. The strict transform intersects the exceptional divisor \mathbb{P}^1 at two distinct points corresponding to the slopes v = \pm 1, confirming that the blowup separates the branches and resolves the singularity, with the exceptional divisor being a \mathbb{P}^1 of self-intersection -1.

A similar process resolves the cusp singularity on the plane curve y^2 = x^3 at the origin, where the curve has a single branch with tangent line y = 0. Performing the blowup at the origin, consider the chart with coordinates (x, u) where y = x u; substituting yields (x u)^2 = x^3, or x^2 u^2 = x^3, which simplifies to u^2 = x on the strict transform after canceling x^2. The equation u^2 - x = 0 defines a smooth curve in this chart, as the partial derivatives (-1, 2u) do not vanish simultaneously at any point. The strict transform intersects the exceptional divisor \mathbb{P}^1 at a single point with multiplicity 2, corresponding to the cuspidal tangent direction. The normalization map, parametrized by x = t^2, u = t (hence y = t^3), shows that the resolved curve is isomorphic to \mathbb{A}^1, fully desingularizing the cusp in one blowup.

For the Whitney umbrella surface defined by x^2 = y^2 z in \mathbb{A}^3, the singular locus is the entire z-axis, where the surface pinches. Resolution requires blowing up along this singular locus, with center the ideal (x, y). The blowup is covered by affine patches of \mathbb{A}^3 \times \mathbb{P}^1; in the patch where the first homogeneous coordinate is 1 (coordinates y, z, u with x = y u), the inverse image is y^2 u^2 = y^2 z, or y^2 (u^2 - z) = 0. The component y = 0 is the exceptional divisor (isomorphic to \mathbb{A}^1 \times \mathbb{P}^1 over the z-axis), while the strict transform is u^2 = z, a smooth surface. In the other patch (coordinates x, z, t with y = x t), the inverse image is x^2 = x^2 t^2 z, or x^2 (1 - t^2 z) = 0, with exceptional divisor x = 0 and strict transform t^2 z = 1, again smooth. The strict transform meets the exceptional divisor transversely along a smooth curve, achieving resolution in a single blowup with the exceptional set being a ruled surface.

In the toric category, the A_1 surface singularity, realized as the affine toric variety \mathrm{Spec}\, k[u, v, w]/(u v - w^2) (the quotient \mathbb{C}^2 / \mathbb{Z}_2 embedded in \mathbb{A}^3), is resolved by a single toric blowup corresponding to subdividing the fan by adding the ray generated by (1,1) in the lattice \mathbb{Z}^2. The original fan consists of the single cone generated by (1,0) and (1,2).
In coordinates, blowing up the origin (with center the maximal ideal), in the chart u = u, v = u s, w = u t, turns u v - w^2 into u^2 s - u^2 t^2 = u^2 (s - t^2), so after dividing out the exceptional factor u^2 the strict transform is the smooth surface s = t^2 in this chart. The exceptional divisor in the resolved surface is a \mathbb{P}^1 with self-intersection -2, and the resolved variety is the total space of \mathcal{O}_{\mathbb{P}^1}(-2).
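The chart computations above for the Whitney umbrella and the A_1 cone reduce to factoring out the exceptional equation; the following is a short SymPy sketch (illustrative, reusing the chart coordinates from the text).

```python
# Sketch: chart computations for the Whitney umbrella and the A_1 cone (SymPy),
# verifying the strict transforms described above.
import sympy as sp

# Whitney umbrella x^2 = y^2 z, blown up along the singular line (ideal (x, y)),
# in the chart x = y*u.
y, z, u = sp.symbols('y z u')
total = (y*u)**2 - y**2*z
strict = sp.cancel(total / y**2)     # exceptional factor y^2 divided out
print(strict)                        # u**2 - z, a smooth surface

# A_1 cone uv = w^2, blown up at the origin; chart v = U*s, w = U*t with u = U.
U, s, t = sp.symbols('U s t')
total_A1 = U*(U*s) - (U*t)**2
strict_A1 = sp.cancel(total_A1 / U**2)
print(strict_A1)                     # s - t**2, smooth in this chart
```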
Counterexamples to naive approaches
Naive strategies for resolving singularities, such as iteratively blowing up points of maximal multiplicity or singular loci without additional considerations, often fail to produce a resolution and can even exacerbate the problem. These approaches overlook the global and historical dependencies in the resolution process, leading to situations where singularities persist, worsen, or require non-local decisions. The following examples illustrate key pathologies that necessitate more sophisticated, memory-dependent algorithms.

A fundamental issue is that blowups do not always decrease the multiplicity of singularities, contrary to the expectation that they "unfold" the local structure. For instance, consider the plane curve singularity defined by f = x^2 + y^5 = 0 at the origin, which has multiplicity 2. Blowing up the origin along the maximal ideal (x, y) results in charts where the strict transform is given by f' = u^2 + y^3 = 0 (in the chart with coordinates (u, y) where x = u y), which still has multiplicity 2 at the origin of this chart. This temporary stagnation shows that a purely multiplicity-driven blowup strategy may stall, requiring further blowups before the multiplicity eventually decreases; a symbolic check of this example appears after the list of pathologies below.

Blowing up the most singular points or loci need not make progress, and different admissible choices of center can lead to genuinely different outcomes. Consider the threefold X \subset \mathbb{A}^4 defined by x y - z w = 0, the cone over a smooth quadric surface, which is singular only at the origin but contains the two planes P_1: x = z = 0 and P_2: y = w = 0. Blowing up the origin resolves X with an exceptional divisor isomorphic to \mathbb{P}^1 \times \mathbb{P}^1, while blowing up either plane gives a "small" resolution whose exceptional set is only a curve; the two small resolutions are different (they are related by a flop), and none of the three is canonically preferred. A naive procedure that simply blows up "the singular locus" therefore faces choices it cannot make locally or symmetrically, illustrating how the selection of centers shapes the outcome rather than being forced by the singularity alone.

Incremental resolution procedures require "memory" of prior blowups because choices at one stage depend non-locally on the history of the process, preventing purely local or symmetric algorithms. In the hypersurface defined by f = x^3 + y z^2 w^4 + y z^4 w^2 = 0 in \mathbb{A}^4, the singularity at the origin has a symmetric structure suggesting blowups along coordinate axes. However, blowing up first along the w-axis and then the y-axis resolves it, while reversing the order creates a new singularity of higher multiplicity. A naive symmetric procedure ignoring this order fails, necessitating tracking of previous centers to make history-dependent, non-local decisions.

Resolutions of singularities are generally not functorial, meaning that compatible morphisms between varieties do not necessarily lift to compatible resolutions. A counterexample arises in the context of marked ideals and base change: for a variety X over a field k and an extension K/k, resolutions may not behave well under base change, particularly in positive characteristic, where relative multiplicity can increase under field extensions. This non-functoriality implies that resolutions cannot always be constructed compatibly under morphisms, complicating relative resolution problems.

In positive characteristic, canonical or minimal resolutions may fail to exist, and naive blowup procedures may fail to terminate.
For example, in characteristic p > 0, certain resolution procedures produce sequences of blowups along smooth centers in which the order of the singularity fails to drop and can climb back to its previous value, so the procedure never terminates; the well-founded decrease of invariants available in characteristic zero has no direct analogue. This pathology underscores the failure of naive inductive strategies relying on finite termination.

Even for toric varieties, whose singularities are combinatorially described by fans, naive refinement is not canonical. The cone over the quadric surface x y = z w, for instance, corresponds to a cone over a square and admits two different small toric resolutions, given by the two triangulations of the square, in addition to the toric blowup of the origin obtained by inserting the central ray; none of these subdivisions is preferred, and a procedure required to respect all symmetries of the fan cannot simply pick one arbitrarily. This illustrates that combinatorial refinements do resolve toric singularities, but they must be organized carefully when additional compatibilities are demanded.

Finally, resolution does not commute with products: resolving each factor of a product resolves the product, but a partial resolution that leaves one factor singular leaves the product singular as well. For instance, if C is the cuspidal curve y^2 = x^3 with cusp at the origin 0, then C \times C is singular along \{0\} \times C and C \times \{0\}, and resolving only one factor still leaves the product singular along the image of the other factor's cusp. This shows that naive product decompositions cannot guarantee resolution without addressing each component fully.
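The multiplicity-stagnation example mentioned above (x^2 + y^5 and its strict transforms) can be followed symbolically; the following is a minimal SymPy sketch, reusing chart coordinates across blowups for brevity.

```python
# Sketch (SymPy): the multiplicity need not drop under a single blowup.
# For f = x^2 + y^5, the chart x = v*y of the blowup at the origin has strict
# transform v^2 + y^3, still of multiplicity 2; only the next blowup drops it.
import sympy as sp

v, y = sp.symbols('v y')

def mult_at_origin(f, variables):
    """Lowest total degree of a monomial of f = order of vanishing at the origin."""
    poly = sp.Poly(sp.expand(f), *variables)
    return min(sum(m) for m in poly.monoms())

strict1 = sp.cancel(((v*y)**2 + y**5) / y**2)    # blowup of x^2 + y^5:    v**2 + y**3
strict2 = sp.cancel(((v*y)**2 + y**3) / y**2)    # blowup of v^2 + y^3:    v**2 + y
print(strict1, mult_at_origin(strict1, (v, y)))  # multiplicity still 2
print(strict2, mult_at_origin(strict2, (v, y)))  # multiplicity 1, now smooth
```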
Variants and extensions
Minimal and good resolutions
In algebraic geometry, a minimal resolution of a singular variety is a resolution of singularities through which every other resolution factors uniquely, characterized by the absence of exceptional curves that can be contracted, specifically no (-1)-curves in the exceptional locus for surfaces. For normal surface singularities over a field of characteristic zero, such a minimal resolution exists and is unique up to isomorphism; it is obtained by successively contracting any (-1)-curves (rational curves with self-intersection -1) present in a given resolution until none remain.[40] In higher dimensions, minimal resolutions do not always exist and are generally non-unique, as illustrated by the conifold singularity in dimension three, which admits multiple small resolutions that do not factor through a common minimal one.[41]

A good resolution, also known as a resolution with simple normal crossings (snc), is a proper birational morphism \pi: Y \to X from a smooth variety Y to the singular variety X such that the support of the exceptional divisor \pi^{-1}(\operatorname{Sing}(X)) is a simple normal crossings divisor, meaning locally it is defined by coordinate hyperplanes that intersect transversally. Such resolutions always exist in characteristic zero by further blowups along non-snc strata in any initial resolution, ensuring the exceptional locus has snc support without introducing unnecessary complexity. In positive characteristic, achieving snc may require additional techniques, but good resolutions remain attainable under certain conditions.

For log pairs (X, D), where X is a variety and D is an effective Weil divisor, a log resolution is a proper birational morphism \pi: Y \to X from a smooth Y such that the union of the strict transform of D and the exceptional locus has simple normal crossings support. On such a model one can write K_Y \equiv \pi^*(K_X + D) + \sum_E a(E, X, D)\, E, where the sum runs over prime divisors on Y; the coefficients a(E, X, D) (the discrepancies) govern the singularities of the pair, and, for example, the pair is Kawamata log terminal exactly when a(E, X, D) > -1 for every E.[42] Log resolutions are essential for studying singularities of pairs, as they allow computation of invariants like log canonical thresholds via these coefficients.

An illustrative example is the minimal resolution of the E_6 Du Val surface singularity, a rational double point given locally by the equation x^2 + y^3 + z^4 = 0 in \mathbb{C}^3; its exceptional locus consists of a tree of six smooth rational curves with self-intersection -2, configured in the E_6 Dynkin diagram (a chain of five curves with one branching off the third).[43] This resolution is minimal, as no (-1)-curves appear, and it is also good since the curves intersect transversally in simple normal crossings.[43]
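The E_6 configuration can be checked against Mumford's criterion that the intersection matrix of the exceptional curves of a resolution of a normal surface singularity is negative definite. The following is a small numerical sketch (using NumPy, with an ad hoc ordering of the curves).

```python
# Sketch: intersection matrix of the E_6 exceptional configuration
# (six (-2)-curves: a chain of five with a sixth attached to the middle one)
# and a check that it is negative definite, as Mumford's criterion requires
# for the exceptional set of a resolution of a normal surface singularity.
import numpy as np

n = 6
M = -2 * np.eye(n)                                  # each curve has self-intersection -2
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (2, 5)]    # E_6 Dynkin diagram: branch at the middle node
for i, j in edges:
    M[i, j] = M[j, i] = 1                           # transversal intersections contribute 1

eigenvalues = np.linalg.eigvalsh(M)
print(np.round(eigenvalues, 3))
print("negative definite:", bool(np.all(eigenvalues < 0)))   # True
```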
Applications to other areas
Resolution of singularities plays a foundational role in birational geometry, particularly in the minimal model program (MMP), where it enables the construction of flips and divisorial contractions by providing smooth models that facilitate the necessary birational transformations. In the MMP for complex projective varieties with klt singularities, resolution allows the program to proceed by resolving singularities along the way, ensuring that operations like flips preserve key invariants such as the canonical class. This reliance on resolution is evident in the birational classification of higher-dimensional varieties, where smooth intermediate models are indispensable for verifying termination and minimal model existence.[44]

In cohomology theory, resolution of singularities permits the computation of étale and Hodge cohomology groups for singular varieties by lifting to smooth proper models, where standard vanishing theorems and comparison isomorphisms apply. For a singular variety X over a field of characteristic zero, a resolution \tilde{X} \to X induces an isomorphism in étale cohomology with compact supports after accounting for the exceptional locus, allowing the transfer of computational tools from smooth settings. Similarly, in Hodge theory, the resolution provides a smooth model whose Hodge numbers inform those of the singular variety via pushforward and the Leray spectral sequence.[45]

In arithmetic geometry, resolution of singularities underpins semistable reduction theorems for varieties over p-adic fields, enabling the construction of models with controlled singularities that achieve good reduction after base change. For abelian varieties or curves over local fields, Hironaka's resolution in characteristic zero allows reduction to the smooth case, facilitating semistable models where the special fiber has normal crossings, crucial for studying Galois representations and monodromy. This application extends to rigid analytic geometry, where resolutions yield semistable compactifications essential for p-adic cohomology computations.[46]

Motivic integration leverages resolution of singularities to define and compute motivic volumes of singular schemes, in the theory introduced by Kontsevich, developed by Denef and Loeser, and later extended by Kontsevich and Soibelman to motivic Donaldson-Thomas invariants. By resolving singularities, one obtains a change-of-variables formula in the motivic setting, equating integrals over singular spaces to those over smooth resolutions weighted by exceptional divisors, which proves rationality and invariance under birational equivalence. This framework has illuminated connections between enumerative geometry and noncommutative algebra, particularly for Calabi-Yau threefolds.[47]

The existence of resolution of singularities underlies the weak factorization property for birational maps between smooth proper varieties over fields of characteristic zero: such maps can be decomposed into sequences of blow-ups and blow-downs along smooth centers. De Jong's alterations—proper, surjective, generically finite morphisms from regular varieties—provide a related substitute in settings where strong resolution is unavailable, such as positive characteristic.
This result, whose proof relies on toric methods and birational cobordisms, underpins much of modern birational geometry.[48]

A concrete illustration is the computation of the topological Euler characteristic for singular varieties: for an isolated surface singularity whose resolution has exceptional set a tree of r rational curves, \chi(X) = \chi(\tilde{X}) - r, enabling explicit calculations for quotient singularities or hypersurface singularities via known formulas for smooth resolutions. For instance, resolving an A_n surface singularity changes the Euler characteristic by the number n of exceptional curves, matching Milnor fiber computations in topology.[49]
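The Euler characteristic bookkeeping for an exceptional set that is a tree of rational curves is elementary and can be scripted; the following is a minimal sketch (illustrative only).

```python
# Sketch: Euler characteristic of an exceptional set that is a tree of r rational
# curves. Each P^1 contributes chi = 2, and each normal-crossing intersection point,
# counted twice in the disjoint union, is subtracted once; a tree on r curves has r - 1
# intersection points, so chi(E) = r + 1.
def euler_char_of_curve_tree(num_curves):
    num_intersections = num_curves - 1
    return 2 * num_curves - num_intersections   # = num_curves + 1

for r in (1, 2, 6):                             # A_1, A_2, E_6 configurations
    chi_E = euler_char_of_curve_tree(r)
    # Replacing the singular point (chi = 1) by E changes chi by chi_E - 1 = r,
    # recovering chi(X~) = chi(X) + r, i.e., chi(X) = chi(X~) - r.
    print(r, chi_E, "chi correction:", chi_E - 1)
```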