Potential theory
Potential theory is a branch of mathematical analysis that investigates the properties of harmonic functions and their generalizations, including subharmonic and superharmonic functions, which arise as solutions to Laplace's equation \Delta u = 0 or related inequalities.[1] It originated in the early 19th century from physical models of gravitation and electrostatics, where potentials describe force fields generated by mass or charge distributions.[1] Central to the field is the concept of potentials associated to measures via the Laplacian, such as the Newtonian potential u(x) = \int \frac{1}{|x-y|} d\mu(y) in three dimensions (superharmonic for \mu \geq 0) or the logarithmic potential p_\mu(z) = \int \log |z - w| d\mu(w) in the plane (subharmonic for \mu \geq 0).[2] Historically, potential theory developed through contributions from figures like George Green, Carl Friedrich Gauss, Siméon Denis Poisson, and Peter Gustav Lejeune Dirichlet, who formalized boundary value problems such as the Dirichlet problem for harmonic functions.[1] Green's 1828 essay laid foundational integral representations, while Gauss and Poisson advanced the theory in the context of flux and attraction laws.[1] In the mid-20th century, the field was axiomatized by mathematicians including Marcel Brelot, Gustave Choquet, and Jean Deny, incorporating probabilistic interpretations via Brownian motion and excessive functions, as emphasized by Joseph L. 
Doob.[3] Key concepts include the maximum principle for harmonic functions, which states that a non-constant harmonic function on a bounded domain attains its maximum on the boundary and never in the interior, and the Riesz decomposition theorem, which decomposes a subharmonic function u on a compact set K as u = p_\mu + h, where h is harmonic and 2\pi \mu = \Delta u|_K.[2] Subharmonic functions are upper semicontinuous and satisfy the sub-mean value property: u(w) \leq \frac{1}{2\pi} \int_0^{2\pi} u(w + r e^{it}) dt.[2] These ideas extend to modern applications in complex analysis, such as pluripotential theory on Riemann surfaces, and in probability, where harmonic functions relate to martingales and Markov processes.[3] Potential theory also aids in computing dimensions of sets, like Hausdorff dimension via Frostman's s-potentials, where the existence of a measure \mu on A with \int \phi_s(x) \, d\mu(x) < \infty implies \dim_H(A) \geq s.[3]
Introduction
Definition and Scope
Potential theory is a branch of mathematical analysis that studies harmonic functions and their generalizations, such as subharmonic and superharmonic functions, with a focus on their properties and applications.[4] The term "potential theory" originated in 19th-century physics, stemming from the concept of potentials used to describe fundamental forces like gravity and electrostatic attraction, as developed in works on Newtonian gravitation and Coulomb's law. This mathematical framework emerged to formalize these physical ideas, providing tools for analyzing fields derived from scalar potentials.[5] The scope of potential theory encompasses solutions to elliptic partial differential equations, particularly in Euclidean spaces, where it addresses boundary value problems and integral representations of functions. In two dimensions, it maintains strong connections to complex analysis through the identification of harmonic functions with real parts of holomorphic functions, while in higher dimensions, it integrates with broader partial differential equation theory to study regularity and asymptotic behavior.[4] Harmonic functions serve as the central objects, satisfying the mean value property and maximum principles that underpin the theory's analytical structure.[1] A key application lies in modeling conservative force fields, such as gravitational or electrostatic fields, where the force vector is expressed as the negative gradient of a scalar potential function, ensuring path-independent work.[5] This scalar focus distinguishes potential theory from aspects of electromagnetism involving vector potentials, which account for magnetic effects and non-conservative components not captured by scalar fields alone.
Historical Development
Potential theory originated in the physical sciences of the 18th and early 19th centuries, rooted in efforts to model gravitational and electrostatic forces. Isaac Newton's formulation of the inverse-square law of gravitation in his Philosophiæ Naturalis Principia Mathematica (1687) provided the foundational force law that later inspired the concept of gravitational potential, enabling the representation of forces as gradients of scalar fields.[6] Similarly, Charles-Augustin de Coulomb's 1785 experiments using a torsion balance established the analogous inverse-square law for electrostatic forces between charged particles, laying groundwork for electrostatic potentials.[7] These physical laws were mathematically unified in 1828 when George Green published An Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism, introducing the potential function as a central tool and Green's theorem, which relates surface integrals of potentials to boundary fluxes, thus bridging force calculations to harmonic functions.[8] Carl Friedrich Gauss further advanced the field in 1813 with his divergence theorem, establishing key integral identities for flux through surfaces, essential for potential representations.[9] In the early 19th century, Pierre-Simon Laplace advanced potential theory through his multi-volume Mécanique Céleste (1799–1825), where he employed potential integrals to analyze perturbations in celestial mechanics, demonstrating the stability of planetary orbits under gravitational influences. 
Building on this, Siméon Denis Poisson formulated Poisson's equation in 1813 during his studies of electrostatics, expressing the relationship between charge density and the Laplacian of the potential, which generalized Laplace's equation for non-vacuum cases.[10] Lord Kelvin (William Thomson) extended these ideas in the 1840s and 1850s, developing the method of images—a symmetry technique for solving boundary value problems in electrostatics—and the Kelvin transform, which preserves harmonic functions under inversion, facilitating solutions for spherical and planar geometries.[11] The mid-19th century saw potential theory evolve into a rigorous mathematical discipline through connections to complex analysis and boundary value problems. Bernhard Riemann's 1851 doctoral thesis on complex functions linked two-dimensional potentials to conformal mappings, showing how analytic functions generate harmonic potentials via real and imaginary parts, as captured by the Cauchy-Riemann equations.[12] Peter Gustav Lejeune Dirichlet formalized boundary value problems in the 1830s and 1850s, introducing the Dirichlet principle, which posits the existence of harmonic functions minimizing energy integrals subject to boundary conditions, though its proof faced challenges until later rigorization.[13] By the early 20th century, David Hilbert's 1904 work on integral equations provided a spectral approach to solving potential problems, treating them as Fredholm equations and establishing existence via eigenvalue expansions, which influenced operator theory in partial differential equations (PDEs).[14] Twentieth-century developments shifted potential theory toward irregular domains and nonlinear extensions, solidifying its role in pure mathematics. Norbert Wiener's work in the 1920s introduced capacity theory, quantifying the "size" of sets with respect to harmonic measures and enabling solutions for non-smooth boundaries via Wiener's criterion for regularity. 
Modern extensions to nonlinear potentials, emerging in the mid-20th century, generalized classical theory to p-Laplacian equations and quasilinear PDEs, with foundational contributions addressing subharmonic functions and variational inequalities.[15] In the mid-20th century, the field was axiomatized by mathematicians including Marcel Brelot, Gustave Choquet, and Jean Deny, incorporating probabilistic interpretations via Brownian motion and excessive functions, as emphasized by Joseph L. Doob.[3] This evolution facilitated the transition from physics-inspired methods to abstract tools in PDE analysis, influencing existence proofs, regularity theory, and stochastic processes in pure mathematics.[16]
Fundamental Concepts
Harmonic Functions
A harmonic function u on an open domain \Omega \subset \mathbb{R}^n is a real-valued function that is twice continuously differentiable and satisfies Laplace's equation \Delta u = 0, where \Delta = \sum_{j=1}^n \frac{\partial^2}{\partial x_j^2} denotes the Laplacian operator. This condition ensures that u represents a potential without sources or sinks within \Omega.[17][18] A defining property of harmonic functions is the mean value property: for any ball B(a, r) \subset \Omega centered at a with radius r > 0, the value u(a) equals the average of u over the boundary sphere \partial B(a, r), given by u(a) = \frac{1}{\sigma_{n-1} r^{n-1}} \int_{\partial B(a, r)} u \, d\sigma, where \sigma_{n-1} is the surface area of the unit sphere in \mathbb{R}^n, so that \sigma_{n-1} r^{n-1} is the surface area of \partial B(a, r).[17] An equivalent volume form states that u(a) is the average over the entire ball B(a, r).[18] This property implies that harmonic functions are infinitely differentiable (C^\infty) and real analytic in \Omega, meaning they can be locally represented by convergent power series expansions.[17] The set of harmonic functions on \Omega forms a vector space under pointwise addition and scalar multiplication, owing to the linearity of the Laplacian operator: if u and v are harmonic, then so are \alpha u + \beta v for scalars \alpha, \beta.[17] Basic examples include constant functions u(x) = c, which trivially satisfy \Delta u = 0, and linear functions such as u(x) = a \cdot x + b (where a \in \mathbb{R}^n and b \in \mathbb{R}), as their second derivatives vanish.[17] More generally, the fundamental solution in \mathbb{R}^n for n > 2 is \Phi(x) = -\frac{1}{(n-2) \sigma_{n-1} |x|^{n-2}}, a radial harmonic function away from the origin.[19] In bounded domains like balls, spherical harmonics—homogeneous harmonic polynomials restricted to the sphere—provide a complete orthogonal basis for expanding harmonic functions via separation of variables in spherical coordinates.[17] In potential
theory, harmonic functions serve as the foundational solutions to homogeneous boundary value problems, such as the Dirichlet problem, where they model potentials determined by boundary data without interior sources.[17] They also connect to physical interpretations, representing equilibrium states in fields like gravitation and electrostatics.[18]
Laplace's and Poisson's Equations
In potential theory, Laplace's equation is the central partial differential equation governing equilibrium states in source-free regions. It is expressed in vector form as \Delta u = \sum_{i=1}^n \frac{\partial^2 u}{\partial x_i^2} = 0, where u is the potential function in n-dimensional Euclidean space, and \Delta denotes the Laplacian operator. This equation describes the behavior of potential fields, such as electrostatic or gravitational potentials, in regions devoid of sources, representing a balance where the divergence of the field vanishes.[20] The physical derivation of Laplace's equation arises from fundamental laws of field theory. In electrostatics, Gauss's law states that the divergence of the electric displacement field \mathbf{D} equals the free charge density \rho_v, or \nabla \cdot \mathbf{D} = \rho_v, where \mathbf{D} = \epsilon \mathbf{E} and \epsilon is the permittivity. Since the electric field \mathbf{E} = -\nabla u, substitution yields \nabla \cdot (\epsilon \nabla u) = -\rho_v; in homogeneous media where \epsilon is constant and \rho_v = 0, this simplifies to \Delta u = 0. Similarly, in gravitation, Gauss's law for the gravitational field \mathbf{g} gives \nabla \cdot \mathbf{g} = -4\pi G \rho, with \mathbf{g} = -\nabla u and G the gravitational constant; in source-free regions (\rho = 0), this again leads to \Delta u = 0.[20][21] Poisson's equation generalizes Laplace's equation to include distributed sources, taking the form \Delta u = f, where f represents the source term (e.g., f = -\rho_v / \epsilon in electrostatics or f = 4\pi G \rho in gravity). Solutions to Poisson's equation exist and can be constructed using the fundamental solution, which is the potential due to a unit point source; in three dimensions, this is \Phi(\mathbf{x}) = -\frac{1}{4\pi |\mathbf{x}|}, satisfying \Delta \Phi = \delta(\mathbf{x}), where \delta is the Dirac delta function. 
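The statement that the three-dimensional fundamental solution \Phi(\mathbf{x}) = -\frac{1}{4\pi|\mathbf{x}|} is harmonic away from its singularity can be checked numerically. The sketch below is illustrative only (the helper names `phi` and `laplacian` and the step size `h` are ad hoc choices, not part of the classical theory): it applies a second-order central-difference approximation of the Laplacian to \Phi at a point away from the origin.

```python
import math

def phi(x, y, z):
    # Fundamental solution of the 3D Laplacian: Phi = -1 / (4*pi*|x|)
    r = math.sqrt(x*x + y*y + z*z)
    return -1.0 / (4.0 * math.pi * r)

def laplacian(f, p, h=1e-2):
    # Second-order central-difference approximation of the Laplacian at p
    x, y, z = p
    return (f(x + h, y, z) + f(x - h, y, z)
            + f(x, y + h, z) + f(x, y - h, z)
            + f(x, y, z + h) + f(x, y, z - h)
            - 6.0 * f(x, y, z)) / (h * h)

# Away from the origin, Delta Phi should vanish up to O(h^2) discretization error.
print(abs(laplacian(phi, (1.0, 0.5, -0.3))))  # small, ~O(h^2)
```

At the origin itself the discrete Laplacian blows up, consistent with \Delta \Phi = \delta(\mathbf{x}) holding only in the distributional sense.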
For bounded domains, Green's functions provide the appropriate framework for solvability, incorporating boundary conditions to yield the general solution as an integral over the source f and boundary data.[22][23] Boundary value problems for these equations are formulated to determine the potential within a domain \Omega given data on its boundary \partial \Omega. The Dirichlet problem prescribes the potential values u = g on \partial \Omega, while the Neumann problem specifies the normal derivative \frac{\partial u}{\partial n} = h on \partial \Omega, related to the flux of the field. For Poisson's equation, the Neumann formulation requires a compatibility condition \int_\Omega f \, dV = \oint_{\partial \Omega} h \, dS to ensure solvability.[24] Uniqueness theorems guarantee that solutions, when they exist, are determined up to additive constants under suitable conditions. For the Dirichlet problem, both Laplace's and Poisson's equations have unique solutions in bounded domains with continuous boundary data, as the difference of any two solutions satisfies Laplace's equation with zero boundary values and must vanish by the maximum principle. For the Neumann problem, solutions are unique up to an additive constant, with the compatibility condition ensuring existence; this follows from integrating the equation over the domain and applying the divergence theorem.[24]
Symmetries and Transformations
Conformal Symmetries
Conformal transformations are mappings that preserve angles and locally scale distances uniformly, playing a central role in the study of harmonic functions as solutions to Laplace's equation \Delta u = 0. In two dimensions, these transformations correspond to complex analytic functions, and the invariance of the Laplace equation under such mappings follows from the Cauchy-Riemann equations, which ensure that the real and imaginary parts of an analytic function are harmonic. Specifically, if u is harmonic in a domain U \subset \mathbb{R}^2 and f: V \to U is conformal (holomorphic with non-zero derivative), then u \circ f is harmonic in V, preserving the structure of \Delta (u \circ f) = 0.[25][17] In higher dimensions, the conformal group extends to include Möbius transformations, which are compositions of inversions, translations, rotations, and scalings. These transformations preserve harmonic functions exactly, as exemplified by the Kelvin transform K[u](x) = |x|^{2-n} u(x/|x|^2) for n \geq 3, which maps harmonic functions to harmonic functions. Inversions, a key component of Möbius transformations, act conformally on \mathbb{R}^n \setminus \{0\} by mapping spheres to spheres or planes, and the resulting composition adjusts the Laplacian such that harmonic solutions remain harmonic. This preservation allows for the extension of harmonic functions across domains transformed by such symmetries.[17] Spherical harmonics provide a concrete realization of rotational symmetries in potential theory, serving as eigenfunctions of the Laplace-Beltrami operator on the unit sphere S^{n-1} under the action of the rotation group SO(n). These functions, which are restrictions of homogeneous harmonic polynomials of degree d to the sphere, satisfy \Delta_{S^{n-1}} Y_d = -d(d + n - 2) Y_d, where the eigenvalue reflects the SO(n)-invariance of the spherical Laplacian. 
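The two-dimensional invariance described above is easy to test numerically. The following sketch (helper names are ad hoc) composes the harmonic function u(x, y) = x^2 - y^2 = \operatorname{Re} z^2 with the conformal map f(z) = 1/z and checks that a finite-difference Laplacian of u \circ f vanishes away from the origin, as the Cauchy-Riemann argument predicts.

```python
def u(x, y):
    # Harmonic in the plane: the real part of z^2
    return x*x - y*y

def f(x, y):
    # Conformal map f(z) = 1/z, valid away from the origin
    r2 = x*x + y*y
    return x / r2, -y / r2

def u_of_f(x, y):
    a, b = f(x, y)
    return u(a, b)

def laplacian2(g, x, y, h=1e-3):
    # Central-difference approximation of the 2D Laplacian
    return (g(x + h, y) + g(x - h, y) + g(x, y + h) + g(x, y - h)
            - 4.0 * g(x, y)) / (h * h)

# u o f is again harmonic wherever f is conformal.
print(abs(laplacian2(u_of_f, 1.3, 0.7)))  # small, ~O(h^2)
```

Here u \circ f = \operatorname{Re}(1/z^2), itself the real part of a holomorphic function, which is why the check succeeds.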
As an orthogonal basis for L^2(S^{n-1}), spherical harmonics decompose general harmonic functions in balls or spheres, facilitating the analysis of rotationally symmetric solutions in potential theory.[26][17] Symmetries enable the generation of new harmonic functions from known ones, such as through reflection principles, which extend solutions across hyperplanes or spheres while preserving harmonicity. For instance, reflecting a harmonic function across a hyperplane yields another harmonic function, and combining this with inversions generates solutions in complementary domains, like extending from a ball to its exterior via the Kelvin transform adjusted by reflection. This constructive approach leverages the underlying symmetries to solve boundary value problems without direct integration.[17] From a group-theoretic perspective, Laplace's equation is invariant under the full conformal group, which includes translations, rotations, dilations, and special conformal transformations (inversions composed with translations). Under any such transformation g, the Laplacian applied to the composition u \circ g yields a scaled version of (\Delta u) \circ g, so if \Delta u = 0 then u \circ g is also harmonic; the precise scaling factor depends on the transformation type. This invariance underscores the conformal group's role in classifying and generating solutions in potential theory across dimensions.[27][17]
Kelvin Transform and Method of Images
The Kelvin transform, introduced by William Thomson (later Lord Kelvin) in an 1845 letter to Joseph Liouville,[28] provides a geometric inversion that preserves the harmonicity of functions in potential theory. For a function u that is harmonic in \mathbb{R}^n \setminus \{0\} with n \geq 3, the Kelvin transform is defined as v(x) = |x|^{2-n} u\left(\frac{x}{|x|^2}\right), which remains harmonic in \mathbb{R}^n \setminus \{0\}.[29] This transformation arises from the inversion mapping x \mapsto \frac{x}{|x|^2}, which interchanges points inside and outside the unit sphere while mapping spheres to planes or vice versa, thereby facilitating solutions to boundary value problems by converting exterior domains to interior ones.[30] In applications, the Kelvin transform is particularly useful for solving Laplace's equation around spherical boundaries, such as mapping the potential outside a sphere to an equivalent problem inside an inverted domain.[31] For instance, it allows reduction of unbounded exterior problems to bounded interior ones, preserving key properties like the mean value property of harmonic functions, as detailed in classical treatments of potential theory.[32] This technique extends naturally to Poisson's equation under appropriate scaling, enabling analytical solutions for source distributions symmetric under inversion.[33] The method of images, developed by Lord Kelvin in his 1848 paper on electrostatic induction,[34] constructs solutions to potential problems by introducing fictitious charges (images) that enforce boundary conditions without altering the field in the region of interest.
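The harmonicity-preserving property of the Kelvin transform can be verified directly. A minimal numerical sketch for n = 3 (function names are ad hoc, and the harmonic input u(x) = x_1 is chosen only for simplicity): the transform of the linear function x_1 is the dipole potential x_1/|x|^3, whose finite-difference Laplacian should vanish away from the origin.

```python
import math

def u(x, y, z):
    # A harmonic function on R^3: the coordinate function x_1
    return x

def kelvin(u):
    # Kelvin transform K[u](x) = |x|^(2-n) u(x / |x|^2) with n = 3
    def v(x, y, z):
        r2 = x*x + y*y + z*z
        return (1.0 / math.sqrt(r2)) * u(x / r2, y / r2, z / r2)
    return v

def laplacian(f, p, h=1e-3):
    # Central-difference approximation of the 3D Laplacian at p
    x, y, z = p
    return (f(x + h, y, z) + f(x - h, y, z)
            + f(x, y + h, z) + f(x, y - h, z)
            + f(x, y, z + h) + f(x, y, z - h)
            - 6.0 * f(x, y, z)) / (h * h)

v = kelvin(u)  # here v(x) = x_1 / |x|^3, the dipole potential
print(abs(laplacian(v, (0.8, -0.4, 0.5))))  # small: harmonicity is preserved
```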
In electrostatics, for a point charge q at distance d from an infinite grounded conducting plane, the image charge is -q placed symmetrically at distance d on the opposite side, yielding zero potential on the plane while matching the original field above it.[35] This approach satisfies the Dirichlet boundary condition (constant potential) on linear boundaries through superposition of the real and image potentials. A key example is the grounded conducting sphere of radius a with a point charge q at distance b > a from the center: the image charge is q' = -q \frac{a}{b} at distance \frac{a^2}{b} inside the sphere, ensuring the sphere's surface is equipotential at zero.[11] For a uniform external field around an uncharged conducting sphere, image dipoles or equivalent multipoles can be derived similarly, modeling induced surface charges.[36] These methods extend to infinite planes for approximating parallel-plate configurations or line charges near cylindrical boundaries via two-dimensional analogs.[37] While effective for planar and spherical geometries, the method of images is limited to linear boundary conditions and simple symmetries, requiring infinite series of images for more complex shapes like wedges.[38] Extensions to curved boundaries, such as circles in two dimensions, often combine the method with inversions akin to the Kelvin transform to generate valid image systems.[39]
Dimensional Considerations
Properties in Two Dimensions
In two dimensions, the fundamental solution to Laplace's equation, known as the logarithmic potential, takes the form \Phi(x,y) = -\frac{1}{2\pi} \ln \sqrt{x^2 + y^2}, which exhibits slower decay at infinity compared to the power-law decay \frac{1}{|r|^{n-2}} observed for n \geq 3.[40] This logarithmic behavior arises naturally from the Green's function for the plane and leads to distinct asymptotic properties for potentials generated by compact charge distributions, where the potential grows logarithmically rather than approaching a constant.[41] Consequently, solutions to boundary value problems in unbounded domains often require careful handling of behavior at infinity, influencing applications in electrostatics and fluid dynamics.[42] A hallmark of two-dimensional potential theory is the central role of conformal mappings, enabled by the Riemann mapping theorem, which asserts that any simply connected domain in the complex plane, excluding the entire plane itself, can be conformally mapped onto the unit disk.[43] This uniformization simplifies the solution of Dirichlet and Neumann problems by transforming irregular boundaries into circular ones, preserving harmonicity since conformal maps are angle-preserving and satisfy the Cauchy-Riemann equations.[44] Such mappings facilitate explicit constructions of Green's functions and highlight the deep interconnection between potential theory and complex analysis in this dimension.[43] Harmonic functions in two dimensions admit a canonical complex representation: any real-valued harmonic function u(x,y) on a simply connected domain is the real part of a holomorphic function f(z) = u + iv, where v is the harmonic conjugate of u.[45] This decomposition, unique up to an additive constant, leverages the Cauchy-Riemann equations and enables powerful tools from complex analysis, such as the Schwarz reflection principle, which extends harmonic functions across straight-line boundaries by reflecting the conjugate.[46] 
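The conjugate-pair structure can be illustrated with a simple numerical check of the Cauchy-Riemann equations. The sketch below (helper names `dx` and `dy` are ad hoc) takes u = \operatorname{Re}(e^z) = e^x \cos y and its harmonic conjugate v = \operatorname{Im}(e^z) = e^x \sin y, and verifies u_x = v_y and u_y = -v_x by central differences.

```python
import math

def u(x, y):
    # Re(e^z) = e^x cos y, harmonic in the plane
    return math.exp(x) * math.cos(y)

def v(x, y):
    # Im(e^z) = e^x sin y, a harmonic conjugate of u
    return math.exp(x) * math.sin(y)

def dx(g, x, y, h=1e-6):
    # Central-difference partial derivative in x
    return (g(x + h, y) - g(x - h, y)) / (2.0 * h)

def dy(g, x, y, h=1e-6):
    # Central-difference partial derivative in y
    return (g(x, y + h) - g(x, y - h)) / (2.0 * h)

# Cauchy-Riemann equations: u_x = v_y and u_y = -v_x
x0, y0 = 0.3, -1.1
print(abs(dx(u, x0, y0) - dy(v, x0, y0)))  # ~0
print(abs(dy(u, x0, y0) + dx(v, x0, y0)))  # ~0
```

Since v is determined only up to an additive constant, any v + c would pass the same check.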
Similarly, variants of Morera's theorem apply to verify holomorphicity of the combined function, confirming that closed contours with vanishing integrals for both u and v imply analyticity.[17] The symmetry group governing harmonic functions in two dimensions is the full conformal group, comprising all angle-preserving transformations, which forms an infinite-dimensional family generated by holomorphic and anti-holomorphic functions.[47] This contrasts sharply with the finite-dimensional conformal (Möbius) group in higher dimensions, allowing for a richer class of invariances that preserve solutions to Laplace's equation.[48] These symmetries underpin the extensibility of potentials across domains and the solvability of mixed boundary problems through local adjustments.[43] Special theorems exploit the conjugate structure, such as the argument principle for harmonic functions, which counts the winding number of level curves around singularities by integrating the gradient of the conjugate along the boundary.[49] This principle, analogous to its holomorphic counterpart, quantifies zeros and poles in the sense-reversing regions of harmonic mappings, providing topological insights into the global structure of solutions.[50] Local regularity properties, including real-analyticity away from singularities, align with those in higher dimensions.[17]
Behavior in Higher Dimensions
In dimensions n \geq 3, the fundamental solution to Laplace's equation \Delta \Phi = -\delta takes the form \Phi(x) = \frac{1}{|x|^{n-2}} (up to a dimensional constant), which facilitates the construction of multipole expansions for representing potentials generated by localized charge distributions.[51] This power-law decay contrasts with the logarithmic singularity in two dimensions and enables efficient asymptotic approximations for far-field behaviors in higher-dimensional settings.[17] The group of conformal transformations preserving the Laplace equation in \mathbb{R}^n for n \geq 3 is the finite-dimensional Möbius group, generated by isometries, dilations, and inversions, which imposes limitations on generating new solutions compared to the infinite-dimensional conformal group available in two dimensions.[52] A key tool within this framework is the Kelvin transform, defined for a function u harmonic in a domain excluding the origin as Ku(x) = |x|^{2-n} u(x/|x|^2), which preserves harmonicity throughout \mathbb{R}^n \setminus \{0\} and extends classical methods for solving boundary value problems.[17] Regarding asymptotic behavior at infinity, entire harmonic functions on \mathbb{R}^n (n \geq 3) of polynomial growth are themselves polynomials: if |u(x)| \leq C (1 + |x|)^k for some constants C, k > 0, then u is a harmonic polynomial of degree at most k.[51] Liouville-type theorems further imply that bounded entire harmonic functions in \mathbb{R}^n are constant, providing strong rigidity results for global solutions.[17] These properties find direct applications in modeling Newtonian gravity and electrostatics in three dimensions, where the potential decays as 1/r and the field as 1/r^2, leading to faster dissipation of influences at large distances than in lower-dimensional analogs.[51]
Analytic Properties
Local Behavior and Regularity
Harmonic functions, defined as twice continuously differentiable solutions to Laplace's equation \Delta u = 0 in an open domain \Omega \subset \mathbb{R}^n, exhibit exceptional smoothness properties throughout the interior of \Omega. A fundamental result in potential theory is the regularity theorem, which states that any such harmonic function u is infinitely differentiable (C^\infty) in \Omega.[53] Furthermore, harmonic functions are real analytic in \Omega, allowing local representation by convergent power series expansions.[17] The proof of this C^\infty regularity leverages the mean value property of harmonic functions: for any ball B_r(x) \subset \Omega with x \in \Omega and radius r > 0, u(x) = \frac{1}{|B_r(x)|} \int_{B_r(x)} u(y) \, dy = \frac{1}{|\partial B_r(x)|} \int_{\partial B_r(x)} u(y) \, d\sigma(y), where | \cdot | denotes the respective measure. By applying Taylor expansions to u around x and differentiating the mean value integrals with respect to parameters such as r, one inductively verifies the existence and continuity of all higher-order partial derivatives at x.[17] This process demonstrates that the smoothness order can be arbitrarily increased, yielding C^\infty regularity without relying on the original C^2 assumption for the full result. A consequential aspect of this local regularity is the expansion of a harmonic function u around any interior point x_0 \in \Omega in terms of homogeneous harmonic polynomials. Specifically, in spherical coordinates centered at x_0, u admits the series representation u(x) = \sum_{k=0}^\infty |x - x_0|^k P_k\left( \frac{x - x_0}{|x - x_0|} \right), where each P_k is a homogeneous harmonic polynomial of degree k on the unit sphere, and the series converges uniformly on compact subsets of \Omega.[17] This expansion underscores the analytic nature of harmonic functions and facilitates the study of their local geometry. 
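The mean value property invoked in the regularity argument above is straightforward to confirm numerically. The sketch below (names are ad hoc) averages the harmonic polynomial u = \operatorname{Re}(z^3) = x^3 - 3xy^2 over a circle by equispaced sampling, which for a trigonometric polynomial of low degree reproduces the exact average, and compares it with the value at the center.

```python
import math

def u(x, y):
    # Re(z^3) = x^3 - 3 x y^2, a harmonic polynomial in the plane
    return x**3 - 3.0 * x * y * y

def circle_average(g, a, b, r, m=2000):
    # Equispaced (trapezoidal) average of g over the circle of radius r about (a, b)
    total = 0.0
    for k in range(m):
        t = 2.0 * math.pi * k / m
        total += g(a + r * math.cos(t), b + r * math.sin(t))
    return total / m

a, b, r = 0.4, -0.2, 0.75
# Mean value property: the circle average reproduces the center value.
print(abs(circle_average(u, a, b, r) - u(a, b)))  # ~0
```

The same check fails for non-harmonic functions such as x^2 + y^2, whose circle average exceeds the center value by r^2.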
In the broader framework of elliptic partial differential equations, the regularity of harmonic functions extends to weak solutions in Sobolev spaces W^{1,2}(\Omega). Elliptic regularity theory employs bootstrap arguments: starting from L^2 or L^p estimates on \Delta u = 0, one iteratively applies interior Schauder or Calderón-Zygmund estimates to upgrade the solution from W^{2,p} to C^{k,\alpha} and ultimately to C^\infty. These techniques confirm that weak harmonic functions coincide with classical ones in the interior.[53] The local uniqueness of harmonic functions in balls follows from the maximum principle, which prohibits interior maxima or minima unless u is constant. Thus, within any ball B \subset \Omega, a harmonic function is uniquely determined by its values on \partial B.[53]
Singularities and Bôcher's Theorem
In potential theory, an isolated singularity of a harmonic function u at a point p in \mathbb{R}^n (with n \geq 2) is removable if u is bounded in a punctured neighborhood of p, allowing u to be extended to a harmonic function on the full neighborhood.[17] This result, analogous to Riemann's removable singularity theorem for holomorphic functions, ensures that the singularity can be "filled in" harmonically without altering the function's behavior elsewhere.[17] Isolated singularities of harmonic functions are classified as removable, poles, or essential. A singularity at p is removable if \lim_{x \to p} |x - p|^{n-2} |u(x)| = 0 (for n > 2); it is a pole if there exists an integer M \geq 0 such that 0 < \limsup_{x \to p} |x - p|^{M + n - 2} |u(x)| < \infty; otherwise, it is essential. Near a pole, u can be expressed as h(x) + \sum_{j=1}^{m} |x - p|^{-k_j} g_j(\theta) plus higher-order terms, where h is harmonic at p, the k_j are increasing positive integers, \theta denotes angular coordinates, and each g_j is a spherical harmonic on the unit sphere. Essential singularities involve infinitely many such negative power terms, leading to more complex behavior.[17] Bôcher's theorem provides a specific classification for positive harmonic functions near an isolated singularity. For a positive harmonic function u defined and twice continuously differentiable in a punctured neighborhood O \setminus \{p\} of p \in \mathbb{R}^n, there exists a harmonic function v on the full neighborhood O and a constant a \geq 0 such that, near p = 0 in the punctured unit ball B_n \setminus \{0\}:
- if n > 2, u(x) = a |x|^{2-n} + v(x);
- if n = 2, u(x) = a \log(1/|x|) + v(x).[54][55]
In this case, the singular term is a non-negative multiple of the fundamental solution, and the angular dependence reduces to a constant (the degree-0 spherical harmonic). This theorem originates from Maxime Bôcher's analysis of solutions to elliptic PDEs, including Laplace's equation, and parallels pole-like behavior but restricted to positive functions.[54]
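Bôcher's decomposition suggests a practical way to read off the singular coefficient: for n > 2, a = \lim_{x \to 0} |x|^{n-2} u(x). The sketch below uses a hypothetical example built for illustration (u = 2/|x| + (x_1 + 5) in \mathbb{R}^3, positive near the origin, with singular coefficient a = 2) and shows the limit recovering a numerically.

```python
import math

def u(x, y, z):
    # Hypothetical positive harmonic function near 0: singular part 2/|x|
    # plus the globally harmonic remainder v(x) = x_1 + 5
    r = math.sqrt(x*x + y*y + z*z)
    return 2.0 / r + (x + 5.0)

# Bocher's theorem (n = 3): a = lim_{x -> 0} |x| u(x) recovers the
# coefficient of the fundamental-solution-type singularity.
for eps in (1e-1, 1e-3, 1e-5):
    r = math.sqrt(3.0) * eps          # |x| for x = (eps, eps, eps)
    print(eps, r * u(eps, eps, eps))  # tends to a = 2
```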