In mathematics, particularly in algebra and analysis, a homogeneous function is a function of several variables whose value scales by a fixed power of the scaling factor when all inputs are scaled by that factor. Formally, a function f: V \to \mathbb{R} (where V is a vector space) is homogeneous of degree k if, for all \lambda > 0 and x \in V, f(\lambda x) = \lambda^k f(x). When the scaling factor is restricted to positive scalars, as here, the property is called positive homogeneity; the concept appears broadly in contexts such as economics and physics.[1]
Definitions
Positive homogeneity
A function f: V \to W between vector spaces is positively homogeneous of degree k if it satisfies f(tx) = t^k f(x) for all scalars t > 0 and all x \in V.[2] This property captures the scaling behavior of the function under positive multiplication of its input, where the output scales by the k-th power of the scalar.

The degree k may be any real number, not necessarily an integer; for instance, linear functions are positively homogeneous of degree 1, quadratic forms of degree 2, and, more generally, functions like the Cobb-Douglas production function f(x,y) = x^\alpha y^\beta have degree \alpha + \beta, which can be fractional.[3] This flexibility allows the concept to apply broadly in analysis and economics, where non-integer degrees model phenomena like returns to scale.

Unlike general homogeneity, which extends the scaling to all real scalars t (including negatives), positive homogeneity restricts to t > 0 to ensure well-defined behavior on domains like the positive orthant and to sidestep sign inconsistencies that arise when t < 0 and k is non-integer.[2] To verify positive homogeneity of degree k, substitute arbitrary x \in V and t > 0 into the scaling equation and confirm that equality holds identically.[3]

The notion of homogeneous functions, including the positive case, was introduced by Leonhard Euler in his 1755 work Institutiones Calculi Differentialis, where he explored their properties in the context of differential calculus.[4]
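The verification procedure above can be illustrated numerically. The following is a minimal sketch, not from the source; the function name and sample values are arbitrary choices for demonstration, and the Cobb-Douglas exponents are picked so the degree is fractional:

```python
# Numeric check of positive homogeneity for the Cobb-Douglas function
# f(x, y) = x^alpha * y^beta, expected degree k = alpha + beta.
import math

def cobb_douglas(x, y, alpha=0.3, beta=0.5):
    return x**alpha * y**beta

alpha, beta = 0.3, 0.5
k = alpha + beta          # expected (fractional) degree
x, y, t = 2.0, 5.0, 3.0   # arbitrary sample point and positive scalar

lhs = cobb_douglas(t * x, t * y, alpha, beta)   # f(tx, ty)
rhs = t**k * cobb_douglas(x, y, alpha, beta)    # t^k f(x, y)
assert math.isclose(lhs, rhs)                   # scaling identity holds
```

Running the same check for several values of t > 0 (and several base points) gives numerical evidence of homogeneity, though a symbolic substitution is required for a proof.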
General homogeneity
A function f: \mathbb{R}^n \to \mathbb{R} is homogeneous of degree k if it satisfies the scaling equation f(tx) = t^k f(x) for all t \in \mathbb{R} and all x \in \mathbb{R}^n.[5][6] This general form of homogeneity extends the concept beyond positive scalars and applies the scaling property uniformly across the real line. For t < 0, the condition requires that t^k remain real-valued to preserve the real output of f.

General homogeneity implies positive homogeneity, since the equation holds in particular for all t > 0. The converse holds if f(-x) = (-1)^k f(x) for all x \in \mathbb{R}^n (assuming k is an integer), meaning the function is even when k is even and odd when k is odd.

At t = 0 the relation reads f(0) = 0^k f(x), which is consistent only when f(0) = 0 (for k > 0) and is undefined for k < 0; such functions may simply not be defined at the origin. However, the limit as t \to 0 often provides insight into behavior near the origin, with \lim_{t \to 0} f(tx)/t^k = f(x) confirming the scaling in the limit.[5]
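As an illustrative sketch (not from the source), the quadratic form f(x, y) = x^2 - y^2 satisfies general homogeneity of degree 2, including for negative scalars, since t^2 is real for all real t:

```python
# General homogeneity of degree 2 for f(x, y) = x^2 - y^2: the identity
# f(tx, ty) = t^2 f(x, y) holds for negative as well as positive t.
import math

def f(x, y):
    return x**2 - y**2

x, y = 3.0, 1.5                       # arbitrary base point
for t in (-2.0, -0.5, 0.5, 2.0):      # both signs of the scalar
    assert math.isclose(f(t * x, t * y), t**2 * f(x, y))
```

Note that the even integer degree is what makes the negative scalars unproblematic; for non-integer k the expression t^k is not real when t < 0.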
Properties
Euler's theorem
If f is a differentiable homogeneous function of degree k on \mathbb{R}^n, then Euler's theorem states that

\sum_{i=1}^n x_i \frac{\partial f}{\partial x_i}(x) = k f(x)

for all x in the domain where the partial derivatives exist. This follows by differentiating the homogeneity relation f(tx) = t^k f(x) with respect to t:

\frac{d}{dt} f(tx) = k t^{k-1} f(x) \implies \sum_{i=1}^n x_i \frac{\partial f}{\partial x_i}(tx) = k t^{k-1} f(x).

Substituting t = 1 yields the result. The theorem is particularly useful in multivariable calculus and in applications like thermodynamics, where extensive properties are homogeneous of degree 1.[7]
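Euler's identity can be checked numerically for a concrete function. The sketch below (not from the source) uses central finite differences on the degree-3 function f(x, y) = x^3 + x y^2; the helper name and sample point are arbitrary:

```python
# Numeric check of Euler's theorem: x * df/dx + y * df/dy = k * f
# for the degree-3 homogeneous function f(x, y) = x^3 + x*y^2.
import math

def f(x, y):
    return x**3 + x * y**2

def partial(g, point, i, h=1e-6):
    """Central finite-difference estimate of the i-th partial of g."""
    p_hi = list(point); p_hi[i] += h
    p_lo = list(point); p_lo[i] -= h
    return (g(*p_hi) - g(*p_lo)) / (2 * h)

x, y, k = 1.7, -0.9, 3
euler_sum = x * partial(f, (x, y), 0) + y * partial(f, (x, y), 1)
assert math.isclose(euler_sum, k * f(x, y), rel_tol=1e-6)
```

The finite-difference error is far smaller than the tolerance here; for a proof one would differentiate symbolically, exactly as in the derivation above.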
Degrees and scaling
A homogeneous function of degree k satisfies the scaling relation f(tx) = t^k f(x) for all scalars t > 0 and arguments x in the domain, meaning the function value scales by a power of t under uniform multiplication of all inputs by t.[7] This single-degree case captures isotropic scaling behavior, where the exponent k is constant across all directions.

A generalization to functions on \mathbb{R}^n introduces multi-degrees (k_1, \dots, k_n), where homogeneity holds under independent scalings of each variable:

f(t_1 x_1, \dots, t_n x_n) = t_1^{k_1} \cdots t_n^{k_n} f(x_1, \dots, x_n)

for all t_i > 0. This allows anisotropic scaling, with each variable contributing its own exponent, and is employed in multi-graded algebraic structures such as Demazure modules.[8] The total degree is then \sum k_i, reflecting the combined scaling effect. The general scaling equation for multi-homogeneous functions is this defining relation, which extends the single-degree case by decoupling the scalings t_i.

Scaling laws govern how degrees behave under operations like composition. For single-degree functions, if g is homogeneous of degree m and f of degree k, the composition f \circ g is homogeneous of degree k m:

(f \circ g)(tx) = f(g(tx)) = f(t^m g(x)) = t^{k m} f(g(x)) = t^{k m} (f \circ g)(x).

To see this, substitute the homogeneity of g into the argument of f, then apply f's homogeneity to the resulting scaled input. In the multi-degree case, the output degree vector depends on the component-wise scalings induced by the inner function, often requiring compatible multi-degrees for the composition to remain multi-homogeneous.

Degrees also behave simply under addition: if f and g are both homogeneous of the same single degree k, then f + g is homogeneous of degree k, since

(f + g)(tx) = f(tx) + g(tx) = t^k f(x) + t^k g(x) = t^k (f + g)(x).

The reasoning follows directly from linearity of the operation and matching exponents.
For multi-degrees, summation preserves the multi-degree (k_1, \dots, k_n) only if f and g share the same vector, as mismatched exponents would violate the independent scaling relation.
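The composition rule for degrees can be illustrated with a short sketch (not from the source); the chosen inner and outer functions are arbitrary examples of degrees 2 and 3 respectively:

```python
# If g is homogeneous of degree m and f of degree k, then f∘g is
# homogeneous of degree k*m. Here g(x, y) = x*y (m = 2), f(u) = u^3 (k = 3),
# so the composition should have degree 6.
import math

def g(x, y):          # homogeneous of degree m = 2
    return x * y

def f(u):             # homogeneous of degree k = 3
    return u**3

m, k = 2, 3
x, y, t = 1.2, 2.5, 1.7

lhs = f(g(t * x, t * y))            # (f∘g)(tx)
rhs = t**(k * m) * f(g(x, y))       # t^(km) (f∘g)(x)
assert math.isclose(lhs, rhs)
```

The same pattern verifies additivity of same-degree sums: replace the composition with f(g(x, y)) + another degree-6 term and the t**6 factor pulls out of each summand.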
Examples
Basic scalar examples
For univariate functions from \mathbb{R}_+ \to \mathbb{R}, power functions are canonical examples of homogeneous functions. The function f(x) = x^k for x > 0 and any real k satisfies f(tx) = (tx)^k = t^k x^k = t^k f(x) for t > 0, making it homogeneous of degree k. For instance, f(x) = x^3 is homogeneous of degree 3, while f(x) = \sqrt{x} is homogeneous of degree 1/2.[3]
Multivariable functions
In multivariable calculus, a homogeneous function of degree k is defined for a mapping f: \mathbb{R}^n \to \mathbb{R} by the scaling relation f(t \mathbf{x}) = t^k f(\mathbf{x}) for all scalars t > 0 and vectors \mathbf{x} \in \mathbb{R}^n \setminus \{\mathbf{0}\}.[9] This condition captures the function's behavior under uniform radial scaling of its inputs, highlighting how the output scales proportionally to the k-th power of the scaling factor. Such functions are fundamental in analyzing symmetry and scaling laws in higher dimensions, building on the univariate case by applying the transformation simultaneously to all coordinates.[10]

A representative example is the quadratic form f(x, y) = x^2 + y^2, which satisfies f(tx, ty) = (tx)^2 + (ty)^2 = t^2 (x^2 + y^2) = t^2 f(x, y) and is thus homogeneous of degree 2.[9] This extends naturally to \mathbb{R}^n, where f(x_1, \dots, x_n) = \sum_{i=1}^n x_i^2 is homogeneous of degree 2, as direct substitution yields f(t x_1, \dots, t x_n) = t^2 \sum_{i=1}^n x_i^2 = t^2 f(x_1, \dots, x_n).[10] To verify homogeneity for any candidate function, one computes the scaled version explicitly: for instance, with f(x, y) = x^2 + y^2, the equality f(tx, ty) = t^2 f(x, y) holds for all t > 0, confirming the degree.[9] This verification directly tests the defining property without requiring additional tools.

The radial nature of homogeneous functions becomes evident in polar coordinates, where the scaling aligns with the radial direction.
In \mathbb{R}^2, with x = r \cos \theta and y = r \sin \theta, a homogeneous function of degree k takes the separated form f(r, \theta) = r^k g(\theta), where g depends only on the angular coordinate \theta.[11] For purely radial functions, which depend only on the Euclidean norm \|\mathbf{x}\| = r so that f(\mathbf{x}) = g(r), homogeneity of degree k forces the angular factor to be constant, reducing the separated form to g(r) = c r^k for some constant c.[9] This structure underscores the directional invariance in the angular component for general cases, while purely radial functions exhibit full rotational symmetry.

Functions may also exhibit partial homogeneity, being homogeneous of degree k with respect to a proper subset of their variables while remaining unaffected by scaling in the others. For example, consider f(x, y, z) = x^2 + y^2; scaling only the first two variables gives f(tx, ty, z) = (tx)^2 + (ty)^2 = t^2 (x^2 + y^2) = t^2 f(x, y, z), confirming homogeneity of degree 2 in x and y alone.[12] This selective scaling applies the property directionally in the variable space, useful for modeling systems with grouped inputs, such as certain production functions where outputs depend homogeneously on specific factors.[13] Verification follows similarly by fixing the unaffected variables and checking the scaling relation on the subset.[9]
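The partial-homogeneity check described above can be sketched directly (illustrative only, not from the source): scale (x, y) while holding z fixed and compare against the t^2 factor.

```python
# Partial homogeneity: f(x, y, z) = x^2 + y^2 is homogeneous of degree 2
# in (x, y) alone; the variable z is left unscaled and has no effect.
import math

def f(x, y, z):
    return x**2 + y**2

x, y, z, t = 1.1, -2.0, 7.3, 2.5    # arbitrary point; t > 0
assert math.isclose(f(t * x, t * y, z), t**2 * f(x, y, z))
```

Because f does not actually depend on z here, the identity holds for any fixed z; for functions that do depend on the unscaled variables, the same check applies at each fixed value of them.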
Polynomials and rationals
Homogeneous polynomials consist of terms all of the same total degree. For example, f(x, y, z) = 2x^2 y + 3 x y z + z^3 is a homogeneous polynomial of degree 3, as each monomial has total degree 3, and f(t x, t y, t z) = t^3 f(x, y, z).[5] More generally, any sum of monomials of degree k is homogeneous of degree k.

Rational functions can also be homogeneous. A ratio of two homogeneous polynomials of degrees m and n is homogeneous of degree m - n. For instance, f(x, y) = \frac{x^2 - y^2}{x^2 + y^2} has both numerator and denominator homogeneous of degree 2, so f(tx, ty) = f(x, y) and it is homogeneous of degree 0. Another example is f(x, y) = \frac{x + y}{x^2 + y^2}, where the numerator is degree 1 and the denominator degree 2, yielding homogeneity of degree -1: f(tx, ty) = t^{-1} f(x, y).[5]
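The degree-subtraction rule for rational functions can be confirmed numerically with the two examples above (a minimal sketch, not from the source; sample values are arbitrary):

```python
# A ratio of homogeneous polynomials of degrees m and n is homogeneous
# of degree m - n.
import math

def f0(x, y):   # degree 2 over degree 2 -> degree 0 (scale-invariant)
    return (x**2 - y**2) / (x**2 + y**2)

def f1(x, y):   # degree 1 over degree 2 -> degree -1
    return (x + y) / (x**2 + y**2)

x, y, t = 1.3, 0.7, 4.0
assert math.isclose(f0(t * x, t * y), f0(x, y))          # t^0 = 1
assert math.isclose(f1(t * x, t * y), t**-1 * f1(x, y))  # shrinks by 1/t
```

Degree-0 rational functions such as f0 depend only on the direction of (x, y), which is exactly why they reappear in the substitution v = y/x for homogeneous ODEs.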
Norms and linear maps
Vector norms are homogeneous of degree 1. The Euclidean norm \|\mathbf{x}\|_2 = \sqrt{\sum_{i=1}^n x_i^2} satisfies \|t \mathbf{x}\|_2 = t \|\mathbf{x}\|_2 for t > 0, confirming degree 1. Similarly, the p-norm \|\mathbf{x}\|_p = \left( \sum_{i=1}^n |x_i|^p \right)^{1/p} for 1 \leq p < \infty is homogeneous of degree 1.[14]

Linear maps between vector spaces are homogeneous of degree 1. For a linear transformation T: \mathbb{R}^n \to \mathbb{R}^m represented by a matrix A, T(t \mathbf{x}) = A (t \mathbf{x}) = t (A \mathbf{x}) = t T(\mathbf{x}) for t > 0, so T is homogeneous of degree 1. This property holds for any linear operator.[15]
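Both degree-1 cases can be checked in a few lines. The sketch below (not from the source) uses the Euclidean norm and an arbitrary fixed 2x2 linear map:

```python
# Degree-1 homogeneity of the Euclidean norm and of a linear map.
import math

def norm2(v):
    return math.sqrt(sum(c * c for c in v))

def linmap(v):        # an arbitrary fixed linear map R^2 -> R^2
    return [2 * v[0] + v[1], -v[0] + 3 * v[1]]

v, t = [3.0, -4.0], 2.5
# Norm: ||t v|| = t ||v|| for t > 0.
assert math.isclose(norm2([t * c for c in v]), t * norm2(v))
# Linear map: T(t v) = t T(v), component-wise.
scaled = linmap([t * c for c in v])
expected = [t * c for c in linmap(v)]
assert all(math.isclose(a, b) for a, b in zip(scaled, expected))
```

For norms the identity in fact holds with |t| for all real t (absolute homogeneity), a distinction the Terminology section returns to.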
Min-max and non-examples
If f and g are homogeneous functions of the same degree k, then the pointwise minimum \min(f, g) is also homogeneous of degree k, since \min(f(tx), g(tx)) = \min(t^k f(x), t^k g(x)) = t^k \min(f(x), g(x)) for t > 0.[3] The same holds for the pointwise maximum \max(f, g).[3]

A classic example is the Leontief production or utility function f(\mathbf{x}) = \min_i x_i for \mathbf{x} \in \mathbb{R}^m_+, which is homogeneous of degree 1 because f(t\mathbf{x}) = \min_i (t x_i) = t \min_i x_i = t f(\mathbf{x}) for t > 0.[3] For scalar arguments, this property simplifies to \min(tx, ty) = t \min(x, y) when t > 0, demonstrating positive homogeneity of degree 1.[3]

Non-examples help delineate the boundaries of homogeneity. Consider f(x, y) = x^2 + y^3; scaling yields f(tx, ty) = t^2 x^2 + t^3 y^3, which does not equal t^k (x^2 + y^3) for any fixed k across all x, y > 0, as the degrees differ.[3] Similarly, f(x) = |x| + 1 fails homogeneity: f(tx) = t |x| + 1, which mismatches t^k (|x| + 1) for k = 1 (yielding t |x| + t) and other values of k, since the constant term disrupts uniform scaling.[5]

A frequent misconception arises from conflating homogeneity with additivity. Additive functions satisfy f(x + y) = f(x) + f(y), but without the scaling condition f(tx) = t^k f(x) they are not necessarily homogeneous; for instance, pathological solutions to Cauchy's equation over \mathbb{R} (non-linear under the axiom of choice) violate homogeneity despite additivity.[5] In contrast, standard linear functions like f(x, y) = x + y are both additive and homogeneous of degree 1, but this coincidence does not generalize.[3]
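The Leontief example and the |x| + 1 non-example above can both be tested directly (an illustrative sketch, not from the source; sample values are arbitrary):

```python
# The Leontief function min_i x_i is homogeneous of degree 1; min commutes
# with multiplication by a positive scalar.
import math

def leontief(v):
    return min(v)

v, t = [4.0, 2.0, 9.0], 3.5
assert math.isclose(leontief([t * c for c in v]), t * leontief(v))

# Non-example: f(x) = |x| + 1 fails degree-1 scaling because the constant
# term does not scale with t.
f = lambda x: abs(x) + 1
assert f(t * 2.0) != t * f(2.0)   # t|x| + 1  vs.  t|x| + t
```

The failing assertion direction in the non-example is the point: one counterexample pair (t, x) suffices to rule out homogeneity, whereas establishing it requires the identity for all t > 0.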
Applications
Differential equations
Homogeneous first-order ordinary differential equations (ODEs) take the form y' = f\left(\frac{y}{x}\right), where f is a homogeneous function of degree zero, meaning the equation remains unchanged under the scaling (x, y) \to (tx, ty) for t > 0.[16] These equations are nonlinear but can be reduced to separable form through the substitution v = \frac{y}{x}, or equivalently y = vx. Differentiating y = vx with respect to x using the product rule yields y' = v + x \frac{dv}{dx}. Substituting into the original equation gives v + x \frac{dv}{dx} = f(v), which rearranges to the separable equation x \frac{dv}{dx} = f(v) - v, or \frac{dv}{f(v) - v} = \frac{dx}{x}. Integrating both sides produces \int \frac{dv}{f(v) - v} = \ln |x| + c, where c is the constant of integration, and back-substituting v = \frac{y}{x} yields the implicit solution.[17][16]

A simple example is the equation y' = \frac{y}{x}, where f(v) = v. The substitution v = \frac{y}{x} leads to v + x \frac{dv}{dx} = v, simplifying to x \frac{dv}{dx} = 0, so \frac{dv}{dx} = 0 and v = c. Thus, \frac{y}{x} = c, or y = cx, which is the general solution for x > 0 or x < 0 (adjusting the absolute value as needed).[18] In contrast, non-homogeneous equations like y' = \frac{y}{x} + g(x) cannot be solved by this substitution alone and typically require integrating factors or other methods.[16]

For higher-order linear ODEs, homogeneous Euler-Cauchy equations have the form x^2 y'' + a x y' + b y = 0, where the coefficients scale with powers of x matching the derivative orders, reflecting homogeneity under x \to tx. The solution method assumes a trial form y = x^m (for x > 0), leading to derivatives y' = m x^{m-1} and y'' = m(m-1) x^{m-2}. Substituting gives the characteristic equation m(m-1) + a m + b = 0, a quadratic in m.
The roots determine the general solution: for distinct real roots m_1, m_2, it is y = c_1 x^{m_1} + c_2 x^{m_2}; for a repeated root m, y = (c_1 + c_2 \ln x) x^m; and for complex roots \alpha \pm i\beta, y = x^\alpha (c_1 \cos(\beta \ln x) + c_2 \sin(\beta \ln x)).[19][20]

An example is 4x^2 y'' + 17x y' - 12y = 0. Assuming y = x^m, the characteristic equation is 4m(m-1) + 17m - 12 = 0, or 4m^2 + 13m - 12 = 0, with roots m = \frac{3}{4} and m = -4. The general solution is y = c_1 x^{3/4} + c_2 x^{-4} for x > 0.[20] Non-homogeneous Euler-Cauchy equations, such as x^2 y'' + a x y' + b y = g(x), add a particular solution found via undetermined coefficients or variation of parameters, but the homogeneous part follows the same method.[19]
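The worked Euler-Cauchy example can be reproduced computationally. The sketch below (not from the source) solves the characteristic quadratic and then checks numerically, via finite differences, that y = x^m satisfies the ODE for each root:

```python
# Roots of 4m^2 + 13m - 12 = 0, the characteristic equation of
# 4x^2 y'' + 17x y' - 12y = 0, and a numeric residual check of y = x^m.
import math

a, b, c = 4.0, 13.0, -12.0
disc = math.sqrt(b * b - 4 * a * c)           # sqrt(361) = 19
m1, m2 = (-b + disc) / (2 * a), (-b - disc) / (2 * a)
assert math.isclose(m1, 0.75) and math.isclose(m2, -4.0)

def residual(m, x, h=1e-4):
    """Plug y = x^m into the ODE using finite-difference derivatives."""
    y = lambda s: s**m
    y1 = (y(x + h) - y(x - h)) / (2 * h)             # y'
    y2 = (y(x + h) - 2 * y(x) + y(x - h)) / h**2     # y''
    return 4 * x**2 * y2 + 17 * x * y1 - 12 * y(x)

for m in (m1, m2):
    assert abs(residual(m, 2.0)) < 1e-4   # ODE satisfied up to FD error
```

By linearity, any combination c_1 x^{3/4} + c_2 x^{-4} then also has zero residual, which is the general solution stated above.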
Economics and optimization
In economic theory, homogeneous functions play a central role in modeling production processes, particularly through production functions that exhibit returns to scale. The Cobb-Douglas production function, introduced in 1928, takes the form f(x, y) = A x^\alpha y^\beta, where x and y represent inputs such as labor and capital, A > 0 is a productivity parameter, and \alpha, \beta > 0 are elasticities. This function is homogeneous of degree \alpha + \beta, meaning that scaling all inputs by a factor \lambda > 0 scales output by \lambda^{\alpha + \beta}. If \alpha + \beta = 1, the function displays constant returns to scale, a property widely assumed in neoclassical growth models to analyze long-term economic expansion.[21][22]

Another prominent example is the constant elasticity of substitution (CES) production function, developed in 1961, given by f(x, y) = \left( \alpha x^\rho + (1 - \alpha) y^\rho \right)^{1/\rho}, which is homogeneous of degree 1 (linearly homogeneous). The CES form generalizes the Cobb-Douglas by allowing a constant elasticity of substitution \sigma = 1/(1 - \rho) between inputs that need not equal 1, facilitating analysis of factor substitutability in production. In utility theory, homogeneous utility functions imply that marginal rates of substitution remain constant along rays from the origin, ensuring consistent consumer preferences under scaling. For cost functions, homogeneity of degree 1 in input prices ensures that doubling all prices doubles total cost, aligning with no-arbitrage conditions.[23][24]

Homogeneity also invokes Euler's theorem, which states that for a production function f homogeneous of degree \gamma, \sum_i x_i \frac{\partial f}{\partial x_i} = \gamma f(x).
In economics, when \gamma = 1 under constant returns, this equates total output to the sum of marginal products valued at input levels, justifying factor payments that exhaust revenue in competitive markets, such as wages equaling labor's marginal product and rents equaling capital's. This application underpins distribution theory and firm profit maximization.[3][25]

In optimization contexts, homogeneous objectives in economic problems, like maximizing linearly homogeneous production subject to linear constraints, yield solutions along rays from the origin, where optimal input proportions remain fixed regardless of scale. This ray property simplifies solving linear programming formulations in resource allocation, as seen in production planning where input ratios are invariant to output levels. More recently, post-2020 applications in machine learning scaling laws model loss landscapes using homogeneous growth assumptions, where neuron allocation scales proportionally with network size, leading to predictable performance improvements as \ell \propto N^{-\alpha} for model size N and exponent \alpha. This framework, assuming homogeneous resource distribution across subtasks, aligns empirical observations in large language models with theoretical predictions.[26][27]
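The factor-payment exhaustion result can be checked for a constant-returns Cobb-Douglas function. The following is a minimal sketch (not from the source); the parameter values are arbitrary, and the marginal products use the closed-form derivatives of the Cobb-Douglas function:

```python
# Euler's theorem with gamma = 1: for f(x, y) = A x^alpha y^beta with
# alpha + beta = 1, payments at marginal products exhaust output:
# x * df/dx + y * df/dy = f(x, y).
import math

A, alpha, beta = 2.0, 0.3, 0.7          # alpha + beta = 1
f = lambda x, y: A * x**alpha * y**beta
x, y = 4.0, 9.0                          # arbitrary input bundle

mp_x = alpha * f(x, y) / x               # marginal product of x (analytic)
mp_y = beta * f(x, y) / y                # marginal product of y
assert math.isclose(x * mp_x + y * mp_y, f(x, y))
```

With \alpha + \beta \neq 1 the same sum equals (\alpha + \beta) f(x, y) instead, so the payments over- or under-exhaust output under increasing or decreasing returns.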
Generalizations
Monoid actions
The concept of a homogeneous function can be generalized to settings where a monoid G acts on a space V. A function f: V \to \mathbb{R} (or more generally to another space) is said to be homogeneous of degree k with respect to the action if there exists a character \chi: G \to \mathbb{R}^\times such that f(g \cdot v) = \chi(g)^k f(v) for all g \in G, v \in V. In the classical case, G = (\mathbb{R}_{>0}, \cdot) acts by scaling, and \chi(g) = g. This framework encompasses actions by other monoids, such as those arising in graded structures or representation theory.[28]
Distributions and generalized functions
In the theory of distributions, a distribution T on \mathbb{R}^n is said to be homogeneous of degree k if, for every test function \phi \in \mathcal{D}(\mathbb{R}^n) and every \lambda > 0,

\langle T, \phi(\cdot / \lambda) \rangle = \lambda^{k + n} \langle T, \phi \rangle,

where n is the dimension of the space.[29] This weak definition extends the classical notion of homogeneity to generalized functions, allowing for singularities at the origin that prevent pointwise evaluation.[30]

Prominent examples include the Dirac delta distribution \delta, which is homogeneous of degree -n, satisfying \langle \delta, \phi(\cdot / \lambda) \rangle = \phi(0) = \lambda^0 \langle \delta, \phi \rangle, consistent with k + n = 0.[29] Derivatives of homogeneous distributions inherit adjusted degrees: if T is homogeneous of degree k, then its distributional derivative \partial_j T is homogeneous of degree k - 1 for each coordinate direction j.[29] For instance, principal value distributions like \mathrm{p.v.}(1/|x|^m) in \mathbb{R}^n (for suitable m) are homogeneous of degree -m.[30]

A key property involves the Euler operator E = \sum_{j=1}^n x_j \partial_j, defined distributionally by \langle E T, \phi \rangle = -\sum_{j=1}^n \langle T, \partial_j(x_j \phi) \rangle. If T is homogeneous of degree k, then E T = k T.[30] This eigenvalue relation characterizes homogeneity in the distributional sense, analogous to the smooth case via Euler's theorem. The Fourier transform preserves this structure up to a shift: if T is homogeneous of degree k, its Fourier transform \hat{T} is homogeneous of degree -k - n.[30] For example, the Fourier transform of |x|^s (homogeneous of degree s, \mathrm{Re}(s) > -n) is proportional to |\xi|^{-n-s}.[30]

In partial differential equations, homogeneous distributions play a central role as fundamental solutions.
For the wave operator \Box = \partial_t^2 - \Delta in d+1 dimensions, the forward fundamental solution E_+ is a homogeneous distribution of degree 1 - d, supported in the forward light cone, satisfying \Box E_+ = \delta_{(0,0)}.[31] This enables explicit representation formulas for solutions to the inhomogeneous wave equation \Box \phi = F with initial data, such as Kirchhoff's formula in three spatial dimensions.[31]

Since the 1980s, homogeneous distributions have been integral to microlocal analysis, where they describe singularities via wave front sets in phase space.[32] In this framework, the principal symbol of pseudodifferential operators acts on homogeneous components of asymptotic expansions near singularities.[32] More recently, in the 2020s, microlocal techniques involving homogeneous distributions have advanced quantum field theory, particularly in analyzing the microlocal spectrum condition and renormalization of interacting fields on curved spacetimes, ensuring Hadamard states satisfy sharp singularity structures.[33]
Terminology
Name variants
The concept of a homogeneous function traces its origins to the 18th century, when Leonhard Euler introduced the term "functiones homogeneæ" in Latin to describe polynomials where all terms have the same degree, as detailed in his 1748 work Introductio in analysin infinitorum.[34] This terminology evolved over time to encompass more general functions satisfying scaling relations, shifting from Euler's polynomial-focused definition to the modern broader usage in multivariable calculus and analysis by the 19th century.[35]

Common variants include "homogeneous of degree k," which specifies the scaling exponent in the relation f(\lambda \mathbf{x}) = \lambda^k f(\mathbf{x}) for \lambda > 0 and \mathbf{x} \in \mathbb{R}^n.[1] In contexts involving renormalization or critical phenomena, such functions are termed "scaling functions," reflecting their role in describing self-similar behaviors under rescaling.[36]

Field-specific nomenclature further diversifies the term. In economics, functions homogeneous of degree 1 are dubbed "linearly homogeneous," signifying constant returns to scale in production models, as seen in analyses of the Solow growth model. In physics, especially in statistical mechanics and field theory, they are referred to as "scale-invariant" functions, capturing systems without intrinsic length scales, such as in critical phenomena.[37]

A key distinction in terminology involves positive versus absolute homogeneity.
Positive homogeneity requires the scaling f(\lambda \mathbf{x}) = \lambda^k f(\mathbf{x}) to hold only for \lambda > 0, allowing functions like certain norms or the absolute value to qualify even if their behavior under negative scaling differs.[38] Absolute homogeneity, by contrast, extends the property to all real \lambda using |\lambda|^k, which is standard for vector norms to ensure consistency with the absolute value.[39] These variants should not be conflated with "homogeneous equations," particularly in differential equations, where the term denotes equations with zero right-hand side (linear case) or right-hand sides of degree zero (nonlinear case), unrelated to the scaling of the function itself.[40]
Related concepts
Homogeneous functions are closely related to quasi-homogeneous functions, which extend the scaling property by assigning weighted degrees to different variables, so that the function satisfies a modified Euler relation with respect to these weights.[41] This weighted approach allows for more flexible scaling behaviors in multivariable settings and is often applied in singularity theory and algebraic geometry.[42]

Subhomogeneous functions relax the strict scaling equality of homogeneous functions, typically requiring f(tx) \leq t f(x) for t \geq 1 and x in the domain.[43] This inequality form contrasts with the precise equality of homogeneous functions and is prevalent in optimization and norm theory, where it ensures bounded growth under scaling.[44]

In contrast to homogeneous functions, which emphasize scaling invariance, additive functions satisfy Cauchy's functional equation f(x + y) = f(x) + f(y), focusing on linearity under addition rather than multiplication by scalars.[45] While continuous solutions to Cauchy's equation are linear, and thus both additive and homogeneous of degree one, pathological solutions built from Hamel bases are additive but fail homogeneity, highlighting the distinct structural properties.[45]

In fractal geometry, self-similarity embodies a form of homogeneity under iterative scaling transformations, connecting the scaling invariance of homogeneous functions to the repetitive structures observed in natural and mathematical fractals since the late 20th century.[46] 21st-century developments have further integrated these ideas, exploring parametric homogeneity to describe non-classical self-similar patterns in discrete and continuous settings.[47]