
Homogeneous function

In mathematics, particularly in the fields of algebra and analysis, a homogeneous function is a function of multiple variables such that the function value scales by a fixed power of the scaling factor when all inputs are scaled by that factor. Formally, a function f: V \to \mathbb{R} (where V is a vector space) is homogeneous of degree k if, for all \lambda > 0 and x \in V, f(\lambda x) = \lambda^k f(x). When the scaling factor is restricted to positive scalars, the property is called positive homogeneity; the concept also appears more broadly in contexts like economics and physics.

Definitions

Positive homogeneity

A function f: V \to W between vector spaces is positively homogeneous of degree k if it satisfies f(tx) = t^k f(x) for all scalars t > 0 and all x \in V. This property captures the scaling behavior of the function under positive multiplication of its input, where the output scales by the k-th power of the scalar. The degree k may be any real number, not necessarily an integer; for instance, linear functions are positively homogeneous of degree 1, quadratic forms are of degree 2, and more generally, functions like the Cobb-Douglas form f(x,y) = x^\alpha y^\beta have degree \alpha + \beta, which can be fractional. This flexibility allows the concept to apply broadly in economics and physics, where non-integer degrees model phenomena such as non-constant returns to scale. Unlike general homogeneity, which extends the scaling to all real scalars t (including negatives), positive homogeneity restricts to t > 0 to ensure well-defined behavior on domains like the positive orthant and to sidestep sign inconsistencies that arise when t < 0 and k is non-integer. To verify positive homogeneity of degree k, substitute arbitrary x \in V and t > 0 into the scaling equation and confirm equality holds identically. The notion of homogeneous functions, including the positive case, was introduced by Leonhard Euler in his 1755 work Institutiones Calculi Differentialis, where he explored their properties in the context of differential calculus.
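The verification recipe above can be sketched numerically. The following Python check (a minimal sketch, not a proof: it samples finitely many points and scales, and the helper name `is_positively_homogeneous` is introduced here for illustration) tests the scaling equation for a Cobb-Douglas-style function of fractional degree.

```python
# Numeric spot-check of positive homogeneity: f(t*x) == t**k * f(x)
# for sample points and positive scales. A sketch, not a proof.
import math

def is_positively_homogeneous(f, degree, points, scales, tol=1e-9):
    """Return True if f(t*x) matches t**degree * f(x) on all samples."""
    for x in points:
        for t in scales:
            scaled = f(*(t * xi for xi in x))
            expected = t**degree * f(*x)
            if not math.isclose(scaled, expected, rel_tol=tol):
                return False
    return True

f = lambda x, y: x**0.3 * y**0.5          # degree 0.3 + 0.5 = 0.8
points = [(1.0, 2.0), (0.5, 3.0), (4.0, 0.25)]
scales = [0.5, 2.0, 10.0]
print(is_positively_homogeneous(f, 0.8, points, scales))  # True
```

The same helper rejects a wrong candidate degree, since for example 0.5^{1.0} differs from 0.5^{0.8}.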

General homogeneity

A function f: \mathbb{R}^n \to \mathbb{R} is homogeneous of degree k if it satisfies the scaling equation f(tx) = t^k f(x) for all t \in \mathbb{R} and all x \in \mathbb{R}^n. This general form of homogeneity extends the concept beyond positive scalars and applies the scaling property uniformly across the real line. For t < 0, the condition requires that t^k remains real-valued to preserve the real output of f. General homogeneity implies positive homogeneity, as the equation holds in particular for all t > 0. The converse holds if f(-x) = (-1)^k f(x) for all x \in \mathbb{R}^n (assuming k is an integer), meaning the function is even when k is even and odd when k is odd. Functions satisfying general homogeneity may not be defined at t = 0, or the property may fail there, since f(0 \cdot x) = 0^k f(x) leads to inconsistencies unless f(0) = 0 for k > 0. However, the limit as t \to 0 often provides insight into behavior near the origin, with \lim_{t \to 0} f(tx)/t^k = f(x) confirming the scaling in the limit.
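The parity criterion can be illustrated concretely. The sketch below (under the assumption of a specific example function chosen here) checks that a positively homogeneous function of odd integer degree satisfying f(-x) = -f(x) also obeys the scaling equation for a negative scalar.

```python
# Parity criterion sketch: for integer degree k, positive homogeneity
# plus f(-x) = (-1)**k * f(x) gives homogeneity for negative t as well.
f = lambda x, y: x**3 + x * y**2   # homogeneous of degree k = 3 (odd)
k = 3
for (x, y) in [(1.0, 2.0), (-0.5, 3.0)]:
    # parity: f(-x) = (-1)**k f(x)
    assert abs(f(-x, -y) - (-1)**k * f(x, y)) < 1e-12
    # general homogeneity then extends to t < 0:
    t = -2.0
    assert abs(f(t*x, t*y) - t**k * f(x, y)) < 1e-9
print("parity and general homogeneity verified")
```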

Properties

Euler's theorem

If f is a differentiable homogeneous function of degree k on \mathbb{R}^n, then Euler's theorem states that \sum_{i=1}^n x_i \frac{\partial f}{\partial x_i}(x) = k f(x) for all x in the domain where the partial derivatives exist. This follows by differentiating the homogeneity relation f(tx) = t^k f(x) with respect to t and setting t = 1: \frac{d}{dt} f(tx) = k t^{k-1} f(x) \implies \sum_{i=1}^n x_i \frac{\partial f}{\partial x_i}(tx) = k t^{k-1} f(x). Substituting t=1 yields the result. The theorem is particularly useful in economics and in applications like thermodynamics, where extensive properties are homogeneous of degree 1.
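Euler's identity can be checked numerically without symbolic differentiation. The sketch below (an illustration under an example function chosen here, with partial derivatives approximated by central finite differences) confirms \sum_i x_i \partial f/\partial x_i = k f(x) up to discretization error.

```python
# Numeric check of Euler's theorem via central finite differences.
def euler_lhs(f, x, h=1e-6):
    """Approximate sum_i x_i * df/dx_i at point x."""
    total = 0.0
    for i in range(len(x)):
        xp = list(x); xm = list(x)
        xp[i] += h; xm[i] -= h
        total += x[i] * (f(xp) - f(xm)) / (2 * h)
    return total

f = lambda v: v[0]**2 * v[1] + v[1]**3      # homogeneous of degree 3
x = [1.5, 2.0]
print(abs(euler_lhs(f, x) - 3 * f(x)) < 1e-5)  # True
```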

Degrees and scaling

A homogeneous function of degree k satisfies the scaling relation f(tx) = t^k f(x) for all scalars t > 0 and arguments x in the domain, meaning the function value scales by a power of t under uniform multiplication of all inputs by t. This single-degree case captures isotropic scaling behavior, where the exponent k is constant across all directions. A generalization to functions on \mathbb{R}^n introduces multi-degrees (k_1, \dots, k_n), where homogeneity holds under independent scalings of each variable: f(t_1 x_1, \dots, t_n x_n) = t_1^{k_1} \cdots t_n^{k_n} f(x_1, \dots, x_n) for all t_i > 0. This allows anisotropic scaling, with each variable contributing its own exponent, and is employed in multi-graded algebraic structures such as Demazure modules. The total degree is then \sum k_i, reflecting the combined scaling effect. Scaling laws govern how degrees behave under operations like composition. For single-degree functions, if g is homogeneous of degree m and f of degree k, the composition f \circ g is homogeneous of degree km: (f \circ g)(tx) = f(g(tx)) = f(t^m g(x)) = t^{mk} f(g(x)) = t^{km} (f \circ g)(x). To arrive at this, substitute the homogeneity of g into the argument of f, then apply f's homogeneity to the resulting scaled input. In the multi-degree case, the output degree vector depends on the component-wise scalings induced by the inner function, often requiring compatible multi-degrees for the composition to remain multi-homogeneous. The general equation for multi-homogeneous functions is the defining relation given above, which extends the single-degree case by decoupling the scalings t_i. Key properties include preservation of degree under addition: if f and g are both homogeneous of the same single degree k, then f + g is homogeneous of degree k, since (f + g)(tx) = f(tx) + g(tx) = t^k f(x) + t^k g(x) = t^k (f + g)(x). The reasoning follows directly from linearity of the operation and matching exponents.
For multi-degrees, addition preserves the multi-degree (k_1, \dots, k_n) only if f and g share the same multi-degree, as mismatched exponents would violate the independent scaling relation.
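The composition law for single-degree functions can be checked directly. The sketch below (an illustration with one-variable example functions chosen here) verifies that composing a degree-2 inner function with a degree-3 outer function yields degree 6 = 3 \cdot 2.

```python
# Composition law sketch: g of degree m, f of degree k, f∘g of degree k*m.
g = lambda x: x**2          # inner, degree m = 2
f = lambda y: y**3          # outer, degree k = 3
h = lambda x: f(g(x))       # composition, expected degree k*m = 6
for x in (0.7, 1.3, 2.5):
    for t in (0.5, 2.0, 3.0):
        assert abs(h(t * x) - t**6 * h(x)) < 1e-9 * abs(h(x))
print("composition has degree k*m = 6")
```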

Examples

Basic scalar examples

For univariate functions from \mathbb{R}_+ \to \mathbb{R}, power functions are canonical examples of homogeneous functions. The function f(x) = x^k for x > 0 and any real k satisfies f(tx) = (tx)^k = t^k x^k = t^k f(x) for t > 0, making it homogeneous of degree k. For instance, f(x) = x^3 is homogeneous of degree 3, while f(x) = \sqrt{x} is homogeneous of degree 1/2.

Multivariable functions

In multivariable calculus, a homogeneous function of degree k is defined for a mapping f: \mathbb{R}^n \to \mathbb{R} by the scaling relation f(t \mathbf{x}) = t^k f(\mathbf{x}) for all scalars t > 0 and vectors \mathbf{x} \in \mathbb{R}^n \setminus \{\mathbf{0}\}. This condition captures the function's behavior under uniform radial scaling of its inputs, highlighting how the output scales proportionally to the k-th power of the scaling factor. Such functions are fundamental in analyzing symmetry and scaling laws in higher dimensions, building on the univariate case by applying the transformation simultaneously to all coordinates. A representative example is the quadratic form f(x, y) = x^2 + y^2, which satisfies f(tx, ty) = (tx)^2 + (ty)^2 = t^2 (x^2 + y^2) = t^2 f(x, y) and is thus homogeneous of degree 2. This extends naturally to \mathbb{R}^n, where f(x_1, \dots, x_n) = \sum_{i=1}^n x_i^2 is homogeneous of degree 2, as direct substitution yields f(t x_1, \dots, t x_n) = t^2 \sum_{i=1}^n x_i^2 = t^2 f(x_1, \dots, x_n). To verify homogeneity for any candidate function, one computes the scaled version explicitly: for instance, with f(x, y) = x^2 + y^2, the identity f(tx, ty) = t^2 f(x, y) holds for all t > 0, confirming the degree. This verification process is straightforward and directly tests the defining property without requiring additional tools. The radial nature of homogeneous functions becomes evident in polar coordinates, where the scaling aligns with the radial direction. In \mathbb{R}^2, with x = r \cos \theta and y = r \sin \theta, a homogeneous function of degree k takes the separated form f(r, \theta) = r^k g(\theta), where g depends only on the angular coordinate \theta. For purely radial functions, which depend solely on the Euclidean norm \|\mathbf{x}\| = r such that f(\mathbf{x}) = g(r), homogeneity of degree k forces the angular factor to be constant, reducing the radial profile to g(r) = c r^k for some constant c.
This structure underscores the directional invariance in the angular component for general cases, while purely radial functions exhibit full rotational symmetry. Functions may also exhibit partial homogeneity, being homogeneous of degree k with respect to a proper subset of their variables while remaining unaffected by scaling in the others. For example, consider f(x, y, z) = x^2 + y^2; scaling only the first two variables gives f(tx, ty, z) = (tx)^2 + (ty)^2 = t^2 (x^2 + y^2) = t^2 f(x, y, z), confirming homogeneity of degree 2 in x and y alone. This selective scaling applies the property directionally in the variable space, useful for modeling systems with grouped inputs, such as certain production functions where outputs depend homogeneously on specific factors. Verification follows similarly by fixing the unaffected variables and checking the relation on the scaled subset.
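The partial-homogeneity example above can be verified mechanically by scaling only the relevant variables. A minimal sketch:

```python
# Partial homogeneity sketch: f(x, y, z) = x^2 + y^2 is homogeneous of
# degree 2 in (x, y) while z is held fixed.
f = lambda x, y, z: x**2 + y**2
for (x, y, z) in [(1.0, 2.0, 5.0), (0.3, -0.7, 1.0)]:
    for t in (0.5, 2.0, 4.0):
        assert abs(f(t*x, t*y, z) - t**2 * f(x, y, z)) < 1e-12
print("homogeneous of degree 2 in x and y alone")
```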

Polynomials and rationals

Homogeneous polynomials consist of terms all of the same total degree. For example, f(x, y, z) = 2x^2 y + 3 x y z + z^3 is a homogeneous polynomial of degree 3, as each term has total degree 3, and f(t x, t y, t z) = t^3 f(x, y, z). More generally, any sum of monomials of degree k is homogeneous of degree k. Rational functions can also be homogeneous. A ratio of two homogeneous polynomials of degrees m and n is homogeneous of degree m - n. For instance, f(x, y) = \frac{x^2 - y^2}{x^2 + y^2} has both numerator and denominator homogeneous of degree 2, so f(tx, ty) = f(x, y) and it is homogeneous of degree 0. Another example is f(x, y) = \frac{x + y}{x^2 + y^2}, where the numerator has degree 1 and the denominator degree 2, yielding homogeneity of degree -1: f(tx, ty) = t^{-1} f(x, y).
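The degree m - n rule for ratios can be spot-checked numerically. A minimal sketch using the second example (degree 1 over degree 2, hence overall degree -1):

```python
# Ratio of homogeneous polynomials: degree 1 numerator over degree 2
# denominator gives overall degree 1 - 2 = -1, i.e. f(tx, ty) = f(x, y)/t.
f = lambda x, y: (x + y) / (x**2 + y**2)
for (x, y) in [(1.0, 2.0), (3.0, -1.0)]:
    for t in (0.5, 2.0, 10.0):
        assert abs(f(t*x, t*y) - f(x, y) / t) < 1e-12
print("homogeneous of degree -1")
```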

Norms and linear maps

Vector norms are homogeneous of degree 1. The Euclidean norm \|\mathbf{x}\|_2 = \sqrt{\sum_{i=1}^n x_i^2} satisfies \|t \mathbf{x}\|_2 = t \|\mathbf{x}\|_2 for t > 0, confirming degree 1. Similarly, the p-norm \|\mathbf{x}\|_p = \left( \sum_{i=1}^n |x_i|^p \right)^{1/p} for 1 \leq p < \infty is homogeneous of degree 1. Linear maps between vector spaces are homogeneous of degree 1. For a linear transformation T: \mathbb{R}^n \to \mathbb{R}^m represented by a matrix A, T(t \mathbf{x}) = A (t \mathbf{x}) = t (A \mathbf{x}) = t T(\mathbf{x}) for t > 0, so T is homogeneous of degree 1. This property holds for any linear operator.
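The degree-1 homogeneity of p-norms is easy to check for a few values of p. A minimal sketch (the helper `p_norm` is defined here for illustration):

```python
# Degree-1 homogeneity of the p-norm: ||t*x||_p = t * ||x||_p for t > 0.
def p_norm(x, p):
    return sum(abs(xi)**p for xi in x)**(1.0 / p)

x = [3.0, -4.0, 12.0]
for p in (1, 2, 3.5):
    for t in (0.5, 2.0, 7.0):
        scaled = p_norm([t * xi for xi in x], p)
        assert abs(scaled - t * p_norm(x, p)) < 1e-9
print("p-norms are homogeneous of degree 1")
```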

Min-max and non-examples

If f and g are homogeneous functions of the same degree k, then the minimum \min(f, g) is also homogeneous of degree k, since \min(f(tx), g(tx)) = \min(t^k f(x), t^k g(x)) = t^k \min(f(x), g(x)) for t > 0. The same holds for the maximum \max(f, g). A classic example is the Leontief production or utility function f(\mathbf{x}) = \min_i x_i for \mathbf{x} \in \mathbb{R}^m_+, which is homogeneous of degree 1 because f(t\mathbf{x}) = \min_i (t x_i) = t \min_i x_i = t f(\mathbf{x}) for t > 0. For scalar arguments, this property simplifies to \min(tx, ty) = t \min(x, y) when t > 0, demonstrating positive homogeneity of degree 1. Non-examples help delineate the boundaries of homogeneity. Consider f(x, y) = x^2 + y^3; scaling yields f(tx, ty) = t^2 x^2 + t^3 y^3, which does not equal t^k (x^2 + y^3) for any fixed k across all x, y > 0, as the term degrees differ. Similarly, f(x) = |x| + 1 fails homogeneity: f(tx) = t |x| + 1, which mismatches t^k (|x| + 1) for k = 1 (yielding t |x| + t) and other k values, since the constant term disrupts uniform scaling. A frequent misconception arises from conflating homogeneity with additivity. Additive functions satisfy f(x + y) = f(x) + f(y), but without the scaling condition f(tx) = t^k f(x), they are not necessarily homogeneous; for instance, pathological solutions to Cauchy's functional equation over \mathbb{R} (non-linear, constructed via a Hamel basis under the axiom of choice) violate homogeneity despite additivity. In contrast, standard linear functions like f(x, y) = x + y are both additive and homogeneous of degree 1, but this coincidence does not generalize.
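The Leontief example can be verified directly; since t > 0 preserves order, the scaled minimum equals exactly t times the minimum. A minimal sketch:

```python
# Leontief function sketch: f(x) = min_i x_i is homogeneous of degree 1,
# because multiplying by t > 0 preserves which coordinate is smallest.
f = lambda xs: min(xs)
for xs in ([2.0, 5.0, 3.0], [0.4, 0.9]):
    for t in (0.5, 2.0, 6.0):
        assert f([t * x for x in xs]) == t * f(xs)
print("min is positively homogeneous of degree 1")
```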

Applications

Differential equations

Homogeneous first-order ordinary differential equations (ODEs) take the form y' = f\left(\frac{y}{x}\right), where f is a homogeneous function of degree zero, meaning the equation remains unchanged under the scaling (x, y) \to (tx, ty) for t > 0. These equations are nonlinear but can be reduced to separable form through the substitution v = \frac{y}{x}, or equivalently y = vx. Differentiating y = vx with respect to x using the product rule yields y' = v + x \frac{dv}{dx}. Substituting into the original equation gives v + x \frac{dv}{dx} = f(v), which rearranges to the separable equation x \frac{dv}{dx} = f(v) - v, or \frac{dv}{f(v) - v} = \frac{dx}{x}. Integrating both sides produces \int \frac{dv}{f(v) - v} = \ln |x| + c, where c is the constant of integration, and back-substituting v = \frac{y}{x} yields the implicit solution. A simple example is the equation y' = \frac{y}{x}, where f(v) = v. The substitution v = \frac{y}{x} leads to v + x \frac{dv}{dx} = v, simplifying to x \frac{dv}{dx} = 0, so \frac{dv}{dx} = 0 and v = c. Thus, \frac{y}{x} = c, or y = cx, which is the general solution for x > 0 or x < 0 (adjusting the absolute value as needed). In contrast, non-homogeneous equations like y' = \frac{y}{x} + g(x) cannot be solved by this substitution alone and typically require integrating factors or other methods. For higher-order linear ODEs, homogeneous Euler-Cauchy equations have the form x^2 y'' + a x y' + b y = 0, where the coefficients scale with powers of x matching the derivative orders, reflecting homogeneity under x \to tx. The solution method assumes a trial form y = x^m (for x > 0), leading to derivatives y' = m x^{m-1} and y'' = m(m-1) x^{m-2}. Substituting gives the characteristic equation m(m-1) + a m + b = 0, a quadratic in m. The roots determine the general solution: for distinct real roots m_1, m_2, it is y = c_1 x^{m_1} + c_2 x^{m_2}; for a repeated root m, y = (c_1 + c_2 \ln x) x^m; and for complex roots \alpha \pm i \beta, y = x^\alpha (c_1 \cos(\beta \ln x) + c_2 \sin(\beta \ln x)).
An example is 4x^2 y'' + 17x y' - 12 y = 0. Assuming y = x^m, the characteristic equation is 4m(m-1) + 17m - 12 = 0, or 4m^2 + 13m - 12 = 0, with roots m = \frac{3}{4}, -4. The general solution is y = c_1 x^{3/4} + c_2 x^{-4} for x > 0. Non-homogeneous Euler-Cauchy equations, such as x^2 y'' + a x y' + b y = g(x), add a particular solution found via undetermined coefficients or variation of parameters, but the homogeneous part follows the same method.
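The roots of the characteristic equation in this example can be recomputed with the quadratic formula; a minimal sketch:

```python
# Solve the Euler-Cauchy characteristic equation 4m^2 + 13m - 12 = 0
# from the worked example; expected roots are m = 3/4 and m = -4.
import math

a, b, c = 4.0, 13.0, -12.0
disc = math.sqrt(b**2 - 4*a*c)            # sqrt(361) = 19
roots = sorted([(-b - disc) / (2*a), (-b + disc) / (2*a)])
print(roots)  # [-4.0, 0.75]
# Each root satisfies 4m(m-1) + 17m - 12 = 0:
for m in roots:
    assert abs(4*m*(m - 1) + 17*m - 12) < 1e-12
```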

Economics and optimization

In economic theory, homogeneous functions play a central role in modeling production processes, particularly through production functions that exhibit well-defined returns to scale. The Cobb-Douglas production function, introduced in 1928, takes the form f(x, y) = A x^\alpha y^\beta, where x and y represent inputs such as labor and capital, A > 0 is a productivity parameter, and \alpha, \beta > 0 are elasticities. This function is homogeneous of degree \alpha + \beta, meaning that scaling all inputs by a factor \lambda > 0 scales output by \lambda^{\alpha + \beta}. If \alpha + \beta = 1, the function displays constant returns to scale, a property widely assumed in neoclassical growth models to analyze long-term economic expansion. Another prominent example is the constant elasticity of substitution (CES) production function, developed in 1961, given by f(x, y) = \left( \alpha x^\rho + (1 - \alpha) y^\rho \right)^{1/\rho}, which is homogeneous of degree 1 and thus linearly homogeneous. The CES form generalizes the Cobb-Douglas function by allowing a constant but non-unitary elasticity of substitution \sigma = 1/(1 - \rho) between inputs, facilitating analysis of factor substitution in production. In utility theory, homogeneous utility functions imply that marginal rates of substitution remain constant along rays from the origin, ensuring consistent preferences under scaling. For cost functions, homogeneity of degree 1 in input prices ensures that doubling all prices doubles costs, aligning with no-arbitrage conditions. Homogeneity also invokes Euler's theorem, which states that for a function f homogeneous of degree \gamma, \sum_i x_i \frac{\partial f}{\partial x_i} = \gamma f(x). In production theory, when \gamma = 1 under constant returns, this equates total output to the sum of marginal products valued at input levels, justifying factor payments that exhaust output in competitive markets, such as wages equaling labor's marginal product and rents equaling capital's. This application underpins distribution theory and the theory of the firm.
In optimization contexts, homogeneous objectives in economic problems, like maximizing linearly homogeneous production subject to linear constraints, yield solutions along rays from the origin, where optimal input proportions remain fixed regardless of scale. This ray property simplifies solving linear programming formulations in resource allocation, as seen in production planning where input ratios are invariant to output levels. More recently, post-2020 applications in machine learning scaling laws model loss landscapes using homogeneous growth assumptions, where neuron allocation scales proportionally with network size, leading to predictable performance improvements as \ell \propto N^{-\alpha} for model size N and exponent \alpha. This framework, assuming homogeneous resource distribution across subtasks, aligns empirical observations in large language models with theoretical predictions.
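The constant-returns case described above is easy to illustrate numerically. The sketch below (with parameter values chosen here for illustration) checks that a Cobb-Douglas function with \alpha + \beta = 1 scales output exactly in proportion to inputs.

```python
# Constant returns to scale sketch: Cobb-Douglas with alpha + beta = 1
# is homogeneous of degree 1, so scaling inputs by lam scales output by lam.
A, alpha = 2.0, 0.3
f = lambda x, y: A * x**alpha * y**(1 - alpha)
x, y = 4.0, 9.0
for lam in (0.5, 2.0, 3.0):
    assert abs(f(lam*x, lam*y) - lam * f(x, y)) < 1e-9
print("doubling inputs doubles output under constant returns")
```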

Generalizations

Monoid actions

The concept of a homogeneous function can be generalized to settings where a monoid G acts on a space V. A function f: V \to \mathbb{R} (or more generally to another space) is said to be homogeneous of degree k with respect to the action if there exists a character \chi: G \to \mathbb{R}^\times such that f(g \cdot v) = \chi(g)^k f(v) for all g \in G, v \in V. In the classical case, G = (\mathbb{R}_{>0}, \cdot) acts by scaling, and \chi(g) = g. This framework encompasses actions by other monoids, such as those arising in graded algebraic structures.

Distributions and generalized functions

In the theory of distributions, a distribution T on \mathbb{R}^n is said to be homogeneous of degree k if, for every test function \phi \in \mathcal{D}(\mathbb{R}^n) and every \lambda > 0, \langle T, \phi(\cdot / \lambda) \rangle = \lambda^{k + n} \langle T, \phi \rangle, where n is the dimension of the space. This weak definition extends the classical notion of homogeneity to generalized functions, allowing for singularities at the origin that prevent pointwise evaluation. Prominent examples include the Dirac delta distribution \delta, which is homogeneous of degree -n, satisfying \langle \delta, \phi(\cdot / \lambda) \rangle = \phi(0) = \lambda^0 \langle \delta, \phi \rangle, consistent with k + n = 0. Derivatives of homogeneous distributions inherit adjusted degrees; if T is homogeneous of degree k, then its distributional derivative \partial_j T is homogeneous of degree k - 1 for each coordinate direction j. For instance, principal value distributions like \mathrm{p.v.} (1/|x|^m) in \mathbb{R}^n (for suitable m) are homogeneous of degree -m. A key characterization involves the Euler operator E = \sum_{j=1}^n x_j \partial_j, defined distributionally by \langle E T, \phi \rangle = -\sum_{j=1}^n \langle T, \partial_j (x_j \phi) \rangle. If T is homogeneous of degree k, then E T = k T. This eigenvalue relation characterizes homogeneity in the distributional sense, analogous to the smooth case via Euler's theorem. The Fourier transform preserves this structure up to a shift: if T is homogeneous of degree k, its Fourier transform \hat{T} is homogeneous of degree -k - n. For example, the Fourier transform of |x|^s (homogeneous of degree s, \mathrm{Re}(s) > -n) is proportional to |\xi|^{-n-s}. In partial differential equations, homogeneous distributions play a central role as fundamental solutions. For the wave operator \Box = \partial_t^2 - \Delta in d+1 dimensions, the forward fundamental solution E_+ is a homogeneous distribution of degree 1 - d, supported in the forward light cone, satisfying \Box E_+ = \delta_{(0,0)}.
This enables explicit representation formulas for solutions to the inhomogeneous wave equation \Box \phi = F with initial data, such as Kirchhoff's formula in three spatial dimensions. Since the 1980s, homogeneous distributions have been integral to microlocal analysis, where they describe singularities via wave front sets in phase space. In this framework, the principal symbol of pseudodifferential operators acts on homogeneous components of asymptotic expansions near singularities. More recently, in the 2020s, microlocal techniques involving homogeneous distributions have advanced quantum field theory on curved spacetimes, particularly in analyzing the microlocal spectrum condition and the renormalization of interacting fields, ensuring that Hadamard states satisfy sharp wave front set conditions.

Terminology

Name variants

The concept of a homogeneous function traces its origins to the 18th century, when Leonhard Euler introduced the term "functiones homogeneæ" in Latin to describe polynomials where all terms have the same degree, as detailed in his 1748 work Introductio in analysin infinitorum. This terminology evolved over time to encompass more general functions satisfying scaling relations, shifting from Euler's polynomial-focused definition to the modern broader usage in multivariable calculus and analysis by the 19th century. Common variants include "homogeneous of degree k," which specifies the scaling exponent in the relation f(\lambda \mathbf{x}) = \lambda^k f(\mathbf{x}) for \lambda > 0 and \mathbf{x} \in \mathbb{R}^n. In contexts involving self-similarity or renormalization, such functions are termed "scaling functions," reflecting their role in describing self-similar behaviors under rescaling. Field-specific nomenclature further diversifies the term. In economics, functions homogeneous of degree 1 are dubbed "linearly homogeneous," signifying constant returns to scale in production models, as seen in analyses of the Solow growth model. In physics, especially in statistical mechanics and field theory, they are referred to as "scale-invariant" functions, capturing systems without intrinsic length scales, such as those near critical points. A key distinction in functional analysis involves positive versus absolute homogeneity. Positive homogeneity requires the relation f(\lambda \mathbf{x}) = \lambda^k f(\mathbf{x}) to hold only for \lambda > 0, allowing functions like certain gauges or Minkowski functionals to qualify even if their behavior under negative scaling differs. Absolute homogeneity, by contrast, extends the property to all real \lambda using |\lambda|^k, which is standard for vector norms to ensure consistency with the norm axioms. These variants avoid confusion with "homogeneous equations," particularly in differential equations, where the term denotes equations with zero right-hand side (linear case) or right-hand sides that are homogeneous of degree zero in the nonlinear case, unrelated to the degree of the function itself.
Homogeneous functions are closely related to quasi-homogeneous functions, which extend the scaling property by incorporating weighted degrees for different variables, such that the function satisfies a modified Euler relation with respect to these weights. This weighted approach allows for more flexible scaling behaviors in multivariable settings, often applied in singularity theory and algebraic geometry. Subhomogeneous functions provide a relaxation of the strict scaling equality of homogeneous functions, typically requiring f(tx) \leq t f(x) for t \geq 1 and x in the domain, which captures sublinear growth under scaling. This inequality form contrasts with the exact scaling of homogeneous functions and is prevalent in optimization and norm theory, where it ensures bounded growth under dilation. In contrast to homogeneous functions, which emphasize scaling invariance, additive functions satisfy f(x + y) = f(x) + f(y), focusing on linearity under addition rather than multiplication by scalars. While continuous solutions to Cauchy's functional equation are linear and thus both additive and homogeneous of degree one, pathological solutions using Hamel bases are additive but fail homogeneity, highlighting the distinct structural properties. In fractal geometry, self-similarity embodies a form of homogeneity under iterative transformations, connecting the invariance of homogeneous functions to the repetitive structures observed in natural and mathematical fractals since the late 20th century. 21st-century developments have further integrated these ideas, exploring generalized notions of homogeneity to describe non-classical self-similar patterns in discrete and continuous settings.