Mean-preserving spread
A mean-preserving spread is a transformation of a probability distribution that increases its riskiness, typically reflected in greater variance or dispersion, while preserving the expected value, or mean, of the distribution. The concept allows lotteries or random variables with identical means but differing degrees of uncertainty to be compared, with the more spread-out distribution deemed riskier.[1][2] The notion of a mean-preserving spread was formalized by economists Michael Rothschild and Joseph E. Stiglitz in their seminal 1970 paper, which provided a rigorous framework for analyzing increases in risk under uncertainty. They defined one distribution G as a mean-preserving spread of another distribution F if G can be obtained from F by adding noise with conditional mean zero, so that the overall mean remains unchanged while the variance weakly increases. Equivalently, F second-order stochastically dominates G: the integral of the cumulative distribution function of G from the lower bound up to any point is at least the corresponding integral for F, with equality at the upper bound, which implies that every risk-averse decision-maker weakly prefers outcomes under F to those under G.[3][1]
Conceptual Foundations
Informal Definition
A mean-preserving spread (MPS) is a transformation applied to the probability distribution of a random variable that disperses the possible outcomes more widely around the central tendency while ensuring that the expected value, the long-run average outcome, remains unchanged.[3] This concept, introduced by Rothschild and Stiglitz, provides a way to compare the riskiness of different distributions that share the same mean, emphasizing how variability can intensify without shifting the overall average payoff.[3] At its core, an MPS heightens uncertainty by reallocating probability mass toward more extreme values, either higher or lower, at the expense of probabilities near the mean, thereby increasing the perceived risk without any compensatory gain in expectation.[3] Probability distributions describe the relative frequencies of outcomes for a random variable, such as returns on an investment or income levels, while the expected value quantifies the distribution's central location under repeated realizations.[3] This preservation of the mean distinguishes an MPS from other changes, such as those that alter both mean and variance simultaneously. Intuitively, an MPS can be visualized as shaking a bag of marbles, each representing an outcome, so that they end up more scattered in position but with the same center of mass, amplifying the unpredictability of drawing any single marble.[4] This notion underpins broader analyses of risk in decision-making, connecting informally to second-order stochastic dominance, under which one distribution is preferred by all risk-averse agents over another with equal mean but greater spread.[3]
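The noise-based construction described above can be sketched numerically. The following Python snippet is an illustrative example, not part of the cited formulation: it builds an approximate mean-preserving spread of a simple two-point lottery by adding noise with conditional mean zero to each realization, then checks that the mean is (approximately) unchanged while the variance grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Original lottery: $50 or $150 with equal probability (mean 100, variance 2500).
outcomes = np.array([50.0, 150.0])
probs = np.array([0.5, 0.5])

# Draw many realizations of the original lottery.
x = rng.choice(outcomes, size=1_000_000, p=probs)

# Add noise with conditional mean zero given x: +/-30 with equal probability.
noise = rng.choice([-30.0, 30.0], size=x.size)
y = x + noise  # y is a mean-preserving spread of x

print(f"mean of x: {x.mean():.2f}, mean of y: {y.mean():.2f}")  # both close to 100
print(f"var  of x: {x.var():.1f}, var  of y: {y.var():.1f}")    # var(y) ~ var(x) + 900
```

Because the added noise has mean zero conditional on each outcome, the variance of the resulting lottery rises by exactly the variance of the noise, while the mean is preserved.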
Intuition and Motivation
In economics, the concept of a mean-preserving spread (MPS) provides a framework for analyzing how the addition of noise or variability to economic outcomes influences decision-making while keeping the expected value unchanged, thereby isolating the effects of risk from changes in anticipated returns.[5] This approach is particularly useful in modeling scenarios where agents face uncertain prospects, such as investment returns or income fluctuations, because it allows researchers to examine pure increases in uncertainty without confounding shifts in the mean.

The intuition underlying MPS lies in its connection to risk preferences: risk-averse individuals or firms prefer distributions with less spread when the mean outcome is the same, since greater variability amplifies the potential for unfavorable realizations despite the unchanged expectation.[6] This preference reflects a fundamental aversion to uncertainty, in which the downside of heightened dispersion outweighs the symmetric upside, leading agents to value stability in outcomes such as wealth or profits.[7] In expected utility theory, such spreads are systematically disliked by agents with concave utility functions, underscoring the role of MPS in characterizing risk-averse behavior.[8]

Historically, the formalization of MPS emerged from efforts to define "more risky" prospects rigorously, building on earlier work in portfolio theory by Harry Markowitz, who emphasized variance as a measure of risk in mean-variance optimization. Markowitz's 1952 framework laid the groundwork by treating increased variance at a fixed mean as heightened risk, but it was Michael Rothschild and Joseph Stiglitz who introduced MPS in 1970 to provide a distribution-based ordering that captures intuitive notions of risk augmentation beyond simple variance.[3] Their definition enabled precise comparisons of lotteries or investments solely on riskiness, distinguishing pure risk escalation from alterations in expected payoffs. This distinction is crucial in economic theory and practice, as it forms the basis for evaluating choices under uncertainty, such as insurance decisions or asset allocation, where policymakers and investors seek to mitigate risk without altering baseline expectations.[9] By focusing on spreads that preserve the mean, MPS has become foundational for assessing the welfare implications of risk in contexts ranging from financial markets to public policy.[6]
Mathematical Formulations
Definition via Stochastic Dominance
A mean-preserving spread formalizes the notion of increased riskiness while preserving the expected value. Specifically, a random variable with cumulative distribution function (CDF) G is a mean-preserving spread of another random variable with CDF F if both have the same mean and F second-order stochastically dominates G.[3] This dominance implies that every risk-averse decision maker weakly prefers F to G, reflecting the added dispersion in G without any change in the mean.[3]

Second-order stochastic dominance (SSD) provides the mathematical foundation for this concept. A distribution F second-order stochastically dominates G if, for all x,
\int_{-\infty}^{x} [G(t) - F(t)] \, dt \geq 0,
with equality as x \to \infty when the means are equal.[3] This condition ensures that, in an integrated sense, the probability mass accumulated up to any point x under G is no smaller than under F, capturing a mean-equivalent increase in spread for G. The non-negativity of the integral reflects less "downside risk" under F than under G, since the area between the CDFs, where G places more weight on extreme outcomes, is accounted for cumulatively.[3]

This framework builds on first-order stochastic dominance, which serves as a prerequisite for understanding SSD. A distribution F first-order stochastically dominates G if F(x) \leq G(x) for all x, meaning F assigns no higher probability than G to outcomes below any threshold.[3] SSD relaxes this by integrating the differences between the CDFs, allowing the CDFs themselves to cross while penalizing greater variability in the tails, which aligns directly with the mean-preserving property.[3]
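The integral condition can be checked numerically for simple lotteries. The sketch below is an informal illustration rather than a formal proof: it evaluates \int_{-\infty}^{x} [G(t) - F(t)] \, dt on a grid for two equal-mean lotteries, F paying $50 or $150 with equal probability and G paying $0 or $200 with equal probability. The accumulated area stays non-negative and returns to zero in the limit, consistent with G being a mean-preserving spread of F. The helper function and grid bounds are illustrative choices.

```python
import numpy as np

def cdf(outcomes, probs, x):
    """Step-function CDF of a discrete distribution evaluated at the points x."""
    outcomes = np.asarray(outcomes, dtype=float)
    probs = np.asarray(probs, dtype=float)
    return np.array([probs[outcomes <= xi].sum() for xi in np.atleast_1d(x)])

# F: $50 or $150 with equal probability; G: $0 or $200 with equal probability.
F_out, F_p = [50, 150], [0.5, 0.5]
G_out, G_p = [0, 200], [0.5, 0.5]

grid = np.linspace(-100, 300, 4001)                  # evaluation grid
diff = cdf(G_out, G_p, grid) - cdf(F_out, F_p, grid)  # G(x) - F(x)
accumulated = np.cumsum(diff) * (grid[1] - grid[0])   # approximates the SSD integral

print("min accumulated area:", accumulated.min())     # >= 0 up to discretization error
print("value as x -> infinity:", accumulated[-1])     # ~ 0, so the means are equal
```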
Integral and Moment Conditions
A distribution G represents a mean-preserving spread (MPS) of another distribution F if the following integral condition holds:
\int_{-\infty}^{x} G(t) \, dt \geq \int_{-\infty}^{x} F(t) \, dt \quad \text{for all } x \in \mathbb{R},
with equality in the limit as x \to \infty. This formulation captures the notion that G exhibits greater dispersion than F while preserving the mean, since the limiting equality ensures \mathbb{E}[X_G] = \mathbb{E}[X_F].[1][2]

A related characterization involves moments: if G is a mean-preserving spread of F, then the means are identical and the second central moment (variance) of G is at least as large as that of F, that is, \mathrm{Var}(X_G) \geq \mathrm{Var}(X_F); equal means with higher variance is thus necessary for an MPS, although not sufficient.[1][2] To relate these properties to the second-order stochastic dominance (SSD) condition, note that by the expression for the mean, \mathbb{E}[X] = \int_{0}^{\infty} (1 - F(x)) \, dx - \int_{-\infty}^{0} F(x) \, dx, the limiting equality of the integrated CDFs is equivalent to equality of the means. The variance inequality then emerges via integration by parts applied to the SSD integral, yielding
\mathrm{Var}(X_G) - \mathrm{Var}(X_F) = -2 \int_{-\infty}^{\infty} (x - \mu) [G(x) - F(x)] \, dx = 2 \int_{-\infty}^{\infty} \int_{-\infty}^{x} [G(t) - F(t)] \, dt \, dx \geq 0,
where \mu is the common mean and the non-negativity follows from the integral condition above.[1][2] For distributions with finite means, the integral condition is equivalent to the SSD-based definition of an MPS, while the moment condition is an implication of it rather than an equivalent characterization.[1][2]
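The integration-by-parts argument can be written out explicitly. The display below is a sketch under the assumptions that the second moments are finite and that the boundary terms vanish at \pm\infty; both equalities in the second line use integration by parts, and the final inequality follows from the integral condition above.

\begin{aligned}
\mathrm{Var}(X_G) - \mathrm{Var}(X_F)
  &= \int_{-\infty}^{\infty} (x-\mu)^2 \, d\bigl[G(x) - F(x)\bigr]
   = -2\int_{-\infty}^{\infty} (x-\mu)\,\bigl[G(x) - F(x)\bigr]\,dx \\
  &= -2\int_{-\infty}^{\infty} (x-\mu)\,T'(x)\,dx
   = 2\int_{-\infty}^{\infty} T(x)\,dx \;\geq\; 0,
  \qquad \text{where } T(x) = \int_{-\infty}^{x} \bigl[G(t) - F(t)\bigr]\,dt.
\end{aligned}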
Examples and Illustrations
Discrete Case
In the discrete case, mean-preserving spreads are illustrated using finite probability mass functions (PMFs) over countable outcomes, which makes it straightforward to compute means and variances and so demonstrate increased risk without any change in the expected value. A simple example begins with a degenerate distribution that pays $100 with probability 1, which has mean $100 and variance 0. A mean-preserving spread of this distribution pays $50 or $150 with probability 0.5 each, preserving the mean at $100 = 0.5 \times 50 + 0.5 \times 150 while increasing the variance to 2500 = 0.5 \times (50 - 100)^2 + 0.5 \times (150 - 100)^2.[10]

Another illustrative binary example shifts from an initial distribution with a 50% chance of $0 and a 50% chance of $200 (mean $100 = 0.5 \times 0 + 0.5 \times 200, variance 10000 = 0.5 \times (0 - 100)^2 + 0.5 \times (200 - 100)^2) to a mean-preserving spread with a 50% chance of -$50 and a 50% chance of $250 (mean $100 = 0.5 \times (-50) + 0.5 \times 250, variance 22500 = 0.5 \times (-50 - 100)^2 + 0.5 \times (250 - 100)^2). This transformation spreads the outcomes further apart while maintaining the same mean, and the higher variance quantifies the added risk.[10]

To visualize these discrete distributions, bar charts of the PMFs highlight the spread. The table below lists the PMFs for the first example; a short numerical check follows the table.

| Outcome | Original PMF | Spread PMF |
|---|---|---|
| $50 | 0 | 0.5 |
| $100 | 1 | 0 |
| $150 | 0 | 0.5 |
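The arithmetic in the examples above can be reproduced directly. The snippet below is a small illustrative check, not taken from the cited sources, that computes the mean and variance of each lottery and confirms that both spreads preserve the mean of $100 while raising the variance.

```python
import numpy as np

def mean_and_variance(outcomes, probs):
    """Return the mean and variance of a discrete distribution."""
    outcomes = np.asarray(outcomes, dtype=float)
    probs = np.asarray(probs, dtype=float)
    mu = np.dot(probs, outcomes)
    var = np.dot(probs, (outcomes - mu) ** 2)
    return mu, var

lotteries = {
    "degenerate $100":  ([100], [1.0]),
    "spread $50/$150":  ([50, 150], [0.5, 0.5]),
    "original $0/$200": ([0, 200], [0.5, 0.5]),
    "spread -$50/$250": ([-50, 250], [0.5, 0.5]),
}

for name, (outcomes, probs) in lotteries.items():
    mu, var = mean_and_variance(outcomes, probs)
    print(f"{name:>18}: mean = {mu:.0f}, variance = {var:.0f}")
# All four lotteries have mean 100; the variances are 0, 2500, 10000, and 22500.
```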