
Average

In statistics and mathematics, an average is a measure of central tendency that summarizes a dataset with a single representative value, indicating what is typical or central within the data. The most commonly used average is the arithmetic mean, calculated by summing all values in the dataset and dividing by the number of observations, providing a balanced summary when data are symmetrically distributed. Other key types of averages include the median and the mode. The median is the middle value in an ordered list of numbers, making it robust to outliers and useful for skewed distributions. The mode identifies the most frequently occurring value, applicable to both numerical and categorical data, though a dataset may have no mode, one mode, or multiple modes. These measures, alongside the mean, form the primary tools for describing central tendency, with selection depending on data characteristics such as symmetry or the presence of extreme values. Beyond the arithmetic mean, specialized means serve specific applications: the geometric mean averages ratios or growth rates by taking the nth root of the product of values, ideal for multiplicative processes; the harmonic mean weights values inversely, commonly used for rates such as speeds over equal distances; and the weighted mean adjusts for the varying importance of data points. Each type addresses limitations of the simple arithmetic mean in particular settings, such as financial returns or physical measurements. Averages are foundational in statistics, enabling data summarization, comparison, and inference across fields such as economics, science, and engineering. However, their interpretation requires caution, as misuse, such as relying solely on the mean in skewed datasets, can mislead; combining multiple averages often provides a fuller picture.

Fundamentals

Definition

In statistics, an average is a measure of central tendency that summarizes a dataset with a single representative value, often indicating the "middle" or typical value among the observations. This concept allows complex data to be condensed into a more understandable form, facilitating analysis and interpretation. Averages play a key role in descriptive statistics, where they characterize the central location of a sample without making broader generalizations. In contrast, within inferential statistics, sample averages are used to estimate unknown population parameters, enabling conclusions about an entire population based on partial data. For a finite dataset denoted \{x_1, x_2, \dots, x_n\}, where n is the number of observations, an average A serves as a central value that reflects the overall tendency of these data points. Multiple types of averages exist to accommodate varying data characteristics, such as skewness, which can distort the representativeness of certain measures in non-symmetric distributions. Classical examples include the Pythagorean means, which encompass foundational approaches to averaging positive real numbers.

General Properties

Averages, in their general form, can be expressed as convex combinations of the input values, where each value is weighted by a non-negative coefficient summing to one, or more broadly as generalized means that satisfy Jensen's inequality for convex functions. This property implies that for a convex function f, the average of f applied to the inputs is at least f applied to the average, providing a foundational link between averages and convexity in optimization and probability theory. Key shared properties across types of averages include idempotence, homogeneity, and monotonicity. Idempotence ensures that averaging a set of identical values returns that value, so applying the average operation to copies of its own result changes nothing, reflecting stability under repetition. Homogeneity means that scaling all inputs by a positive constant c scales the average by the same factor, preserving scale invariance. Monotonicity guarantees that if each input in one set is greater than or equal to the corresponding input in another, the average of the first set is at least as large as that of the second, maintaining order preservation. The arithmetic mean exemplifies these traits as a prototypical case. Averages exhibit sensitivity in the presence of outliers, where extreme values disproportionately influence the result, pulling it away from the central cluster and potentially leading to over- or underestimation. Conversely, they play a crucial role in reducing variance: for independent observations, the variance of the sample average decreases proportionally to 1/n with increasing sample size n, so the sample average converges to the true population mean. Among positive real numbers, averages satisfy fundamental inequality relations, such as the arithmetic mean-geometric mean (AM-GM) inequality, which states that the arithmetic mean is greater than or equal to the geometric mean, with equality if and only if all numbers are equal. This extends to the full chain AM \geq GM \geq HM, encapsulating the ordering of the Pythagorean means.
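The shared properties described above can be checked numerically. Below is a minimal Python sketch, using the arithmetic mean as the prototypical case; the function name `amean` is illustrative, not from any particular library.

```python
import math

def amean(xs):
    """Arithmetic mean of a non-empty sequence."""
    return sum(xs) / len(xs)

data = [2.0, 4.0, 8.0]
m = amean(data)

# Idempotence: averaging n copies of the mean returns the mean.
assert amean([m] * len(data)) == m

# Homogeneity: scaling every input by c > 0 scales the mean by c.
assert math.isclose(amean([3 * x for x in data]), 3 * m)

# Monotonicity: pointwise-larger inputs give a mean at least as large.
assert amean([x + 1 for x in data]) >= m
```

The same three assertions hold for the geometric, harmonic, and other power means over positive data.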

Pythagorean Means

The Pythagorean means consist of the arithmetic mean (AM), geometric mean (GM), and harmonic mean (HM). These means, studied by the ancient Pythagorean school in the context of proportions and music theory, satisfy the inequality HM ≤ GM ≤ AM for positive real numbers, with equality if and only if all values are equal.
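As a quick numerical illustration of the HM ≤ GM ≤ AM ordering, the following Python sketch computes all three means for a small dataset (the helper function is illustrative, not a standard-library API, although Python's `statistics` module offers similar functions):

```python
import math

def pythagorean_means(xs):
    """Return (harmonic, geometric, arithmetic) means of positive numbers."""
    n = len(xs)
    hm = n / sum(1 / x for x in xs)
    gm = math.prod(xs) ** (1 / n)
    am = sum(xs) / n
    return hm, gm, am

hm, gm, am = pythagorean_means([2, 8])
# For [2, 8]: HM = 3.2, GM = 4.0, AM = 5.0, so the chain HM <= GM <= AM holds.
assert hm <= gm <= am
```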

Arithmetic Mean

The arithmetic mean, also known as the average, is the most basic of the Pythagorean means and serves as a fundamental measure of central tendency in mathematics and statistics. It represents the sum of a set of numbers divided by the number of values in the set, providing a single value that summarizes the data. For a finite dataset consisting of n observations x_1, x_2, \dots, x_n, the arithmetic mean \bar{x} is calculated using the formula: \bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i = \frac{x_1 + x_2 + \dots + x_n}{n}. This unweighted formula assumes equal importance for each data point and is widely used in descriptive statistics for its simplicity and interpretability. Conceptually, the arithmetic mean acts as the balance point of a dataset, analogous to the fulcrum of a seesaw where the moments on both sides are equal, ensuring equilibrium. In probability theory, it corresponds to the expected value of a discrete random variable, defined as the weighted average of all possible outcomes, each multiplied by its probability, which estimates the long-run average over many trials. For example, to find the average daily temperature over a week, one sums the seven recorded temperatures and divides by 7, yielding a representative central value for weather analysis. Similarly, the arithmetic mean of household incomes in a population offers insight into overall economic conditions, though it must be interpreted cautiously in diverse datasets. Despite its utility, the arithmetic mean is highly sensitive to outliers, as a single extreme value can disproportionately influence the result and distort the summary. It is most appropriate for symmetric distributions without significant skewness, where it aligns closely with other measures such as the median, providing a robust summary of the data's center. In cases of skewed data, alternatives such as the median may better capture the typical value.
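The outlier sensitivity noted above is easy to demonstrate with Python's standard `statistics` module, using an illustrative set of test scores:

```python
from statistics import mean, median

scores = [55, 72, 80, 88]
with_outlier = scores + [200]

# One extreme value pulls the mean far above the central cluster,
# while the median stays close to the typical score.
print(mean(with_outlier), median(with_outlier))
```

Here the mean of the five scores is 99, well above four of the five observations, whereas the median remains 80.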

Geometric Mean

The geometric mean of n positive real numbers x_1, x_2, \dots, x_n is defined as the nth root of their product: \text{GM} = (x_1 x_2 \cdots x_n)^{1/n}. This measure aggregates values multiplicatively, making it suitable for datasets where relative changes or proportions are key, such as ratios or growth factors. In practical applications, the geometric mean is commonly used to average percentages, returns over time, or biological growth rates. For instance, when evaluating compound growth rates in investments, it provides the constant rate that would yield the same overall return as the varying rates observed. Similarly, in population biology, it models average growth rates across generations or environmental conditions, capturing multiplicative effects effectively. An equivalent formulation leverages logarithms, interpreting the geometric mean as \text{GM} = \exp\left( \frac{\ln x_1 + \ln x_2 + \cdots + \ln x_n}{n} \right). This shows that the geometric mean is the exponential of the arithmetic mean of the logarithms, which simplifies computation for large datasets and highlights its role in handling log-normally distributed data. The geometric mean has limitations: it is undefined for zero or negative values, as the product or root would not yield a meaningful positive result, restricting its use to strictly positive data. In the context of the arithmetic mean-geometric mean inequality, equality holds precisely when all values are identical, underscoring the measure's sensitivity to variation in the dataset.
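The logarithmic formulation can be sketched in a few lines of Python; the growth factors below are invented for illustration:

```python
import math

def geometric_mean(xs):
    """Geometric mean via the log formulation; requires strictly positive inputs."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

# Hypothetical growth factors for three years: +10%, +50%, -20%.
factors = [1.10, 1.50, 0.80]
g = geometric_mean(factors)

# g is the constant annual factor with the same overall multiplicative effect:
assert math.isclose(g ** 3, 1.10 * 1.50 * 0.80)
```

Applying the constant factor `g` for three years reproduces the combined growth exactly, which is what makes the geometric mean the right average for multiplicative processes.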

Harmonic Mean

The harmonic mean, one of the Pythagorean means, is particularly suitable for averaging rates or ratios where the data represent reciprocals, such as speeds or efficiencies, and is defined only for positive values to avoid division by zero or negative reciprocals. For a dataset of n positive real numbers x_1, x_2, \dots, x_n, the harmonic mean (HM) is given by the formula \mathrm{HM} = \frac{n}{\sum_{i=1}^n \frac{1}{x_i}}. This formulation arises as the reciprocal of the arithmetic mean of the reciprocals, \frac{1}{\mathrm{HM}} = \frac{1}{n} \sum_{i=1}^n \frac{1}{x_i}, which inherently emphasizes the influence of smaller values in the dataset, as their reciprocals are larger and thus contribute more to the sum in the denominator. In practical applications, the harmonic mean excels in scenarios where each rate applies over the same fixed quantity, such as equal distances or volumes. For instance, when calculating the average speed over equal distances traveled at varying speeds, the harmonic mean provides the correct total distance divided by total time, rather than the arithmetic mean, which would overestimate the average. A classic example is a round trip where the outbound speed is 30 mph and the return is 60 mph over the same distance; the harmonic mean yields 40 mph, reflecting the actual average speed. Similarly, in electrical circuits, the equivalent resistance of n resistors connected in parallel equals their harmonic mean divided by n, as derived from the reciprocal sum in Kirchhoff's laws: for two resistors R_1 and R_2, the harmonic mean is \frac{2}{\frac{1}{R_1} + \frac{1}{R_2}}, and the parallel resistance is half that value, \frac{1}{\frac{1}{R_1} + \frac{1}{R_2}}. In transportation and environmental analysis, the harmonic mean is used for fleet fuel economy when averaging miles per gallon (mpg) over equal distances, ensuring accurate representation of overall consumption without bias toward higher-efficiency segments.
The harmonic mean relates to the other Pythagorean means through the AM-GM-HM inequality, which states that for positive real numbers, \mathrm{HM} \leq \mathrm{GM} \leq \mathrm{AM}, with equality holding if and only if all values are equal; this ordering underscores the harmonic mean's tendency to produce the smallest value among the three means, further highlighting its sensitivity to lower data points. As a special case of the broader family of power means (with exponent p = -1), it generalizes to weighted forms but remains distinct in its focus on averaging reciprocals.
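The round-trip speed example above can be verified directly in Python (the helper function is illustrative; Python's `statistics.harmonic_mean` provides the same computation):

```python
def harmonic_mean(xs):
    """Harmonic mean of positive numbers: n over the sum of reciprocals."""
    return len(xs) / sum(1 / x for x in xs)

# Round trip over equal distances: 30 mph outbound, 60 mph back.
avg_speed = harmonic_mean([30, 60])
print(round(avg_speed, 6))  # 40.0  (the arithmetic mean, 45, would overestimate)
```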

Other Measures of Central Tendency

Median

The median is the value separating the higher half from the lower half of a dataset arranged in ascending order. For a dataset with an odd number of observations, it is the middle value; for an even number, it is the arithmetic mean of the two central values. To compute the median, arrange the data in non-decreasing order and identify the position given by \frac{n+1}{2}, where n is the number of observations. If this yields an integer, the value at that position is the median; otherwise, average the values at the two adjacent positions. The median's primary advantage lies in its robustness to extreme values, as it relies on relative ordering rather than absolute magnitudes, preventing outliers from distorting the measure of central tendency. This property makes it ideal for skewed distributions, where extreme values in the tail can mislead other measures. It is also well-suited to ordinal data, which features ranked categories without assuming equal intervals between them. A common application is in income reporting, where distributions are typically right-skewed due to a small number of high earners; for instance, the median household income in the United States was $83,730 in 2024. In educational contexts, consider test scores of 55, 72, 80, 88, and 200: the median of 80 remains representative despite the outlier, whereas other central measures would be inflated. In asymmetric distributions, the median often better captures the typical value compared to alternatives sensitive to extreme values.
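The odd/even procedure described above can be sketched as a short Python function (illustrative; the standard library's `statistics.median` behaves the same way):

```python
def median(xs):
    """Median: middle value for odd n, mean of the two central values for even n."""
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    if n % 2 == 1:
        return s[mid]
    return (s[mid - 1] + s[mid]) / 2

assert median([55, 72, 80, 88, 200]) == 80   # odd count: middle value
assert median([3, 1, 4, 2]) == 2.5           # even count: mean of 2 and 3
```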

Mode

The mode is defined as the value or values that occur most frequently in a dataset. A dataset is unimodal if it has a single mode, bimodal if it has two modes, or multimodal if it has more than two. To identify the mode in discrete data, one counts the frequency of each distinct value and selects those with the highest count. In continuous data, the mode is typically estimated by examining peaks in the probability density, often visualized through histograms where the bin with the highest frequency indicates the modal region. The mode is particularly useful for analyzing categorical or nominal data, such as identifying the most common color in a survey or the predominant category in a set of labels. However, in distributions where all values occur with equal frequency, no unique mode exists. Key limitations of the mode include the possibility that it may not exist in datasets where every value is unique, and that multiple modes can complicate interpretation; additionally, it remains insensitive to the overall spread or distribution of the other values in the dataset.
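The frequency-counting procedure maps directly onto Python's `collections.Counter`; the sketch below returns every value tied for the highest count, which handles the multimodal case:

```python
from collections import Counter

def modes(xs):
    """All values tied for the highest frequency (one element if unimodal)."""
    counts = Counter(xs)
    top = max(counts.values())
    return [value for value, count in counts.items() if count == top]

assert modes(["red", "blue", "red", "green"]) == ["red"]  # unimodal
assert sorted(modes([1, 1, 2, 2, 3])) == [1, 2]           # bimodal
```

When every value is unique, this function returns the whole dataset, reflecting the caveat that no meaningful mode exists in that case.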

Mid-range

The midrange is a basic measure of central tendency defined as the arithmetic mean of the minimum and maximum values in a dataset. It provides a simple positional estimate of the center by focusing exclusively on the extremes. The formula for the midrange, denoted MR, is: MR = \frac{\min(x) + \max(x)}{2}, where \min(x) and \max(x) are the smallest and largest observations in the dataset x. This measure is valued for its computational simplicity, requiring only identification of the two extreme values, which makes it suitable for quick rough estimates or preliminary exploration before more detailed analysis. It is particularly effective in applications involving uniform distributions, where the data points are evenly spread, as the midrange then aligns closely with the true center. In symmetric cases, such as uniform distributions, the midrange coincides with the mean and median. Despite its ease, the midrange has significant drawbacks, as it is highly sensitive to outliers, which can drastically alter the minimum or maximum values and thus distort the overall estimate. By disregarding all intermediate data points, it fails to capture the distribution's internal structure, making it the least robust among common measures of central tendency and generally unsuitable for most practical statistical analyses.
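A two-line Python sketch makes the midrange's outlier sensitivity concrete (the data are invented for illustration):

```python
def midrange(xs):
    """Midrange: mean of the minimum and maximum values."""
    return (min(xs) + max(xs)) / 2

assert midrange([2, 5, 7, 9]) == 5.5
# Replacing one extreme value drags the estimate far from the cluster:
assert midrange([2, 5, 7, 100]) == 51.0
```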

Advanced and Weighted Averages

Weighted Arithmetic Mean

The weighted arithmetic mean, also known as the weighted average, extends the arithmetic mean by assigning different levels of importance to individual data points through weights, allowing for a more nuanced summary of the dataset. It is particularly useful when some observations are more significant than others, such as in scenarios where data points represent varying sample sizes or priorities. The formula for the weighted arithmetic mean of n values x_1, x_2, \dots, x_n with corresponding positive weights w_1, w_2, \dots, w_n is given by \overline{x}_w = \frac{\sum_{i=1}^n w_i x_i}{\sum_{i=1}^n w_i}, where the weights w_i > 0 reflect the relative importance of each x_i. If all weights are equal, the weighted arithmetic mean reduces to the standard arithmetic mean. Weights are often normalized such that their sum equals 1, simplifying the denominator to 1, or they may sum to the number of observations n for convenience in certain computations. A common example is the calculation of grade point average (GPA) in educational systems, where course credits serve as weights: the GPA is the sum of (grade points multiplied by credits) divided by total credits, emphasizing courses with higher credit hours. This normalization ensures the result remains on the same scale as the original data while proportionally adjusting influence. The weighted arithmetic mean inherits key properties of the arithmetic mean, such as the pooling property, whereby the weighted mean of combined data sets equals the weighted average of the individual means (weighted by total group weight), but introduces flexibility to emphasize specific elements, for instance, by assigning higher weights to more recent data in time series analysis. However, it can exhibit counterintuitive behaviors compared to unweighted means, such as when weights amplify outliers. In applications like survey sampling, weights adjust for unequal probabilities of selection or non-response, ensuring the mean better represents the target population. Similarly, in portfolio analysis, it computes returns by weighting individual asset returns according to their allocation proportions, providing a value-weighted measure.
This weighting approach can be generalized to power means for broader families of averages.
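The GPA example above can be sketched directly from the formula; the grades and credit hours below are invented for illustration:

```python
def weighted_mean(values, weights):
    """Weighted arithmetic mean; weights must be positive."""
    return sum(w * x for x, w in zip(values, weights)) / sum(weights)

# Hypothetical transcript: grade points weighted by course credits.
grades = [4.0, 3.0, 2.0]
credits = [3, 4, 1]
gpa = weighted_mean(grades, credits)
print(round(gpa, 3))  # 3.25
```

With equal weights the function reduces to the ordinary arithmetic mean, matching the property stated above.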

Power Means

Power means constitute a parametric family of means that generalize and unify several classical notions of average, parameterized by a real number r that determines the type of aggregation performed on the input values. For a set of positive real numbers x_1, \dots, x_n, the power mean of order r \neq 0 is defined as M_r(x_1, \dots, x_n) = \left( \frac{1}{n} \sum_{i=1}^n x_i^r \right)^{1/r}. When r = 0, the power mean is taken as the limit M_0(x_1, \dots, x_n) = \lim_{r \to 0} M_r(x_1, \dots, x_n) = \exp\left( \frac{1}{n} \sum_{i=1}^n \ln x_i \right), which coincides with the geometric mean. Furthermore, \lim_{r \to \infty} M_r(x_1, \dots, x_n) equals the maximum value among the x_i, while \lim_{r \to -\infty} M_r(x_1, \dots, x_n) equals the minimum. For fixed positive x_i and r < s, the power means exhibit monotonicity: M_r(x_1, \dots, x_n) \leq M_s(x_1, \dots, x_n), with equality if and only if all x_i are equal. Specific cases within this family include the arithmetic mean (r = 1), the quadratic mean (r = 2), and the harmonic mean (r = -1). This parameterization allows power means to capture varying sensitivities in data aggregation; for instance, larger r amplifies the influence of larger x_i (such as outliers), whereas smaller or negative r gives greater weight to smaller values. The monotonicity property can be established via the Minkowski inequality by normalizing the data such that M_s = 1 for s > r > 0 and applying the inequality to the vectors involved in the \ell_p-norm interpretation of the means, yielding \left( \frac{1}{n} \sum x_i^r \right)^{1/r} \leq 1.
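A single Python function covers the whole family, with the r = 0 case handled via the logarithmic limit; the monotonicity in r can then be checked on sample data:

```python
import math

def power_mean(xs, r):
    """Power mean of order r for positive inputs; r = 0 gives the geometric mean."""
    n = len(xs)
    if r == 0:
        return math.exp(sum(math.log(x) for x in xs) / n)
    return (sum(x ** r for x in xs) / n) ** (1 / r)

data = [1, 2, 4]
# Monotonicity in r: HM (r=-1) <= GM (r=0) <= AM (r=1) <= QM (r=2).
values = [power_mean(data, r) for r in (-1, 0, 1, 2)]
assert values == sorted(values)
```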

Quadratic Mean

The quadratic mean, also known as the root mean square (RMS), of a set of real numbers x_1, x_2, \dots, x_n is given by the formula \text{QM} = \sqrt{\frac{x_1^2 + x_2^2 + \dots + x_n^2}{n}}. This measure computes the square root of the arithmetic mean of the squares of the values, providing a way to quantify the magnitude of a varying quantity by emphasizing larger deviations through squaring. For a random variable X, the population quadratic mean is \sqrt{\mathbb{E}[X^2]}, which extends the concept to continuous distributions. The quadratic mean is always greater than or equal to the arithmetic mean of the same set of numbers, with equality holding if and only if all the x_i are equal; this follows from the quadratic mean-arithmetic mean (QM-AM) inequality, a special case of the power mean inequality. Specifically, for a dataset with mean \mu, the relationship \text{QM}^2 = \mu^2 + \sigma^2 holds, where \sigma^2 is the variance, demonstrating how the quadratic mean incorporates both the center and the spread of the data. This relationship highlights the quadratic mean's sensitivity to variability, making it larger than the arithmetic mean unless the data are constant. In electrical engineering, the RMS serves as the standard value to represent the effective power or energy of an oscillating signal, such as in audio or power systems; for a sinusoidal alternating current (AC) voltage V(t) = V_0 \sin(\omega t), the RMS voltage is V_0 / \sqrt{2}, equivalent to the direct current (DC) value that would produce the same heating effect in a resistor. In physics, it is applied to calculate the RMS speed of particles in a gas under kinetic theory, given by v_{\text{rms}} = \sqrt{3kT / m}, where k is Boltzmann's constant, T is the absolute temperature, and m is the particle mass; this speed reflects the square root of the mean of the squared velocity, aiding in derivations of pressure and temperature relations. The quadratic mean's connection to variance also underpins its use in error analysis, where the RMS error measures the typical magnitude of deviations in predictions or measurements.
As a special case of the power mean family, the quadratic mean corresponds to the exponent r = 2, prioritizing larger values through squaring while maintaining the monotonicity properties of power means.
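The V_0 / \sqrt{2} result for a sinusoid can be checked numerically by sampling one full cycle and applying the RMS formula:

```python
import math

def rms(xs):
    """Quadratic mean (root mean square) of a sequence."""
    return math.sqrt(sum(x * x for x in xs) / len(xs))

# Sample one full cycle of V0 * sin(wt) and compare with V0 / sqrt(2).
V0 = 10.0
samples = [V0 * math.sin(2 * math.pi * k / 1000) for k in range(1000)]
assert math.isclose(rms(samples), V0 / math.sqrt(2), rel_tol=1e-3)
```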

Specialized Applications

Moving Average

A moving average is a statistical method for analyzing ordered sequences, such as time series, by computing the average of successive subsets of observations that slide across the sequence, thereby smoothing short-term fluctuations to reveal underlying trends. This technique is particularly useful for sequential data where each new point updates the window of values considered. The simple moving average of order k, often denoted \text{MA}_k(t), is the arithmetic mean of the most recent k observations at time t: \text{MA}_k(t) = \frac{1}{k} \sum_{i=0}^{k-1} x_{t-i}, where x_t represents the observation at time t. Common types of moving averages include the simple, cumulative, and exponential variants, each suited to different smoothing needs. The cumulative moving average incorporates all observations from the beginning of the series up to the current point, providing a running average that grows with the dataset: \text{CMA}_t = \frac{1}{t} \sum_{i=1}^{t} x_i. This type emphasizes long-term accumulation but becomes less sensitive to recent changes as the series lengthens. In contrast, the exponential moving average (EMA) applies decreasing weights to older data, prioritizing recency through a smoothing parameter \alpha (where 0 < \alpha \leq 1): \text{EMA}_t = \alpha x_t + (1 - \alpha) \text{EMA}_{t-1}. Originally developed by Robert G. Brown in 1959 for inventory demand forecasting, the EMA adapts more quickly to new information than simple averages. Moving averages find broad applications in time series analysis, particularly for trend detection and noise reduction. In finance, they smooth stock price data to identify buy/sell signals, such as when short-term averages cross long-term ones, helping traders filter market volatility. For forecasting, these averages estimate future values by extrapolating smoothed trends, as seen in economic indicators or sales predictions.
In signal processing, moving average filters act as finite impulse response (FIR) low-pass filters to attenuate high-frequency noise while retaining step-like signal edges, and are commonly applied to audio, image, and sensor data. Despite their utility, moving averages have notable limitations. They introduce lag because calculations rely on historical data, causing delayed responses to sudden trend shifts or reversals. Edge problems also occur, especially at the start of the series, where insufficient prior observations prevent full-window computations, resulting in undefined or partial averages that may bias early trend estimates. These issues can be mitigated by centering windows or using one-sided averages, but they underscore the method's reliance on complete sequential context.
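The simple and exponential variants defined above can be sketched in plain Python; the price series is invented, and the SMA deliberately starts at index k-1, illustrating the edge problem discussed above:

```python
def simple_moving_average(xs, k):
    """SMA of order k; defined only from index k-1 onward (edge region omitted)."""
    return [sum(xs[i - k + 1 : i + 1]) / k for i in range(k - 1, len(xs))]

def exponential_moving_average(xs, alpha):
    """EMA with smoothing parameter alpha, seeded with the first observation."""
    out = [xs[0]]
    for x in xs[1:]:
        out.append(alpha * x + (1 - alpha) * out[-1])
    return out

prices = [10, 11, 12, 13, 20]
print(simple_moving_average(prices, 3))  # [11.0, 12.0, 15.0]
```

Note how the final jump to 20 moves the last SMA value only partway, illustrating both the smoothing and the lag.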

Compound Annual Growth Rate

The Compound Annual Growth Rate (CAGR) measures the smoothed annual growth rate of an investment or other quantity over a multi-year period, assuming steady compound growth each year to reach the final value from the initial value. It provides a standardized way to compare growth across different time spans or investments by expressing the overall return on an annualized basis. The formula for CAGR is: \text{CAGR} = \left( \frac{V_{\text{final}}}{V_{\text{initial}}} \right)^{\frac{1}{t}} - 1, where V_{\text{final}} is the ending value, V_{\text{initial}} is the starting value, and t is the number of years in the period. This calculation standardizes multi-period returns by focusing on the net compounded effect, effectively ignoring interim fluctuations to highlight the consistent annual rate that would produce the observed outcome. For instance, consider an investment that grows from $100 to $200 over 5 years; the CAGR is calculated as (200 / 100)^{1/5} - 1 \approx 0.1487, or 14.87%, meaning the investment effectively compounded at this rate annually to double in value. In contrast to the arithmetic average return, which sums and divides periodic returns without compounding and thus overstates achievable growth, CAGR incorporates compounding to yield the true rate of expansion over the full period. This distinction arises because CAGR relies on the geometric mean of growth factors, ensuring it accurately reflects reinvested gains rather than a simple average.
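The $100-to-$200 example translates directly into code, and compounding the resulting rate for five years recovers the final value:

```python
def cagr(v_initial, v_final, years):
    """Compound annual growth rate from start value, end value, and period length."""
    return (v_final / v_initial) ** (1 / years) - 1

rate = cagr(100, 200, 5)
print(round(rate, 4))  # 0.1487

# The constant rate reproduces the final value when compounded for 5 years:
assert round(100 * (1 + rate) ** 5) == 200
```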

Average Percentage Return

The average percentage return, also known as the arithmetic average of periodic returns, is calculated as the sum of individual percentage returns divided by the number of periods: \frac{r_1 + r_2 + \dots + r_n}{n}. This provides a straightforward measure of central tendency for percentage changes over time, but it assumes additivity of returns, which does not hold for compounded investments. As a result, it often overstates the true compounded growth, particularly when returns vary significantly across periods. A classic illustration of this pitfall involves an initial investment that rises by 50% followed by a 50% decline. The arithmetic average is \frac{50\% + (-50\%)}{2} = 0\%, suggesting no net change. However, starting from $100, the value becomes $150 after the gain and then $75 after the loss, yielding a true overall return of -25%. This discrepancy arises because percentage changes are relative to the current value, not the original amount, leading to asymmetric effects in volatile sequences. The arithmetic average is suitable for short-term approximations or when assessing expected single-period returns in isolation. For multi-period evaluations, the geometric mean is preferred to accurately reflect compounded growth. In cases of high volatility, the overestimation becomes more pronounced, as greater variance amplifies the difference between arithmetic and geometric averages, necessitating adjustments for reliable estimates. As a measure corrected for long-term growth assessment, the compound annual growth rate (CAGR) accounts for compounding effects more precisely.
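The +50%/-50% illustration can be replayed in a few lines, making the gap between the arithmetic average and the compounded outcome explicit:

```python
# +50% then -50%: the arithmetic average says 0%, but capital is down 25%.
returns = [0.50, -0.50]

arithmetic_avg = sum(returns) / len(returns)

value = 100.0
for r in returns:
    value *= 1 + r  # compound each period's return on the current value

print(arithmetic_avg)  # 0.0
print(value)           # 75.0, i.e. a true overall return of -25%
```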

Historical Context

Origins

The concept of averages traces its roots to ancient Babylonia around 2000 BCE, where astronomers employed mean values to predict planetary positions and periodic motions. In their mathematical texts, Babylonians calculated synodic periods, such as the synodic month for lunar cycles, to model celestial phenomena with greater precision, using arithmetic operations on observational data recorded in tablets. These early applications of averaging helped reconcile irregular observations with expected patterns, laying foundational techniques for empirical prediction in the absence of advanced instruments. By approximately 500 BCE, the Pythagoreans in ancient Greece formalized the arithmetic, geometric, and harmonic means as part of their philosophical and mathematical framework, viewing them as expressions of cosmic harmony. These means were derived from ratios in music theory and geometry, with the arithmetic mean representing equitable division, the geometric mean proportion in spatial relations, and the harmonic mean intervals in sound. The Pythagoreans' emphasis on these concepts influenced subsequent Greek mathematics, integrating numerical averaging into broader studies of proportion and balance. In the medieval period, Arabic scholars advanced these ideas through algebraic methods. Their works on arithmetic and astronomy transformed geometric constructions of means into algorithmic calculations, facilitating applications in inheritance division and celestial table compilations. This computational approach bridged ancient traditions and enabled more practical uses in scholarly centers like Baghdad's House of Wisdom. The term "average" itself emerged from Arabic influences on medieval European maritime trade practices. During the Renaissance, Gerolamo Cardano in the 16th century extended means into precursors of probability theory, applying weighted averages to analyze gambling outcomes and expected values in his unpublished manuscript Liber de ludo aleae. Cardano's calculations of fair stakes based on favorable outcomes represented an early use of averaging to quantify uncertainty, influencing later developments in decision-making under risk.
The transition to modern statistical uses of averages occurred in the 18th century among astronomers, with Pierre-Simon Laplace employing the arithmetic mean to reduce observational errors in celestial measurements. In works like his 1774 memoir on planetary perturbations, Laplace demonstrated that averaging multiple observations minimizes random errors, assuming equally reliable observations and a probabilistic model of error, which provided a rigorous justification for the method's reliability in astronomy. This application marked a shift toward probabilistic foundations, establishing averages as tools for precision in empirical science.

Etymology

The term "average" traces its roots to the word "awāriyya," which referred to damaged or defective merchandise in medieval . This concept entered languages through interactions in the Mediterranean, evolving into the "avaria" and "avarie," denoting loss or damage to goods during voyages. In these contexts, "average" initially described a proportional charge or contribution levied on shipowners and merchants to cover shared losses from perils at sea, embodying principles of equitable distribution. By the late , the word had entered English via Anglo-French, primarily in commercial and legal senses related to trade duties and . This usage gained prominence in 18th-century British maritime practices, such as those at in , where the "general average" rule formalized the fair apportionment of sacrifices made for the of a and its cargo. The origins of "average" thus highlight its foundational tie to notions of fairness and in apportioning risks and costs among parties. The shift to a mathematical connotation occurred in the 18th century, when "average" began signifying an equal distribution of quantities, akin to an arithmetical leveling or balancing. This evolution paralleled the related term "mean," derived from Middle English "mene," meaning "middle" or "intermediate," which stemmed from Old French "moien" and ultimately Latin "medianus" (of the middle). By the 19th century, "average" and "mean" were often used interchangeably in statistical and mathematical discourse to denote a central representative value.

Broader Uses

In Statistics and Data Analysis

In statistics and data analysis, averages play a fundamental role in summarizing datasets through measures of central tendency, which include the mean, median, and mode. The arithmetic mean, calculated as the sum of values divided by the number of observations, provides a balanced summary in symmetrical distributions, where it coincides with the median and mode, making it the preferred choice for such cases. However, in skewed distributions the mean can be distorted by outliers or asymmetry; for positively skewed data, the mean exceeds the median, which lies between the mean and mode, while the reverse holds for negative skew. Consequently, the median is often selected for skewed datasets or when outliers are present, as it is less sensitive to extreme values and better represents the typical value. The mode, the most frequent value, is particularly useful for nominal data but less so for continuous variables unless clear clustering patterns exist. For inferential statistics, the sample mean serves as an unbiased estimator of the population mean \mu, meaning its expected value equals \mu across repeated samples. This property ensures that, on average, the sample mean accurately reflects the population parameter without systematic bias. The precision of this estimator is quantified by the standard error (SE), given by SE = \frac{\sigma}{\sqrt{n}}, where \sigma is the population standard deviation and n is the sample size; larger samples reduce the SE, improving the reliability of the estimate. As n increases, the sample mean also becomes a consistent estimator, converging in probability to \mu. Averages are central to hypothesis testing procedures that compare group differences. The independent samples t-test assesses whether the means of two groups differ significantly from each other, assuming normality and equal variances, to determine if observed differences are due to chance or a true effect.
For comparing means across three or more groups, analysis of variance (ANOVA) extends this by partitioning total variance into between-group and within-group components, testing the null hypothesis of equal population means while controlling the Type I error rate that multiple t-tests would inflate. In modern applications, averages underpin key techniques in machine learning and large-scale data processing. Loss functions in machine learning models, such as mean squared error (MSE), the average of squared differences between predictions and targets, or mean absolute error (MAE), quantify model performance and guide optimization by minimizing average prediction errors. In distributed aggregation, frameworks like MapReduce compute averages across massive datasets by mapping values to key-value pairs and reducing them to partial sums and counts, enabling scalable analysis of terabyte-scale data. Moving averages, such as simple or exponential variants, aid in trend detection within time series analysis.
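The SE = \sigma / \sqrt{n} relationship can be checked empirically by simulating repeated samples and measuring the spread of their means; the parameters below are arbitrary illustrative choices:

```python
import math
import random

random.seed(0)

# Theoretical standard error of the mean for sigma = 2, n = 400.
sigma, n = 2.0, 400
se = sigma / math.sqrt(n)
print(se)  # 0.1

# Empirical check: the spread of sample means across many repeated
# samples should be close to the theoretical SE.
means = [sum(random.gauss(0, sigma) for _ in range(n)) / n for _ in range(2000)]
grand = sum(means) / len(means)
empirical = math.sqrt(sum((m - grand) ** 2 for m in means) / len(means))
assert abs(empirical - se) / se < 0.1
```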

In Finance and Economics

In portfolio theory, the expected return of a portfolio is determined by taking the weighted average of the expected returns of its individual assets, with weights corresponding to each asset's allocation proportion in the portfolio. This approach allows investors to assess the overall profitability of diversified holdings based on historical or forecasted returns for each component. Similarly, within the Capital Asset Pricing Model (CAPM), a portfolio's beta—which quantifies its systematic risk relative to the market—is computed as the weighted average of the betas of its constituent securities, enabling the estimation of required returns adjusted for market exposure. Economic indicators frequently rely on specialized averages to capture changes in prices and output. The Consumer Price Index (CPI), as calculated by the Bureau of Labor Statistics, has applied a geometric mean formula to aggregate price ratios within most basic item categories since January 1999, which approximates consumer substitution behavior and reduces upward bias in inflation measurements compared to arithmetic means. For Gross Domestic Product (GDP), the Bureau of Economic Analysis employs chained-dollar methodology to derive real GDP growth rates, using annually updated weights from adjacent periods to form a chain-type index that mitigates the bias arising from fixed base-year pricing. Averages play a central role in financial risk assessment metrics. Value at Risk (VaR), a standard tool for quantifying potential losses, incorporates the mean return in its parametric (variance-covariance) calculation, where the estimated loss threshold is derived from the portfolio's average return minus a multiple of its standard deviation, assuming normality of returns. The Sharpe ratio, meanwhile, evaluates risk-adjusted performance by dividing the portfolio's average excess return (over the risk-free rate) by the standard deviation of those excess returns, highlighting how effectively returns compensate for total volatility. Behavioral finance highlights how cognitive biases involving averages can influence market dynamics.
Anchoring bias causes investors to fixate on historical price levels or past average returns as reference points when forecasting future values, often leading to underreaction to new information and inefficient pricing in equity markets. This reliance on initial anchors can amplify market volatility, as seen in studies of stock return predictability where insufficient adjustment away from historical benchmarks distorts consensus expectations.
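The weighted-average calculations described above can be sketched in a few lines of plain Python. All returns, weights, and betas here are hypothetical:

```python
import statistics

def weighted_average(values, weights):
    """Weighted mean: sum of value*weight divided by total weight."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# Hypothetical three-asset portfolio.
expected_returns = [0.08, 0.05, 0.12]
weights = [0.5, 0.3, 0.2]  # allocation proportions (sum to 1)
portfolio_return = weighted_average(expected_returns, weights)  # 0.079

# Portfolio beta under CAPM: weighted average of the asset betas.
betas = [1.2, 0.8, 1.5]
portfolio_beta = weighted_average(betas, weights)  # 1.14

def sharpe_ratio(returns, risk_free_rate):
    """Mean excess return divided by the standard deviation of excess returns."""
    excess = [r - risk_free_rate for r in returns]
    return statistics.mean(excess) / statistics.stdev(excess)

print(portfolio_return, portfolio_beta)
```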

As a Rhetorical Tool

Averages serve as powerful rhetorical devices in public discourse, enabling speakers, writers, and policymakers to simplify complex data for persuasive effect, though this often introduces opportunities for misuse or manipulation. By presenting an "average" outcome, communicators can frame narratives that obscure variability, subgroup differences, or contextual nuances, thereby influencing public opinion or policy decisions without revealing the full picture. This rhetorical utility stems from the apparent objectivity of numerical summaries, which lend an air of scientific authority to arguments, even when selectively deployed. One prominent example of such misuse is Simpson's paradox, where trends observed in subgroups of data reverse upon aggregation into an overall average, leading to misleading conclusions. Named after statistician Edward H. Simpson, who formalized the phenomenon in a 1951 analysis of contingency tables, this paradox arises when a confounding variable affects subgroup sizes or rates unevenly. A classic illustration involves baseball batting averages for David Justice and Derek Jeter in 1995 and 1996: Justice outperformed Jeter in each year individually (.253 vs. .250 in 1995; .321 vs. .314 in 1996), yet Jeter's combined average (.310) exceeded Justice's (.270) due to differing numbers of at-bats across seasons. This reversal can persuade audiences to draw erroneous inferences about overall performance if subgroup details are omitted, as highlighted in probabilistic analyses of the paradox. Cherry-picking the type of average further exemplifies rhetorical misuse, particularly when the mean is favored over the median to exaggerate central tendencies in skewed distributions. The mean, being sensitive to outliers, can inflate perceptions of typical values, whereas the median better represents the middle without distortion from extremes, reflecting the general outlier sensitivity of averages.
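The reversal is easy to reproduce. The sketch below uses the commonly cited per-season hit and at-bat counts, which are consistent with the batting averages quoted above:

```python
# Hits and at-bats per season (commonly cited figures that reproduce
# the quoted batting averages).
justice = {"1995": (104, 411), "1996": (45, 140)}
jeter = {"1995": (12, 48), "1996": (183, 582)}

def batting_average(hits, at_bats):
    return hits / at_bats

def combined_average(seasons):
    total_hits = sum(h for h, _ in seasons.values())
    total_at_bats = sum(ab for _, ab in seasons.values())
    return total_hits / total_at_bats

# Justice leads in each individual season...
for year in ("1995", "1996"):
    assert batting_average(*justice[year]) > batting_average(*jeter[year])

# ...yet Jeter leads overall, because the at-bat weights differ sharply.
print(f"{combined_average(jeter):.3f} vs {combined_average(justice):.3f}")  # 0.310 vs 0.270
```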
In discussions of executive compensation, for instance, reports citing the "average" CEO pay often rely on the mean, which is disproportionately pulled upward by a handful of exceptionally high earners, creating an illusion of widespread affluence. Economic analyses show that in 2020, the mean CEO compensation among top U.S. firms reached $15.3 million, far exceeding the median of about $12.7 million, allowing advocates to overstate typical executive earnings relative to workers in policy debates on pay disparities. Additional fallacies compound these issues, such as the ecological fallacy, which erroneously infers individual behaviors or traits from group-level averages, and the invalid averaging of incompatible units, which combines dissimilar measures to produce nonsensical results. The ecological fallacy, first delineated by sociologist W. S. Robinson in his 1950 examination of correlations between race, nativity, and illiteracy rates across U.S. states, warns against assuming that a group's average reflects individual members' characteristics; Robinson showed that strong state-level associations between group composition and illiteracy were far weaker at the individual level. Similarly, averaging incompatible units—like combining income figures with expenditure rates or percentages drawn from varying bases—violates basic statistical principles, akin to the "adding apples and oranges" error critiqued in classic expositions on data misrepresentation, yielding aggregates that mislead on overall trends or comparisons. Ethically, the rhetorical deployment of averages demands transparency to mitigate misinterpretation, especially in journalism and policy arenas where incomplete statistics can sway public opinion or decisions. Investigative guidelines emphasize disclosing calculation methods, subgroup breakdowns, and alternative measures (e.g., the median alongside the mean) to avoid cherry-picking, as opaque averages have fueled biased narratives in coverage of economic disparities or health outcomes.
In policy debates, such as those on pay gaps or environmental impacts, ethicists urge full contextualization to prevent ecological inferences from justifying discriminatory policies, underscoring that rhetorical integrity requires verifiable, balanced presentation over selective persuasion.
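The mean-versus-median gap that drives such rhetoric is easy to demonstrate. The sketch below uses entirely hypothetical pay figures, not the CEO data cited above:

```python
import statistics

# Hypothetical annual pay in $1000s: nine workers and one executive.
pay = [45, 48, 50, 52, 55, 58, 60, 62, 65, 2000]

print(statistics.mean(pay))    # 249.5 -- dominated by the single outlier
print(statistics.median(pay))  # 56.5  -- the typical worker's pay
```

Reporting only the mean (249.5) here would suggest pay nearly five times what the typical member of the group actually earns.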
