
Root mean square

The root mean square (RMS), also known as the quadratic mean, is a statistical measure representing the square root of the mean of the squares of a set of values. For a set of n numbers x_1, x_2, \dots, x_n, it is given by the formula x_{\mathrm{RMS}} = \sqrt{\frac{1}{n} \sum_{i=1}^n x_i^2}. In the continuous case, for a function f(t) over an interval from T_1 to T_2, the RMS is f_{\mathrm{RMS}} = \sqrt{\frac{1}{T_2 - T_1} \int_{T_1}^{T_2} [f(t)]^2 \, dt}. The RMS is a special case of the power mean with exponent 2, and it satisfies certain inequalities, such as, for positive c, \mathrm{RMS}(a_1 + c, \dots, a_n + c) < c + \mathrm{RMS}(a_1, \dots, a_n) unless all the a_i are equal. Unlike the arithmetic mean, which can be zero for oscillating quantities, the RMS provides a positive measure of magnitude that accounts for both positive and negative deviations equally due to squaring.

Its origins trace back to the late 19th century, when the engineer Charles Steinmetz introduced it to address challenges in analyzing alternating current (AC) waveforms, where simple averages yielded misleading results for power calculations. In statistics, the RMS is used to quantify variability, often as the root mean square error (RMSE), which measures the average magnitude of errors in predictions or models and is equivalent to the standard deviation of residuals in regression analysis. In physics and signal processing, it serves as a measure of the effective amplitude of oscillating signals, synonymous with the standard deviation from a baseline in some contexts.

Electrical engineering prominently employs RMS for AC circuits, where the RMS voltage or current represents the equivalent direct current (DC) value that produces the same power dissipation in a resistor; for a sinusoidal waveform with peak value V_p, this is V_{\mathrm{RMS}} = \frac{V_p}{\sqrt{2}} \approx 0.707 V_p. This application arose during the competition between AC and DC systems in the 1880s and 1890s, and the definition extends naturally to consistent power ratings for non-sinusoidal waveforms as well.

Mathematical Foundation

Definition

The root mean square (RMS), also known as the quadratic mean, is a statistical measure that quantifies the magnitude of a varying quantity by providing an effective average value, particularly useful for alternating current (AC) signals where simple arithmetic means can be misleading due to positive and negative excursions canceling out. It achieves this by first squaring the values to emphasize larger deviations, averaging those squares, and then taking the square root to restore the original units, thereby generalizing the arithmetic mean to non-constant quantities. For a discrete set of n values x_1, x_2, \dots, x_n, the RMS is formally defined as x_{\mathrm{RMS}} = \sqrt{\frac{1}{n} \sum_{i=1}^n x_i^2}, where the summation computes the mean of the squared values before the square root is applied. This formula assumes only basic familiarity with arithmetic averages but derives directly from the principle of averaging squared magnitudes to capture overall strength without directional bias.

In the continuous-time domain, for a periodic function x(t) over one period T, the RMS extends to x_{\mathrm{RMS}} = \sqrt{\frac{1}{T} \int_0^T x(t)^2 \, dt}, integrating the squared function to find its average before the square root operation, presupposing knowledge of definite integrals for time-averaged quantities. Here, "root" refers to the square root that scales back to the original dimension, while "mean square" denotes the average of the squares, making RMS a natural extension of the arithmetic mean to oscillatory phenomena like AC voltages, where it equates to the direct current (DC) value delivering equivalent power. The term "root mean square" originated in the late 19th century within electrical engineering contexts, introduced by Charles Steinmetz to analyze alternating current waveforms, though its mathematical formulation applies broadly beyond electricity to any varying dataset.
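The discrete and continuous definitions above can be illustrated with a short Python sketch; the function name, test data, and sampling resolution are illustrative choices, not part of any standard:

```python
import math

def rms(values):
    """Root mean square of a finite sequence: sqrt of the mean of the squares."""
    return math.sqrt(sum(x * x for x in values) / len(values))

# Discrete case: the alternating sequence +1, -1 has arithmetic mean 0,
# but its RMS is 1, capturing the magnitude of the excursions.
print(rms([1, -1, 1, -1]))          # 1.0

# Continuous case approximated by sampling: one full period of sin(t)
# has RMS 1/sqrt(2) ~ 0.7071.
n = 100_000
samples = [math.sin(2 * math.pi * k / n) for k in range(n)]
print(round(rms(samples), 4))       # 0.7071
```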

Properties

The root mean square (RMS) of any set of real-valued data points or function values is always a non-negative real number, equaling zero if and only if every value in the set is zero. This property arises because the RMS is defined as the square root of the average of squared values, where the squares are inherently non-negative and the square root function preserves non-negativity.

A fundamental inequality relating the RMS to other measures of central tendency is the quadratic mean-arithmetic mean (QM-AM) inequality. For a set of non-negative real numbers, the RMS (also known as the quadratic mean) is greater than or equal to the arithmetic mean, with equality holding if and only if all the numbers are equal. This follows from the power mean inequality, which states that power means are non-decreasing with respect to their order p; the RMS corresponds to p=2 while the arithmetic mean corresponds to p=1. The inequality can be expressed as \sqrt{\frac{1}{n} \sum_{i=1}^n x_i^2} \geq \frac{1}{n} \sum_{i=1}^n x_i for x_i \geq 0, and it was rigorously established in the seminal work on inequalities by Hardy, Littlewood, and Pólya.

The RMS exhibits a straightforward scaling property: for any real scalar k and data set x = (x_1, \dots, x_n), the RMS of the scaled set kx = (kx_1, \dots, kx_n) satisfies \mathrm{RMS}(kx) = |k| \cdot \mathrm{RMS}(x). This homogeneity of degree 1 ensures that the RMS behaves consistently under linear transformations, preserving relative magnitudes.

For orthogonal components, the RMS satisfies a Pythagorean identity analogous to the classical theorem in Euclidean geometry. Specifically, if x and y are vectors or functions that are orthogonal—meaning their inner product (or average product in the discrete case) is zero—then the square of the RMS of their sum equals the sum of the squares of their individual RMS values: \mathrm{RMS}(x + y)^2 = \mathrm{RMS}(x)^2 + \mathrm{RMS}(y)^2. This property holds in inner product spaces, where orthogonality implies no cross-term contribution in the expansion of the squared norm, and it directly extends the Pythagorean theorem to the L^2 setting.

The RMS is intrinsically linked to the \ell^2 (or L^2) norm, a fundamental concept in functional analysis and linear algebra. For a discrete vector x \in \mathbb{R}^n, the \ell^2 norm is \|x\|_2 = \sqrt{\sum_{i=1}^n x_i^2}, so the RMS is precisely \|x\|_2 / \sqrt{n}. In the continuous case over an interval of length T, the L^2 norm is \|f\|_2 = \sqrt{\int |f(t)|^2 \, dt}, and the RMS is \|f\|_2 / \sqrt{T}. This scaling by the square root of the measure (n or T) normalizes the norm to yield an average-like quantity.

In statistical contexts, particularly for approximating the variance of a zero-mean population, the square of the RMS serves as an unbiased estimator. If the data are centered (mean zero) and drawn from a distribution with variance \sigma^2, then \mathrm{RMS}^2 = \frac{1}{n} \sum_{i=1}^n x_i^2 provides an unbiased estimate of \sigma^2, since the expected value E[\mathrm{RMS}^2] = \sigma^2. This makes the RMS a practical tool for variance estimation when the mean is known or assumed to be zero, though adjustments like the n-1 denominator are used for the sample standard deviation in general cases.
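The QM-AM inequality, the scaling property, and the Pythagorean identity can all be checked numerically; the following sketch uses arbitrary illustrative data:

```python
import math

def rms(values):
    """Root mean square: sqrt of the mean of the squares."""
    return math.sqrt(sum(x * x for x in values) / len(values))

x = [1.0, 2.0, 3.0, 4.0]

# QM-AM inequality: RMS >= arithmetic mean for non-negative data.
am = sum(x) / len(x)
assert rms(x) >= am                      # sqrt(7.5) ~ 2.739 >= 2.5

# Scaling: RMS(kx) = |k| * RMS(x), even for negative k.
k = -3.0
scaled = [k * v for v in x]
assert math.isclose(rms(scaled), abs(k) * rms(x))

# Pythagorean identity for orthogonal vectors (dot product zero):
a = [1.0, 1.0, -1.0, -1.0]
b = [1.0, -1.0, 1.0, -1.0]
assert sum(u * v for u, v in zip(a, b)) == 0.0   # orthogonality check
s = [u + v for u, v in zip(a, b)]
assert math.isclose(rms(s) ** 2, rms(a) ** 2 + rms(b) ** 2)
```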

Applications in Signals

Common Waveforms

The root mean square (RMS) value provides a measure of the effective magnitude of periodic waveforms, equivalent to the DC value that would produce the same power dissipation in a resistor. For common waveforms encountered in signal analysis, the RMS is computed by integrating the square of the instantaneous value over one period, averaging, and taking the square root, as defined in the mathematical foundation. This approach yields distinct values depending on the waveform's shape, reflecting variations in how the signal's energy is distributed over time.

For a direct current (DC) signal, which is a constant voltage V, the RMS value equals the absolute value of the signal, |V|, since the instantaneous value does not vary and the squaring operation preserves the magnitude. This equivalence underscores the RMS as the DC counterpart for power calculations in steady-state conditions.

In a sinusoidal waveform with peak amplitude V_p, the RMS value is \frac{V_p}{\sqrt{2}} \approx 0.707 V_p. This result arises from evaluating the integral V_{\text{RMS}} = \sqrt{\frac{1}{T} \int_0^T [V_p \sin(2\pi f t)]^2 \, dt} over one period T, where the average of the squared sine function simplifies to \frac{1}{2}. Sinusoids are fundamental in alternating current systems, and this RMS factor ensures consistent power equivalence to DC.

A square waveform with 50% duty cycle and peak amplitude V_p (bipolar, symmetric about zero) has an RMS value of |V_p|. Here, the signal alternates between +V_p and -V_p, so its square is constantly V_p^2, yielding an average of V_p^2 and thus an RMS of V_p after the square root. This full-height effective value highlights the square wave's efficient energy delivery compared to smoother forms.

For a triangular waveform with peak amplitude V_p (bipolar and symmetric about zero), the RMS is \frac{V_p}{\sqrt{3}} \approx 0.577 V_p. The calculation involves integrating the squared linear ramp over the period: V_{\text{RMS}} = \sqrt{\frac{1}{T} \int_0^T v(t)^2 \, dt}, where v(t) rises and falls linearly, resulting in the \frac{1}{3} factor from the integral of t^2. This value positions the triangle wave between the sine and square waves in terms of effective amplitude.

The sawtooth waveform, typically a linear ramp from 0 to V_p over the period (or its symmetric equivalent), also yields an RMS of \frac{V_p}{\sqrt{3}} \approx 0.577 V_p, analogous to the triangle wave because the quadratic integral under a linear profile is identical. For a unipolar sawtooth peaking at V_p, the derivation confirms this through V_{\text{RMS}} = V_p \sqrt{\frac{1}{3}}, emphasizing its similarity in energy distribution to the triangle wave despite the asymmetric shape.

These RMS values for common waveforms are essential in signal processing, as they quantify the effective amplitude for tasks like power estimation, noise assessment, and dynamic range control, enabling consistent comparisons across signal types.
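These closed-form factors can be verified by direct numerical integration over one period; the sketch below samples each waveform uniformly (the period is normalized to 1 and the sample count is an arbitrary choice):

```python
import math

def rms_sampled(f, n=200_000):
    """Approximate the RMS of f over one period [0, 1) by uniform sampling."""
    return math.sqrt(sum(f(k / n) ** 2 for k in range(n)) / n)

Vp = 1.0
sine     = lambda t: Vp * math.sin(2 * math.pi * t)
square   = lambda t: Vp if t < 0.5 else -Vp            # bipolar, 50% duty cycle
triangle = lambda t: Vp * (4 * t - 1) if t < 0.5 else Vp * (3 - 4 * t)
sawtooth = lambda t: Vp * t                            # unipolar ramp 0 -> Vp

print(round(rms_sampled(sine), 3))      # 0.707 = Vp / sqrt(2)
print(round(rms_sampled(square), 3))    # 1.0   = Vp
print(round(rms_sampled(triangle), 3))  # 0.577 = Vp / sqrt(3)
print(round(rms_sampled(sawtooth), 3))  # 0.577 = Vp / sqrt(3)
```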

Waveform Combinations

When combining multiple waveforms, the root mean square (RMS) value of the resultant signal is determined by the correlation between the components. For uncorrelated or orthogonal signals, such as sinusoids at different frequencies or in quadrature phase, the signals do not interact in terms of power, leading to a straightforward combination rule: the total RMS is the square root of the sum of the squares of the individual RMS values, reflecting the additive nature of their average powers. This relationship can be expressed as \mathrm{RMS}_{\text{total}} = \sqrt{\mathrm{RMS}_1^2 + \mathrm{RMS}_2^2 + \cdots + \mathrm{RMS}_n^2}.

For example, consider the sum of two sinusoidal waveforms with peak amplitudes V_1 and V_2 that are 90° out of phase, making them orthogonal. The individual sinusoids have RMS values of V_1 / \sqrt{2} and V_2 / \sqrt{2}, respectively, so the total RMS is \sqrt{(V_1^2 / 2) + (V_2^2 / 2)} = \frac{1}{\sqrt{2}} \sqrt{V_1^2 + V_2^2}.

In non-orthogonal cases, the exact mean square of the sum includes a cross-term 2 \langle f g \rangle, where \langle \cdot \rangle denotes the time average. However, if this cross-term averages to zero over the observation period—often the case for signals with rapidly varying phases or differing frequencies—the approximation \mathrm{RMS}_{\text{total}} \approx \sqrt{\mathrm{RMS}_1^2 + \mathrm{RMS}_2^2} holds.

A common application arises in signal processing with a deterministic signal plus additive broadband noise, where the noise is uncorrelated with the signal. Here, the total RMS approximates \sqrt{\mathrm{RMS}_{\text{signal}}^2 + \mathrm{RMS}_{\text{noise}}^2}, providing a measure of the combined effective amplitude. This root-sum-square method is widely used because uncorrelated noise sources add in power, not amplitude.

These formulas are exact only in the infinite-time average or when averaging over complete periods of periodic signals, ensuring that cross-terms fully cancel where applicable. For finite observation windows, incomplete averaging can introduce errors, particularly if the window does not capture sufficient cycles or if the signals are quasi-periodic.
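The root-sum-square rule for orthogonal components can be demonstrated numerically; in this sketch the amplitudes V_1 = 3 and V_2 = 4 and the sample count are illustrative:

```python
import math

def rms(values):
    """Root mean square: sqrt of the mean of the squares."""
    return math.sqrt(sum(x * x for x in values) / len(values))

n = 100_000
t = [k / n for k in range(n)]

# Two sinusoids 90 degrees out of phase, sampled over one full period,
# so they are orthogonal and their cross-term averages to zero.
V1, V2 = 3.0, 4.0
f = [V1 * math.sin(2 * math.pi * ti) for ti in t]
g = [V2 * math.cos(2 * math.pi * ti) for ti in t]
total = [a + b for a, b in zip(f, g)]

predicted = math.sqrt(rms(f) ** 2 + rms(g) ** 2)   # root-sum-square rule
print(round(rms(total), 4), round(predicted, 4))   # both 3.5355 = 5/sqrt(2)
```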

Engineering Uses

Electrical Engineering

In electrical engineering, the root mean square (RMS) value is a fundamental measure used to quantify alternating current (AC) and voltage in circuits, representing the effective value equivalent to a direct current (DC) that would produce the same average power dissipation. For a periodic current i(t), the RMS current I_{\mathrm{RMS}} is defined as I_{\mathrm{RMS}} = \sqrt{\frac{1}{T} \int_0^T i^2(t) \, dt}, where T is the period of the waveform. This definition arises from the need to characterize the heating effect in resistive loads, as power dissipation in a resistor is proportional to the square of the current. Similarly, the RMS voltage V_{\mathrm{RMS}} for a voltage waveform v(t) takes the analogous form V_{\mathrm{RMS}} = \sqrt{\frac{1}{T} \int_0^T v^2(t) \, dt}.

In power systems and AC mains, which typically employ sinusoidal waveforms, these simplify to I_{\mathrm{RMS}} = I_{\mathrm{peak}} / \sqrt{2} and V_{\mathrm{RMS}} = V_{\mathrm{peak}} / \sqrt{2}, where I_{\mathrm{peak}} and V_{\mathrm{peak}} are the peak values. For instance, the standard 120 V AC mains voltage in the United States corresponds to a peak voltage of approximately 170 V, ensuring consistent power delivery calculations.

The RMS value establishes equivalence between AC and DC for power purposes: an AC current with RMS value I_{\mathrm{RMS}} dissipates the same average power P = I_{\mathrm{RMS}}^2 R in a resistor R as a DC current of magnitude I_{\mathrm{RMS}}. This equivalence is crucial for circuit analysis, as it allows AC systems to be treated like DC equivalents when computing thermal effects or energy transfer, avoiding the zero average of instantaneous AC values.

In AC circuits with both resistive and reactive components, the average power is given by P_{\mathrm{avg}} = V_{\mathrm{RMS}} I_{\mathrm{RMS}} \cos \phi, where \phi is the phase angle between voltage and current; this formula derives from the time average of the instantaneous power p(t) = v(t) i(t), with reactive components contributing no net power. For purely resistive circuits (\phi = 0), it reduces to P_{\mathrm{avg}} = V_{\mathrm{RMS}} I_{\mathrm{RMS}}, matching the DC case. These principles underpin the design of power distribution systems, ensuring efficient transmission and utilization of electrical energy.
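A minimal numerical sketch of these relationships, using the US mains figure from above together with an illustrative phase angle and current:

```python
import math

# For sinusoidal AC, peak = RMS * sqrt(2); US mains example (120 V RMS nominal).
V_rms = 120.0
V_peak = V_rms * math.sqrt(2)
print(round(V_peak, 1))                 # 169.7 V, i.e. "approximately 170 V"

# Average power with a phase angle phi between voltage and current
# (the 10 A RMS current and 30 degree angle are illustrative values).
I_rms = 10.0
phi = math.radians(30)
P_avg = V_rms * I_rms * math.cos(phi)   # P_avg = Vrms * Irms * cos(phi)
print(round(P_avg, 1))                  # 1039.2 W

# Purely resistive case (phi = 0): P = Vrms * Irms = Irms^2 * R with V = I * R.
R = 12.0
assert math.isclose((I_rms * R) * I_rms, I_rms ** 2 * R)
```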

Audio Engineering

In audio engineering, the root mean square (RMS) level serves as a key metric for assessing the perceived loudness of audio signals, particularly for sustained sounds, where it correlates more closely with human auditory perception than peak amplitude measurements. Unlike peak levels, which capture instantaneous maxima and can be misleading for continuous program material, RMS provides an average power estimate that better reflects the overall energy and subjective volume of a signal.

In digital audio processing, RMS is typically calculated over short integration windows, such as 300 milliseconds, to enable real-time metering that approximates perceived loudness without excessive responsiveness to transients. This windowed approach allows engineers to monitor and adjust levels during mixing and mastering, ensuring consistent playback across devices. Audio compression and limiting techniques often employ RMS detection to manage dynamic range, applying gain reduction based on average signal levels to prevent excessive variation while preserving musicality; common targets range from -12 dBFS to -20 dBFS, depending on the medium, to balance loudness and headroom. For instance, broadcast standards aim for alignment around -18 dBFS to accommodate peaks up to -9 dBFS.

While effective for average energy, RMS metering overlooks inter-sample peaks that occur between digital samples, potentially resulting in clipping during playback or conversion if true peak levels exceed 0 dBTP; this limitation necessitates complementary true peak monitoring to avoid distortion. Standardization efforts by the Audio Engineering Society (AES) and European Broadcasting Union (EBU) have incorporated RMS measurements into broadcast audio guidelines since the 1990s, with EBU R68 (2000) defining an alignment level of -18 dBFS for a 1 kHz sine wave to ensure interoperability and consistent loudness across transmission chains.
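A windowed RMS level meter of the kind described can be sketched in a few lines; the 48 kHz sample rate and full-scale 1 kHz test tone here are illustrative choices:

```python
import math

def rms_dbfs(samples):
    """RMS level of a block of full-scale-normalized samples, in dBFS."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

# A 300 ms integration window at 48 kHz is 14400 samples. A full-scale
# sine (peak 1.0) has RMS 1/sqrt(2), i.e. about -3.01 dBFS.
sr = 48_000
window = round(0.300 * sr)
tone = [math.sin(2 * math.pi * 1000 * k / sr) for k in range(window)]
print(round(rms_dbfs(tone), 2))   # -3.01
```

A real meter would slide this window along the signal and report the level block by block.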

Statistical and Physical Applications

Error Measurement

The root mean square error (RMSE) is a widely used metric for quantifying the accuracy of predictive models by measuring the average magnitude of errors between predicted and observed values. It is defined as \text{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2}, where y_i are the observed values, \hat{y}_i are the predicted values, and n is the number of observations. This formula computes the square root of the mean of the squared residuals, providing a measure in the same units as the target variable, which facilitates interpretation in regression and forecasting tasks.

To enable comparisons across datasets with different scales, RMSE is often normalized, such as by dividing by the mean of the observed values (yielding the normalized RMSE, or NRMSE), the range of the observations, or their standard deviation. Normalization by the mean, for instance, expresses errors as a fraction of the average value, making it useful for benchmarking models on heterogeneous data.

Compared to the mean absolute error (MAE), which averages the absolute differences between predictions and observations, RMSE penalizes larger errors more heavily due to its quadratic scaling. This property makes RMSE particularly suitable for applications where outliers or significant deviations are costly, as it amplifies the impact of such errors in the overall metric.

In machine learning, RMSE serves as a standard evaluation metric for regression models, assessing predictive performance on tasks like house price estimation or demand forecasting. It is commonly employed in weather forecasting to gauge model accuracy, such as in numerical weather prediction systems where it quantifies deviations of temperature or precipitation predictions from observations.

Statistically, RMSE relates to the bias-variance decomposition of the mean squared error (MSE), where \text{MSE} = \text{bias}^2 + \text{variance} + \sigma^2 and \text{RMSE} = \sqrt{\text{MSE}}, with \sigma^2 representing the irreducible error from noise in the data. This breakdown aids in diagnosing sources of error, such as overfitting (high variance) or underfitting (high bias), in statistical learning contexts.
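A small sketch contrasting RMSE with MAE on illustrative data (note how the single large error dominates RMSE more than MAE):

```python
import math

def rmse(y_true, y_pred):
    """Root mean square error between observed and predicted values."""
    return math.sqrt(sum((yt - yp) ** 2 for yt, yp in zip(y_true, y_pred)) / len(y_true))

def mae(y_true, y_pred):
    """Mean absolute error between observed and predicted values."""
    return sum(abs(yt - yp) for yt, yp in zip(y_true, y_pred)) / len(y_true)

y_true = [10.0, 12.0, 11.0, 9.0]
y_pred = [11.0, 11.0, 10.0, 13.0]   # one large error (4.0) among small ones

print(round(rmse(y_true, y_pred), 3))   # 2.179 -- quadratic penalty on the outlier
print(round(mae(y_true, y_pred), 3))    # 1.75  -- linear penalty

# Normalized RMSE, here dividing by the mean of the observations:
nrmse = rmse(y_true, y_pred) / (sum(y_true) / len(y_true))
print(round(nrmse, 3))                  # error as a fraction of the average value
```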

Root Mean Square Speed

In the kinetic theory of gases, the root mean square (RMS) speed v_{\mathrm{rms}} provides a measure of the typical magnitude of molecular velocities, defined as the square root of the mean of the squared speeds. It is expressed as v_{\mathrm{rms}} = \sqrt{\frac{3kT}{m}} = \sqrt{\frac{3RT}{M}}, where k is Boltzmann's constant, T is the absolute temperature, m is the mass of an individual molecule, R is the gas constant, and M is the molar mass. This formula emerges from the Maxwell-Boltzmann distribution, which models the speeds of molecules in an ideal gas at thermal equilibrium. The derivation calculates the average of v^2 by integrating over the three orthogonal velocity components, yielding a mean kinetic energy of \frac{3}{2} kT per molecule and thus \frac{1}{2} m v_{\mathrm{rms}}^2 = \frac{3}{2} kT, as established in James Clerk Maxwell's foundational 1860 work on the dynamical theory of gases.

From the Maxwell-Boltzmann distribution, the RMS speed surpasses the arithmetic average speed v_{\mathrm{avg}} and the most probable speed v_{\mathrm{mp}} (the speed at the distribution's peak), satisfying v_{\mathrm{rms}} > v_{\mathrm{avg}} > v_{\mathrm{mp}}, with v_{\mathrm{rms}} \approx 1.085 v_{\mathrm{avg}}. In applications to gas effusion, the RMS speed governs effusion rates, as the flux of molecules through a pinhole is proportional to the mean molecular speed and hence to v_{\mathrm{rms}}, consistent with Graham's law of effusion derived from empirical observations. The speed of sound c in an ideal gas connects to it through c = \sqrt{\frac{\gamma RT}{M}} = \sqrt{\frac{\gamma}{3}} v_{\mathrm{rms}}, where \gamma is the ratio of specific heats. Experimental support for these relations stems from 19th-century investigations, including Thomas Graham's studies of effusion and diffusion in the 1830s and 1840s, which corroborated predictions of molecular speeds of around 400–500 m/s for common gases at room temperature.
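A quick sketch evaluating these formulas for nitrogen at room temperature; the molar mass of N₂ and γ = 7/5 for a diatomic gas are standard textbook values:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def v_rms(T, M):
    """RMS molecular speed: v_rms = sqrt(3RT/M), with M in kg/mol."""
    return math.sqrt(3 * R * T / M)

# Nitrogen (M = 0.028 kg/mol) at 298 K:
print(round(v_rms(298, 0.028)))   # 515 m/s

# The speed of sound c = sqrt(gamma*R*T/M) = sqrt(gamma/3) * v_rms is
# smaller than v_rms; for a diatomic gas, gamma = 7/5.
gamma = 7 / 5
c = math.sqrt(gamma / 3) * v_rms(298, 0.028)
print(round(c))                   # 352 m/s
```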

Advanced Analysis

In the frequency domain, the root mean square (RMS) value of a signal is intimately linked to its spectral representation through Parseval's theorem, which provides the foundational relationship: the total energy or power in the time domain equals that in the frequency domain. For a periodic signal of period T with Fourier coefficients c_k = \frac{1}{T} \int_0^T x(t) e^{-i \omega_k t} \, dt, where \omega_k = \frac{2\pi k}{T}, the time-domain mean square value (the RMS squared) is given by \frac{1}{T} \int_0^T |x(t)|^2 \, dt = \sum_{k=-\infty}^{\infty} |c_k|^2. This equality connects the RMS directly to the magnitudes of the spectral components and underpins the power spectral density (PSD), defined as the distribution of power across frequencies.

For band-limited signals, where energy is confined to a finite range, the RMS value simplifies to the square root of the integrated PSD over the band. Mathematically, \mathrm{RMS} = \sqrt{\int_{f_1}^{f_2} S_{xx}(f) \, df}, with S_{xx}(f) denoting the PSD in units such as V²/Hz, and the limits f_1 to f_2 defining the band of interest. This formulation is essential for assessing signal power within specific spectral bands, such as in filtered communications channels.

In the analysis of noise, particularly white noise with a flat PSD, the RMS value scales with the square root of the effective bandwidth. For thermal (Johnson-Nyquist) noise in a resistor, the open-circuit RMS noise voltage across a resistance R at temperature T is v_n = \sqrt{4 k T B R}, where k is Boltzmann's constant and B is the bandwidth; this derives from integrating the constant PSD 4 k T R over the bandwidth B. The available noise power, independent of R, is k T B, highlighting the frequency-domain origin of noise contributions in circuits.

For multitone signals consisting of sinusoids at distinct frequencies, the components are orthogonal, so the total RMS is the root-sum-square of the individual tone RMS values: \mathrm{RMS} = \sqrt{\sum_i \mathrm{RMS}_i^2}, where each \mathrm{RMS}_i = A_i / \sqrt{2} for amplitude A_i. This reflects the incoherent addition of powers in the frequency domain. Since the 1960s, with the advent of the fast Fourier transform (FFT) algorithm, such spectral decompositions have enabled efficient RMS computations in communications and instrumentation, transforming raw time-series data into frequency bins for analysis.
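Parseval's relation for a sampled periodic signal can be checked with a plain discrete Fourier transform; this sketch uses a two-tone test signal with illustrative amplitudes, and a brute-force O(N²) DFT to stay self-contained where an FFT would be used in practice:

```python
import cmath
import math

# Two-tone test signal: amplitudes 1.5 and 0.5 at bins 5 and 20, N samples.
N = 256
x = [1.5 * math.sin(2 * math.pi * 5 * n / N) + 0.5 * math.cos(2 * math.pi * 20 * n / N)
     for n in range(N)]

# Time-domain RMS: sqrt of the mean square.
rms_time = math.sqrt(sum(v * v for v in x) / N)

# Normalized DFT coefficients c_k = (1/N) * sum_n x[n] e^{-2*pi*i*k*n/N};
# by the discrete Parseval relation, sum_k |c_k|^2 equals the mean square.
c = [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N)) / N
     for k in range(N)]
rms_freq = math.sqrt(sum(abs(ck) ** 2 for ck in c))

print(round(rms_time, 4), round(rms_freq, 4))
# Both equal sqrt(1.5^2/2 + 0.5^2/2) ~ 1.118: the tone powers add incoherently.
```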

Relationship to Other Statistics

The root mean square (RMS) of a set of non-negative real numbers is always greater than or equal to their arithmetic mean (AM), with equality if and only if all the numbers are equal; this follows from the convexity of the squaring function and Jensen's inequality applied to the average. The RMS-AM inequality highlights that the RMS captures quadratic effects, such as energy or power in physical systems, where simple averaging would understate the magnitude.

The standard deviation (SD) of a dataset is the square root of the variance, which is the RMS of the deviations from the mean, whereas the RMS itself is the square root of the mean of the squared original values. Thus, the SD quantifies dispersion around the mean, while the RMS measures the overall scale of the data without centering.

Unlike the median, which is robust to outliers because it depends only on the order statistics, the RMS is highly sensitive to extreme values, since squaring amplifies large deviations more than small ones. The mode, focusing on the most frequent value, is even less affected by outliers but provides limited information about spread compared to the RMS.

For a non-negative random variable X, the RMS is \sqrt{\mathbb{E}[X^2]}, while the mean absolute value is \mathbb{E}[|X|] = \mathbb{E}[X]; by Jensen's inequality, since the squaring function is convex, \mathbb{E}[X] \leq \sqrt{\mathbb{E}[X^2]}, with equality if and only if X is constant almost surely. This probabilistic perspective underscores the RMS as an L_2-norm measure of magnitude.

The RMS generalizes as the power mean of order p=2, defined for p > 0 as M_p = \left( \frac{1}{n} \sum_{i=1}^n x_i^p \right)^{1/p} for positive x_i; the power mean inequality states that M_p \geq M_q for p > q, with the arithmetic mean arising as M_1 and equality when all x_i are equal.
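The power-mean ordering M_2 \geq M_1 is easy to check numerically; the data here are arbitrary illustrative values:

```python
import math

def power_mean(x, p):
    """Power mean M_p = (mean of x_i^p)^(1/p) for positive data and p != 0."""
    return (sum(v ** p for v in x) / len(x)) ** (1 / p)

x = [1.0, 2.0, 4.0]

m1 = power_mean(x, 1)   # arithmetic mean
m2 = power_mean(x, 2)   # RMS (quadratic mean)

print(round(m1, 4), round(m2, 4))   # 2.3333 2.6458 -- M_2 >= M_1
assert m2 >= m1

# Equality holds only for constant data:
assert math.isclose(power_mean([3.0, 3.0, 3.0], 2), power_mean([3.0, 3.0, 3.0], 1))
```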
