
Loudness

![Lindos1.svg.png][float-right]
Loudness is the subjective attribute of sound by which sounds are perceived to differ in strength, distinct from the objective physical measure of level. It arises from the nonlinear response of the human auditory system to acoustic stimuli, incorporating factors such as signal intensity, frequency content, duration, and temporal patterning. Empirical quantification of loudness relies on psychophysical methods, yielding units like the phon, defined as the loudness level matching a 1 kHz reference tone at a specified sound pressure level in decibels, and the sone, a perceptually linear unit where 1 sone corresponds to 40 phons and each subsequent doubling of sones doubles the perceived loudness. Frequency dependence is captured by equal-loudness contours, standardized in ISO 226, which map sound pressure levels across frequencies for tones perceived as equally loud by otologically normal listeners under free-field conditions. These contours, originally derived from extensive listener judgments, reveal heightened sensitivity in mid-frequencies (around 2–5 kHz) and reduced sensitivity at the extremes, informing applications in audio engineering, noise assessment, and hearing protection. While loudness models enable computational prediction for complex sounds, variations in individual hearing thresholds and contextual effects underscore its inherently perceptual nature.

Psychoacoustic and Technical Foundations

Definition and Perception of Loudness

Loudness is defined as the subjective perception of sound intensity by the human auditory system, distinct from objective physical measures such as sound pressure level (SPL) in decibels, which quantifies acoustic energy. This psychoacoustic attribute arises from neural processing in the cochlea and central auditory pathways, where perceived volume integrates factors beyond mere intensity, including spectral content and temporal characteristics. Unlike SPL, which assumes logarithmic scaling of physical pressure, loudness reflects nonlinear human perception, with empirical studies showing it follows Stevens' power law: perceived loudness \psi approximates k \cdot I^{0.67}, where I is the stimulus magnitude (sound pressure in the original 3 kHz tone data) and k is a constant fitted to experimental data from magnitude estimation tasks. Human perception of loudness varies significantly with frequency due to the ear's uneven sensitivity, peaking between 2 and 5 kHz and declining at extremes below 100 Hz or above 10 kHz, even at elevated SPLs. Equal-loudness contours, formalized in ISO 226, map the SPL required across frequencies (20 Hz to 12.5 kHz) to achieve equivalent perceived loudness for pure tones, based on listener judgments in controlled threshold-of-hearing experiments. The standard's 2003 revision incorporated data from over 200 participants, while the 2023 update refined contours using Bayesian modeling of recent psychoacoustic measurements to better account for inter-subject variability and age-related shifts. These contours underpin frequency-weighted metrics like A-weighting (approximating the 40-phon contour), though they deviate from true loudness at low levels where absolute thresholds dominate perception. To quantify loudness perceptually, the phon scale measures loudness level as the SPL of a 1 kHz tone yielding equivalent perceived loudness, aligning complex sounds to reference tones via contour matching; for instance, a 500 Hz tone at 60 dB SPL equates to about 50 phons. Complementing this, the sone scale provides a ratio-based unit of subjective magnitude, where 1 sone corresponds to 40 phons (a moderate conversational level), and each 10-phon increment doubles the sone value, reflecting empirical doubling of perceived loudness from paired-comparison tests. This linearity in sones facilitates modeling, as validated in auditory scaling experiments since the 1930s, though both scales assume steady-state tones and underperform for transient or broadband signals without additional masking corrections.
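As a rough numerical check of the scaling relations above, the following sketch (Python) applies the 0.67 pressure exponent and shows that a 10 dB increase in level predicts roughly a doubling of perceived loudness, consistent with the sone scale's doubling per 10 phons; the constant k and the choice of a +10 dB step are illustrative assumptions, not part of any standard.

```python
# A minimal numerical sketch of Stevens' power law for loudness, using the
# 0.67 exponent for sound pressure cited above. The constant k and the
# reference pressure ratio are arbitrary choices for illustration only.

def stevens_loudness(pressure_ratio: float, exponent: float = 0.67, k: float = 1.0) -> float:
    """Perceived loudness (arbitrary units) for a given sound-pressure ratio."""
    return k * pressure_ratio ** exponent

# A 10 dB SPL increase corresponds to a sound-pressure ratio of 10**(10/20), about 3.16.
ratio_10_db = 10 ** (10 / 20)

print(f"pressure ratio for +10 dB SPL : {ratio_10_db:.2f}")
print(f"predicted loudness ratio      : {stevens_loudness(ratio_10_db) / stevens_loudness(1.0):.2f}")
# Roughly 2.2, i.e. approximately a doubling of perceived loudness per 10 dB increase.
```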

Measurement Principles and Units

Loudness measurement relies on psychoacoustic models that approximate human perception, accounting for frequency sensitivity and temporal integration, as pure sound pressure level in decibels (dB SPL) does not capture subjective volume. The phon defines loudness level as equivalent to the SPL of a 1 kHz tone judged equally loud by listeners under standard conditions, derived from equal-loudness contours established in experiments like those by Fletcher and Munson in 1933. One phon equals 1 dB SPL at 1 kHz, but the contours show lower sensitivity at low and high frequencies, requiring adjustments for broadband signals. The sone scale provides a nonlinear mapping from loudness level to perceived loudness magnitude, where 1 sone corresponds to the loudness of a 1 kHz tone at 40 phons (or 40 dB SPL), and loudness approximately doubles for every 10-phon increase, reflecting Stevens' power law with an exponent around 0.3 relating sound intensity to perceived loudness. The conversion follows S = 2^((P - 40)/10), where S is sones and P is phons, enabling quantification of loudness ratios rather than levels; for instance, a 50-phon tone is 2 sones. These units stem from laboratory tone experiments but are limited for dynamic programme material due to masking and adaptation effects. In audio production and broadcast, objective loudness metering uses standardized algorithms to estimate integrated programme loudness, as defined in ITU-R BS.1770, which employs K-weighting—a shelving and high-pass filter combination—to weight frequencies according to human hearing sensitivity, followed by level gating to exclude near-silence and averaging over time. The primary unit is LUFS (Loudness Units relative to Full Scale), computed by integrating K-weighted mean-square values across channels with a -0.691 dB calibration offset referenced to full scale, applying an absolute gate at -70 LUFS and a relative gate 10 LU below the ungated level to handle varying content. EBU R 128, building on BS.1770, specifies -23 LUFS as the target for normalized audio in broadcast, with additional metrics like short-term loudness (3-second blocks), momentary loudness (400 ms), and loudness range (LRA, in LU) to assess dynamic variation. True-peak measurement, also defined in BS.1770, detects inter-sample peaks in dBTP (decibels True Peak) to prevent clipping, targeting below -1 dBTP for headroom. These methods prioritize perceptual consistency over peak or RMS levels, which correlate poorly with subjective loudness, as validated against listener trials showing reduced uncertainty in programme loudness estimation.
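The phon-to-sone relation quoted above is simple enough to express directly in code; the sketch below (Python) transcribes S = 2^((P - 40)/10) and its inverse, and is only meaningful for loudness levels at or above roughly 40 phons, as noted in the text. It is not a substitute for a full loudness model such as ISO 532.

```python
# A small helper implementing the phon/sone relation quoted above,
# S = 2**((P - 40) / 10), valid above roughly 40 phons.

import math

def phons_to_sones(phons: float) -> float:
    """Convert loudness level in phons to loudness in sones."""
    return 2 ** ((phons - 40) / 10)

def sones_to_phons(sones: float) -> float:
    """Convert loudness in sones back to loudness level in phons."""
    return 40 + 10 * math.log2(sones)

if __name__ == "__main__":
    for p in (40, 50, 60, 70):
        print(f"{p} phons -> {phons_to_sones(p):.1f} sones")
    # 40 phons -> 1.0 sone, 50 -> 2.0, 60 -> 4.0, 70 -> 8.0:
    # each 10-phon step doubles the perceived loudness.
```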

Historical Evolution

Analog Era Practices

In the analog era, spanning roughly from the 1930s to the late 1980s, audio engineers employed compression and limiting to maximize perceived loudness while contending with the physical constraints of media such as vinyl records and magnetic tape. Compression originated in broadcast applications during the 1930s to prevent signal over-modulation, with early commercial units from manufacturers like Collins, Western Electric, and RCA entering use by the early 1930s. These techniques reduced the dynamic range by attenuating loud peaks—such as drum hits—and allowing the overall signal level to be raised, thereby increasing average loudness without exceeding the medium's peak tolerance. By the 1940s, the rise of jukeboxes, which operated at fixed playback volumes, incentivized mastering engineers to prioritize louder recordings to make tracks stand out against competitors. This trend intensified through the 1950s and 1960s as producers demanded higher average levels for 7-inch singles to compete on radio, marking the informal onset of loudness competition in commercial music production. In mastering for vinyl, engineers applied compression alongside equalization during the transfer from multitrack tape—which offered wider dynamic range—to discs cut by lathes. The 1954 adoption of the RIAA equalization standard optimized frequency response, enabling narrower grooves for longer playtimes while facilitating louder cuts by compensating for bass-heavy content that required deeper grooves. Limiting, often implemented as high-ratio compression (typically 10:1 or greater), served as a final safeguard against groove overload, preserving playability on turntables where excessive peaks could cause skipping. Despite these efforts, analog media imposed strict limits on loudness escalation: vinyl's dynamic range was constrained to approximately 50-60 dB due to groove geometry and surface noise, while magnetic tape hovered around 70 dB, far below the 90+ dB potential of live acoustics. Engineers balanced loudness gains against risks like inner-groove distortion and reduced playing time, as louder signals demanded wider or deeper grooves that consumed space—leading to a shift by the 1960s and 1970s toward prioritizing duration over maximal volume in LP mastering. An example of applied limiting appears in Hall & Oates' 1976 track "Rich Girl," where peak reduction enhanced average loudness for vinyl release. Overall, analog practices emphasized controlled dynamics to fit imperfect media, avoiding the unchecked escalation seen later in digital formats.

Digital Transition and Escalation

The transition to digital audio recording and reproduction in the late 1970s and early 1980s fundamentally altered loudness practices in music production. Analog formats like vinyl records and magnetic tape imposed physical constraints, such as groove width limitations on vinyl and tape saturation, which naturally capped achievable loudness to avoid audible distortion. The introduction of the compact disc (CD) in 1982, with its fixed digital ceiling at 0 dBFS (full scale), eliminated these analog barriers, allowing engineers to maximize peak levels without medium-specific degradation. However, since human perception of loudness correlates more with average (RMS) levels than peaks, digital mastering enabled aggressive compression to elevate RMS while keeping peaks below clipping, setting the stage for competitive loudness increases. This shift escalated in the 1990s as digital signal processing (DSP) tools proliferated, permitting unprecedented control over dynamics. Average loudness on CDs rose steadily, with recordings from the mid-2000s averaging approximately 5 dB higher than those from the 1970s or early 1980s. A pivotal development was the 1994 release of the Waves L1 Ultramaximizer, the first widely available digital brickwall limiter featuring look-ahead capability, which prevented inter-sample clipping and allowed sustained high levels without traditional trade-offs. Mastering engineers increasingly applied such tools to make tracks "punch" louder on radio, car stereos, and CD players, where unadjusted playback favored perceptually louder masters in playlists or broadcasts. By the late 1990s, this escalation intensified as compression and limiting usage in mastering surged dramatically, often reducing dynamic range to 5-8 dB or less on commercial releases. The absence of analog "warmth" or natural limiting encouraged a feedback loop: producers responded to consumer and retailer preferences for immediate impact, prioritizing short-term loudness over long-term listening quality. This digital-enabled escalation, dubbed the "loudness war," prioritized RMS elevation through multiband compression and iterative limiting passes, fundamentally reshaping audio aesthetics in pop, rock, and electronic genres.

Peak Intensity (1990s–2010s)

During the 1990s and 2000s, the loudness war reached its zenith as mastering engineers employed increasingly aggressive compression and multiband limiting to elevate the perceived volume of commercial recordings, often at the expense of transient clarity and overall fidelity. This escalation was facilitated by advancements in digital signal processing, including software limiters that permitted sustained high average levels close to 0 dBFS without traditional analog saturation constraints. Analyses of popular tracks from the era reveal a marked rise in root mean square (RMS) levels, with many masters achieving averages of -8 dBFS or higher by the mid-2000s, compared to -12 dBFS or lower in prior decades. Notable instances of extreme processing include Metallica's Death Magnetic (released September 12, 2008), where the CD version underwent such severe brickwall limiting that its dynamic range averaged DR4 or less across tracks, resulting in audible clipping and distortion; comparative stems from the album's Guitar Hero adaptation demonstrated superior fidelity when less compressed. Similarly, albums like Nickelback's All the Right Reasons (2005) and Dream Theater's Octavarium (2005) exemplified the trend toward minimal headroom, with crest factors reduced to 3-5 dB, prioritizing competitive loudness over musical nuance. These practices stemmed from industry incentives, as louder masters stood out in unnormalized playback environments like radio broadcasts and point-of-sale demos, where volume correlated with perceived impact. By the early 2010s, empirical measurements confirmed the trend's intensity, with integrated loudness in top popular recordings often dipping below -9 LUFS, reflecting cumulative increases in both midrange and low-frequency energy over the preceding two decades. Audio professionals, including those affiliated with the Audio Engineering Society, documented these shifts as a competitive "arms race" that diminished audio quality, prompting initial critiques in technical forums and publications. Despite occasional pushback from artists and engineers advocating for dynamic preservation, the period's mastering norms entrenched low dynamic range as a de facto standard in genres like rock, pop, and electronic music until streaming normalization began mitigating the incentives around 2013.

Underlying Causes

Technical Enablers

The advent of dynamic range compression in audio engineering provided a foundational tool for increasing perceived loudness by reducing the difference between the quietest and loudest parts of a signal, allowing quieter elements to be boosted relative to peaks without exceeding maximum limits. Compressors, originating in the 1930s for broadcast applications to prevent over-modulation, evolved through optical designs like the Teletronix LA-2A in the 1960s and variable-mu tube units, enabling mastering engineers to apply ratios exceeding 10:1 for aggressive control. Limiters, a specialized form of compression with high ratios (often 20:1 or infinite) and fast attack times, further enabled loudness maximization by clipping transients just below the digital ceiling of 0 dBFS, introduced with compact discs in 1982, which lacked the physical constraints of analog media like vinyl groove depth or magnetic tape saturation that previously self-limited excessive levels. Brickwall limiters, refined in the 1990s with look-ahead processing, prevented inter-sample clipping in non-oversampled playback systems, permitting sustained high average levels—measured in RMS or later LUFS—while maintaining peak compliance. Digital audio workstations (DAWs) and plugin-based processing, proliferating from the mid-1990s with software like Pro Tools, democratized multiband compression and limiting, allowing precise frequency-specific gain reduction to enhance low-end density and overall loudness without broadband artifacts. These tools facilitated iterative mastering workflows where engineers could preview and adjust on identical playback chains, escalating integrated loudness from typical analog-era values around -12 to -14 dB RMS to digital peaks approaching -6 dB RMS or higher by the early 2000s. Advancements in analog-to-digital conversion and higher bit depths reduced quantization noise, enabling heavier processing without audible artifacts, while dithering algorithms preserved perceived resolution during bit-depth reduction. However, these enablers prioritized short-term perceptual competition over long-term dynamic preservation, as evidenced by measurable reductions in crest factor (peak-to-RMS ratio) from 12-15 dB in 1980s masters to under 6 dB in many later releases.
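To make the limiting behaviour described above concrete, the sketch below (Python with NumPy) implements a deliberately simplified look-ahead brickwall limiter: the gain needed to keep each sample under a fixed ceiling is computed in advance, and the most restrictive gain in the upcoming window is applied so no peak escapes. Real mastering limiters add oversampled true-peak detection, attack/release smoothing, and program-dependent behaviour, none of which are modelled here; the ceiling and window length are illustrative assumptions.

```python
# A simplified sketch of look-ahead brickwall limiting. Gains are computed per
# sample, and the minimum gain over the look-ahead window is applied so that
# peaks never exceed the ceiling.

import numpy as np

def brickwall_limit(x: np.ndarray, ceiling: float = 0.98, lookahead: int = 32) -> np.ndarray:
    """Limit a mono float signal so that |output| never exceeds `ceiling`."""
    # Gain that would bring each sample exactly down to the ceiling (<= 1.0).
    per_sample_gain = np.minimum(1.0, ceiling / np.maximum(np.abs(x), 1e-12))
    # Look ahead: apply the most restrictive gain within the upcoming window.
    gain = np.array([per_sample_gain[i:i + lookahead].min() for i in range(len(x))])
    return x * gain

if __name__ == "__main__":
    t = np.linspace(0, 1, 44100, endpoint=False)
    signal = 0.5 * np.sin(2 * np.pi * 220 * t)
    signal[10000:10050] = 1.5                 # an artificial transient above full scale
    limited = brickwall_limit(signal)
    print("input peak :", np.abs(signal).max())
    print("output peak:", np.abs(limited).max())   # <= 0.98 by construction
```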

Commercial and Consumer Dynamics

The escalation of audio loudness in commercial music production stemmed from competitive pressures within the recording industry, where producers and labels prioritized higher integrated loudness levels to ensure tracks stood out against competitors during playback on radio, CDs, and early digital platforms without per-track volume adjustment. This practice, often termed the "loudness war," arose from the perception that louder masters conveyed greater energy and immediacy, potentially enhancing perceived quality and market appeal in environments like retail demos or broadcast chains that apply uniform limiting. Mastering engineers faced directives to maximize RMS or peak levels, frequently at the expense of dynamic range, as evidenced by industry analyses showing average commercial loudness rising from around -18 LUFS in the 1990s to -8 LUFS or higher by the late 2000s across genres. Consumer listening habits amplified these incentives, as playback devices such as car stereos, portable players, and home systems in variable acoustic environments favored tracks with elevated average levels to cut through ambient noise without requiring manual volume increases. In shuffle or playlist scenarios, louder recordings initially dominated auditory attention, fostering a feedback loop where audiences associated heightened loudness with excitement or fullness, even as it masked subtleties and induced fatigue over extended sessions. Empirical surveys and audio engineering critiques indicate this dynamic persisted because non-expert listeners rarely discerned compression artifacts in casual settings, reinforcing demand for "punchier" releases over nuanced dynamics. The advent of loudness normalization in major streaming services—such as Spotify's rollout in 2017 targeting -14 LUFS integrated loudness—fundamentally altered these dynamics by equalizing playback volumes across catalogs, thereby removing the competitive edge of hyper-compressed masters and incentivizing preservation of dynamic range to avoid post-normalization limiting. This shift, adopted broadly by platforms such as Apple Music and Tidal by 2019, reflected regulatory and technical standards like EBU R128, compelling labels to recalibrate mastering practices toward sustainable loudness targets around -14 to -16 LUFS for optimal fidelity post-normalization. However, legacy catalogs and non-streaming formats continue to exhibit remnants of prior escalation, highlighting lingering commercial inertia in physical and download sales.

Impacts and Effects

Degradation of Audio Fidelity

Excessive compression and brickwall limiting, employed to maximize loudness, introduce audible artifacts into audio signals, including pumping, breathing, and harmonic distortion from peak shaving. These processes reduce the crest factor—the ratio of peak to RMS levels—by up to 3 dB compared to recordings from the 1980s, resulting in a denser, less nuanced sound that obscures subtle details and alters timbre. For instance, Metallica's 2008 album Death Magnetic exhibits crest factors akin to heavily compressed pop tracks, yielding a "compact" presentation that diminishes the punch and clarity expected in rock production. Brickwall limiting, which enforces a hard ceiling on peaks to prevent digital clipping, often sacrifices transient accuracy; rapid attacks from instruments like drums lose sharpness as high-frequency content is attenuated or smeared, leading to a flatter perceptual impression. Studies confirm that listeners, including those with hearing impairments, rate uncompressed linear audio highest on perceptual quality scales (mean scores of 60.53 and 63.63 on a 100-point scale), while heavily compressed versions score significantly lower (47.43 and 43.88), indicating a loss of emotional depth and fidelity. Modern genres such as pop and rock typically feature dynamic ranges of 6–8 dB, far narrower than the 12+ dB common in earlier releases, amplifying these losses across commercial releases. Over-limiting can also generate distortion, particularly in multitrack mixes where summed signals exceed the limiter's threshold repeatedly, exacerbating harmonic content that fatigues listeners and reduces long-term enjoyment. Empirical measurements from 1969–2010 show average loudness rising by 5 dB alongside these degradations, underscoring how fidelity trade-offs have become normalized in mastering practices. While moderate compression enhances clarity in certain contexts, aggressive applications in the loudness-war era systematically erode the high-fidelity potential of digital formats.
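The distortion mechanism mentioned above can be illustrated numerically: hard-clipping the peaks of a pure tone redistributes signal energy into harmonics. The sketch below (Python with NumPy) estimates the resulting harmonic distortion from an FFT; the 1 kHz tone and the 70% clip level are arbitrary illustrative choices, not measurements of any real master.

```python
# Hard clipping a pure tone and estimating the harmonic distortion it creates.

import numpy as np

fs = 48000
t = np.arange(fs) / fs                       # one second of audio
tone = np.sin(2 * np.pi * 1000 * t)          # 1 kHz sine, exactly periodic over 1 s
clipped = np.clip(tone, -0.7, 0.7)           # hard clipping at 70% of the peak

spectrum = np.abs(np.fft.rfft(clipped)) / len(clipped)
fundamental = spectrum[1000]                              # 1 kHz bin (1 Hz resolution)
harmonics = [spectrum[k * 1000] for k in range(2, 10)]    # 2 kHz .. 9 kHz bins
thd = np.sqrt(sum(h ** 2 for h in harmonics)) / fundamental

print(f"THD of the clipped tone: {100 * thd:.1f} %")
# The unclipped sine would measure essentially 0 %; shaving the peaks converts
# part of the signal energy into harmonics, one of the artifacts described above.
```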

Alterations to Dynamic Range

Dynamic range compression, a core technique in achieving greater perceived loudness, systematically reduces the gap between a recording's peak levels and its average level. By applying gain reduction to transient peaks and subsequent makeup gain to elevate the overall signal, audio engineers can increase integrated loudness metrics like RMS or LUFS while avoiding digital clipping at 0 dBFS. This alteration prioritizes uniform intensity over natural variation, often employing multiband compressors and brickwall limiters to target specific frequency ranges, thereby diminishing the expressive contrast inherent in musical performances. Empirical analysis of 4,500 tracks from popular albums spanning 1969 to 2010 demonstrates a 5 dB rise in average loudness from the 1980s to the mid-2000s, accompanied by a 3 dB decline in crest factor—the peak-to-RMS ratio—since the 1980s. This shift accelerated after 1990, coinciding with digital mastering tools that enabled aggressive limiting without the physical constraints of analog media like vinyl. In practice, crest factors in heavily compressed pop and rock masters frequently fell to 6-8 dB by the 2000s, contrasting with 10-14 dB in pre-digital eras where tape saturation and mechanical groove limits preserved wider dynamics. Such reductions homogenize audio signals, suppressing quiet passages and blunting percussive attacks to maintain high average levels for competitive playback on radio, CD, and early digital platforms. While integrated loudness range metrics, such as those defined in EBU Tech 3342, indicate stable short-term variability across decades—potentially due to genre-specific elements like sustained notes in electronic music—the crest factor's contraction confirms a net loss in instantaneous dynamic excursion. This technical alteration has persisted into the streaming era, though loudness normalization has somewhat mitigated incentives for extreme compression in new releases.
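Crest factor, the quantity tracked in the measurements above, is straightforward to compute from a signal's peak and RMS values; the short sketch below (Python with NumPy) does so for two synthetic signals, one amplitude-modulated and one crudely clipped as a stand-in for heavy limiting. The test signals are illustrative assumptions, not real programme material.

```python
# A minimal crest-factor (peak-to-RMS) measurement, reported in dB.

import numpy as np

def crest_factor_db(x: np.ndarray) -> float:
    """Peak-to-RMS ratio of a float signal, in dB."""
    peak = np.max(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    return 20 * np.log10(peak / rms)

if __name__ == "__main__":
    t = np.linspace(0, 1, 44100, endpoint=False)
    # Amplitude-modulated tone as a stand-in for dynamic material.
    dynamic = 0.3 * np.sin(2 * np.pi * 220 * t) * (0.2 + 0.8 * np.abs(np.sin(2 * np.pi * 2 * t)))
    # Crude stand-in for heavy limiting: boost, then clip.
    squashed = np.clip(3.0 * dynamic, -0.9, 0.9)
    print(f"dynamic material : {crest_factor_db(dynamic):.1f} dB crest factor")
    print(f"limited material : {crest_factor_db(squashed):.1f} dB crest factor")
```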

Listener Experience and Health Considerations

Excessive dynamic range compression in audio mastering, characteristic of the loudness wars, diminishes the perceptual appeal of music by eliminating natural variations in intensity, resulting in a uniform loudness that listeners find monotonous and lacking emotional depth. This reduction in dynamic range prevents the listener from experiencing the contrast between quiet passages and crescendos, which normally enhances engagement and immersion. Studies indicate that such hyper-compressed tracks are less preferred by listeners when side effects like distortion become apparent, as the absence of transients—such as sharp drum attacks—alters timbre and spatial qualities, making the sound feel artificial and confined. Prolonged exposure to heavily compressed audio induces auditory fatigue, where the ears become overwhelmed by relentless high average levels without respite, leading to discomfort and reduced listening endurance. Engineers and audiophiles report that albums mastered at extreme loudness levels, such as Metallica's Death Magnetic (2008) with integrated loudness exceeding -5 LUFS, prompt users to lower volume or stop playback sooner due to this fatigue, exemplified by fan campaigns demanding remasters. Research attributes this to the brain's expectation of dynamic relief, which, when unmet, heightens perceived strain, particularly in genres like rock and pop where compression ratios often exceed 10:1. On health grounds, while absolute playback volume remains the primary determinant of noise-induced hearing loss (NIHL), dynamic range compression indirectly heightens risks by impairing the middle ear's protective reflexes. A 2023 study on awake guinea pigs exposed to overcompressed music at 85 dB SPL for seven days found that the stapedius reflex—responsible for dampening intense sounds—lost approximately half its efficacy for up to a week post-exposure, unlike exposure to music with natural dynamics, which showed quicker recovery. This could leave listeners more susceptible to subsequent loud impulses, potentially accelerating cochlear damage even at moderate volumes compliant with guidelines like the WHO's 80 dB limit for 40 hours weekly. Compressed signals may also encourage volume increases to perceive detail lost in flattened dynamics, amplifying the prevalence of hearing loss, which affects over 1.5 billion people globally per WHO estimates, with leisure noise as a key contributor. Empirical data from such animal models underscore a causal link between compression artifacts and temporary auditory fatigue, though human longitudinal studies remain limited.

Standards and Measurement Advances

Traditional Metrics (RMS, Peak Levels)

Peak levels measure the maximum instantaneous amplitude of an audio signal, serving as a safeguard against overload in both analog and digital systems. In digital audio, peaks are constrained to 0 dBFS to avoid clipping, where exceeding this threshold introduces harsh distortion due to the finite resolution of digital representation. Traditional peak programme meters, employed in broadcasting and recording since the mid-20th century, feature rapid attack times—typically 1-3 milliseconds—to detect these transients accurately, allowing engineers to maintain headroom and prevent downstream saturation. Root mean square (RMS) levels quantify the average power of a signal by computing the square root of the mean of squared sample values over an integration window, commonly 300 milliseconds, providing an approximation of sustained loudness. This metric, rooted in electrical engineering principles for power assessment, gained prominence in audio production as a proxy for perceived loudness, particularly for material with consistent energy, since human hearing integrates sound over short durations. In pre-digital eras, RMS informed VU meter ballistics, which emulated average program levels for disc cutting and mastering, targeting values around -10 to -20 dB relative to peak level to balance warmth and headroom. During the escalation of competitive loudness from the 1990s onward, producers maximized RMS values—often pushing integrated track levels to -8 dBFS or higher—while constraining peaks to 0 dBFS through multiband compression and limiting, a practice central to the "loudness wars." This approach exploited playback systems' tendency to emphasize average energy over peaks, making tracks appear louder on radio and early digital players without exceeding technical limits. Empirical analyses of commercial releases from this period show RMS increases of 3-5 dB per decade, correlating with reduced crest factors (peak-to-RMS ratios) from 12-15 dB in the 1980s to under 6 dB by the 2000s. Despite their utility, peak and RMS metrics exhibit significant shortcomings for comprehensive loudness evaluation. Peak readings capture only extrema, ignoring average energy and failing to reflect overall program density or listener perception of volume. RMS, while better for averages, applies no frequency weighting—treating low and high frequencies equivalently despite human sensitivity peaking around 2-5 kHz—and uses fixed short windows that overlook long-term integration or masking effects in complex mixes. These flaws contributed to inconsistent loudness across media, as evidenced by variations in perceived loudness between tracks with similar RMS levels but differing spectral content.
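The two traditional meters described above reduce to simple arithmetic on samples; the sketch below (Python with NumPy) reports the sample peak and the maximum 300 ms RMS level of a signal in dBFS. Meter ballistics (attack and fallback times) and frequency weighting are deliberately omitted, and the -6 dBFS test sine is an illustrative assumption.

```python
# Sample-peak and sliding 300 ms RMS measurement, both referenced to dBFS.

import numpy as np

def peak_dbfs(x: np.ndarray) -> float:
    """Highest instantaneous sample magnitude, in dBFS."""
    return 20 * np.log10(np.max(np.abs(x)) + 1e-12)

def max_rms_dbfs(x: np.ndarray, fs: int, window_s: float = 0.3) -> float:
    """Maximum RMS level over a sliding window, in dBFS."""
    n = int(fs * window_s)
    # Running mean of squared samples via a cumulative sum.
    csum = np.concatenate(([0.0], np.cumsum(x ** 2)))
    window_means = (csum[n:] - csum[:-n]) / n
    return 10 * np.log10(np.max(window_means) + 1e-12)

if __name__ == "__main__":
    fs = 44100
    t = np.arange(fs) / fs
    x = 0.5 * np.sin(2 * np.pi * 1000 * t)          # sine with a -6 dBFS peak
    print(f"peak: {peak_dbfs(x):.1f} dBFS")          # about -6.0 dBFS
    print(f"RMS : {max_rms_dbfs(x, fs):.1f} dBFS")   # about -9.0 dBFS (sine RMS is 3 dB below peak)
```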

Modern Systems (LUFS, EBU R128)

The Loudness Units relative to Full Scale (LUFS) metric, defined in ITU-R Recommendation BS.1770 first published in 2006, quantifies perceived audio loudness by integrating the mean-square level of the K-weighted audio signal over time, referenced to full digital scale (0 dBFS). This approach incorporates a pre-filter (K-weighting) that weights the spectrum toward frequencies where human hearing is most sensitive, applies absolute gating to exclude blocks below -70 LUFS, and uses relative gating (introduced in BS.1770-2, 2011) to ignore content quieter than 10 LU below the absolutely gated loudness level, thereby focusing on programme material rather than silence. Unlike traditional root mean square (RMS) levels, which average unweighted signal power and overlook perceptual factors, or peak levels, which capture only instantaneous maxima without temporal integration, LUFS provides a closer approximation to subjective loudness across diverse content. EBU Recommendation R 128, issued by the European Broadcasting Union (EBU) in August 2010 and revised through 2023, adopts BS.1770 metering for broadcast, targeting an integrated programme loudness of -23 LUFS with tolerances of ±0.2 LU for quality-controlled content and ±1.0 LU for live programmes. It mandates a maximum true peak level of -1 dBTP (measured with an oversampling true-peak meter) to prevent clipping during downstream processing, while permitting short-term and momentary loudness excursions up to +6 LU and +9 LU above the integrated level, respectively, to accommodate natural dynamics. Additional descriptors include Loudness Range (LRA), calculated per EBU Tech 3342 as the statistical variation in short-term loudness (3-second blocks), aiding assessment of programme consistency without mandating limits. These systems addressed limitations of peak- and RMS-based practices by prioritizing long-term perceived volume over competitive maximization, enabling broadcasters to maintain consistency without abrupt level jumps between segments. Adoption accelerated among European broadcasters from the early 2010s, with compliance integrated into workflows for TV, radio, and online delivery; supplements to R 128, such as R 128 s2 (2023) for streaming, extend guidance to hybrid distribution while retaining the -23 LUFS anchor for consistency. Globally, R 128 influenced standards like ATSC A/85 (first published in 2009, -24 LKFS target), though variations persist in streaming (e.g., -14 to -16 LUFS), underscoring LUFS's role in standardizing measurement amid diverse platforms.
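The two-stage gating at the heart of the integrated measurement can be sketched compactly once per-block loudness values are available. The example below (Python with NumPy) assumes the K-weighted loudness of each 400 ms block has already been computed (the K-weighting filters and channel weights are not implemented) and applies the absolute -70 LUFS gate followed by the relative -10 LU gate, energy-averaging the surviving blocks; the hypothetical block values are illustrative only.

```python
# A sketch of BS.1770 / EBU R 128 two-stage gating over per-block loudness.

import numpy as np

def integrated_loudness(block_lufs: np.ndarray) -> float:
    """Integrated loudness from per-block loudness values (LUFS), with gating."""
    # Stage 1: absolute gate, discard blocks quieter than -70 LUFS.
    blocks = block_lufs[block_lufs > -70.0]
    # Averaging is done on mean-square power, 10**(L/10), not on dB values.
    ungated = 10 * np.log10(np.mean(10 ** (blocks / 10)))
    # Stage 2: relative gate at 10 LU below the ungated loudness.
    threshold = ungated - 10.0
    gated = blocks[blocks > threshold]
    return 10 * np.log10(np.mean(10 ** (gated / 10)))

if __name__ == "__main__":
    # Hypothetical programme: pauses, mostly around -23 LUFS, one louder section.
    blocks = np.array([-80.0] * 20 + [-23.0] * 200 + [-16.0] * 30 + [-45.0] * 50)
    print(f"integrated loudness: {integrated_loudness(blocks):.1f} LUFS")
```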

Normalization Implementations

Broadcasting and Regulatory Standards

In broadcasting, regulatory standards for audio loudness aim to maintain consistent perceived levels across programs and advertisements, mitigating abrupt changes that disrupt viewer experience. These standards emerged in response to the "loudness wars," where competitive maximization of audio levels led to hyper-compressed content and complaints about excessive volume in commercials relative to programming. The foundational algorithm for measuring programme loudness is defined in ITU-R Recommendation BS.1770, first published in 2006 and revised through version 5 in November 2023, which specifies frequency-weighted, level-gated metrics for integrated loudness and true-peak levels to approximate human perception. The European Broadcasting Union (EBU) adopted BS.1770 in its Recommendation R 128, initially published in August 2010 and revised in 2014, mandating an integrated loudness target of -23 LUFS (±1 LU for live content) with a maximum true peak of -1 dBTP to ensure uniform playback across channels without compression beyond artistic intent. This standard has been implemented by public broadcasters in over 30 countries and influences global practices, with tolerances of ±0.3 LU for measurements on 20 kHz bandwidth signals. In the United States, the Commercial Advertisement Loudness Mitigation (CALM) Act, signed into law on December 27, 2010, and enforced by the Federal Communications Commission (FCC) from December 13, 2012, prohibits commercials from exceeding the average loudness of accompanying programming, relying on ATSC Recommended Practice A/85 (published 2009, revised 2013) for compliance measurement at -24 LKFS using BS.1770 methods. Internationally, variations persist: some countries align closely with ATSC A/85 at -24 LKFS, while others permit higher true peaks under similar integrated targets; many Asian and Latin American broadcasters follow EBU R 128 or adapted BS.1770 thresholds around -23 to -24 LUFS. Compliance is verified through metering tools adhering to these algorithms, with regulators like the FCC conducting periodic audits, though a 2025 FCC notice of proposed rulemaking seeks data on CALM's ongoing efficacy amid streaming shifts. These frameworks prioritize perceptual consistency over peak normalization, reducing the need for excessive limiting while preserving programme dynamics.

Streaming Platform Practices

Major streaming platforms implement loudness normalization to standardize playback volume across tracks, mitigating the effects of the loudness war by adjusting audio levels to predefined integrated loudness targets measured in Loudness Units relative to Full Scale (LUFS). This process typically attenuates louder masters to the target while potentially amplifying quieter ones within safe headroom limits to prevent distortion or clipping, often using true-peak detection. Normalization is generally enabled by default but user-toggleable, with variations in targets and methodologies reflecting platform-specific engineering choices. Spotify applies normalization to -14 LUFS integrated loudness, a standard adopted to ensure consistent playback without requiring manual adjustments between tracks. The platform processes audio in real time, turning down masters exceeding the target while preserving dynamics where possible, and offers user settings for "Loud," "Normal," or "Quiet" modes that adjust the effective target slightly for perceived preferences. This policy, detailed in Spotify's documentation, has been in place since at least 2017 and applies across devices, though web and desktop players may exhibit minor variances compared to mobile apps. Some distributors automatically optimize uploads to meet Spotify's -14 LUFS and -1 dBTP true-peak guidelines to minimize normalization adjustments. Apple Music employs Sound Check normalization, updated in March 2022 to use loudness metering with a -16 LUFS target, which is quieter than many competitors to prioritize audio fidelity and headroom. Enabled by default on iOS and macOS devices post-update, it normalizes tracks in both track and album modes, adjusting levels to avoid over-compression artifacts during playback, particularly for lossless formats. This shift from legacy ReplayGain-like methods to EBU R128-aligned measurement addressed criticisms of inconsistent volume matching, though it applies selectively to avoid amplifying tracks that would exceed safe peaks. Tidal normalizes to -14 LUFS by default since introducing the feature, with user options to disable it entirely or select a quieter -18 LUFS target for enhanced dynamic preservation, accessible via settings in its desktop and mobile apps. This flexibility caters to audiophiles preferring unaltered masters, especially in high-resolution MQA or HiFi tiers, where normalization can introduce minor processing delays or bit-depth reductions if gain adjustment is applied. The platform's approach emphasizes minimal intervention, only engaging when integrated loudness deviates significantly from the target. YouTube Music introduced a "Consistent Volume" normalization feature in April 2025, building on prior ad-hoc peak-based adjustments that only attenuated tracks exceeding roughly -7 LUFS to prevent clipping. Unlike video content on the main platform, which has targeted -14 LUFS since 2019, the music service's new toggleable option aims for smoother transitions across diverse user-generated and official uploads, though it leaves quieter tracks unboosted to maintain artistic intent. This rollout addressed long-standing user complaints about volume inconsistencies in playlists spanning genres.
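Although each service documents its own behaviour, the arithmetic common to these schemes can be sketched simply: the player computes a gain from the difference between the target and the track's integrated loudness, and caps any positive gain by the available true-peak headroom. The example below (Python) uses a -14 LUFS target and a -1 dBTP ceiling as illustrative assumptions; it is not the implementation of any particular platform.

```python
# A sketch of the playback gain a loudness-normalizing player might apply.

def normalization_gain_db(integrated_lufs: float,
                          true_peak_dbtp: float,
                          target_lufs: float = -14.0,
                          peak_ceiling_dbtp: float = -1.0) -> float:
    """Gain (in dB) applied to bring a track toward the target loudness."""
    gain = target_lufs - integrated_lufs
    if gain > 0:
        # Positive gain is limited so the true peak stays below the ceiling.
        headroom = peak_ceiling_dbtp - true_peak_dbtp
        gain = min(gain, max(headroom, 0.0))
    return gain

if __name__ == "__main__":
    # A hyper-compressed master: turned down, its loudness advantage removed.
    print(normalization_gain_db(-7.5, -0.2))    # -6.5 dB
    # A dynamic master with limited headroom: only partially turned up.
    print(normalization_gain_db(-18.0, -2.5))   # +1.5 dB, capped by headroom
```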

Consumer and Playback Devices

Consumer playback devices implement loudness normalization to ensure consistent perceived volume across tracks or sources, often using metadata tags or real-time processing to avoid manual adjustments. ReplayGain, a proposed standard from 2001, calculates integrated loudness for audio files and embeds adjustment values (typically targeting around -14 LUFS for tracks or -16 LUFS for albums) that compatible players apply during playback without permanent file alteration, preserving dynamics. This technique is supported in software like foobar2000 and VLC, and in hardware such as certain portable media players, enabling seamless transitions between varying source material from local libraries. Apple's Sound Check feature, introduced in iTunes and now standard on iOS and macOS devices, normalizes playback to a consistent integrated loudness level using loudness metering, updated in 2022 to align with EBU R128 principles and enabled by default on new devices. It scans library tracks for ReplayGain-compatible tags or computes equivalent adjustments, targeting approximately -16 LUFS for stereo content, though some analyses indicate effective levels around -20 LUFS post-processing to prevent clipping. This applies across iPhones, iPads, and connected speakers during local or streamed playback, reducing perceived jumps but potentially attenuating louder masters. On Android devices, loudness normalization varies by manufacturer and app, with Samsung's One UI 6.1.1 (released July 2024) introducing a system-wide feature that dynamically adjusts media volume to mitigate abrupt changes, building on earlier Auto Volume modes. Third-party players like Poweramp support ReplayGain modes (track, album, or peak-based) for local files, applying gain offsets up to 10 dB while monitoring for clipping. Google Play Music historically offered normalization, but successor services rely on streaming platform metadata, with Android 15 adding AAC loudness metadata support for enhanced compatibility. In home theater and TV systems, normalization often aligns with broadcast standards like ATSC A/85 (-24 LKFS) or EBU R128 (-23 LUFS), where receivers and soundbars process incoming signals to maintain dialogue intelligibility and overall loudness. Devices such as AVRs from brands like Denon or Yamaha include dynamic volume controls that apply real-time loudness compensation, compressing peaks and boosting quiet sections to a user-selectable target, though this can reduce dynamics in cinematic content. Smart TVs and streaming media players (e.g., Roku, Apple TV) defer to source normalization from platforms like Netflix, which target -27 LKFS for dialogue-gated loudness, but may overlay device-specific compression or limiting for consistent output across HDMI or wireless connections.
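Metadata-based schemes such as ReplayGain and Sound Check share a simple playback-time mechanism: a stored per-track gain is applied to the decoded samples, scaled back if the stored peak indicates it would clip, and the file itself is never modified. The sketch below (Python) illustrates that mechanism with hypothetical tag values; the function name and numbers are assumptions for illustration, not any player's actual API.

```python
# A sketch of tag-based playback normalization with clipping prevention.

import math

def playback_gain(track_gain_db: float, track_peak: float) -> float:
    """Linear gain a player would apply, respecting the stored sample peak."""
    gain = 10 ** (track_gain_db / 20)
    if track_peak * gain > 1.0:          # requested gain would clip at full scale
        gain = 1.0 / track_peak          # fall back to the largest safe gain
    return gain

if __name__ == "__main__":
    # Hypothetical tags: the scanner requested +6.2 dB but the stored peak is 0.92.
    g = playback_gain(track_gain_db=6.2, track_peak=0.92)
    print(f"applied gain: {20 * math.log10(g):+.2f} dB")   # capped near +0.7 dB
    # At playback time the decoded samples are simply multiplied by g;
    # the audio file and its tags remain unchanged.
```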

Controversies and Viewpoints

Criticisms of Excessive Compression

Excessive compression in mastering, often pursued to maximize perceived loudness, has drawn criticism for diminishing the perceptual quality and emotional depth of recordings. Audio engineers and researchers argue that aggressive compression reduces the crest factor—the ratio between peak and RMS levels—resulting in a flattened dynamic profile that eliminates natural variations between quiet and loud passages, thereby making music less lifelike and engaging. For instance, analyses of commercial pop and rock tracks from the 2000s onward show average dynamic ranges dropping to as low as 6-8 dB, compared to 12-15 dB in earlier decades, leading to a loss of transient punch and spatial depth. A primary concern is listening fatigue, where sustained high average levels without dynamic relief cause auditory strain over extended playback. Perceptual studies indicate that hyper-compressed audio elicits lower quality ratings in blind listening tests, with participants reporting increased annoyance and reduced enjoyment due to the absence of breathing room in the soundstage. An investigation into compression effects found that higher compression levels correlate with faster onset of fatigue, even if not consciously noted by listeners, as the auditory system processes unrelenting mid-to-high level signals without respite. This fatigue arises from the psychoacoustic overload of constant loudness, contrasting with uncompressed material that allows momentary relief, preserving perceptual freshness. Technical artifacts further compound these issues, including clipping distortion, pumping, and artifacts from brickwall limiting pushed to the 0 dBFS ceiling. Critics note that such processing introduces nonlinearities that degrade fidelity, particularly in complex mixes with orchestral or acoustic elements, where subtle details are smeared. Empirical measurements reveal that over-compressed tracks exhibit elevated total harmonic distortion (THD) levels, sometimes exceeding 1-2% at peaks, which perceptually manifests as harshness and reduced clarity. While some defend compression for broadcast consistency, detractors emphasize that it sacrifices artistic intent, as composers and performers rely on dynamic contrast for expressive tension and release, a principle rooted in acoustic physics where variation mirrors emotional arcs.

Defenses and Market Realities

Proponents of aggressive loudness in music mastering contend that it delivers a perceptual advantage in competitive playback scenarios, such as radio broadcasts and album sequencing, where quieter tracks risk being overshadowed by louder competitors, thereby maintaining listener engagement. This approach stems from the observation that human perception favors higher average levels, often interpreting them as more energetic or authoritative, which aligns with commercial goals of maximizing impact in short-attention media environments. Record labels and producers have cited louder masters as key to commercial viability, arguing that they better capture the "excitement" and consistency expected in contemporary genres like pop and hip-hop, where dynamic restraint might render tracks less competitive on charts or in promotional contexts. Compression techniques enabling this loudness are defended as tools for "gluing" elements together, sustaining notes, and preventing signal overload across diverse playback systems, from car stereos to clubs, thus ensuring broad accessibility without requiring user adjustments. In market terms, the loudness escalation prior to widespread streaming reflected arms-race dynamics among labels, where each release escalated integrated loudness—often exceeding -10 LUFS—to avoid diminished presence relative to peers, a practice driven by sales metrics and listener responses in focus groups favoring denser, upfront presentations. Even after platforms like Spotify implemented -14 LUFS normalization in 2015, some mastering persists at -8 to -11 LUFS for non-normalized outlets or genres prioritizing perceived impact, as labels weigh potential gains in unnormalized environments like clubs or international radio against uniform streaming playback. This reflects ongoing economic pressures, where perceived loudness correlates with short-term consumer preference in casual listening, despite long-term shifts toward standards compliance.

Empirical Evidence on Listener Preferences

Blind listening tests conducted under controlled conditions, where audio samples are presented at equal perceived loudness levels, consistently demonstrate that listeners tend to prefer music versions retaining greater dynamic range over those subjected to heavy compression. In a 2012 study published in The Journal of the Acoustical Society of America, participants rated music samples with varying degrees of limiting applied; higher dynamic range correlated with superior judgments of pleasantness, quality, and overall preference, even as compressed versions were perceived as louder. This outcome aligns with psychoacoustic principles where excessive compression introduces audible artifacts like distortion and reduced transient clarity, diminishing long-term enjoyment despite short-term loudness appeal. However, tolerance for compression varies with its degree and application context. A 2015 investigation in the Journal of the Audio Engineering Society examined perceptual effects of dynamic range compression in music recordings using a controlled listening-test methodology with normal-hearing listeners; results indicated low inter-subject consistency in preferences and no statistically significant quality degradation from high compression levels when loudness was matched, suggesting many listeners are less sensitive to mastering practices common in the loudness wars era than critiques imply. Factors such as listener familiarity with hyper-compressed commercial releases and playback environment further moderate preferences, with moderate compression often favored over extremes in multi-signal processing scenarios. Post-2010s streaming normalization has amplified these findings' relevance, as platforms attenuate louder masters, effectively rewarding dynamic range preservation. Empirical data from listener surveys and tests in normalized ecosystems reinforce that uncompressed or lightly compressed tracks elicit higher satisfaction ratings, attributed to reduced listening fatigue and enhanced emotional impact from preserved peaks and valleys. Nonetheless, individual differences persist, with some demographics exhibiting bias toward louder presentations due to acclimation effects from prolonged exposure to compressed material.

Influence of Streaming Normalization

Streaming platforms implement loudness normalization to standardize playback across tracks, typically targeting integrated loudness levels measured in LUFS (Loudness Units relative to Full Scale). Spotify normalizes to -14 LUFS, Apple Music to -16 LUFS, and some platforms to between -13 and -15 LUFS depending on the track, adjusting gain up or down as needed without altering the source file. This practice, which began gaining traction in the late 2000s, attenuates overly loud masters to prevent clipping and ensures quieter, more dynamic tracks are not perceptually buried. The introduction of normalization diminished the competitive pressure of the "loudness wars," where producers previously maximized levels through heavy compression to stand out on non-normalized playback systems like CD players and early downloads. By turning down loud tracks to a common level, platforms removed the advantage of hyper-compression, encouraging engineers to prioritize transient preservation and dynamic range over absolute loudness. Studies and analyses from the mid-2010s onward show a measurable increase in average dynamic range in new releases, with DR values rising from lows of 4-6 in the mid-2000s to 7-9 by the late 2010s, correlating with widespread normalization adoption. In mastering workflows, this shift has standardized targets around -14 to -9 LUFS integrated loudness with true peak levels not exceeding -1 dBTP, minimizing the risk of clipping post-normalization or during transcoding to lossy formats like AAC at 256 kbps used by Apple Music. Producers now often use tools like loudness meters compliant with BS.1770 to preview normalization penalties, fostering practices that balance commercial appeal with audio fidelity. However, empirical listener tests indicate that even normalized loud masters can retain a perceived edge due to their density and reduced headroom demands, sustaining some compression in genres like electronic dance music and hip-hop. Overall, normalization has promoted a more rational approach to loudness in the streaming era, with platforms' consistent application reducing artifacts from mismatched volumes and enabling higher-quality streaming experiences, though it has not eradicated loud mastering entirely in competitive markets.

Mastering Practices in the Streaming Era

In the streaming era, audio mastering engineers have prioritized compatibility with streaming platforms' loudness normalization algorithms, which adjust playback volume to standardized levels to prevent abrupt changes between tracks. Major services such as Spotify, Apple Music, and YouTube apply normalization targeting an integrated loudness of approximately -14 LUFS (Loudness Units relative to Full Scale), as recommended by the Audio Engineering Society (AES) TD1004.1.16 guideline for streaming content. This shift, accelerated by widespread adoption of ITU-R BS.1770-compliant metering since the mid-2010s, has diminished incentives for the "loudness war" of the prior decade, allowing masters to integrate at -14 LUFS without risking attenuation that could undermine perceived punch or introduce inter-sample clipping. Mastering workflows now routinely incorporate real-time LUFS metering to measure integrated loudness across the entire track duration, alongside short-term (3-second window) and momentary values to preserve dynamic variation. Engineers aim for true peak levels not exceeding -1 dBTP to accommodate downstream processing such as lossy transcoding, which may otherwise consume 1-2 dB of headroom or trigger limiting. Techniques include judicious use of multiband compression and brickwall limiting to achieve target loudness while minimizing gain reduction, often verifying results via playback simulations on services like Spotify's "Loud" mode or Apple's Sound Check. For albums, normalization often references the loudest track at -14 LUFS, permitting quieter songs to sit at -16 LUFS or below for artistic contrast. A key practice emphasizes balancing loudness with dynamics, as excessive compression—once common to maximize levels—now yields no competitive advantage under normalization, potentially resulting in fatiguing, lifeless playback. Engineers target dynamic ranges of 8-12 dB for most genres, using tools like upward expansion or serial compression to enhance transient impact without inflating average levels. This approach aligns with empirical observations that listeners on normalized platforms perceive greater clarity and emotional depth in masters retaining headroom for peaks, particularly in genres like rock or classical hybrids. Genre-specific adjustments persist: electronic and hip-hop masters may push toward -12 to -10 LUFS pre-normalization for competitive edge in non-normalized contexts like clubs, while acoustic-focused works favor lower targets to highlight nuance. Emerging tools, including AI-assisted analyzers integrated into suites like iZotope Ozone, facilitate matching against reference tracks and platform-specific exports, ensuring masters translate consistently across devices from earbuds to hi-fi systems. By 2025, this data-driven methodology has become standard, with engineers cross-referencing against EBU R128 for broadcast compatibility and avoiding over-reliance on legacy peak meters, which undervalue human auditory perception as modeled in LUFS algorithms.
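A minimal version of the pre-release check described above can be expressed in a few lines of code: given a master's measured integrated loudness and true peak, report the playback adjustment each target would imply and flag insufficient true-peak headroom. The targets in the sketch below (Python) are illustrative assumptions rather than an authoritative table of platform specifications.

```python
# A sketch of a pre-release loudness check against illustrative delivery targets.

ASSUMED_TARGETS_LUFS = {            # illustrative values only
    "streaming (typical)": -14.0,
    "broadcast (EBU R 128)": -23.0,
}
TRUE_PEAK_CEILING_DBTP = -1.0       # common delivery recommendation

def report(integrated_lufs: float, true_peak_dbtp: float) -> None:
    """Print the gain each target implies and warn about missing headroom."""
    for name, target in ASSUMED_TARGETS_LUFS.items():
        adjustment = target - integrated_lufs
        print(f"{name:24s}: playback adjustment {adjustment:+.1f} dB")
    if true_peak_dbtp > TRUE_PEAK_CEILING_DBTP:
        print(f"warning: true peak {true_peak_dbtp:+.1f} dBTP exceeds the "
              f"{TRUE_PEAK_CEILING_DBTP:+.1f} dBTP delivery ceiling")

if __name__ == "__main__":
    report(integrated_lufs=-9.5, true_peak_dbtp=-0.3)
```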

  61. [61]
    Eyes on Your Audio: RTW - Worldwide Loudness Delivery Standards
    Worldwide loudness standards vary by distributor, geography, and content. For example, AES streaming is -16 to -20 LUFS, while EBU R128 is -23 LUFS.
  62. [62]
  63. [63]
    Mastering for Streaming: Platform Loudness and Normalization ...
    How does YouTube use Loudness Normalization? Similar to Spotify, YouTube uses -14 LUFS as a target loudness - unlike Spotify, YouTube measures the signal ...
  64. [64]
    Loudness normalization on Spotify
    Audio is delivered to us at different volume levels. We use loudness normalization to balance soft and loud songs for a more consistent listening experience. ...
  65. [65]
    Volume normalization - Spotify Support
    Volume normalization balances soft and loud songs, creating a more uniform listening experience. iOS Android Desktop
  66. [66]
    Opting into Loudness Normalization - DistroKid Help Center
    DistroKid will automatically adjust the level and headroom of your audio to Spotify's recommended settings: -14dB integrated LUFS with -1dB true peak maximum.
  67. [67]
    Apple Switches to LUFS, Enables Sound Check by Default
    Mar 23, 2022 · Apple has updated their Sound Check loudness normalization algorithm to use LUFS, and enabled it by default on new iOS devices and Macs.
  68. [68]
    Apple Choose -16LUFS Loudness Level For Apple Music - Here's Why
    May 10, 2022 · Apple has announced their LUFS target to be significantly quieter at -16LUFS. However, by choosing -16LUFS, Apple has taken on board the recent AES TD1008 ...
  69. [69]
    TIDAL implements loudness normalisation - but there's a catch
    Nov 17, 2016 · Finally, TIDAL allows users to toggle normalisation on and off in the settings - on iOS and Android, at any rate. In browsers the option ...<|separator|>
  70. [70]
    Mastering for Streaming Services (Spotify, Tidal, Apple) - Blog - Elysia
    From 2016 to 2019, the typical YouTube normalization was -13dB and did not refer to LUFS. Only since 2019 YouTube has been using the -14dB LUFS by default.
  71. [71]
    YouTube Music's 'Consistent volume' is the normalization option we ...
    Apr 18, 2025 · YouTube Music is getting a new “Consistent volume” normalization option. The main YouTube app has had its own “stable volume” tool for a couple ...
  72. [72]
    YouTube Music DOES Use Loudness Normalization... but not much ...
    Oct 11, 2023 · YouTube Music reduces loudness only if it exceeds -7 LUFS, unlike the video platform which uses -14 LUFS. Music below -7 LUFS is left untouched.
  73. [73]
    ReplayGain - Hydrogenaudio Knowledgebase
    Apr 5, 2024 · ReplayGain is a technique to achieve the same perceived loudness of audio files, using an algorithm to measure perceived loudness.
  74. [74]
    Adjust the sound quality in Music on iPhone - Apple Support
    Go to the Settings app on your iPhone. Tap Apps, then tap Music. Tap one of the following: EQ: Choose an equalization (EQ) setting. Sound Check: Normalize ...
  75. [75]
    One UI 7 brings Loudness Normalization to more Samsung phones
    Dec 9, 2024 · The feature prevents sound from suddenly becoming too loud or too quiet while playing media. It was initially introduced in the One UI 6.1.1 ...
  76. [76]
    Replay Gain - Using Poweramp - Q&A
    Aug 20, 2024 · @bandit ReplayGain is a feature that Poweramp can use to adjust playback levels so the peak point in each track is played at the same volume as ...Replay Gain Controls - General Chatter - PowerampReplayGain tag information inside mp3 files - Using Poweramp - Q&AMore results from forum.powerampapp.com
  77. [77]
    Android 15 loudness normalization and existing volume ... - GitHub
    Oct 14, 2024 · Android 15 introduces support for AAC loudness control. Moreover, it's said that Media3 will add support for this.
  78. [78]
    [PDF] Technical Document AESTD1006.1.17-10 Loudness Guidelines for ...
    Oct 12, 2017 · Loudness Normalization achieves equal average Loudness ... A combination audio amplifier and audio/video switching device for a home theater.<|separator|>
  79. [79]
    Loudness Standards - Full Comparison Table (music, film, podcast)
    Jun 30, 2019 · If you master a song at -14 LUFS, your audio will have the same perceived loudness as any other different song. You can master at -14 LUFS and ...
  80. [80]
    (PDF) Hyper-compression in music production: Listener preferences ...
    Aug 7, 2018 · The study reported here investigated whether the amounts of hyper-compression typical of current audio practice produce results that listeners prefer.
  81. [81]
    Quality and loudness judgments for music subjected to compression ...
    Aug 8, 2012 · Dynamic-range compression (DRC) is used in the music industry to maximize loudness. The amount of compression applied to commercial ...<|control11|><|separator|>
  82. [82]
    AES Journal Forum » Perceptual Effects of Dynamic Range ...
    Feb 3, 2014 · The belief that the use of dynamic range compression in music mastering deteriorates sound quality needs to be formally tested.Missing: studies impact
  83. [83]
    [PDF] The effect of dynamic range compression on the psychoacoustic ...
    In modern popular styles of music, there is a trend to dynamically compress (henceforth, compress) commercial releases in order to enhance its loudness ...
  84. [84]
    Loudness compression, loudness wars.. What exactly it is and why ...
    Jun 2, 2018 · The problem, as in many things, is when it is done to excess. Some recordings have been compressed so much that the dynamic range is only 20 dB ...Dynamic range, loudness war, remasters.Vinyl and loudness wars. - Audio Science Review (ASR) ForumMore results from www.audiosciencereview.comMissing: criticisms | Show results with:criticisms
  85. [85]
    The Loudness Wars: Over-Compression and its Impact on Music ...
    Jan 12, 2018 · While the audience might like the music, they experience sonic fatigue and stop listening because the loudness is inexplicably offensive to ...
  86. [86]
    The Loudness War Analyzed | Music Machinery
    Mar 23, 2009 · And thus the loudness war – engineers must turn up the volume on their tracks lest the track sound wimpy when compared to all of the other loud ...
  87. [87]
    In The Studio: The Top 10 Reasons Why Music Is Compressed
    10: Compression is part of the sound of contemporary music. Completely uncompressed music would sound lifeless and boring to most listeners.
  88. [88]
  89. [89]
    Where are we with Loudness in 2025 - Gearspace
    Jan 24, 2025 · The LUFS value represents the average loudness over time, while dBFS measures the peak level. A dynamic track might have an LUFS-to-peak ...
  90. [90]
    Perceptual Effects of Dynamic Range Compression in Popular ...
    Aug 9, 2025 · There is a widespread belief that the increasing use of dynamic range compression in music mastering (the loudnesswar) deteriorates sound ...Missing: AES | Show results with:AES
  91. [91]
    For the Love of Music: Loudness Normalization - audioXpress
    Aug 21, 2024 · At the height of the music loudness wars, digital audio was saved from a death spiral by a handful of companies, broadcasters, and labs working ...
  92. [92]
    Factors influencing listener preference for dynamic range compression
    Jun 22, 2017 · The factors influencing listener preference for dynamic range compression have been proposed to include: prolonged exposure to hyper-compressed ...
  93. [93]
    How Spotify Will End The Loudness War - Production Advice
    Oct 23, 2009 · The answer is: Spotify uses “Loudness Normalization” by default. It adjusts the playback level of all songs so you don't have to keep adjusting your volume ...<|separator|>
  94. [94]
  95. [95]
  96. [96]
    Setting Levels In Mastering Music for Streaming - Lars Lentz Audio
    Sep 17, 2025 · The AES guideline says streaming platforms should keep integrated loudness around -14 LUFS for the loudest tracks of an album and -16 LUFS ...
  97. [97]
    Best Mastering Level for Streaming - Sage Audio
    The best mastering level for streaming is an integrated -14 LUFS, as it best fits the loudness normalization settings of the majority of streaming services.
  98. [98]
  99. [99]
    Balancing Loudness & Dynamics in Music Mastering - MasteringBOX
    Apr 9, 2025 · In audio mastering, dynamic range and loudness are two key concepts that can make or break the impact and clarity of a track.What Is Loudness? · The Loudness Wars · Practical Mixing Tips
  100. [100]
    Trends in Audio Mastering: Staying Ahead in Music Production
    Sep 9, 2025 · To meet these expectations, mastering engineers are focusing on preserving dynamic range while ensuring clarity and balance. This trend ...