
Audio frequency

Audio frequency refers to the oscillation rates of sound waves or electrical signals that fall within the range audible to the human ear, generally from 20 Hz to 20 kHz, encompassing the full spectrum of pitches perceivable by healthy young adults under quiet conditions. This range defines the audible portion of the acoustic spectrum, where low frequencies produce deep bass tones and high frequencies yield sharp treble, with human auditory sensitivity peaking between approximately 2 kHz and 5 kHz. In acoustics, audio frequencies represent pressure variations in air caused by vibrating sources, measured in hertz (Hz) as cycles per second, and are fundamental to sound perception via the ear's cochlea, which tonotopically maps frequencies to neural signals for processing. The lower limit of 20 Hz corresponds to infrasonic vibrations just entering audibility, while the upper 20 kHz boundary marks the onset of ultrasonic waves beyond typical human detection, though individual variation exists—newborns may hear up to 20 kHz, but presbycusis often reduces high-frequency sensitivity with age. Complex sounds, like speech or music, comprise multiple audio frequencies superimposed, with speech fundamentals around 100–300 Hz and harmonics extending to 8 kHz or higher for sibilant consonants. Audio engineering standards emphasize reproducing this full 20 Hz to 20 kHz range to achieve high-fidelity sound, dividing it into sub-bands for design: sub-bass (20–60 Hz) for rumble, bass (60–250 Hz) for warmth, low mids (250–500 Hz) for body, midrange (500 Hz–2 kHz) for clarity and vocals, upper mids (2–4 kHz) for presence, and treble (4–20 kHz) for air and sparkle. Devices like loudspeakers and amplifiers are specified by their frequency response across this band, ideally flat within ±3 dB to minimize coloration, as deviations can alter perceived timbre or tonal balance. Beyond hearing, audio systems may extend slightly past this range for headroom, but the core focus remains on this audible spectrum to support applications ranging from recording and broadcasting to spatial audio.

Fundamentals

Definition

Audio frequency refers to the range of periodic vibrations, typically mechanical in the form of sound waves or electrical/electromagnetic signals, that fall within the spectrum perceivable by the average human ear, generally from 20 Hz to 20 kHz. Frequency is quantified in hertz (Hz), the unit representing the number of cycles or oscillations per second. This range distinguishes audio frequencies from infrasound, which encompasses vibrations below 20 Hz, and ultrasound, which includes those above 20 kHz; both are typically inaudible to humans without specialized equipment. Human perception of pitch within audio frequencies follows a logarithmic scale, where musical intervals such as octaves are defined by a doubling of frequency (e.g., from 440 Hz to 880 Hz), ensuring equal perceptual steps across the spectrum rather than linear frequency increments. The term "audio frequency" emerged in the early 20th century amid advancements in radio and telephony, where it became necessary to differentiate low-frequency audible signals from higher radio frequencies used for transmission, as seen in amplifier designs by Lee de Forest around 1912 and radio-telephony treatises of the 1910s.

Physical Properties

Audio frequencies manifest as sound waves, which are longitudinal pressure waves propagating through a medium such as air. In this context, the wave consists of alternating regions of compression and rarefaction, where air molecules oscillate parallel to the direction of wave travel, creating variations in local pressure. The frequency of these oscillations, measured in hertz (Hz), represents the number of complete cycles per second and fundamentally determines the pitch of the sound, though pitch perception varies with human hearing. The wavelength \lambda of an audio frequency wave is inversely related to its frequency f and directly proportional to the speed of sound c in the medium, given by the formula \lambda = \frac{c}{f}. At standard temperature (20°C), the speed of sound in dry air is approximately 343 m/s. For example, a 1 kHz tone has a wavelength of about 34 cm, illustrating how lower frequencies produce longer waves that can interact differently with environments. While amplitude primarily governs the intensity and loudness of sound—quantified using the decibel (dB) scale, where sound pressure level (SPL) in dB is calculated as L_p = 20 \log_{10} \left( \frac{p}{p_0} \right) with reference pressure p_0 = 20\,\mu\text{Pa}—the frequency content plays a key role in defining the timbre through the distribution of spectral energy. Complex audio signals comprise multiple frequencies, and their relative amplitudes contribute to the unique quality distinguishing, say, a violin from a flute. Audio frequencies are measured using instruments like oscilloscopes, which visualize time-domain waveforms to determine period and thus frequency, and spectrum analyzers, which display frequency-domain content. The Fourier transform, particularly the fast Fourier transform (FFT) algorithm, decomposes complex waveforms into their constituent sinusoidal components, enabling precise identification of frequency spectra in audio signals. These tools are essential for analyzing both pure tones and complex sounds within the typical audible range of 20 Hz to 20 kHz.
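
To make these relationships concrete, the following minimal Python sketch evaluates the wavelength formula \lambda = c/f and the SPL formula above, then recovers the frequency of a pure tone with an FFT. The use of NumPy and the 48 kHz sample rate are illustrative assumptions, not anything specified by this article:

```python
import math
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in dry air at 20 °C, as stated above
P_REF = 20e-6           # reference pressure p0 = 20 micropascals

def wavelength_m(frequency_hz: float) -> float:
    """Wavelength from lambda = c / f."""
    return SPEED_OF_SOUND / frequency_hz

def spl_db(pressure_pa: float) -> float:
    """Sound pressure level L_p = 20 * log10(p / p0)."""
    return 20.0 * math.log10(pressure_pa / P_REF)

print(wavelength_m(1000.0))  # ~0.343 m, matching the ~34 cm figure in the text
print(spl_db(1.0))           # a 1 Pa pressure amplitude is ~94 dB SPL

# FFT-based frequency identification of a 1 kHz pure tone
fs = 48000                              # sample rate (arbitrary choice)
t = np.arange(fs) / fs                  # one second of samples
signal = np.sin(2 * np.pi * 1000 * t)
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
print(freqs[np.argmax(spectrum)])       # 1000.0 Hz
```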

Human Auditory Range

Audible Spectrum Limits

The audible spectrum for human hearing is conventionally defined as the frequency range from 20 Hz to 20,000 Hz (20 kHz) for young adults with normal hearing under standard conditions. This range represents the boundaries where pure tones can be detected at threshold levels, as established through psychoacoustic measurements of minimum audible fields. The ISO 226 standard on equal-loudness contours provides the foundational data for these limits, specifying sound pressure levels and frequencies perceived as equally loud by otologically normal listeners aged 18 to 25 years. At the lower end, frequencies below 20 Hz are generally not perceived as discrete auditory tones but rather as tactile vibrations or pressure sensations, marking the transition to infrasound. This limit arises because the cochlea's basilar membrane responds less effectively to very low frequencies, requiring high sound pressure levels of approximately 78 dB SPL for just-detectable tonal perception near 20 Hz. The upper limit of 20 kHz typically applies to adolescents and young adults, but it declines progressively with age due to presbycusis, a sensorineural degeneration affecting the inner ear's hair cells. For instance, individuals over 40 years often experience thresholds shifting below 15 kHz, with high-frequency sensitivity dropping first and most severely, as evidenced by audiometric data showing steeper losses above 8 kHz in older populations. Individual variations further influence these boundaries; women generally exhibit slightly higher sensitivity, particularly in high frequencies, with thresholds about 2 dB better than men across tested ranges, possibly due to sex-specific differences in cochlear mechanics and hormonal influences. Additionally, chronic noise exposure accelerates boundary shifts by damaging outer hair cells, primarily impacting frequencies from 3 to 6 kHz initially and extending to broader high-frequency loss with prolonged exposure above 85 dBA.

Variations in Hearing Sensitivity

Human hearing sensitivity varies significantly across the audio frequency spectrum, with the ear exhibiting peak responsiveness in the mid-frequency range of approximately 2 to 5 kHz, where the threshold of hearing is lowest, and reduced sensitivity at both low and high extremes. This frequency response curve reflects the combined effects of the outer ear's resonance and the inner ear's mechanical properties, resulting in a dip in sensitivity below 500 Hz and above 8 kHz for most adults. The Fletcher-Munson curves, also known as equal-loudness contours, illustrate these variations by mapping the sound pressure levels required for tones of different frequencies to be perceived as equally loud relative to a 1 kHz reference. Developed through experimental measurements, these contours show that at moderate listening levels, low frequencies demand substantially higher sound pressure levels for equivalent perceived loudness; for instance, a 100 Hz tone requires roughly 10 dB more intensity than a 1 kHz tone to sound equally loud. The contours flatten at higher sound pressure levels, indicating less pronounced sensitivity differences, but the mid-frequency peak persists across all levels. This uneven sensitivity profile profoundly influences audio system design, particularly in emphasizing the midrange frequencies (roughly 1 to 3 kHz) that carry the bulk of speech intelligibility information, such as consonant sounds, which are crucial for clear communication. Audio engineers prioritize balanced frequency response in this range to ensure clear speech perception, as under-emphasis here can degrade understanding even if overall volume is adequate. Exposure to intense noise can induce temporary threshold shifts (TTS), a reversible decrease in hearing sensitivity that typically affects high frequencies first and recovers within hours to days, serving as an early indicator of potential damage. Repeated or prolonged exposures may lead to permanent threshold shifts (PTS), resulting in lasting high-frequency hearing loss, often starting around 4 kHz, due to damage to cochlear hair cells in the basal region. Such shifts are closely linked to tinnitus, a perception of phantom noise frequently matching the affected high-frequency bands, which can persist even after threshold recovery and significantly impacts quality of life.

Frequency Classification

Low Frequencies

Low audio frequencies, spanning approximately 20 Hz to 250 Hz, form the range essential for conveying depth, weight, and emotional impact in sound reproduction. These frequencies correspond to longer wavelengths—ranging from about 17 meters at 20 Hz to 1.4 meters at 250 Hz—compared to higher pitches, influencing how low-frequency sound interacts with environments and equipment. Natural sources of low frequencies abound in everyday acoustics. Thunder produces rumbling sounds primarily in the 20-120 Hz range, creating a sense of vast power through infrasonic components that can extend below 10 Hz. Bass drums generate fundamental tones between 40-100 Hz, delivering the punchy attack in percussion ensembles. Male vocals typically feature fundamental frequencies around 100-200 Hz, contributing to the resonant quality of lower-pitched speech and singing. The physical characteristics of low frequencies lead to distinct acoustic behaviors. Their extended wavelengths readily excite room modes—resonant frequencies determined by room dimensions—resulting in standing waves that cause bass buildup or nulls at specific locations, unevenly distributing bass energy. To counter this, subwoofers are employed as dedicated drivers capable of handling these wavelengths with high output, often placed strategically to minimize modal peaks. Human hearing sensitivity drops markedly below 200 Hz, requiring higher sound pressure levels for equivalent perceived loudness compared to mid frequencies. Challenges in handling low frequencies stem from their high energy demands and recording complexities. Achieving adequate reproduction necessitates substantially more amplifier power—often 10 times or more than for mid frequencies—to drive speakers against air resistance and achieve comparable volume, due to driver inefficiencies and room effects on bass propagation. In recording, phase misalignment between microphones or tracks can cause destructive interference in the low end, producing a thin or hollow response that undermines mix clarity.
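
A common first-order model of the room modes mentioned above considers only axial modes between one pair of parallel surfaces, at frequencies f_n = n c / (2L). The minimal sketch below, using a hypothetical 5-meter room dimension, shows why these resonances cluster inside the low-frequency band:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at 20 °C

def axial_mode_frequencies(length_m: float, count: int = 4) -> list[float]:
    """First few axial room-mode frequencies f_n = n * c / (2 * L)
    for a single pair of parallel surfaces spaced length_m apart."""
    return [n * SPEED_OF_SOUND / (2.0 * length_m) for n in range(1, count + 1)]

# Hypothetical 5 m room dimension: modes near 34, 69, 103, and 137 Hz,
# all inside the 20-250 Hz low-frequency band discussed above.
print([round(f, 1) for f in axial_mode_frequencies(5.0)])
```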

Mid Frequencies

Mid frequencies, encompassing approximately 250 Hz to 4,000 Hz, constitute the central portion of the audible spectrum and are vital for achieving clarity and definition in audio signals. This range primarily handles the articulation of speech consonants and the rich harmonic structures that define many acoustic and electronic sounds. Prominent sources within this band include the human voice, where formants—the resonant peaks of the vocal tract—typically occur between 500 Hz and 2,000 Hz, shaping vowel quality and consonant sharpness for effective communication. In music, electric guitars derive their core tonal presence and note attack from harmonics concentrated in the 250 Hz to 4,000 Hz region, while the piano's mid-octave keys and overtones, spanning similar frequencies, provide melodic warmth and sustain. Perceptually, the mid frequencies align with peak human hearing sensitivity, especially from 1,000 Hz to 4,000 Hz, where the ear perceives sounds most acutely, as reflected in the equal-loudness contours. This heightened responsiveness ensures that this band conveys the bulk of informational content, such as phonetic details essential for speech intelligibility in both quiet and noisy environments. Excessive energy in the mid frequencies, however, can introduce harshness, particularly in the upper portion around 2,000 Hz to 4,000 Hz, leading to listener discomfort and reduced enjoyment. In complex audio mixes, overlapping elements from vocals, guitars, and other instruments frequently cause clutter, muddying separation and demanding careful balancing to maintain clarity.

High Frequencies

High audio frequencies, typically encompassing the range from approximately 4 kHz to 20 kHz, correspond to the treble portion of the audible spectrum and play a key role in imparting airiness, sparkle, and intricate detail to soundscapes. This band is essential for capturing transient elements that enhance perceived clarity and spatial depth in audio reproduction. Sibilance, the sharp hissing quality in consonants like "s" and "sh," predominantly occurs within the 5-10 kHz subrange, where excessive emphasis can lead to harshness if not balanced properly. Prominent sources of high frequencies include percussion instruments such as cymbals and hi-hats, whose metallic attacks and decays generate harmonics extending up to 10 kHz and beyond, providing rhythmic bite and shimmer in musical contexts. Natural examples encompass bird calls, which often feature shrill components in this range for communication, and breathy vocal elements, where upper harmonics add an ethereal quality to human voices. These sources collectively contribute to the vividness and nuance in environmental and artistic audio. The short wavelengths of high-frequency sounds—ranging from about 8.6 cm at 4 kHz to 1.7 cm at 20 kHz—facilitate precise auditory imaging and localization, as these dimensions allow the head to create significant interaural level differences by shadowing waves arriving at the far ear. This property supports enhanced stereo separation and directionality cues in listening environments. However, high frequencies attenuate more rapidly in air than lower ones due to increased molecular absorption effects, which scale with frequency and limit their effective range over longer distances. Presbycusis, the progressive age-related decline in hearing, initially impairs sensitivity to frequencies above 4 kHz, resulting in a loss of detail that affects the discernment of consonants, harmonics, and subtle environmental sounds. This high-frequency vulnerability arises from degenerative changes in the cochlea's basal turn, leading to reduced auditory acuity for treble elements and diminished overall clarity.

Psychoacoustic Effects

Perceived Pitch

Human perception of pitch is fundamentally logarithmic with respect to frequency, meaning that equal intervals in perceived pitch correspond to multiplicative changes in frequency rather than additive ones. This non-linear relationship arises because the auditory system processes sound in a way that compresses higher frequencies, making the perceived difference between tones more uniform on a logarithmic scale. A classic example of this logarithmic scaling is the octave interval, where a tone is perceived as one octave higher when its frequency exactly doubles; for instance, the note A4 at 440 Hz is perceived as the same pitch class, one octave below, as A5 at 880 Hz. To model this psychoacoustically, the mel scale provides an approximation of perceived pitch, transforming linear frequency f (in Hz) to mels m via the formula: m = 2595 \log_{10}\left(1 + \frac{f}{700}\right) This equation, derived from empirical studies, captures the scale's near-linear behavior at low frequencies and logarithmic behavior at higher ones, aiding in applications like speech recognition and audio processing. Another key aspect of pitch perception is the missing fundamental phenomenon, where the brain infers the fundamental frequency of a complex tone from its higher harmonics even if the fundamental itself is absent in the signal. This illusory pitch arises because the auditory system analyzes the periodicity and spectral structure of the harmonics, reconstructing the missing low-frequency component cognitively rather than from direct cochlear stimulation. Originally described in early psychoacoustic research, this effect underscores how pitch is a constructed percept rather than a simple reflection of physical frequency. In cultural contexts, musical scales like equal temperament divide the octave into 12 equal semitones, each with a frequency ratio of 2^{1/12} \approx 1.0595, allowing consistent transposition across keys in Western music traditions. This system, while an approximation of just intonation's simple ratios (e.g., 3:2 for a perfect fifth), prioritizes versatility over perfect harmonic purity, influencing composition and performance globally.
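
The mel-scale formula and the equal-temperament ratio above translate directly into code. The sketch below (function names are illustrative, not from any standard library) converts frequencies to mels and generates equal-tempered note frequencies relative to A4 = 440 Hz:

```python
import math

def hz_to_mel(f_hz: float) -> float:
    """Mel-scale approximation m = 2595 * log10(1 + f / 700)."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def equal_tempered_hz(semitones_from_a4: int, a4_hz: float = 440.0) -> float:
    """12-tone equal temperament: each semitone is a ratio of 2**(1/12)."""
    return a4_hz * 2.0 ** (semitones_from_a4 / 12.0)

print(equal_tempered_hz(12))               # 880.0 Hz: one octave above A4
print(equal_tempered_hz(7))                # ~659.3 Hz: an equal-tempered fifth,
                                           # close to the just 3:2 ratio (660 Hz)
print(hz_to_mel(440.0), hz_to_mel(880.0))  # ~549.8 and ~917.5 mels: a frequency
                                           # octave is not a doubling on the mel scale
```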

Frequency Masking

Frequency masking is a psychoacoustic phenomenon in which the perception of one sound is diminished or obscured by the presence of another sound, due to the limited resolution of the human auditory system in processing simultaneous or closely timed acoustic events. This effect arises from the nonlinear behavior of the cochlea, where stronger neural responses to a dominant frequency suppress weaker responses to nearby frequencies. Simultaneous masking occurs when a louder sound, such as a low-frequency tone, hides quieter sounds at adjacent frequencies within the same critical band, as the auditory filters cannot resolve them distinctly. Temporal masking, on the other hand, involves sounds that precede (pre-masking) or follow (post-masking) the target sound by a short interval, typically up to 200 milliseconds, where the masker's excitation lingers in the auditory pathway, rendering the target inaudible. These processes are quantified through masking thresholds, which define the minimum level at which a sound becomes perceptible in the presence of a masker. The concept of critical bands underlies frequency masking, representing frequency regions where the ear's resolution is roughly constant, leading to intra-band interactions that amplify masking effects. The Bark scale models these bands, spanning approximately 24 critical bands across the audible spectrum from 20 Hz to 16 kHz, with band widths increasing from about 100 Hz at low frequencies to over 3 kHz at high frequencies. The Bark number z for a given frequency f in hertz is calculated as: z = 13 \arctan(0.00076 f) + 3.5 \arctan\left( \left( \frac{f}{7500} \right)^2 \right) This scale ensures that masking is primarily confined within each band, facilitating models of auditory perception. In audio compression applications, frequency masking enables efficient data reduction by exploiting these thresholds to eliminate inaudible spectral components. The MP3 codec, developed in the early 1990s, employs a psychoacoustic model to compute simultaneous and temporal masking curves, allocating fewer bits to masked regions while preserving audible content, achieving compression ratios up to 12:1 with minimal perceptual degradation. For example, in music production, a prominent bass line can mask vocal formants around 200-500 Hz, reducing lyrical intelligibility unless addressed through spectral separation. Similarly, in recordings, the inherent noise floor—typically -90 dB or lower in professional setups—is masked by louder program material, ensuring that subtle hiss or hum remains below the auditory threshold during playback.
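
The Bark formula above can be evaluated directly. This minimal sketch maps a few frequencies to critical-band rates and confirms that the audible spectrum spans roughly 24 Bark:

```python
import math

def hz_to_bark(f_hz: float) -> float:
    """Critical-band rate z = 13*atan(0.00076 f) + 3.5*atan((f/7500)^2)."""
    return 13.0 * math.atan(0.00076 * f_hz) + 3.5 * math.atan((f_hz / 7500.0) ** 2)

for f in (100, 500, 1000, 4000, 16000):
    print(f"{f:>6} Hz -> {hz_to_bark(f):5.2f} Bark")
# 16 kHz lands near 24 Bark, matching the ~24 critical bands cited above.
```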

Technical Applications

Audio Reproduction Systems

Audio reproduction systems are engineered to faithfully capture, store, and reproduce sound waves across the audible spectrum, ensuring minimal distortion and coverage of frequencies from approximately 20 Hz to 20 kHz to align with human hearing capabilities. These systems encompass components for input (microphones), storage (via digital sampling or analog means), and output (speakers), each optimized to handle specific frequency bands without introducing artifacts like aliasing or phase issues. Microphones serve as the primary capture devices in audio reproduction, converting acoustic pressure variations into electrical signals while aiming for a flat frequency response to accurately represent the input sound. Studio microphones, for instance, typically exhibit a response that is flat within ±3 dB from 20 Hz to 20 kHz, allowing precise recording of fundamentals, harmonics, and high-frequency transients in professional environments. This design minimizes coloration, ensuring the captured signal mirrors the original sound's spectral content as closely as possible. In digital storage, the Nyquist-Shannon sampling theorem dictates that the sampling rate must exceed twice the highest frequency of interest to prevent aliasing, necessitating a rate greater than 40 kHz for the full audible band up to 20 kHz. The compact disc (CD) standard adopts 44.1 kHz as its sampling rate, providing a margin above the Nyquist limit to accommodate practical anti-aliasing filters while capturing the complete spectrum without loss. This rate originated from adaptations in video recording technologies but has become the benchmark for consumer digital audio, enabling storage formats that preserve fidelity during playback. Modern high-resolution formats employ higher sampling rates, such as 96 kHz and 192 kHz at 24-bit depth, to extend beyond the audible range for enhanced detail and reduced quantization noise, supported by streaming services and lossless file formats like FLAC. Speakers reproduce stored audio by converting electrical signals back into acoustic waves, often using specialized drivers to manage different frequency ranges efficiently. Woofers handle low and lower midrange frequencies (typically 40 Hz to 2-3 kHz in two-way systems), leveraging larger diaphragms for efficient bass reproduction, while tweeters manage high frequencies (above 2-3 kHz), employing smaller, lighter cones to achieve rapid response for treble details. Crossover networks, consisting of passive filters with inductors and capacitors, divide the signal at these band boundaries—such as around 2-3 kHz for two-way systems—to direct appropriate frequencies to each driver, preventing damage and overlap distortions. The historical evolution of audio reproduction systems reflects progressive expansions in frequency coverage, driven by technological advancements. Early phonographs, reliant on mechanical needles and acoustic horns, were limited to a narrow band of roughly 100 Hz to 5 kHz due to groove constraints and playback mechanics, restricting reproduction to midrange tones with significant high- and low-frequency roll-off. By the 1950s, the advent of high-fidelity (hi-fi) systems, incorporating long-playing records, vacuum-tube amplifiers, and improved electrostatic or dynamic drivers, achieved near-full-range response from 20 Hz to 20 kHz, enabling balanced playback from orchestral lows to treble highs in home environments.
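
A short numerical sketch of the Nyquist constraint discussed above: when a tone above half the sampling rate is sampled at the CD rate of 44.1 kHz, the resulting samples are indistinguishable from those of a lower-frequency alias. The 25 kHz input tone is a hypothetical choice for illustration:

```python
import numpy as np

fs = 44100        # CD sampling rate in Hz; Nyquist limit is fs / 2 = 22050 Hz
f_tone = 25000    # hypothetical ultrasonic tone above the Nyquist limit

n = np.arange(1024)
sampled = np.sin(2 * np.pi * f_tone * n / fs)

# Because 25000 = 44100 - 19100, the samples match a 19.1 kHz tone of
# inverted phase: the ultrasonic input "folds" back into the audio band.
alias = -np.sin(2 * np.pi * (fs - f_tone) * n / fs)
print(np.allclose(sampled, alias))  # True: this is aliasing
```

This folding is exactly what the anti-aliasing filter ahead of the sampler must prevent, and the gap between 20 kHz and 22.05 kHz gives that filter room to roll off.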

Equalization and Processing

Equalization (EQ) is a fundamental process in audio engineering that involves adjusting the balance of frequency components within an audio signal to achieve desired tonal characteristics during production and playback. This technique allows engineers to shape sound by boosting or attenuating specific frequency ranges, often to correct imbalances or enhance artistic intent. Common types of equalizers include parametric and graphic EQs. A parametric equalizer provides precise control over three main parameters: the center frequency, gain (boost or cut amount), and Q-factor (bandwidth, which determines the steepness of the filter). This flexibility makes it ideal for surgical adjustments, such as targeting narrow resonances. In contrast, a graphic equalizer uses fixed bands, typically aligned with ISO third-octave centers, allowing users to adjust sliders for each band without altering the center frequencies. Shelving filters, often integrated into EQs, provide broad adjustments to high or low frequencies; for example, a high-shelf filter boosts or cuts all frequencies above a specified point with a gradual slope, commonly used to add airiness to the upper range. The primary goals of equalization include compensating for room acoustics and enhancing overall clarity. Room acoustics can introduce peaks and dips in frequency response due to reflections and standing waves, which EQ helps mitigate by attenuating problematic frequencies to achieve a flatter response. For instance, to enhance clarity, engineers often cut low-mid frequencies around 200-400 Hz to reduce "mud," a buildup that obscures definition in mixes. In digital audio workstations (DAWs), equalization frequently relies on Fast Fourier Transform (FFT)-based processing for real-time spectral analysis and manipulation, enabling efficient handling of complex signals without introducing significant latency. Multi-band compression, which applies dynamic range control across frequency bands (such as low, mid, and high as referenced in frequency classification), requires careful phase considerations; linear-phase modes preserve transient timing by minimizing phase shifts from crossover filters, preventing smearing in the recombined signal. A notable standardization in equalization is the RIAA curve for vinyl records, established in 1954 by the Recording Industry Association of America. This curve attenuates low frequencies (below 500 Hz) by up to 20 dB during recording to limit groove width and reduce noise, while boosting high frequencies (above 2 kHz) to improve signal-to-noise ratio; the inverse curve is applied during playback to restore the original balance.
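
As an illustration of the parametric parameters described above (center frequency, gain, and Q), the sketch below derives biquad filter coefficients for a single peaking band using Robert Bristow-Johnson's widely circulated Audio EQ Cookbook formulas; this is one common implementation choice, not the only way parametric EQs are built:

```python
import math

def peaking_eq_coeffs(fs: float, f0: float, gain_db: float, q: float):
    """Biquad (b, a) coefficients for one parametric peaking band,
    per the RBJ Audio EQ Cookbook."""
    A = 10.0 ** (gain_db / 40.0)      # amplitude factor from dB gain
    w0 = 2.0 * math.pi * f0 / fs      # normalized center frequency in radians
    alpha = math.sin(w0) / (2.0 * q)  # bandwidth term set by Q
    b = [1.0 + alpha * A, -2.0 * math.cos(w0), 1.0 - alpha * A]
    a = [1.0 + alpha / A, -2.0 * math.cos(w0), 1.0 - alpha / A]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

# Example: a 3 dB cut at 300 Hz (inside the 200-400 Hz "mud" region noted
# above) with a moderately narrow band, at the CD sample rate.
b, a = peaking_eq_coeffs(44100.0, 300.0, -3.0, 1.4)
print(b, a)
```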
