Sound

Sound is a mechanical longitudinal wave that results from the vibration of particles in an elastic medium, propagating as alternating compressions and rarefactions of the medium away from the source. This disturbance transfers energy through the medium without net displacement of the particles themselves, and sound requires a material medium—such as air, water, or solids—to travel, as it cannot propagate in a vacuum. Key properties of sound waves include frequency, which determines the pitch and is measured in hertz (Hz); amplitude, which corresponds to loudness or intensity; wavelength, the distance between consecutive compressions; and speed, which varies by medium and temperature. In dry air at 20°C, the speed of sound is approximately 343 meters per second (m/s). For humans, the audible frequency range typically spans from 20 Hz to 20 kHz, though sensitivity peaks between 2 kHz and 5 kHz. Sound production occurs when an object vibrates, disturbing adjacent particles and initiating the wave; common sources include vocal cords in speech, strings or air columns in musical instruments, and mechanical impacts. Once generated, sound waves propagate outward, undergoing phenomena such as reflection (echoes), refraction (bending due to medium changes), diffraction (bending around obstacles), and interference (superposition of waves), which influence how sound is heard in different environments. Detection involves the waves reaching a receiver, such as the human ear, where they cause the eardrum to vibrate, transmitting signals through the cochlea to the brain for auditory perception. The study of sound, known as acoustics, encompasses its physical principles, including production, transmission, reception, and effects, with applications in fields like architecture, medicine (e.g., ultrasound), and engineering.

Nature of Sound

Definition

Sound is a mechanical wave that propagates through an elastic medium, such as air, water, or solids. In gases and liquids, sound waves are longitudinal, with particles of the medium vibrating back and forth parallel to the direction of wave propagation. In solids, sound waves can be either longitudinal or transverse. These waves arise from the compression and rarefaction of the medium's particles, creating alternating regions of high and low density. Unlike electromagnetic waves, such as light, which can travel through the vacuum of space, sound requires a physical medium for propagation and cannot exist or transmit in a vacuum. This fundamental difference stems from sound's reliance on the elasticity and density of matter to carry the vibrational energy. Everyday examples of sound include the pressure variations produced by human speech, where vocal cords create rapid air molecule oscillations, or thunder, resulting from the explosive expansion of heated air during lightning. These pressure fluctuations in the surrounding medium allow the auditory experience to reach a listener. The English word "sound," in the sense of noise or auditory sensation, derives from the Latin sonus, meaning "a sound" or "tone," which entered the language through Old French son.

Mechanical Waves

Sound waves are mechanical waves that propagate as longitudinal or compressional disturbances through a medium, characterized by alternating regions of compression, where particles are densely packed, and rarefaction, where particles are more spread out. In gases and liquids, these waves involve variations in pressure and density with particle motion parallel to the propagation direction. In solids, sound waves can also propagate as transverse waves, involving shear displacements perpendicular to the propagation direction, in addition to longitudinal waves. The motion of particles in a longitudinal sound wave consists of small oscillations parallel to the direction of wave propagation, resulting in fluctuations of local density and pressure without any net displacement of the medium over the wave's path. This back-and-forth vibration enables the wave to advance while individual particles return to their original positions after each cycle, distinguishing sound propagation from bulk material movement. In transverse sound waves in solids, particles oscillate perpendicular to the propagation direction, transferring shear energy.

The formation and propagation of sound waves depend on the elastic properties and density of the medium, which determine how readily it can store and release energy through compression and expansion (for longitudinal waves) or shear (for transverse waves in solids). Elasticity allows the medium to resist deformation and return to equilibrium, while density provides the inertia opposing these changes; together they govern the efficiency of wave transmission. The speed of sound varies with these medium properties, though specific values depend on environmental factors.

The mathematical description of sound wave propagation in one dimension is given by the acoustic wave equation for pressure p(x, t):

\frac{\partial^2 p}{\partial t^2} = c^2 \frac{\partial^2 p}{\partial x^2}

where t is time, x is position, and c is the speed of sound in the medium. This partial differential equation arises from Newton's second law applied to fluid elements together with the continuity equation, capturing how pressure variations evolve linearly in an ideal, non-viscous medium.

Sound waves exhibit fundamental behaviors including reflection, where waves bounce off boundaries like walls to produce echoes; refraction, the bending of waves when passing through regions of varying medium properties such as temperature gradients; diffraction, the spreading of waves around obstacles or through openings; and interference, the superposition of waves leading to constructive or destructive patterns. These phenomena arise from the wave nature of sound and influence its propagation in real environments, such as how diffraction allows sound to curve around corners.
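The behavior of the acoustic wave equation above can be illustrated numerically. The following is a minimal finite-difference sketch in Python; the grid resolution, time step, and Gaussian initial pulse are illustrative choices, not values from the text.

```python
import numpy as np

# Minimal sketch: explicit leapfrog finite-difference solution of the
# 1D acoustic wave equation d^2p/dt^2 = c^2 d^2p/dx^2 (assumed setup).
c = 343.0                    # speed of sound in air, m/s
L, nx = 10.0, 201            # domain length (m) and number of grid points
dx = L / (nx - 1)
dt = 0.5 * dx / c            # time step within the CFL stability limit
x = np.linspace(0.0, L, nx)

p_prev = np.exp(-((x - L / 2) ** 2) / 0.1)   # initial pressure pulse
p = p_prev.copy()                            # zero initial velocity

for _ in range(150):
    p_next = np.zeros_like(p)                # ends held at p = 0 (rigid boundaries)
    p_next[1:-1] = (2 * p[1:-1] - p_prev[1:-1]
                    + (c * dt / dx) ** 2 * (p[2:] - 2 * p[1:-1] + p[:-2]))
    p_prev, p = p, p_next

print(p.max())  # ~0.5: the pulse splits into two half-amplitude waves
```

With zero initial velocity, the pulse splits into two half-amplitude waves traveling in opposite directions at speed c, the classic d'Alembert behavior of the wave equation.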

Physical Properties

Speed of Sound

The speed of sound is the distance traveled per unit time by a sound wave as it propagates through an elastic medium. In ideal gases, it is given by the formula c = \sqrt{\frac{\gamma P}{\rho}}, where \gamma is the adiabatic index (ratio of specific heats), P is the pressure, and \rho is the density. Using the ideal gas law, this can be rewritten as c = \sqrt{\frac{\gamma R T}{M}}, where R is the universal gas constant, T is the absolute temperature, and M is the molar mass of the gas. In dry air at 20°C and standard atmospheric pressure, the speed of sound is approximately 343 m/s. The speed is generally higher in liquids and solids, where the much greater stiffness outweighs the higher density: in water at 20°C it reaches about 1480 m/s, while in solids like steel it is around 5000–6000 m/s, depending on the alloy and the type of wave (longitudinal or shear). The speed decreases with altitude in Earth's atmosphere primarily due to lower temperatures at higher elevations, which reduce molecular kinetic energy and thus wave propagation velocity. The speed of sound in air exhibits a strong temperature dependence, increasing by roughly 0.6 m/s for each 1°C rise above 0°C, as captured by the approximation c \approx 331 + 0.6 T m/s, where T is in °C. Early measurements of the speed of sound date to the 17th century, with French mathematician Marin Mersenne estimating it around 448 m/s in 1636 from echo-timing experiments over known distances. Modern techniques employ ultrasonic methods, such as pulse-echo interferometry, to achieve high precision by measuring travel times of high-frequency sound pulses through samples. Knowledge of the speed of sound enables applications like echolocation in bats, where emitted ultrasonic pulses reflect off objects to gauge distance based on round-trip time, and sonar systems in underwater navigation, which use acoustic pings to detect submerged features by accounting for the medium's propagation speed.
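As a quick check of these relations, the short Python sketch below evaluates both the exact ideal-gas formula and the linear temperature approximation; the air constants are standard textbook values.

```python
import math

gamma = 1.4          # adiabatic index for dry air
R = 8.314            # universal gas constant, J/(mol K)
M = 0.0290           # molar mass of dry air, kg/mol

def speed_exact(T_celsius):
    """c = sqrt(gamma * R * T / M), with T converted to kelvin."""
    return math.sqrt(gamma * R * (T_celsius + 273.15) / M)

def speed_linear(T_celsius):
    """Linear approximation c ~ 331 + 0.6 * T (T in Celsius)."""
    return 331.0 + 0.6 * T_celsius

print(speed_exact(20.0))   # ~343 m/s
print(speed_linear(20.0))  # ~343 m/s
```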

Frequency and Wavelength

Sound waves are characterized by their oscillatory nature, where frequency denotes the number of complete cycles or vibrations occurring per second, measured in hertz (Hz). Wavelength represents the spatial distance between consecutive compressions or rarefactions in the wave, typically denoted as λ. These properties are interrelated through the wave speed c, given by the equation \lambda = \frac{c}{f}, where f is the frequency; this relation arises because the wave speed is the product of frequency and wavelength, ensuring that higher frequencies correspond to shorter wavelengths for a fixed medium speed. For human hearing, the audible frequency range spans approximately 20 Hz to 20 kHz, though sensitivity peaks in the mid-range. At the lower end, a 20 Hz sound wave in air (with c \approx 343 m/s at standard conditions) has a wavelength of about 17 m, illustrating how low frequencies produce long spatial periods that can interact with large-scale environments. Complex sound waves, such as those from musical instruments, often consist of a fundamental frequency—the lowest component—and a series of overtones that form the harmonic series, where each subsequent frequency is an integer multiple of the fundamental (e.g., 2f, 3f, 4f). These harmonics arise from the physics of vibrating sources, contributing to the wave's overall periodicity and shape through superposition. The Doppler effect modifies the observed frequency when a sound source moves relative to the medium, altering the wave's periodicity as perceived by a stationary observer. For a source approaching at speed v_s (with v_s < c), the observed frequency f' is f' = f \frac{c}{c - v_s}, derived from the reduced effective wavelength: the source emits waves more frequently into the compressed frontal region, shortening the distance between wavefronts to \lambda' = (c - v_s)/f, so f' = c / \lambda'. This shift increases with source speed, explaining phenomena like rising pitch from approaching sirens. At high amplitudes or speeds exceeding the sound barrier, nonlinear effects distort sound waves, leading to steepening and formation of shock waves where the waveform's front compresses into a discontinuity. Sonic booms exemplify this, as aircraft generate abrupt pressure jumps propagating as N-shaped shocks rather than smooth oscillations, limited by the medium's nonlinearity.
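The wavelength relation and the Doppler formula above lend themselves to a direct numerical illustration. In this minimal Python sketch, the 440 Hz siren and 30 m/s source speed are arbitrary example values.

```python
c = 343.0   # speed of sound in air at 20°C, m/s

def wavelength(f):
    """lambda = c / f for frequency f in Hz."""
    return c / f

def doppler_approaching(f, v_source):
    """Observed frequency f' = f * c / (c - v_source) for an approaching source."""
    return f * c / (c - v_source)

print(wavelength(20.0))                  # ~17.2 m, as cited for a 20 Hz tone
print(doppler_approaching(440.0, 30.0))  # a 440 Hz siren at 30 m/s is heard near 482 Hz
```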

Amplitude and Intensity

In sound waves, amplitude refers to the maximum displacement of particles in the medium from their equilibrium position, or equivalently, the maximum deviation of pressure from the ambient atmospheric pressure. This displacement amplitude quantifies the magnitude of the wave's oscillation, determining the energy carried by the wave. Sound intensity I, defined as the average power per unit area perpendicular to the direction of wave propagation, measures the rate of energy flow through a surface. For a plane progressive sound wave, intensity relates to the pressure amplitude p by the formula I = \frac{p^2}{2 \rho c}, where \rho is the density of the medium and c is the speed of sound in that medium. This expression derives from the acoustic impedance Z = \rho c, linking pressure variations to energy flux; detailed pressure relations appear in the Sound Pressure section. For a point source radiating sound isotropically in free space, intensity follows the inverse square law, decreasing proportionally to 1/r^2, where r is the distance from the source. This geometric spreading arises because the total power output spreads over the surface of an expanding sphere, reducing the power density with increasing radius. Sound waves dissipate energy through absorption and attenuation as they propagate, converting acoustic energy into heat via mechanisms such as viscosity and thermal conduction in the medium. In air, the classical absorption coefficient \alpha due to these effects is proportional to the square of the frequency and depends on factors like molecular viscosity and thermal conductivity, typically on the order of 10^{-8} to 10^{-3} m^{-1} at audible frequencies under standard conditions (20°C, 1 atm). Attenuation thus limits the range of sound transmission, with higher frequencies attenuating more rapidly. The threshold of hearing corresponds to an intensity of approximately 10^{-12} W/m² for a pure tone at 1 kHz, representing the minimum detectable sound level for young, healthy ears under ideal conditions. This value establishes the baseline for human auditory sensitivity to acoustic power.
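The plane-wave formula and the inverse-square law above can be evaluated directly, as in this minimal Python sketch; the 0.1 Pa amplitude and 1 W source power are illustrative inputs, not values from the text.

```python
import math

rho = 1.204   # density of air at 20°C, kg/m^3
c = 343.0     # speed of sound in air, m/s

def intensity_plane_wave(p_amp):
    """I = p^2 / (2 * rho * c) for pressure amplitude p_amp in Pa."""
    return p_amp ** 2 / (2 * rho * c)

def intensity_point_source(power, r):
    """Inverse-square spreading: source power over a sphere of radius r."""
    return power / (4 * math.pi * r ** 2)

print(intensity_plane_wave(0.1))          # ~1.2e-5 W/m^2
print(intensity_point_source(1.0, 10.0))  # ~8.0e-4 W/m^2 at 10 m from a 1 W source
```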

Measurement and Quantification

Sound Pressure

Sound pressure refers to the instantaneous local deviation from the ambient pressure in a propagating medium, such as air or water, caused by the alternating compressions and rarefactions of a sound wave. This pressure fluctuation, denoted as p(t), is the primary physical parameter for quantifying sound objectively, with units in pascals (Pa), where 1 Pa equals 1 newton per square meter. For practical measurements, the root-mean-square (RMS) value is used to represent the effective magnitude of this varying pressure over a time period T:

p_{\text{rms}} = \sqrt{\frac{1}{T} \int_0^T p(t)^2 \, dt}

This RMS formulation averages the squared pressure waveform, providing a stable metric suitable for both sinusoidal and complex signals. The standard reference pressure for sound pressure level (SPL) in air is 20 micropascals (μPa), which corresponds to the nominal threshold of human hearing at 1 kHz under quiet conditions, defining 0 dB SPL. This value serves as the baseline for comparing sound pressures across environments.

In different media, the sound pressure for a given acoustic intensity is notably higher in liquids than in gases, owing to the greater incompressibility of liquids, which results in a higher bulk modulus and acoustic impedance (approximately 400 rayls in air versus 1.5 megarayls in water at standard conditions). For instance, achieving the same intensity in water requires a pressure amplitude roughly 60 times greater than in air, the square root of the impedance ratio. Sound pressure is captured using specialized transducers: microphones in gaseous media like air, which detect pressure variations while ignoring static atmospheric pressure, and hydrophones in liquids like water, which are designed to be insensitive to hydrostatic pressure and focus on acoustic fluctuations. These devices record the time-dependent waveform p(t), enabling analysis of the pressure's amplitude and temporal characteristics.

At a fundamental level, sound pressure arises from the medium's resistance to compression and is related to particle displacement \xi by the expression p = -B \frac{\partial \xi}{\partial x}, where B is the bulk modulus of the medium quantifying its stiffness against volume change; the negative sign reflects that a compressive (negative) displacement gradient produces a positive pressure. This relation highlights how small spatial gradients in displacement generate pressure perturbations in the wave.
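In digital measurement systems, the RMS integral above becomes a mean over samples. The following minimal Python sketch computes it for a synthetic 1 kHz tone; the 0.2 Pa amplitude and sample rate are illustrative choices, not values from the text.

```python
import numpy as np

fs = 48_000                                  # sample rate, Hz
t = np.arange(0, 0.1, 1 / fs)                # 100 ms of signal
p = 0.2 * np.sin(2 * np.pi * 1000 * t)       # pressure waveform, Pa

p_rms = np.sqrt(np.mean(p ** 2))             # discrete form of the RMS integral
print(p_rms)                                 # ~0.1414 Pa = 0.2 / sqrt(2)
```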

Decibel Scale

The decibel (dB) is a logarithmic unit used to quantify the ratio of two power or pressure quantities, providing a practical scale for the vast range of sound intensities encountered in nature and engineering. The unit originated from work at Bell Telephone Laboratories, where the transmission unit (TU) was introduced in 1924 to measure signal attenuation in telephone systems as a logarithmic ratio of powers. In 1928, the TU was renamed the bel (B) in honor of inventor Alexander Graham Bell (1847–1922), with the decibel defined as one-tenth of a bel for finer resolution. This scale compresses the dynamic range of sound, where a 10 dB increase corresponds to a tenfold increase in power, making it essential for acoustics.

In acoustics, the sound pressure level (SPL) expresses sound pressure relative to a reference value using the formula:

L_p = 20 \log_{10} \left( \frac{p}{p_0} \right) \ \mathrm{dB}

where p is the root-mean-square sound pressure in pascals (Pa) and p_0 = 20 \, \mu\mathrm{Pa} is the standard reference pressure, approximately the threshold of human hearing at 1 kHz in air. This reference is established in international standards for airborne sound measurements. The factor of 20 arises because power is proportional to the square of the pressure, so squaring the pressure ratio inside the logarithm doubles the factor of 10 used for power ratios.

Similarly, the sound intensity level quantifies acoustic power per unit area with the formula:

L_I = 10 \log_{10} \left( \frac{I}{I_0} \right) \ \mathrm{dB}

where I is the sound intensity in watts per square meter (W/m²) and I_0 = 10^{-12} \, \mathrm{W/m^2} is the reference intensity, corresponding to the human hearing threshold at 1 kHz. This reference is defined in IEC standards for sound intensity measurements. Intensity level is particularly useful for comparing sound sources in free fields, as it directly relates to energy flux.

To account for the human ear's varying sensitivity across frequencies, weighted scales modify the basic decibel measurement; the A-weighted scale, denoted dB(A), applies a frequency response curve that approximates the ear's perception at moderate sound levels, attenuating low frequencies below 500 Hz and high frequencies above 10 kHz. This weighting is standardized in IEC 61672 for sound level meters and is widely used in environmental and occupational noise assessments. Representative examples illustrate the scale's application: normal conversation typically measures around 60 dB, a jet engine takeoff at 100 feet reaches about 140 dB, and the pain threshold for sound pressure begins near 120–130 dB, beyond which immediate hearing damage can occur. These levels underpin safety regulations, such as OSHA's, under which an eight-hour average exposure at or above 85 dB(A) triggers hearing-conservation measures.
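Both level formulas above translate directly into code. This minimal Python sketch converts example pressures and intensities (illustrative values) to decibel levels.

```python
import math

P0 = 20e-6    # reference pressure, Pa
I0 = 1e-12    # reference intensity, W/m^2

def spl(p_rms):
    """Sound pressure level: 20 * log10(p / p0)."""
    return 20 * math.log10(p_rms / P0)

def intensity_level(i):
    """Sound intensity level: 10 * log10(I / I0)."""
    return 10 * math.log10(i / I0)

print(spl(0.02))             # ~60 dB SPL, typical of conversation
print(intensity_level(1.0))  # 120 dB, near the pain threshold
```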

Human Perception

Hearing Physiology

The human auditory system begins with the outer ear, which captures and funnels sound waves into the ear canal. The outer ear consists of the pinna (auricle), a cartilaginous structure that aids in sound localization by reflecting waves differently based on direction, and the external auditory canal, a tube lined with skin and cerumen-producing glands that conducts sound to the tympanic membrane (eardrum). The middle ear, an air-filled cavity behind the eardrum, amplifies sound vibrations through the ossicles: the malleus (hammer), incus (anvil), and stapes (stirrup), which are connected in series and transmit mechanical energy from the eardrum to the inner ear; the Eustachian tube equalizes static air pressure between the middle ear and the nasopharynx. The inner ear houses the cochlea, a fluid-filled spiral structure within the bony labyrinth, where sound transduction occurs, adjacent to the vestibular system for balance.

Sound transduction takes place in the cochlea's organ of Corti, located on the basilar membrane, which divides the cochlear duct and varies in stiffness along its length. When vibrations from the stapes reach the oval window, they create traveling waves in the cochlear fluid (perilymph and endolymph), causing the basilar membrane to vibrate maximally at specific points depending on frequency. Inner and outer hair cells in the organ of Corti detect these movements; stereocilia on the hair cells bend against the tectorial membrane, opening mechanically gated ion channels that depolarize the cells and release neurotransmitters onto auditory nerve fibers. This mechanoelectrical process converts acoustic energy into electrical signals, with outer hair cells enhancing sensitivity through active amplification via prestin motors, while inner hair cells primarily relay the signals. The cochlea exhibits tonotopic organization, where frequency is mapped spatially along the basilar membrane: high frequencies (typically above 2 kHz) stimulate the basal end near the oval window due to its narrower, stiffer structure, while low frequencies (below 1 kHz) activate the apical end, which is wider and more flexible. This place-specific resonance, first described by Georg von Békésy, ensures that different sound frequencies elicit peak responses at distinct cochlear locations, preserving frequency information in neural coding.

Neural signals from hair cells travel via the spiral ganglion neurons of the cochlear nerve (cranial nerve VIII), forming the auditory division of the vestibulocochlear nerve. These fibers synapse in the cochlear nuclei of the brainstem, where information branches into dorsal and ventral pathways for parallel processing. Ascending projections cross to the contralateral superior olivary complex for binaural integration, then to the inferior colliculus in the midbrain, the medial geniculate nucleus of the thalamus, and finally the primary auditory cortex in the temporal lobe, maintaining tonotopy throughout. This pathway enables rapid sound analysis, with latencies as short as 5-10 ms from cochlea to cortex. Mammalian hearing evolved from reptilian ancestors, with adaptations like the cochlea and ossicles enhancing sensitivity to airborne sounds in the 20 Hz to 20 kHz range typical for humans, allowing for vocal communication and predator detection.
Age-related hearing loss, or presbycusis, progressively diminishes this range, often starting with high-frequency decline around 20 kHz in young adults and accelerating after age 50 due to hair cell loss, strial atrophy, and neural degeneration, affecting over 30% of those over 65.

Pitch

Pitch is the perceptual attribute of sound that corresponds to its periodicity, subjectively experienced as the highness or lowness of a tone. It arises primarily from the fundamental frequency of periodic sounds but is not identical to physical frequency, which is quantified in hertz as the number of cycles per second. In psychoacoustics, pitch perception integrates both spectral (place-based) cues from the basilar membrane's excitation patterns and temporal cues from neural phase locking, with the former dominating at higher frequencies and the latter at lower ones. To model the nonlinear relationship between physical frequency and perceived pitch, psychoacoustic scales such as the mel and bark scales are employed. The mel scale, derived from human judgments of equal perceptual intervals, approximates linear spacing at low frequencies (below 1 kHz) and logarithmic spacing at higher frequencies, reflecting how pitch differences are compressed at higher registers. Similarly, the bark scale aligns with the critical bands of hearing—frequency regions where auditory processing behaves as a single filter—spanning 24 bands from 20 Hz to 20 kHz, each one bark wide, to capture perceptual uniformity in pitch spacing. These scales facilitate applications in audio processing and hearing research by mapping physical frequencies to perceptual equidistance. The just noticeable difference (JND) for pitch, the smallest frequency change detectable 50% of the time, follows the Weber-Fechner law, where the JND is roughly proportional to the stimulus frequency, typically 0.3% to 1% depending on the range. For frequencies around 100–400 Hz, changes as small as 0.2% in repetition rate can be discerned, highlighting the auditory system's fine resolution in this musically relevant range. Octave equivalence further structures pitch perception, wherein doubling the frequency (e.g., from 261.6 Hz to 523.3 Hz, C4 to C5) evokes the sensation of the same note transposed higher, a phenomenon rooted in harmonic similarity and evident in cross-cultural musical systems. In complex tones, such as those from musical instruments or speech, pitch often derives from virtual pitch mechanisms rather than a single dominant frequency. Virtual pitch emerges from the pattern of harmonics, allowing perception of a low pitch even when the fundamental is absent—a phenomenon known as the missing fundamental illusion. For instance, harmonics at 600 Hz, 800 Hz, and 1000 Hz can elicit a perceived pitch of 200 Hz, processed via subharmonic coincidence or template matching in the auditory system. This illusion underscores pitch's holistic nature, prioritizing harmonic relations over individual components. Disorders like congenital amusia impair pitch perception, manifesting as deficits in fine-grained discrimination without affecting general hearing or intelligence. Individuals with amusia may fail to detect pitch changes smaller than several semitones (e.g., 11 semitones for rising tones), leading to difficulties in melody recognition and musical processing, though speech intonation is often spared due to coarser perceptual thresholds. This condition affects approximately 4% of the population and is linked to atypical right-hemisphere auditory processing.
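The mel scale described above has several published curve fits. The sketch below uses one widely cited engineering variant, the 2595 · log10(1 + f/700) form, purely to illustrate the compressive frequency-to-pitch mapping; it is one convention among several, not the only definition.

```python
import math

def hz_to_mel(f):
    """One common mel-scale fit: near-linear below ~1 kHz, logarithmic above."""
    return 2595.0 * math.log10(1.0 + f / 700.0)

print(hz_to_mel(1000.0))  # ~1000 mel: the scale is anchored near 1 kHz
print(hz_to_mel(4000.0))  # ~2146 mel: pitch intervals compress at high frequencies
```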

Loudness

Loudness refers to the subjective perception of sound intensity by the human auditory system, which does not scale linearly with physical sound pressure or intensity. Unlike objective measures of amplitude, loudness integrates multiple factors including frequency content and temporal characteristics, leading to nonlinear perceptual responses. This perceptual construct is quantified using specialized units: the phon, which equates the loudness of a sound to that of a 1 kHz pure tone at the same sound pressure level in decibels, and the sone, a linear scale where 1 sone corresponds to the loudness of a 1 kHz tone at 40 phons, with each doubling of sones perceived as twice as loud. Equal-loudness contours, originally mapped as the Fletcher-Munson curves, illustrate how sounds of equal perceived loudness require varying sound pressure levels across frequencies. These contours, refined in the ISO 226 standard, show that at moderate levels (around 40-60 phons), the human ear perceives tones between 2 and 5 kHz as louder for a given pressure than those at lower or higher frequencies, due to the resonance of the ear canal and ossicular transfer function. Perceptual models approximate loudness in sones using Stevens' power law, where N = 2^{(L_N - 40)/10} and L_N is the loudness level in phons, emphasizing the nonlinear growth that aligns with subjective doubling every 10 phons at mid-frequencies. Frequency dependence further shapes loudness, with peak sensitivity in the 2-5 kHz range aligning with speech frequencies, where minimal pressure yields maximal perceived intensity compared to bass or treble extremes. In cases of sensorineural hearing loss, particularly from cochlear damage like outer hair cell loss, loudness recruitment occurs: thresholds elevate, but perceived loudness grows abnormally steeply with intensity, often reaching normal levels at higher pressures due to reduced nonlinear amplification at low intensities. Temporal integration contributes to loudness summation, where the auditory system accumulates energy over approximately 200 ms, such that a brief tone (e.g., 5 ms) must be about 10-20 dB louder than a 200 ms tone to match perceived loudness, with the integration amount varying nonmonotonically by level and peaking around moderate intensities. Simultaneous masking effects diminish perceived loudness when concurrent sounds overlap in frequency and time, as a stronger "masker" elevates the detection threshold of a weaker signal within the same critical band, effectively reducing its subjective intensity.
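The sone-phon relation quoted above is simple to evaluate. This minimal sketch shows the doubling of perceived loudness per 10 phons, valid roughly above 40 phons.

```python
def phon_to_sone(phons):
    """N = 2^((L_N - 40) / 10), the relation given in the text."""
    return 2 ** ((phons - 40.0) / 10.0)

print(phon_to_sone(40))  # 1 sone, the reference loudness
print(phon_to_sone(50))  # 2 sones: +10 phons is perceived as twice as loud
print(phon_to_sone(60))  # 4 sones
```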

Timbre

Timbre is the perceptual attribute of sound that allows differentiation between tones of identical pitch and loudness, arising primarily from the harmonic spectrum, the attack and decay envelope, and the presence of inharmonicity. The harmonic spectrum consists of the fundamental frequency and its overtones, with the relative amplitudes of these components shaping the unique "color" of the sound. The attack refers to the initial transient rise in amplitude, while decay encompasses the subsequent amplitude reduction, both of which contribute to the temporal profile that the auditory system uses to identify sound sources. Inharmonicity, the deviation of overtone frequencies from exact integer multiples of the fundamental, introduces subtle distortions that further modify timbre, particularly in instruments like pianos where stretched strings produce slightly sharp higher partials. A central acoustic correlate of timbre is the spectral centroid, defined as the weighted average frequency of the spectrum, where weights are the amplitudes of the frequency components. This measure quantifies the "center of gravity" of the spectral energy distribution and is strongly linked to perceptual dimensions of timbre. For example, sounds with a higher spectral centroid, indicating greater energy in upper frequencies, are perceived as brighter, while lower centroids evoke mellower qualities. In perceptual studies, brightness emerges as a robust, unitary dimension of timbre, primarily driven by spectral cues like the centroid rather than categorical knowledge of the sound source. Instrumental examples illustrate these principles vividly. When a clarinet and a violin play the same note, their timbres differ markedly: the clarinet's closed cylindrical bore favors odd-numbered harmonics (e.g., the fundamental, third, fifth), producing a reedy, hollow tone, whereas the violin's open string vibration yields a fuller spectrum with stronger even harmonics, resulting in a smoother, more brilliant quality. Such differences in harmonic emphasis allow listeners to distinguish sources rapidly, often within 60 milliseconds. Perceptually, the prominence of higher harmonics enhances the sensation of brightness, a key timbral quality that influences emotional and aesthetic responses to music. Instruments like the trumpet exhibit high brightness due to concentrated energy in upper partials, contrasting with the bassoon's darker timbre from lower spectral emphasis. This perception scales with the raw spectral centroid rather than adjustments relative to the fundamental frequency. Culturally, timbre plays a role in systems like the Hornbostel-Sachs classification, which organizes musical instruments into categories—idiophones, membranophones, chordophones, aerophones, and electrophones—based on the vibrating medium that generates sound, thereby grouping timbres by production mechanism. For instance, idiophones (e.g., xylophones) often yield bright, metallic resonances from solid body vibration, while membranophones (e.g., drums) produce warmer, indefinite pitches through membrane oscillation. This 1914 ethnomusicological framework underscores how timbre reflects both acoustic physics and cultural instrument design.
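The spectral centroid can be computed from a discrete Fourier transform, as in the minimal Python sketch below; the sawtooth-like test tone with 1/k harmonic amplitudes is an illustrative signal, not one from the text.

```python
import numpy as np

fs = 48_000
t = np.arange(0, 0.5, 1 / fs)
# Harmonic complex on a 220 Hz fundamental with amplitudes falling as 1/k
signal = sum((1.0 / k) * np.sin(2 * np.pi * 220 * k * t) for k in range(1, 9))

mags = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
centroid = np.sum(freqs * mags) / np.sum(mags)   # amplitude-weighted mean frequency
print(centroid)  # several hundred Hz; boosting upper harmonics raises it ("brighter")
```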

Duration

The temporal resolution of the human auditory system enables detection of brief silent gaps in ongoing sounds, with minimum detectable gaps as short as a few milliseconds for broadband noise, lengthening to roughly 10 to 20 ms depending on stimulus conditions such as noise type and frequency content. This acuity is crucial for parsing auditory streams and maintaining perceptual continuity. In reverberant environments, the precedence effect further enhances this resolution by suppressing the perception of echoes that arrive within approximately 5 to 20 ms after the direct sound, allowing listeners to localize the primary source accurately while ignoring subsequent reflections.

Duration discrimination in hearing follows a logarithmic scale, akin to pitch perception, governed by Weber's law: the just noticeable difference in duration is proportional to the reference duration itself. For example, the relative precision in distinguishing durations improves with longer stimuli but remains scaled logarithmically, facilitating efficient processing of temporal extents from milliseconds to seconds. This perceptual scaling ensures that small proportional changes are detectable across a wide range of sound lengths. The onset and offset characteristics of a sound significantly shape its temporal perception, particularly through rise time—the duration over which amplitude increases from silence to peak. Shorter rise times, often below 10 ms, produce a perceived sharpness in the attack phase, contributing to the impression of abruptness or incisiveness in the sound's initiation. Conversely, gradual offsets can extend the sense of continuity, influencing how the sound's termination is integrated into the overall temporal structure.

In room acoustics, reverberation time (RT60) quantifies the persistence of sound after the source ceases, defined as the time for the sound pressure level to decay by 60 dB. This metric is calculated using Sabine's formula:

\text{RT}_{60} = \frac{0.161 V}{A}

where V is the room volume in cubic meters and A is the total sound absorption (in sabins, equivalent to the effective absorbing area). Optimal RT60 values vary by application, from roughly 0.3 to 0.8 seconds for speech-oriented rooms to around 1.5 to 2.5 seconds for concert halls, balancing clarity and warmth.

Psychologically, the duration of a sound profoundly affects its categorization: brief events under 200 ms are typically perceived as impulses or transients due to incomplete temporal integration in the auditory pathway, evoking a sense of punctuality or impact. In contrast, durations exceeding 200 ms allow for sustained perception as tones, enabling fuller loudness summation and pitch recognition, which underscores the role of length in distinguishing impulsive from tonal auditory experiences.
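Sabine's formula above is straightforward to apply. In the minimal sketch below, the room dimensions and total absorption are illustrative values, not taken from the text.

```python
def rt60(volume_m3, absorption_sabins):
    """Sabine's formula: RT60 = 0.161 * V / A."""
    return 0.161 * volume_m3 / absorption_sabins

# A 10 m x 15 m x 6 m room with 150 sabins of total absorption:
print(rt60(10 * 15 * 6, 150.0))  # ~0.97 s for this example room
```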

Texture

In auditory perception, texture refers to the perceptual complexity arising from the interaction of multiple simultaneous or overlapping sounds, influencing how listeners organize and interpret the auditory scene. This complexity emerges from the density, independence, and grouping of sound elements, distinct from the qualities of isolated sounds. Musical textures are commonly classified into monophonic, homophonic, and polyphonic types based on the number and interrelation of independent lines. Monophonic texture features a single melodic line without accompaniment, creating a sparse, focused auditory experience. Homophonic texture involves a primary melody supported by chordal accompaniment, where subsidiary elements move in rhythmic unison with the main line, fostering a sense of unity. Polyphonic texture, by contrast, comprises multiple independent melodic lines that interweave, producing a richer, more intricate perceptual layering. Perceptual fusion occurs when concurrent sounds cohere into a single auditory object, often driven by harmonicity—where frequency components align in integer ratios—and synchronized timing of onsets and offsets. Conversely, segregation arises when sounds are perceived as distinct streams, facilitated by deviations from harmonicity or temporal asynchrony, allowing listeners to parse overlapping elements. These processes enable the differentiation of layered sounds in complex environments. Density and layering contribute to texture by varying the number and prominence of auditory streams, with high density in orchestral settings creating a thick, immersive fabric through stratified instrumental groups, while solo performances yield a thinner, more transparent perception. In orchestration, layering exploits perceptual stratification to separate foreground melodies from background harmonies, enhancing clarity amid multiplicity. Auditory scene analysis governs texture through Gestalt principles, such as common fate, where sounds exhibiting correlated changes in amplitude or frequency are grouped together as a unified entity. This principle aids in organizing polyphonic streams by binding elements with shared trajectories, contrasting with the diffuse grouping in stochastic noise fields. Examples illustrate these dynamics: choral music often employs polyphonic texture, where independent vocal lines fuse or segregate based on harmonic alignment, evoking a collective yet distinct ensemble. In noise fields, such as crowds or wind, texture manifests as a dense, non-periodic superposition of events, perceived via statistical regularities rather than discrete lines, leading to a homogeneous auditory backdrop. The timbre of individual components may subtly influence overall texture by aiding stream identification.

Spatial Localization

Spatial localization of sound sources is a critical aspect of auditory perception, enabling humans to determine the direction and distance of sounds in three-dimensional space. This process primarily relies on binaural cues, which arise from the separation of the two ears by the head, and monaural cues, which stem from the filtering effects of the head, torso, and pinnae. Binaural cues include the interaural time difference (ITD) and interaural level difference (ILD), while monaural cues involve spectral modifications captured by the head-related transfer function (HRTF). These mechanisms allow for precise localization, typically within a few degrees of accuracy for frontal sound sources in humans. The interaural time difference (ITD) is the primary cue for localizing low-frequency sounds, where the sound wave reaches one ear before the other due to the path length difference caused by head size. For humans, maximum ITDs reach approximately 700 μs, corresponding to sounds originating from the azimuthal extremes (around ±90 degrees). This cue is most effective for frequencies below about 1.5 kHz, as higher frequencies introduce phase ambiguities that degrade ITD utility. In contrast, the interaural level difference (ILD) dominates for high-frequency sounds above 1.5 kHz, resulting from the acoustic shadow cast by the head, which attenuates sound intensity at the far ear by up to 20 dB or more for lateral sources. The duplex theory, first proposed by Lord Rayleigh in 1907, elegantly explains this frequency-dependent reliance on ITD for timing and ILD for intensity, providing a foundational framework for binaural processing that remains valid today. Monaural cues, particularly those encoded in the HRTF, are essential for vertical (elevation) localization and fine-tuning azimuthal judgments. The HRTF describes the direction-dependent filtering of sound by the listener's anatomy, introducing spectral notches and peaks that vary with source elevation; for instance, the pinna's convoluted shape creates frequency-specific resonances around 5-10 kHz that shift with angle, allowing the auditory system to infer elevation without binaural disparities. These spectral cues from the pinna are particularly vital when only one ear receives the sound, such as for sources near the median plane. Experiments using filtered noise bursts have demonstrated that listeners can achieve elevation accuracies of about 10-15 degrees when spectral cues are preserved in the HRTF. Distance perception complements directional localization through a combination of intensity-based and environmental cues. As sound propagates, its intensity decreases according to the inverse square law, providing a relative cue for familiar sources, though absolute distance estimation is less precise without context (errors often exceed 20-30% indoors). High-frequency components attenuate more rapidly due to atmospheric absorption, shifting the spectrum toward lower frequencies for distant sources and aiding perceptual scaling. Room reflections further enhance distance judgments via the direct-to-reverberant energy ratio; closer sources exhibit higher direct sound relative to echoes, while distant ones blend more with reverberation, improving estimation accuracy in enclosed spaces by up to 50% compared to anechoic conditions. In virtual audio reproduction, such as through headphones, spatial localization faces challenges in achieving sound externalization—the perception of sources outside the head rather than internalized. 
Non-individualized binaural rendering often results in in-head localization due to mismatched HRTFs, but incorporating dynamic head movements or early reflections can enhance externalization, with studies showing up to 70% of listeners perceiving virtual sources as external when spectral and reverberant cues are optimized. This contrasts with real-space listening, where natural acoustics promote robust externalization across azimuths. Evolutionary adaptations have refined spatial localization in various species, exemplified by owls, which exhibit exceptional precision for nocturnal hunting. Barn owls (Tyto alba) possess asymmetrical ear openings and internal baffles that amplify ITDs and ILDs across a broader frequency range (up to 10 kHz), enabling localization errors as small as 1-2 degrees in the vertical plane. Masakazu Konishi's pioneering work in the 1970s revealed how the owl's inferior colliculus neurons selectively respond to these cues, integrating them via delay lines for microsecond precision—a mechanism that parallels but surpasses human capabilities, highlighting convergent evolution in auditory processing.
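Binaural time cues like the ITD described above are often approximated with simple geometric head models. The sketch below uses the classic Woodworth spherical-head formula, ITD = (r/c)(θ + sin θ), a standard simplification rather than a model described in the text; the head radius is a common textbook value.

```python
import math

c = 343.0      # speed of sound in air, m/s
r = 0.0875     # effective head radius, m (assumed textbook value)

def itd(azimuth_deg):
    """Woodworth approximation for a frontal azimuth of 0-90 degrees."""
    theta = math.radians(azimuth_deg)
    return (r / c) * (theta + math.sin(theta))

print(itd(90) * 1e6)  # ~655 microseconds for a fully lateral source,
                      # close to the ~700 microsecond maximum cited above
```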

Frequency Extremes

Infrasound

Infrasound refers to acoustic waves with frequencies below 20 Hz, which lie outside the typical range of human auditory perception. These waves have long wavelengths, exceeding 17 meters in air at standard atmospheric conditions, due to the inverse relationship between frequency and wavelength given the speed of sound. Infrasound is generated by a variety of natural and anthropogenic sources. Natural origins include geological events such as earthquakes and avalanches, as well as biological activity from large animals like elephants, whose rumbles propagate over long distances for communication. Volcanic eruptions and severe weather phenomena also produce infrasound through explosive releases of energy. Anthropogenic sources encompass industrial operations, notably wind turbines, where blade rotation creates low-frequency pressure fluctuations. Due to their low frequencies, infrasound waves experience minimal attenuation in the atmosphere, particularly when guided by stratospheric ducts formed by temperature and wind gradients. This enables propagation over vast distances; for instance, infrasound from volcanic eruptions has been detected thousands of kilometers away, aiding global monitoring efforts. Detection of infrasound relies on specialized instrumentation rather than standard audio microphones, as these waves induce subtle pressure variations. Microbarometers measure atmospheric pressure changes with high sensitivity, while differential pressure sensors or adapted microphones capture the signals in arrays for precise localization. Humans may perceive infrasound non-auditorily as physical sensations, such as vibrations, pressure on the body, or unease, particularly at higher amplitudes. Exposure to infrasound has been associated with physiological effects in some studies, including reports of nausea, anxiety, dizziness, and fatigue, often linked to sources like wind turbines. However, controlled experiments, such as those simulating prolonged exposure, have found no significant impacts on sleep, mood, or vital signs in participants. These effects remain debated, with annoyance and expectation playing roles in subjective responses. Infrasound monitoring has practical applications in environmental and geophysical sciences. It enables tracking of wildlife behaviors, such as elephant migrations via their low-frequency calls, and detection of atmospheric phenomena for weather forecasting by analyzing microbaroms from ocean waves. Additionally, infrasound arrays support hazard assessment, including early warnings for volcanic activity and avalanches. Emerging research as of 2025 has also explored therapeutic applications, such as using infrasound (1–20 Hz) to modulate wound healing processes.

Ultrasound

Ultrasound refers to acoustic waves with frequencies greater than 20 kHz, exceeding the upper limit of human hearing. These waves exhibit short wavelengths, which allow for high spatial resolution in applications; for instance, at 100 kHz in air, focused ultrasound can achieve a resolution of approximately 1.7 mm. Ultrasound is commonly generated using piezoelectric transducers, which convert electrical energy into mechanical vibrations through the converse piezoelectric effect in materials like lead zirconate titanate (PZT). Natural sources include biological systems, such as bat echolocation, where certain species produce pulses up to 212 kHz for navigation and prey detection. In propagation, ultrasound experiences higher absorption in biological tissues compared to audible sound, with the absorption coefficient α approximately proportional to the square of the frequency (α ∝ f²) due to viscous and relaxation losses. This attenuation limits penetration depth but enables precise targeting; focusing is achieved using acoustic lenses made from materials with differing sound speeds, such as silicone or epoxy, to concentrate energy into beams. Key applications of ultrasound include medical imaging, where Doppler techniques measure blood flow velocity by detecting frequency shifts in reflected waves from moving red blood cells. It is also used in ultrasonic cleaning, where high-intensity waves create cavitation bubbles that implode to remove contaminants from surfaces, and in ranging systems like sonar for distance measurement in underwater navigation. Recent advancements as of 2025 include AI-assisted imaging for improved diagnostics and portable handheld devices enhancing accessibility in point-of-care settings. Biological interactions with ultrasound can produce thermal effects through absorption-induced heating and mechanical effects via cavitation, where gas bubbles form, grow, and collapse under pressure variations. Safety guidelines for diagnostic applications limit exposure using the mechanical index (MI), which quantifies cavitation risk, recommending MI < 1.9 to minimize non-thermal bioeffects.
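The f² scaling of absorption noted above implies sharply reduced penetration at higher frequencies. The minimal sketch below illustrates this with an assumed reference coefficient; both the coefficient and the depths are illustrative values, not figures from the text.

```python
import math

alpha_ref = 0.05   # absorption coefficient in Np/m at f_ref (assumed value)
f_ref = 1e6        # reference frequency, Hz

def alpha(f):
    """Classical scaling: absorption grows with the square of frequency."""
    return alpha_ref * (f / f_ref) ** 2

def amplitude_ratio(f, depth_m):
    """Remaining amplitude after propagating depth_m: exp(-alpha * x)."""
    return math.exp(-alpha(f) * depth_m)

print(amplitude_ratio(1e6, 0.1))  # ~0.995 after 10 cm at 1 MHz
print(amplitude_ratio(5e6, 0.1))  # ~0.88 after 10 cm at 5 MHz: 25x the absorption
```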

    Oct 24, 2023 · Hair cell depolarization sends an impulse toward the auditory nerve. Sound energy is thus converted to electrical energy and nerve signals ...
  70. [70]
    The Developing Concept of Tonotopic Organization of the Inner Ear
    Feb 4, 2020 · This study aims to document the historical conceptualization of the inner ear as the anatomical location for the appreciation of sound at a continuum of ...The Classical World Of... · Seventeenth Century · Twentieth Century
  71. [71]
    Presbycusis - StatPearls - NCBI Bookshelf - NIH
    [4] Presbycusis is a type of sensorineural hearing loss with the involvement of the inner ear or neurologic pathways that form connections to the auditory ...Missing: process tonotopic<|control11|><|separator|>
  72. [72]
    Current concepts in age-related hearing loss: Epidemiology and ...
    Age-related hearing loss (AHL), also known as presbycusis, is a universal feature of mammalian aging and is characterized by a decline of auditory function.
  73. [73]
    Music perception, pitch, and the auditory system - PMC - NIH
    Music involves the manipulation of sound. Our perception of music is thus influenced by how the auditory system encodes and retains acoustic information.
  74. [74]
    [PDF] 13. Psychoacoustics - UC Davis Mathematics
    Psychoacoustics is concerned with the relationships between the physical characteristics of sounds and their perceptual attributes. This chapter describes:.
  75. [75]
    The Mel Scale - jstor
    mel scale in the revised version presented in the 1940 paper. In the first technique a tone of fixed frequency was presented to listeners and they were required ...Missing: psychoacoustics | Show results with:psychoacoustics
  76. [76]
    The Bark Frequency Scale - Stanford CCRMA
    Based on the results of many psychoacoustic experiments, the Bark scale is defined so that the critical bands of human hearing each have a width of one Bark.
  77. [77]
    [PDF] 9. SOUND AND NOISE Prepared by E. M. Roth, M. D., Lovelace ...
    in a tone of I000. Hz or less. Beyond this frequency, the just-noticeable-difference remains fairly constant at 0. 3 of one percent of the tone's frequency.
  78. [78]
    13 Pitch - MIT Press Direct
    Feb 12, 2024 · The perception of tone simultaneities depends on the perception of individual tones. The pitch of a pure tone or partial is spectral, whereas ...<|separator|>
  79. [79]
  80. [80]
    Phon
    psychoacoustics. Phon. A unit used to describe the Loudness Level of a given sound or noise. The system is based on Equal Loudness Contours, where 0 phons at ...
  81. [81]
    Sones phons loudness decibel sone 0.2 - 0.3 - 0.4 - 0.5 - 0.6 define ...
    In acoustics, loudness is a subjective measure of the sound pressure. One sone is equivalent to 40 phons, which is defined as the loudness of a 1 kHz tone at ...
  82. [82]
    [PDF] Loudness, Its Definition, Measurement and Calculation
    Loudness, Its Definition, Measurement and Calculation*. By HARVEY FLETCHER and W. A. MUNSON. An empirical formula for calculating the loudness of any steady ...
  83. [83]
    ISO 226:2003 - Acoustics — Normal equal-loudness-level contours
    This International Standard specifies combinations of sound pressure levels and frequencies of pure continuous tones which are perceived as equally loud by ...Missing: 226:2007 | Show results with:226:2007
  84. [84]
    [PDF] On the Theory of Scales of Measurement
    ments at a concrete example of a sensory scale. This was the Sone scale of loudness (S. S. Stevens and. H. Davis. Hearing. New York: Wiley, 1938), which.
  85. [85]
    [PDF] The Human Ear - Hearing, Sound Intensity and Loudness Levels
    f nv L n = ⇒ Boosts our hearing sensitivity in the f ~ 2-5 KHz frequency range!!! ... However, the sensitivity of human hearing is frequency dependent over the ...
  86. [86]
    A Review of the Neurobiological Mechanisms that Distinguish ...
    Apr 1, 2022 · Loudness recruitment means that the affected ear perceives an abnormally rapid increase in loudness as the sound intensity increases [5, 6].
  87. [87]
    Temporal integration of loudness as a function of level - PubMed - NIH
    The amount of temporal integration, defined as the level difference between equally loud 5- and 200-ms stimuli, varies nonmonotonically with level.
  88. [88]
    Temporal Effects in Simultaneous Masking and Loudness
    One purpose was to investigate the difference between threshold curves for tones masked by bands of noise and the corresponding displacement curves obtained ...
  89. [89]
    Sound Quality or Timbre - HyperPhysics
    Timbre is the sound quality that distinguishes sounds with the same pitch and loudness, mainly determined by harmonic content, attack, decay, and vibrato.Missing: inharmonicity | Show results with:inharmonicity
  90. [90]
  91. [91]
  92. [92]
    Chapter 3.3 Harmonic Series I: Timbre and Octaves"
    ### Summary of Harmonics in Clarinet vs. Other Instruments for Timbre
  93. [93]
    Temporal Ordering and Auditory Resolution in Individuals with ...
    The mean threshold in the random gap detection test was of 14.1 ms. A comparison with the criteria established for normal subjects without peripheral hearing ...
  94. [94]
    The Precedence Effect in Sound Localization - PMC - PubMed Central
    In this paradigm, intended to simulate a source signal and single reflection, a human subject is seated equidistant from two loudspeakers in a sound-treated ...
  95. [95]
    [PDF] effective pitch and Weber-Fechner law in discrimination of duration ...
    The discrimination threshold or difference limen, also known as the just noticeable difference, in the perception of physical stimuli by humans is in many cases ...
  96. [96]
    Articulation and Dynamics Influence the Perceptual Attack Time of ...
    The attack phase of a sound ends at its maximum intensity (pmax, see Figure 1). The interval between PhOT and pmax is called onset rise time. PAT and POT are ...
  97. [97]
    Reverberation Time - McGill University
    ... formula for reverberation time as RT60 = 0.161 $V/(A + mV)$ , where $m$ is a constant that varies with air temperature, humidity, and frequency. More ...
  98. [98]
    Impulse Noise and Risk Criteria
    The human auditory system will provide full audibility when duration of a sound exceeds 200 ms at constant level. For shorter duration of sound less loudness is ...
  99. [99]
    Auditory Scene Analysis: The Perceptual Organization of Sound
    Auditory Scene Analysis addresses the problem of hearing complex auditory environments, using a series of creative analogies to describe the process requir.
  100. [100]
    Pitch, Harmonicity and Concurrent Sound Segregation
    Perceptual grouping affects pitch judgments across time and frequency. ... Spectral pattern and the perceptual fusion of harmonics.I. The role of ...
  101. [101]
    Texture – Open Music Theory - VIVA's Pressbooks
    There are many types of musical texture, but the four main categories used by music scholars are monophony, heterophony, homophony, and polyphony.
  102. [102]
    [PDF] Goodchild & McAdams 1 Perceptual Processes in Orchestration to ...
    orchestral layering. This concept of stratification involves two or more layers of musical material, separated into more and less prominent strands. At ...
  103. [103]
    MTO 31.1: Schwitzgebel, Texture and Teleology in Post-Millennial Pop
    This study highlights the role of texture as a source of local, perceptual input informing the real-time experience of musical form and climax, which includes ...
  104. [104]
    Sound texture perception via statistics of the auditory periphery
    The results suggest that sound texture perception is mediated by relatively simple statistics of early auditory representations, presumably computed by ...
  105. [105]
    Mechanisms of Sound Localization in Mammals
    Jul 1, 2010 · Many mammals, including humans, make use of the two binaural cues, ITD and ILD, to perform sound localization with an accuracy of just a few ...
  106. [106]
    Sound source localization - ScienceDirect.com
    The duplex theory of directional hearing developed by Lord Rayleigh in 1907 was the first to analyze sound source localization in terms of interaural ...
  107. [107]
    Auditory localization: a comprehensive practical review - Frontiers
    This is known as the Duplex Theory of binaural hearing. Stevens and Newman found that localization performances are best for frequencies below about 1.5 kHz ...Missing: seminal | Show results with:seminal
  108. [108]
    Sensitivity analysis of pinna morphology on head-related transfer ...
    Apr 12, 2021 · A head-related transfer function (HRTF) is a direction-dependent filter that describes the acoustic path from a sound source to the two ears. In ...
  109. [109]
    Contribution of spectral cues to human sound localization
    Aug 6, 2025 · The pointer was generated by filtering a 100-ms harmonic complex with equalized head-related transfer functions (HRTFs). Listeners controlled ...
  110. [110]
    Sound Spectrum Influences Auditory Distance Perception of Sound ...
    Jun 22, 2017 · For long distances, as a sound wave propagates through the atmosphere, high-frequency components become more attenuated than low-frequency ones ...
  111. [111]
    [PDF] Department of Music - Stanford CCRMA
    Nov 13, 1982 · The relationship between intensity and reverberation cues has been clarified to the extent that it is now possible to suggest that a hierarchy ...
  112. [112]
    Best Distance Perception in Virtual Audiovisual Environment - PMC
    Jun 28, 2022 · When the sound is far from listeners, reflected sound plays the main role in distance perception, while direct sound still obeys the inverse- ...
  113. [113]
    Sound Externalization: A Review of Recent Research - PMC
    Sep 11, 2020 · Sound externalization, or the perception that a sound source is outside of the head, is an intriguing phenomenon that has long interested psychoacousticians.
  114. [114]
    On the externalization of sound sources with headphones without ...
    Oct 10, 2019 · Sounds presented over headphones are generally perceived as internalized, i.e., originating from a source inside the head.
  115. [115]
    The representation of sound localization cues in the barn owl's ...
    Jul 11, 2012 · The barn owl is a well-known model system for studying auditory processing and sound localization. This article reviews the morphological and functional ...
  116. [116]
    Study of sound localization by owls and its relevance to humans
    This article reviews several lines of evidence that similar neural mechanisms must underlie the perception of sound locations in humans and owls.Missing: evolutionary adaptations
  117. [117]
    [PDF] Atmospheric Infrasound - atmo.arizona.edu
    The nominal range of human hearing extends from about 20 Hz to 20 000 Hz, so the inaudible sound waves with frequencies below 20 Hz were dubbed infrasound, ...Missing: definition | Show results with:definition
  118. [118]
    Infrasound for volcano monitoring - USGS Publications Warehouse
    Oct 4, 2024 · At local (<15 kilometers [km]) to regional (15–250 km) distances from volcanoes, arrays of infrasound sensors are commonly deployed to detect ...
  119. [119]
    Understanding Sound - Natural Sounds (U.S. National Park Service)
    Jul 3, 2018 · Frequency, sometimes referred to as pitch, is the number of times per second that a sound pressure wave repeats itself.
  120. [120]
    Wind turbine infrasound: Phenomenology and effect on people
    Natural sources include the eruption of volcanoes, sound produced by large animals (such as whales, elephants and rhinoceroses), thunder, avalanches and ocean ...
  121. [121]
    Assessing and optimizing the performance of infrasound networks to ...
    Infrasound can propagate over long distances without significant attenuation through atmospheric waveguides thanks to specific temperature and wind gradients.
  122. [122]
    Long‐Range Multi‐Year Infrasonic Detection of Eruptive Activity at ...
    Feb 21, 2022 · Infrasound from explosive volcanism can propagate hundreds to thousands of kilometers in atmospheric waveguides under favorable stratospheric ...
  123. [123]
    Infrasound monitoring - CTBTO
    Infrasonic waves cause minute changes in the atmospheric pressure which are measured by microbarometers. Infrasound has the ability to cover long distances with ...Missing: microphones perception
  124. [124]
    [PDF] Evaluation of Low Frequency Noise, Infrasound, and Health ... - CDC
    Nausea. Tinnitus. Difficulty with concentration. Ear pressure. Lightheadedness. Anxiety. Headache ... Physiological and psychological effects of infrasound on ...
  125. [125]
    The Health Effects of 72 Hours of Simulated Wind Turbine Infrasound
    Mar 22, 2023 · The 19 symptoms measured were: Headaches, Ringing in the ear, Itchy Skin, Blurred Vision, Dizziness, Racing Heart, Nausea, Tiredness, Feeling ...
  126. [126]
    Evaluation of Low-Frequency Noise, Infrasound, and Health ... - NIH
    Studies have shown noise-related annoyance as one of the main effects from exposure to low-frequency sound and infrasound. In addition, some case reports ...
  127. [127]
    Ultrasound
    Ultrasound probes, called transducers, produce sound waves that have frequencies above the threshold of human hearing (above 20KHz), but most transducers in ...
  128. [128]
    [PDF] Focusing of longitudinal ultrasonic waves in air with an aperiodic flat ...
    The lens design was optimized to operate at 100 kHz with a focal length of 6.7 mm and spatial resolution of 1.7 mm. The approach of computer simulation ...
  129. [129]
    Recent Advancements in Ultrasound Transducer - PubMed Central
    An ultrasound transducer is indispensable for various ultrasonic biomedical applications. The traditional ultrasound device is a type of a piezoelectric ...
  130. [130]
    Bat echolocation calls: adaptation and convergent evolution - PMC
    Bat echolocation calls vary in their dominant frequency approximately between 11 kHz (e.g. Euderma maculatum; Fullard & Dawson 1997) and 212 kHz (Cloeotis ...
  131. [131]
  132. [132]
    Sonography Doppler Flow Imaging Instrumentation - StatPearls - NCBI
    May 1, 2023 · Color Doppler is useful to interrogate organs for the presence or absence of blood flow and quickly investigate large areas for turbulent flow.Continuing Education Activity · Introduction · Issues of Concern
  133. [133]
    Ultrasound Imaging - FDA
    Sep 19, 2024 · Ultrasound waves can heat the tissues slightly. In some cases, it can also produce small pockets of gas in body fluids or tissues (cavitation).
  134. [134]
    A Review on Biological Effects of Ultrasounds: Key Messages for ...
    Feb 23, 2023 · The mechanical index (MI) is the indicator of potential non-thermal mechanical effects of cavitation determined by the negative pressure peak ...