Auditory system

The auditory system is the sensory apparatus responsible for detecting and processing sound waves, converting mechanical vibrations into neural signals that enable hearing and auditory perception. It encompasses peripheral structures in the ear—the outer, middle, and inner ear—and central neural pathways extending to the brainstem and auditory cortex, allowing humans to perceive frequencies typically ranging from 20 Hz to 20,000 Hz with optimal sensitivity around 3-4 kHz. This system not only facilitates communication and environmental awareness but also integrates with balance and other sensory functions through shared components.

The outer ear, comprising the pinna (auricle) and external auditory canal, collects and funnels sound waves toward the eardrum (tympanic membrane), directing airborne vibrations into the middle ear. These vibrations cause the tympanic membrane to oscillate, transmitting motion to the middle ear's ossicles—the malleus, incus, and stapes—which amplify the signal through mechanical leverage and the area ratio between the tympanic membrane and the oval window of the cochlea. The Eustachian tube connects the middle ear to the nasopharynx, equalizing air pressure to optimize sound transmission. In the inner ear, the cochlea—a fluid-filled, coiled structure—performs mechanotransduction: sound-induced pressure waves in the perilymph displace the basilar membrane, bending stereocilia on hair cells within the organ of Corti. This mechanical deflection opens ion channels, leading to depolarization of inner hair cells, which synapse with 95% of afferent auditory nerve fibers and release neurotransmitters to generate action potentials in the cochlear division of the eighth cranial nerve (vestibulocochlear nerve). The cochlea exhibits a tonotopic organization, with high frequencies processed at its base and low frequencies at the apex, enabling frequency discrimination.

Central processing begins as auditory nerve fibers project to the cochlear nuclei in the brainstem, then ascend via the superior olivary complex, lateral lemniscus, inferior colliculus, and medial geniculate nucleus of the thalamus to the primary auditory cortex in the temporal lobe. Binaural interactions in the superior olivary complex facilitate sound localization through interaural time and intensity differences, while higher cortical areas support complex functions like speech perception and auditory scene analysis.
Disruptions in this pathway can lead to hearing loss or auditory processing disorders, underscoring the system's vulnerability to noise, aging, and pathology.

Overview

Definition and Functions

The auditory system is the sensory system responsible for hearing, comprising peripheral structures that capture sound waves from the environment and transduce them into mechanical vibrations, and central neural pathways that process the resulting electrical signals for interpretation by the brain. This system converts a wide range of weak mechanical signals—arising from pressure changes in air—into complex patterns of neural activity that enable the interpretation of sounds. Overall, it facilitates the awareness of auditory stimuli and their integration into meaningful experiences, such as recognizing environmental cues or human speech. The primary functions of the auditory system encompass sound detection, discrimination of frequency (pitch) and intensity (loudness), localization of sound sources, recognition of speech and other complex auditory patterns, and coordination with the vestibular system to support balance and spatial orientation. Sound detection involves identifying acoustic stimuli within the audible range, while frequency and intensity discrimination allow for nuanced perception, such as distinguishing musical notes or volume levels. Sound localization relies on interaural time and intensity differences to pinpoint sources, and speech recognition processes phonetic elements amid noise for communication. Through shared inner ear components and the vestibulocochlear nerve, the system contributes to balance by integrating auditory cues with vestibular signals for postural stability. Sound propagation in the auditory system begins with airborne pressure waves entering the outer ear, where they are funneled and amplified before reaching fluid-filled structures that generate traveling waves along the basilar membrane, ultimately stimulating hair cells to produce neural impulses. These waves are characterized by frequency, measured in hertz (Hz) as cycles per second and determining pitch, with human sensitivity spanning approximately 20 Hz to 20,000 Hz.
Intensity, gauged in decibels sound pressure level (dB SPL) relative to a reference of 20 micropascals, quantifies loudness, enabling the system to handle a dynamic range of about 130 dB for everyday sounds. Beyond normal hearing, the auditory system also gives rise to phenomena such as tinnitus—a phantom auditory sensation, such as ringing or buzzing, generated internally without external sound input, often linked to cochlear damage or neural hyperactivity. It also mediates reflexive responses, like the acoustic startle reflex, an involuntary reaction triggered by abrupt, intense noises via brainstem pathways to protect against potential threats.
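The decibel scale described above is logarithmic, so a sketch of the arithmetic may help: sound pressure level is defined as 20·log10(p/p0) with p0 = 20 µPa, and a 130 dB dynamic range corresponds to pressure ratios in the millions.

```python
import math

P_REF = 20e-6  # reference pressure: 20 micropascals (0 dB SPL)

def spl_db(pressure_pa: float) -> float:
    """Sound pressure level in dB SPL for a given RMS pressure in pascals."""
    return 20 * math.log10(pressure_pa / P_REF)

def pressure_ratio(db: float) -> float:
    """Pressure ratio corresponding to a level difference in dB."""
    return 10 ** (db / 20)

print(spl_db(20e-6))        # 0.0 -> the threshold-of-hearing reference
print(spl_db(1.0))          # ~94 dB SPL for a 1 Pa sound
print(pressure_ratio(130))  # ~3.16 million: span of the ~130 dB dynamic range
```

This illustrates why a logarithmic unit is used: the ear's usable range covers more than six orders of magnitude in pressure.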

Evolutionary Context

The auditory system traces its origins to early vertebrates, where simple mechanoreception of vibrations in water evolved through structures like the lateral line system in fish, which detected hydrodynamic stimuli and served as a precursor to more specialized hearing organs. This system began transitioning during the Devonian period, approximately 360 million years ago, as sarcopterygian fish—ancestors to tetrapods—developed features such as the basilar papilla for enhanced sensitivity to pressure waves. By the late Devonian, early tetrapod fossils exhibit ear architecture marking the shift from aquatic vibration detection to terrestrial sound perception. Key adaptations during this transition included the emergence of air conduction mechanisms, with the hyomandibular bone of fish repurposed into the stapes (columella) in tetrapods to transmit airborne sounds to the inner ear. In the lineage leading to mammals, the middle ear further specialized: the malleus and incus derived from reptilian jaw elements—the articular and quadrate bones—connected via Meckel's cartilage, which ossified and detached during the Jurassic to Cretaceous periods, around 200 to 66 million years ago, freeing these structures for auditory function while a new dentary-squamosal jaw joint emerged. This reconfiguration improved impedance matching between air and the fluid-filled inner ear. Comparative anatomy reveals variations across amniotes; birds and non-mammalian reptiles retained a single ossicle (the columella, homologous to the stapes) for efficient sound transmission suited to their lightweight skulls, whereas mammals' three-ossicle chain enabled finer tuning. The mammalian cochlea, evolving through elongation and coiling after the mammalian lineage diverged around 220 million years ago, further adapted for high-frequency hearing by tonotopically organizing hair cells along its length. Evolutionary pressures driving these changes centered on survival advantages in diverse environments, such as predation avoidance through rapid sound localization and enhanced communication for social coordination in terrestrial and aerial niches.
Specialized traits like echolocation in bats and dolphins exemplify convergent adaptations under selective pressures for prey detection and navigation; in these groups, auditory genes such as Prestin underwent convergent molecular evolution to amplify high-frequency echoes, emerging independently around 50-60 million years ago in response to nocturnal foraging and aquatic hunting demands.

Peripheral Anatomy

Outer Ear

The outer ear, consisting of the auricle (pinna) and external auditory canal, serves as the initial interface for sound capture in the auditory system. The auricle is an elastic cartilaginous framework covered by perichondrium and skin, featuring prominent ridges such as the helix, antihelix, concha, tragus, and lobule, which collectively form a funnel-like structure to gather ambient sound waves. The external auditory canal extends from the concha to the tympanic membrane, forming a curved tube approximately 2.5 cm long and 0.7 cm in diameter, with its lateral third cartilaginous and medial two-thirds bony; it is lined with skin containing ceruminous and sebaceous glands that secrete cerumen (earwax). The primary functions of the outer ear involve sound collection, localization, and preliminary amplification while providing mechanical protection. The pinna's irregular shape and folds act as an acoustic filter, altering the frequency spectrum of incoming sounds to encode directional cues, particularly for vertical (elevation) localization through spectral notches at frequencies above 5 kHz, complementing the interaural level differences that support horizontal azimuth judgments at frequencies over 1.6 kHz. The external auditory canal enhances sound pressure through resonance, providing a gain of approximately 10-15 dB in the 2-4 kHz range—peaking near 3 kHz, which aligns with key speech frequencies—before directing the waves to the tympanic membrane. Cerumen production in the canal traps dust, bacteria, and insects, while the canal's S-shaped curvature and hair follicles further shield the delicate tympanic membrane from debris and trauma. Pathologies unique to the outer ear often disrupt these protective and acoustic roles. Cerumen impaction occurs when earwax accumulates and hardens, obstructing the canal and causing hearing loss, earache, or ear fullness, typically managed by irrigation or manual removal.
Otitis externa, or swimmer's ear, involves inflammation of the canal skin due to moisture, trauma, or infection (bacterial pathogens such as Pseudomonas aeruginosa, or fungi), resulting in pain, itching, discharge, and swelling that impairs sound conduction.
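The canal's ~3 kHz resonance peak mentioned above follows from treating it as a tube open at the concha and closed at the eardrum, whose fundamental resonance is c/4L. A minimal sketch, assuming a 2.5 cm canal and 343 m/s for the speed of sound:

```python
def quarter_wave_resonance(canal_length_m: float, c: float = 343.0) -> float:
    """Fundamental resonance (Hz) of a tube closed at one end (the eardrum)."""
    return c / (4 * canal_length_m)

# A 2.5 cm canal resonates near 3.4 kHz, matching the observed 2-4 kHz gain peak
print(quarter_wave_resonance(0.025))  # 3430.0 Hz
```

The simple quarter-wave model slightly overestimates the measured peak (real canals are neither rigid nor uniform), but it captures why the gain is centered near 3 kHz.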

Middle Ear

The middle ear, or tympanic cavity, is an air-filled space located between the tympanic membrane and the inner ear, containing key structures that facilitate sound transmission. The tympanic membrane, also known as the eardrum, is a thin, semitransparent, cone-shaped structure that separates the external ear canal from the tympanic cavity and vibrates in response to incoming sound waves. Attached to the medial surface of the tympanic membrane is the first of three tiny auditory ossicles: the malleus (hammer), which connects to the incus (anvil); the incus in turn articulates with the stapes (stirrup). These ossicles form a chain that mechanically couples the vibrations of the tympanic membrane to the oval window of the inner ear. The Eustachian tube, a narrow passage connecting the tympanic cavity to the nasopharynx, allows for pressure equalization between the middle ear and ambient air, preventing inward or outward bulging of the tympanic membrane that could impair vibration. The primary function of the middle ear is to transmit and amplify sound vibrations from the air medium of the external ear to the fluid-filled cochlea of the inner ear, overcoming the impedance mismatch that would otherwise result in significant energy loss. This is achieved through two main mechanisms: the area ratio between the tympanic membrane and the stapes footplate at the oval window, approximately 17:1, which concentrates the force of vibrations, and the lever action of the ossicles, where the longer arm of the malleus and shorter arm of the incus provide additional mechanical advantage. Together, these mechanisms offset the approximately 30 dB loss due to the air-fluid boundary, enabling efficient sound transfer to the cochlea via the stapes footplate pressing against the oval window. The middle ear also provides protection against excessive sound intensity through the acoustic reflex, mediated by two small muscles: the stapedius, which attaches to the stapes and pulls it posteriorly to stiffen the ossicular chain, and the tensor tympani, which attaches to the malleus and tenses the tympanic membrane. These muscles contract bilaterally in response to loud sounds exceeding about 80 dB sound pressure level (SPL), attenuating transmission by 10-20 dB, particularly in the low-frequency range below 2 kHz, to safeguard the delicate inner ear structures.
The auditory ossicles are the smallest bones in the human body, with the stapes measuring just 2.5-3.5 mm in length, highlighting their specialized role in precise mechanical conduction. Evolutionarily, the malleus and incus derive from reptilian jaw bones—the articular and quadrate, respectively—which detached from the jaw joint during the transition to mammalian ancestors, freeing them to form part of the middle ear while a new jaw articulation evolved between the squamosal and dentary bones.
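The two amplification mechanisms above can be quantified: pressure gain in dB is 20·log10 of the combined ratio. A short sketch, using the 17:1 area ratio from the text and a commonly cited textbook lever ratio of about 1.3:1 (an assumed value, not stated in this article):

```python
import math

def db_gain(pressure_ratio: float) -> float:
    """Pressure ratio expressed as a gain in decibels."""
    return 20 * math.log10(pressure_ratio)

area_ratio = 17.0   # tympanic membrane area : stapes footplate area (from the text)
lever_ratio = 1.3   # malleus-to-incus lever arm ratio (textbook value; assumption)

print(round(db_gain(area_ratio), 1))                # 24.6 dB from the area ratio alone
print(round(db_gain(area_ratio * lever_ratio), 1))  # 26.9 dB combined
```

The combined ~27 dB gain closely offsets the ~30 dB air-to-fluid transmission loss described above, which is the point of the middle ear's impedance-matching design.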

Inner Ear

The inner ear is housed within the petrous part of the temporal bone and consists of the bony labyrinth, a series of cavities filled with perilymph—a fluid similar in composition to cerebrospinal fluid—and the membranous labyrinth suspended within it, which is filled with endolymph, an extracellular fluid rich in potassium ions. The cochlea, the auditory portion of the inner ear, is a coiled, spiral-shaped structure approximately 35 mm in length when uncoiled, making about 2.75 turns around a central modiolus. This spiral configuration allows for the spatial organization of sound frequencies along its length, contributing to the ear's ability to distinguish pitches. Within the cochlea, the organ of Corti rests on the basilar membrane, a flexible structure that divides the cochlear duct and vibrates in response to sound-induced pressure waves. The organ of Corti contains sensory hair cells, including approximately 3,500 inner hair cells arranged in a single row that primarily transmit afferent signals, and about 12,000 outer hair cells in three rows that are modulated by efferent fibers. These hair cells feature bundles of stereocilia connected by tip links, which are critical for mechanotransduction as they gate ion channels in response to mechanical deflection. Sound transduction begins when vibrations from the stapes at the oval window create a traveling wave along the basilar membrane, with peak displacement occurring at frequency-specific locations due to the membrane's graded stiffness and width: high frequencies at the base near the oval window and low frequencies at the apex. This wave generates shear forces between the tectorial membrane and stereocilia, opening mechanotransduction (MET) channels and allowing potassium influx from the endolymph, which depolarizes the hair cells and triggers neurotransmitter release. The inner ear also includes vestibular components for balance, comprising three semicircular canals that detect rotational head movements and two otolith organs (utricle and saccule) that sense linear acceleration and gravity through hair bundle deflection by otoconia crystals.
These vestibular structures share the eighth cranial nerve (vestibulocochlear nerve) with the cochlea, integrating auditory and vestibular functions at the peripheral level. Outer hair cells provide active amplification via the cochlear amplifier mechanism, driven by the motor protein prestin, which enables rapid length changes in response to voltage shifts, enhancing basilar membrane motion and boosting auditory sensitivity by 40-60 dB for faint sounds. Neural signals from inner hair cells travel via spiral ganglion neurons in the cochlear division of the eighth nerve to the cochlear nuclei in the brainstem.
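The tonotopic base-to-apex frequency map described above is often modeled with Greenwood's frequency-position function, f = A(10^(ax) − k); a sketch using the commonly cited human parameters (A = 165.4, a = 2.1, k = 0.88, with position expressed as a fraction of cochlear length):

```python
def greenwood_freq(x: float, A: float = 165.4, a: float = 2.1,
                   k: float = 0.88) -> float:
    """Characteristic frequency (Hz) at relative distance x along the cochlea,
    from apex (x=0) to base (x=1), per Greenwood's human fit."""
    return A * (10 ** (a * x) - k)

print(round(greenwood_freq(0.0)))  # ~20 Hz at the apex (low frequencies)
print(round(greenwood_freq(0.5)))  # mid-cochlea, roughly 2 kHz
print(round(greenwood_freq(1.0)))  # ~20,700 Hz at the base (high frequencies)
```

The endpoints recover the 20 Hz to 20 kHz audible range quoted earlier, and the exponential form explains why each octave occupies a roughly constant length of basilar membrane.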

Central Auditory Pathways

Cochlear Nucleus

The cochlear nucleus is situated at the dorsolateral aspect of the brainstem at the junction between the pons and medulla, serving as the primary site of synaptic termination for auditory nerve fibers originating from the spiral ganglion neurons. It receives input from approximately 30,000–32,000 myelinated auditory nerve fibers in humans, marking the convergence point where peripheral auditory signals begin central processing. The nucleus is divided into three main subdivisions: the anteroventral cochlear nucleus (AVCN), posteroventral cochlear nucleus (PVCN), and dorsal cochlear nucleus (DCN), each exhibiting distinct laminar and morphological features while collectively preserving the tonotopic organization established in the cochlea. This tonotopic mapping ensures that neurons responding to specific frequencies are spatially segregated, facilitating efficient parallel processing of auditory information. Functionally, the cochlear nucleus initiates diverse decoding of acoustic features, with the ventral divisions (AVCN and PVCN) primarily handling temporal precision and intensity coding, while the DCN focuses on spectral analysis and feature detection. The ventral regions preserve phase-locking to stimulus timing, essential for encoding fine temporal structures like speech onsets, whereas the DCN integrates broadband spectral cues, contributing to source segregation in complex environments. Additionally, the DCN represents the first central site for multimodal integration, where auditory inputs converge with somatosensory signals from the trigeminal and dorsal column systems, potentially aiding in the perception of sound location as influenced by head and pinna movements. Key neuronal populations within the cochlear nucleus include bushy cells, predominantly in the AVCN, which maintain precise phase-locking to auditory inputs for high-fidelity timing representation; stellate (or multipolar) cells, found mainly in the ventral divisions, that generate chopper or onset responses to track amplitude modulations and transients; and fusiform (or pyramidal) cells in the DCN, which exhibit broadband inhibition and complex spectral tuning through inhibitory sidebands.
These cell types, along with octopus and granule cells, form intricate local circuits that transform the temporally precise but feature-limited auditory nerve signals into multifaceted representations of sound attributes. Outputs from these subdivisions project to higher structures, such as the superior olivary complex, to support further binaural and temporal processing.

Superior Olivary Complex

The superior olivary complex (SOC) is a group of brainstem nuclei located in the caudal pons that serves as the first site of binaural integration in the auditory pathway, receiving inputs from both ears to process spatial cues for sound localization. It consists primarily of the medial superior olive (MSO) and lateral superior olive (LSO), along with associated periolivary nuclei, and is embedded within the trapezoid body, a fiber tract that facilitates decussating connections between the cochlear nuclei. These nuclei receive bilateral excitatory projections from spherical bushy cells in the ventral cochlear nucleus, which preserve precise timing information through phase-locking to sound waveforms, particularly for low frequencies up to approximately 1.5 kHz. The MSO, characterized by bipolar neurons with mediolaterally oriented dendrites, primarily encodes interaural time differences (ITDs) via coincidence detection mechanisms, where neurons fire maximally when excitatory inputs from both ears arrive nearly simultaneously. This process, first proposed in the Jeffress model, relies on axonal delay lines that compensate for ITDs, enabling resolution on the order of tens of microseconds and supporting azimuthal localization with a maximum ITD of about 600 μs at 90° azimuth. The MSO is optimally tuned for low-frequency sounds in the 500–2000 Hz range, where head-related delays are most prominent, and it also processes ITDs in amplitude-modulated signals to extend sensitivity to higher frequencies. Inhibitory glycinergic inputs from the medial nucleus of the trapezoid body (MNTB), driven by the contralateral ear, sharpen this temporal coding by modulating coincidence windows. In contrast, the LSO processes interaural level differences (ILDs) through an excitation-inhibition (EI) framework, where ipsilateral excitatory inputs from bushy cells are balanced against inhibitory glycinergic projections driven by the contralateral cochlear nucleus and relayed via the MNTB and trapezoid body.
This configuration allows LSO neurons to respond to relative intensity disparities, contributing to sound localization particularly for high-frequency components above 2 kHz, where ITDs are less effective due to phase ambiguity. The SOC's outputs project via the lateral lemniscus to higher auditory centers, integrating these cues for comprehensive spatial hearing.
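The Jeffress-style coincidence detection described above can be sketched in a few lines: model "MSO neurons" as correlators, each applying a different internal axonal delay to one ear's signal, so that the neuron whose delay matches the external ITD responds most strongly. This is a toy illustration with arbitrary parameters, not a biophysical simulation:

```python
import math

FS = 100_000   # sampling rate (Hz)
FREQ = 500     # low-frequency tone, the regime where ITD coding dominates
ITD = 300e-6   # external interaural time difference (s); source 'leads' at the left ear
DUR = 0.02     # 20 ms of signal

t = [i / FS for i in range(int(DUR * FS))]
right = [math.sin(2 * math.pi * FREQ * (ti - ITD)) for ti in t]  # right ear lags

# Candidate internal delays on the left-ear pathway, 0 to 650 us in 50 us steps
internal_delays = [d * 50e-6 for d in range(14)]

def response(delay: float) -> float:
    """Correlate the internally delayed left-ear signal with the right-ear signal."""
    return sum(math.sin(2 * math.pi * FREQ * (ti - delay)) * r
               for ti, r in zip(t, right))

best = max(internal_delays, key=response)
print(f"best-matching internal delay: {best * 1e6:.0f} us")  # 300 us = the ITD
```

The winning delay equals the stimulus ITD, which is the essence of the delay-line model: spatial position is read out from which coincidence detector fires most.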

Inferior Colliculus

The inferior colliculus (IC) serves as a critical hub in the auditory pathway, integrating ascending auditory information from lower brainstem structures and descending modulatory inputs for both reflexive and perceptual sound processing. Its anatomy is divided into the central nucleus (CNIC), which forms the core and receives the majority of ascending projections via the lateral lemniscus from nuclei such as the cochlear nucleus and superior olivary complex, and surrounding shell regions including the dorsal cortex (DCIC), ventral division, and lateral cortex (LCIC). The CNIC is tonotopically organized with a logarithmic scaling of frequency representation, where low frequencies map to the dorsolateral regions and high frequencies to the ventromedial areas, enabling precise spectral processing across the audible range. The shell regions, particularly the DCIC, ventral division, and LCIC, facilitate multimodal integration by receiving non-auditory inputs, such as visual signals from the retina and superior colliculus, alongside auditory afferents, allowing for cross-modal calibration of spatial perception. For instance, approximately 8-9% of IC neurons in cats respond to visual stimuli, with projections from visual structures targeting these zones to modulate auditory responses during orienting behaviors. Functionally, the IC synthesizes cues for sound localization by combining interaural time differences (ITD) and interaural level differences (ILD) derived from lower inputs, with neurons in the CNIC exhibiting sensitivity that varies by frequency, such as ITD dominance at low frequencies and ILD at high frequencies. It also codes amplitude modulation through temporal response patterns that track envelope fluctuations, contributing to the perception of rhythmic and dynamic sounds. Additionally, IC neurons detect sound motion by integrating changing ITD and ILD cues over time, supporting the tracking of moving acoustic sources in space.
Key cell types in the CNIC include disc-shaped neurons, characterized by flattened dendritic fields oriented parallel to fibrodendritic laminae, which provide sharp tuning by aligning with specific afferent layers for precise tonotopic selectivity. Duration-selective neurons, modeled as leaky integrate-and-fire units with offset inhibition, exhibit bandpass or band-suppression responses to specific sound durations in the millisecond range, aiding in the discrimination of temporal features like echoes or gaps. The IC is a major recipient of efferent feedback from the auditory cortex via corticofugal projections, which sharpen frequency tuning and modulate gain in response to behavioral context. It plays a pivotal role in reflexive behaviors, including the auditory startle reflex, where lesions attenuate startle amplitude to sudden loud sounds, and orienting reflexes that direct gaze and attention to salient stimuli. Outputs from the IC project primarily to the medial geniculate nucleus of the thalamus, relaying integrated auditory information to higher cortical areas.
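The duration-selectivity mechanism above can be caricatured with a simple coincidence rule rather than a full leaky integrate-and-fire simulation: a rebound excitation at tone offset must coincide with a delayed onset-triggered excitation for the cell to fire, so only tones near one preferred duration elicit a response. The 50 ms delay and 10 ms window below are purely illustrative assumptions:

```python
ONSET_DELAY = 0.050  # delayed onset excitation arrives 50 ms after tone start (assumed)
WINDOW = 0.010       # coincidence window (assumed)

def responds(duration_s: float) -> bool:
    """Toy bandpass duration tuning: offset rebound (at t = duration) must
    coincide with the delayed onset excitation (at t = ONSET_DELAY)."""
    return abs(duration_s - ONSET_DELAY) < WINDOW

for dur_ms in (20, 45, 50, 55, 90):
    print(dur_ms, responds(dur_ms / 1000))  # only durations near 50 ms respond
```

This reproduces the bandpass character of duration tuning: too-short tones end before the delayed excitation arrives, and too-long tones produce their offset rebound after it has passed.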

Medial Geniculate Nucleus

The medial geniculate nucleus (MGN), also known as the medial geniculate body, serves as the primary thalamic relay for auditory information, receiving inputs from the inferior colliculus and projecting to the auditory cortex, thereby acting as a critical gateway that refines and modulates sensory signals before cortical processing. This structure is located in the posterior thalamus and is organized into distinct subdivisions that exhibit specialized anatomical and functional properties, enabling parallel processing streams for different aspects of sound representation. The MGN preserves key features from lower auditory pathways, such as tonotopy in certain divisions, while introducing thalamic-level integration of spectral and temporal sound attributes. The ventral division (MGV) is the lemniscal component, characterized by a strict tonotopic organization that mirrors the frequency mapping in the inferior colliculus, with high frequencies represented rostrally and low frequencies caudally. It receives primarily excitatory inputs from the central nucleus of the inferior colliculus and projects densely to the core regions of the auditory cortex, particularly layers 3 and 4, maintaining precise frequency tuning (e.g., quality factor Q10 values up to 15.9 in some species). The MGV excels in rapid temporal processing, synchronizing to modulations up to 200–300 Hz, and supports spectral integration by combining frequency components for enhanced sound discrimination. In contrast, the dorsal division (MGD) lacks clear tonotopy and features broader tuning, receiving inputs from the dorsal cortex of the inferior colliculus and non-lemniscal regions, with projections to belt areas of the auditory cortex and some supragranular layers. It processes slower temporal features, synchronizing to modulations below 50 Hz, and incorporates multimodal influences, such as visual inputs, for associative auditory functions.
The medial division (MGM), often considered polysensory, integrates auditory signals with tactile and pain-related inputs from somatosensory pathways, projecting to both auditory cortex and non-auditory structures like the amygdala; it shows heterogeneous frequency tuning and supports high-rate temporal synchronization (>100 Hz) for complex, emotionally salient sounds. Functionally, the MGN gates auditory signals to modulate attention and salience, with corticofugal feedback from the auditory cortex facilitating or suppressing neuronal responses—potentiation in lemniscal neurons enhances relevant sounds, while hyperpolarization in non-lemniscal regions inhibits distractions. This gating mechanism adapts to temporal regularity, suppressing redundant stimuli (e.g., via evoked potential amplitude reduction in response to repeated tones) and is disrupted by noise exposure but partially restored by high-frequency stimulation. Temporal processing in the MGN, particularly in the suprageniculate nucleus (associated with the medial division), enables directional selectivity for frequency-modulated (FM) sweeps, with 53–78% of neurons preferring upward or downward sweeps at rates of 400–3,000 kHz/s, aiding in echo suppression for rapid sound sequences like those in echolocation. As the first thalamic site for advanced spectral integration beyond brainstem levels, the MGN combines multi-frequency inputs to form coherent representations of complex spectra, contributing to speech and environmental sound parsing. During sleep, MGN activity shows state-dependent modulation, with early auditory evoked potentials remaining stable across wakefulness and slow-wave sleep, while later components enlarge, reflecting reduced filtering efficiency compared to wake states. These properties position the MGN as a dynamic filter that projects refined auditory streams to the cortex for higher-order analysis.

Auditory Cortex

The primary auditory cortex (A1), corresponding to Brodmann area 41, is located within Heschl's gyrus on the superior surface of the temporal lobe. It receives major inputs from the ventral division of the medial geniculate nucleus via the auditory radiation. A1 exhibits a tonotopic organization, where neurons are arranged in an orderly map reflecting the frequency selectivity of the cochlea, with low frequencies represented laterally and high frequencies medially. Surrounding A1 are belt regions, which process more integrated auditory features, and further parabelt areas that extend into association cortex, forming a hierarchical core-belt-parabelt structure observed in both non-human primates and humans. Auditory processing in the cortex follows a dual-stream model, with a ventral stream dedicated to "what" processing for sound identification and a dorsal stream for "where" processing related to spatial and motion aspects. The ventral stream originates in the anterior superior temporal gyrus (STG) and emphasizes spectral pattern analysis for recognizing auditory objects, such as voices or species-specific vocalizations. In contrast, the dorsal stream arises in the posterior and caudolateral STG regions, supporting sound localization, motion detection, and temporal sequencing of auditory events. This model, proposed by Rauschecker and colleagues in the late 1990s, highlights segregated pathways emerging from the lateral belt areas adjacent to A1. Hierarchical processing in the auditory cortex begins in A1 with neurons responsive to basic spectrotemporal features like frequency and amplitude modulations, progressing to higher-order areas in the belt and parabelt for invariant representations of complex sounds, including speech and music. For instance, A1 encodes elementary acoustic elements, while association areas integrate these into multidimensional patterns tolerant to variations in sound intensity or context. The auditory cortex is bilateral, yet exhibits left-hemisphere dominance for language processing, particularly in phonetic and semantic analysis. Additionally, auditory cortical plasticity enables adaptive changes in response properties during learning, such as expanded representational areas for behaviorally relevant frequencies following perceptual training.

Auditory Processing

Sound Transduction

Sound transduction in the auditory system occurs primarily in the cochlear hair cells, where mechanical vibrations from sound waves are converted into electrical signals through mechanoelectrical transduction (MET). This process begins when acoustic pressure waves cause the basilar membrane to vibrate, generating a shearing motion relative to the overlying tectorial membrane. This shear deflects the stereocilia bundles on the apical surface of hair cells, stretching tip links—protein filaments composed of cadherin-23 and protocadherin-15—that connect adjacent stereocilia. The tension in these tip links gates MET channels located at the tips of shorter stereocilia, allowing an influx of potassium ions (K⁺) from the potassium-rich endolymph. The MET current, primarily carried by K⁺, depolarizes the hair cell from its resting potential of approximately -60 mV toward 0 mV, generating a receptor potential that modulates neurotransmitter release. Cochlear hair cells are specialized into two types with distinct roles in hearing. Inner hair cells (IHCs) function primarily as afferent sensors, forming ribbon synapses that transmit precise electrical signals to the auditory nerve with high fidelity, encoding sound intensity and timing without significant mechanical feedback. In contrast, outer hair cells (OHCs) exhibit electromotility driven by the motor protein prestin in their lateral membranes, enabling somatic length changes that amplify basilar membrane vibrations by 50-100 times (equivalent to 40-60 dB gain). This amplification enhances the sensitivity and frequency selectivity of the cochlea, particularly for weak sounds, by actively boosting the motion at the IHC stereocilia. Prestin responds to the receptor potential with rapid contractions and extensions, occurring at velocities up to 10 µm/s and forces of about 0.1 nN/mV. The ion dynamics supporting transduction rely on the unique electrochemical environment of the cochlea.
The endolymph in the scala media maintains a +80 mV endocochlear potential, generated by the stria vascularis through active potassium transport, which provides a driving force for K⁺ entry via MET channels (conductance ≥100 pS per channel, with 50-100 channels per bundle). K⁺ ions enter the hair cells apically and exit basolaterally into the low-potassium perilymph, with recycling facilitated by supporting cells and fibrocytes back to the stria vascularis to sustain the gradient. Calcium ions (Ca²⁺) also enter through MET channels and voltage-gated Cav1.3 channels, reaching tens of micromolar local concentrations to trigger synaptic events. Adaptation mechanisms adjust sensitivity during sustained deflection: fast adaptation (submillisecond to millisecond timescale) involves Ca²⁺-dependent myosin-1C motor slipping along actin filaments to relieve tip-link tension, while slow adaptation (tens of milliseconds to seconds) repositions the bundle via additional motor adjustments, maintaining an operating range of 50-100 nm deflection. Neurotransmitter release in IHCs is mediated by otoferlin, a Ca²⁺-binding protein essential for multivesicular fusion at ribbon synapses, enabling sustained glutamate release at rates of 20-100 vesicles per second without fatigue. Frequency selectivity arises from the traveling wave propagation along the basilar membrane, where the wave velocity decreases from base to apex and the characteristic frequency decreases tonotopically (high frequencies at the base, low at the apex), peaking at tonotopically organized sites that match specific stimulus frequencies. This biomechanical filtering, amplified by OHCs, ensures sharp tuning before the electrical signal is relayed to the auditory nerve.
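The figures above combine into a worked estimate of the transduction current: the +80 mV endocochlear potential adds to the cell's -60 mV resting potential to give roughly 140 mV of driving force on K⁺, and multiplying by the per-channel conductance and channel count bounds the peak MET current:

```python
# Driving force on K+ entering a hair cell through open MET channels
endocochlear_potential = +0.080  # V, endolymph relative to perilymph (from the text)
hair_cell_resting = -0.060       # V, intracellular relative to perilymph (from the text)

driving_force = endocochlear_potential - hair_cell_resting  # ~0.14 V

# Rough upper bound on MET current per bundle, using the text's figures
channel_conductance = 100e-12    # S (>=100 pS per channel)
channels_per_bundle = 100        # upper end of the 50-100 range

i_max = channel_conductance * channels_per_bundle * driving_force
print(f"driving force: {driving_force * 1000:.0f} mV")  # 140 mV
print(f"peak MET current: {i_max * 1e9:.1f} nA")        # 1.4 nA
```

This unusually large driving force, paid for metabolically by the stria vascularis rather than by the hair cell itself, is what makes transduction both fast and sensitive.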

Neural Encoding of Sound

Neural encoding in the auditory system refers to the ways in which auditory neurons represent acoustic features such as frequency, intensity, and timing through patterns of action potentials, or spikes. This process begins in the auditory nerve and continues through central pathways, enabling the brain to reconstruct sound information from distributed neural activity. The primary mechanisms include place coding, rate coding, and temporal coding, which collectively handle the wide range of sound attributes encountered in natural environments. Place coding, or tonotopy, exploits the spatial organization of the cochlea and auditory centers, where different frequencies activate distinct locations along the basilar membrane and corresponding neural populations. High frequencies stimulate the base of the cochlea, while low frequencies affect the apex, creating a frequency map preserved in structures like the cochlear nucleus and auditory cortex. This topographic arrangement, first demonstrated through mechanical models and direct observations, allows frequency information to be encoded by the position of peak neural activation rather than spike timing alone. Rate coding conveys stimulus intensity via the firing rate of neurons, typically increasing from about 30 to 300 spikes per second over a 20 dB range before saturation. In auditory nerve fibers, this monotonic relationship links louder sounds to higher discharge rates, though individual fibers cover only a limited dynamic range of 20-40 dB. Central neurons often exhibit broader sensitivity through population integration. Temporal coding, in contrast, relies on the precise timing of spikes relative to the sound waveform, with phase-locking in which spikes align to stimulus cycles with jitter below 1 ms for frequencies up to approximately 4 kHz in the auditory nerve. This preserves fine temporal structure essential for sound localization and periodicity perception.
Population coding enhances the representation of complex sounds by combining activity across multiple fibers, forming across-fiber patterns that distinguish vowels or other spectra beyond what single neurons can achieve. For intensity, the system covers its roughly 120 dB dynamic range through mechanisms such as synaptic depression at hair cell ribbons, which rapidly depletes vesicles to prevent overload, and hair cell saturation around 120 dB SPL, ensuring robust encoding without distortion. The volley principle explains phase-locking to higher frequencies: ensembles of auditory nerve fibers fire in coordinated volleys, maintaining temporal fidelity up to several kHz despite individual fiber limitations. Adaptation further refines encoding by reducing responses to steady tones over milliseconds to seconds, shifting sensitivity toward novel or changing stimuli and preventing saturation during prolonged exposure. In noisy environments, stochastic resonance can paradoxically enhance weak signal detection when noise is added at optimal levels, improving phase-locking and discrimination in auditory nerve and central neurons. These mechanisms collectively support efficient, sparse coding in higher centers, where only a fraction of cortical neurons activate selectively for behaviorally relevant sounds.
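The volley principle can be sketched numerically: no single fiber can fire on every cycle of a 3 kHz tone, but a pool of fibers that each lock to every tenth cycle jointly marks every period. This is a deterministic toy model (real fibers fire probabilistically), with the fiber count and frequency chosen for illustration.

```python
freq_hz = 3000.0          # tone frequency above any single fiber's rate limit
period_s = 1.0 / freq_hz
n_fibers = 10             # each fiber then fires at 3000 / 10 = 300 spikes/s

# Fiber i fires on cycles i, i + 10, i + 20, ...; pool all spike times.
pooled = sorted(c * period_s for i in range(n_fibers)
                for c in range(i, 300, n_fibers))

# Each fiber respects its ~300 spikes/s limit, yet the pooled volley
# contains one spike per stimulus cycle, preserving the tone's period.
intervals = [b - a for a, b in zip(pooled, pooled[1:])]
```

Every inter-spike interval in the pooled train equals the 1/3000 s stimulus period, which is how the population preserves temporal fidelity beyond any individual fiber's firing-rate ceiling.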

Binaural Hearing and Localization

Binaural hearing refers to the auditory system's use of inputs from both ears to determine the spatial location of sounds, enabling precise perception in three-dimensional space. This process relies on comparing subtle differences in the timing, intensity, and spectral content of sounds arriving at each ear, which are generated by the head's acoustic shadowing and filtering effects. By integrating these cues at multiple neural levels, the brain constructs a spatial map of the acoustic environment, facilitating behaviors such as orienting toward threats or focusing on relevant sounds amid noise. The primary cues for horizontal localization are interaural time differences (ITDs) and interaural level differences (ILDs). ITDs arise from the slight delay in sound arrival between the ears due to the head's width, reaching a maximum of approximately 700 μs for sounds at 90° azimuth in humans. ILDs occur because the head shadows higher-frequency sounds (>1.5 kHz), creating intensity disparities of up to 20-30 dB between the ears, with the nearer ear receiving the stronger signal. For vertical localization, spectral cues dominate: the pinnae filter incoming sounds in a direction-dependent manner, introducing frequency-specific notches and peaks that encode elevation. Neural processing of these cues begins in the superior olivary complex, where the medial superior olive (MSO) primarily computes ITDs and the lateral superior olive (LSO) processes ILDs through coincidence detection and excitatory-inhibitory interactions. These computations are relayed to the inferior colliculus, which integrates binaural information with spectral cues to form initial representations of auditory space, and are further refined in the auditory cortex to generate coherent three-dimensional perceptual maps. Human localization accuracy reaches about 1° resolution in the azimuthal plane and 10° in elevation, reflecting the precision of this hierarchical integration.
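The ~700 μs maximum ITD follows directly from head geometry. Woodworth's classic spherical-head approximation, ITD = (r/c)(θ + sin θ), reproduces it with an assumed average adult head radius of about 8.75 cm:

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Interaural time difference (seconds) from Woodworth's spherical-head
    approximation; azimuth 0 deg is straight ahead, 90 deg directly to one side."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (theta + math.sin(theta))

itd_front = woodworth_itd(0.0)   # 0 s: sound arrives at both ears together
itd_side = woodworth_itd(90.0)   # roughly 0.00066 s, i.e. ~660 microseconds
```

The formula's two terms correspond to the straight-line path to the near ear and the path that wraps around the head's curvature to the far ear, which is why ITD grows smoothly with azimuth rather than linearly.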
Perceptually, binaural processing underlies phenomena like the precedence effect, in which the first-arriving sound (the direct wave) suppresses perception of subsequent echoes, aiding localization in reverberant environments by prioritizing the leading wavefront. Similarly, it contributes to solving the cocktail party problem, enabling selective attention to a target voice amid competing sounds through spatial unmasking and stream segregation. Key to vertical localization are spectral notches in the head-related transfer function (HRTF), which describes the individualized acoustic filtering by the head, torso, and pinnae; these notches, often in the 5-10 kHz range, shift with elevation to provide unique directional signatures. Infants calibrate their binaural system to these personalized HRTFs during development, learning to interpret ITDs and spectral cues through exposure to self-generated head movements and environmental sounds by 6-12 months of age.

Development and Physiology

Embryonic Development

The embryonic development of the auditory system begins during the fourth week of gestation, when the otic placode emerges as a thickening of the surface ectoderm adjacent to the hindbrain, induced by signals from the neural tube and surrounding mesenchyme. This placode rapidly invaginates to form the otic vesicle, or otocyst, by the end of the fourth week, establishing the primordium of the inner ear. The otocyst elongates and differentiates into the cochlear and vestibular components, while mesenchymal cells surrounding it condense to form the otic capsule. Concurrently, contributions from the branchial arches shape the middle ear structures: the first arch provides precursors for the malleus and incus, and the second arch for the stapes, with these ossicles appearing in cartilaginous form around the sixth week and beginning ossification by the eighth week. In the inner ear, the prosensory domain within the cochlear duct is specified through Pax2 and Fgf signaling pathways, which regulate epithelial patterning and cell fate commitment starting around the sixth to eighth weeks. Hair cell differentiation follows, driven by the transcription factor Atoh1, essential for sensory cell development, with expression initiating around gestational week 9 and significant maturation between weeks 10 and 20 as inner and outer hair cells emerge in the organ of Corti. The cochlea undergoes coiling by approximately week 12, achieving the spiral configuration that supports tonotopic organization, while the otic capsule ossifies progressively, reaching near-completion by week 23. These processes establish the peripheral transduction apparatus, with functional responses to sound possible by week 26 as hair cells connect to spiral ganglion neurons. Centrally, the auditory nerve arises from neurons derived from the otic placode, forming the statoacoustic ganglion by the sixth week and extending axons toward the brainstem. Auditory nuclei in the brainstem, including the cochlear nuclei, begin to form during the embryonic period around week 8, with more defined organization by week 14 as fibers of the eighth cranial nerve innervate these structures.
Thalamocortical projections develop during the fetal period, with the medial geniculate body connecting to the auditory cortex via the acoustic radiations; these pathways mature progressively, peaking in density around birth to enable postnatal auditory processing. Disruptions in this timeline can lead to congenital anomalies, such as branchio-oto-renal syndrome caused by Eya1 mutations, which impair otic placode induction and result in hearing loss alongside branchial and renal defects. Additionally, a critical period spanning late prenatal and early postnatal stages is vital for establishing tonotopy in central auditory maps, during which sensory experience refines frequency-specific connections.

Physiological Mechanisms

The auditory system's physiological mechanisms encompass the homeostatic processes that sustain cochlear function and the plastic adaptations that enable ongoing neural reorganization. Homeostasis in the inner ear relies on the maintenance of endolymph, a potassium-rich fluid essential for hair cell transduction, produced by the stria vascularis through active transport mechanisms involving Na+/K+-ATPase pumps located on the basolateral membranes of marginal cells. This enzyme facilitates the uptake of potassium from the intrastrial fluid, generating the endocochlear potential of approximately +80 mV that drives sensory transduction. Potassium recycling is equally critical, with Deiters' cells in the organ of Corti absorbing K+ ions released from outer hair cells during depolarization and channeling them back to the stria vascularis via gap junctions and transporters like Kcc4, preventing ionic imbalances that could impair mechanoelectrical transduction. The cochlea's high metabolic demands, characterized by elevated oxygen consumption to support active processes like outer hair cell motility, make it particularly vulnerable to hypoxia, as its energy-intensive environment requires continuous oxidative phosphorylation for ATP production. Neural plasticity in the auditory system allows for adaptive changes throughout life, with critical periods shaping development and adult mechanisms enabling recovery. During early childhood, a sensitive period for auditory processing extends up to approximately age 7, during which exposure to language sounds refines central representations, facilitating native phoneme acquisition; beyond this window, plasticity diminishes, leading to challenges in second-language learning. 
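The unusually large driving force for transduction follows from simple arithmetic: the +80 mV endocochlear potential adds to the hair cell's negative resting potential across the apical membrane. The -45 mV inner hair cell resting potential below is a typical textbook figure assumed for illustration.

```python
endocochlear_potential_mv = 80.0   # endolymph relative to perilymph
ihc_resting_potential_mv = -45.0   # approximate inner hair cell resting potential

# K+ entering apically crosses from +80 mV endolymph into a cell held near
# -45 mV, so the total electrical driving force is the difference:
driving_force_mv = endocochlear_potential_mv - ihc_resting_potential_mv  # 125.0 mV
```

This ~125 mV gradient, among the largest found across any membrane in the body, is why disruption of stria vascularis function so severely degrades hearing sensitivity.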
In adults, post-deafness reorganization involves cross-modal plasticity, in which deprived auditory cortical areas are recruited by visual inputs, as evidenced by enhanced visual activation in the auditory cortex of cochlear implant users; this activation can predict auditory recovery outcomes but may hinder reinstatement of pure-tone processing. Reflexive mechanisms provide rapid modulation to protect and optimize auditory function. The acoustic reflex, a bilateral contraction of the stapedius and tensor tympani muscles in response to intense sounds (typically >80 dB SPL), exhibits a latency of about 10 ms for the contralateral pathway, stiffening the ossicular chain to attenuate transmission by 10-20 dB and reduce low-frequency damage. The olivocochlear efferent system, originating from the superior olivary complex, exerts feedback control on cochlear gain via synapses on outer hair cells and auditory nerve fibers, suppressing responses to noise by 4-15 dB to enhance signal detection under masking conditions and protect against acoustic overstimulation. Cochlear mechanics exhibit inherent nonlinearities that produce compressive growth, particularly at high stimulus levels where outer hair cell amplification saturates; basilar membrane displacement grows compressively with a slope of approximately 0.2-0.3 dB of output per dB of input at higher stimulus levels, effectively compressing the ~120 dB acoustic dynamic range into a much narrower neural range to prevent saturation and preserve sensitivity to faint sounds. Age-related physiological changes, such as presbycusis, primarily manifest as high-frequency hearing loss due to progressive death of outer hair cells in the basal turn, driven by cumulative oxidative stress and metabolic exhaustion, resulting in reduced amplification and elevated thresholds above 2 kHz by middle age.
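The effect of the stated 0.2-0.3 dB/dB compressive slope is easy to quantify: it shrinks the ~120 dB acoustic range to a few tens of dB of basilar-membrane response growth.

```python
def compressed_output_range(input_range_db, slope_db_per_db):
    """Output growth (dB) for a given input range under a constant
    compressive input-output slope (idealized: real cochlear growth is
    nearly linear at low levels and compressive only at higher ones)."""
    return slope_db_per_db * input_range_db

lo = compressed_output_range(120.0, 0.2)  # 24 dB of response growth
hi = compressed_output_range(120.0, 0.3)  # 36 dB of response growth
```

A 120 dB input range thus maps onto roughly 24-36 dB of mechanical response growth, which fits comfortably within the 20-40 dB dynamic range of individual auditory nerve fibers.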

Clinical Aspects

Common Disorders

Common auditory disorders encompass a range of pathologies that impair sound transmission, transduction, or central processing, leading to hearing loss or perceptual abnormalities. These conditions are classified as conductive, sensorineural, or central, each with distinct etiologies and functional impacts. Conductive losses arise from mechanical obstructions in the outer or middle ear, while sensorineural and central disorders involve damage to cochlear structures or neural pathways, respectively. Conductive hearing loss often results from otitis media, a prevalent condition in children characterized by middle ear effusion that dampens sound conduction to the inner ear. This disorder affects more than half of children by their third birthday, with bacterial pathogens such as Streptococcus pneumoniae and nontypeable Haemophilus influenzae as primary causes, leading to temporary conductive impairment and potentially recurrent episodes if unresolved. Otosclerosis represents another common conductive pathology in adults, involving abnormal bone remodeling around the stapes footplate that causes its fixation and restricts vibration transmission. With a prevalence of approximately 0.3-1% in white adults, this condition typically manifests as progressive low-frequency hearing loss without affecting the tympanic membrane. Sensorineural hearing loss frequently stems from noise-induced damage, in which prolonged exposure to sound levels exceeding 85 dB triggers oxidative stress and apoptosis in cochlear hair cells, resulting in permanent threshold shifts and high-frequency deficits. This is a leading preventable cause of hearing impairment, particularly in occupational and recreational settings, as hair cell loss disrupts neural encoding of auditory signals. Age-related hearing loss, another sensorineural disorder, involves tonotopic degeneration of the cochlea, beginning at the basal turn and progressing to affect high-frequency perception symmetrically in both ears.
Affecting about one-third of individuals over 65 years, presbycusis arises from cumulative factors including vascular changes and oxidative damage, leading to gradual auditory decline and difficulty with speech discrimination. Central auditory disorders include auditory neuropathy spectrum disorder (ANSD), characterized by synaptic failure at the inner hair cell-auditory nerve junction despite intact cochlear amplification, as evidenced by preserved otoacoustic emissions but absent or abnormal auditory brainstem responses. This condition impairs temporal coding of sound, resulting in poor speech perception in noise, and often stems from genetic mutations or perinatal insults. Cortical deafness, a rarer central pathology, results from bilateral lesions in the temporal lobes encompassing the primary auditory cortex, leading to profound deafness despite normal peripheral function. Such lesions, typically from vascular events like stroke, disrupt higher-order auditory processing and can evolve into auditory agnosia if partial recovery occurs. Beyond structural losses, tinnitus manifests as a phantom auditory perception without external stimuli, often following cochlear or neural damage, with a global prevalence of 10-15% in adults. It arises from maladaptive plasticity in auditory pathways, producing persistent ringing or buzzing whose perceived loudness correlates with distress severity. Hyperacusis involves heightened sensitivity to everyday sounds, with discomfort or pain occurring at levels below 90 dB, reflecting lowered loudness discomfort levels and central gain amplification. Prevalence estimates range from 9-15% of the general adult population, frequently co-occurring with tinnitus or neurological conditions, and impacting quality of life through avoidance behaviors. Diagnostic tools such as audiometry and otoacoustic emissions testing help differentiate these central disorders from peripheral issues.
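The diagnostic contrast described for ANSD (preserved otoacoustic emissions with absent or abnormal brainstem responses) can be summarized as a small decision sketch; this is an illustrative simplification of the test-pattern logic, not a clinical algorithm.

```python
def interpret_pattern(oae_present: bool, abr_normal: bool) -> str:
    """Coarse interpretation of combined OAE/ABR screening results
    (illustrative only; real workups use many more measures)."""
    if oae_present and abr_normal:
        return "normal peripheral and brainstem responses"
    if oae_present and not abr_normal:
        return "ANSD pattern: cochlear amplification intact, neural conduction impaired"
    if not oae_present and not abr_normal:
        return "cochlear (sensorineural) pattern"
    return "atypical pattern: absent OAEs with normal ABR, retest advised"

result = interpret_pattern(oae_present=True, abr_normal=False)
```

The key dissociation is the second branch: outer hair cells still generate emissions, so the lesion must lie at or beyond the inner hair cell synapse rather than in the cochlear amplifier.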

Diagnostic and Therapeutic Approaches

Diagnostic approaches to auditory function primarily involve behavioral and electrophysiological tests to assess hearing thresholds, neural integrity, and structural abnormalities. Pure-tone audiometry is a standard behavioral test that measures the lowest audible sound intensity at frequencies typically from 250 to 8000 Hz, helping identify sensorineural or conductive hearing loss by comparing air- and bone-conduction thresholds. Otoacoustic emissions (OAE) testing evaluates outer hair cell function by detecting faint sounds produced by the cochlea in response to auditory stimuli, providing a quick, non-invasive screen for cochlear integrity, particularly in newborns. Auditory brainstem response (ABR) audiometry records electrical activity from the auditory nerve and brainstem in response to clicks or tones, with analysis of wave latencies (e.g., waves I-V) aiding diagnosis of auditory neuropathy or retrocochlear pathology. Imaging techniques complement these tests for structural evaluation. Magnetic resonance imaging (MRI) is the preferred modality for detecting abnormalities such as acoustic neuromas (vestibular schwannomas) compressing the auditory nerve, offering high-resolution visualization of the internal auditory canal. Computed tomography (CT) scans excel at delineating bony structures, such as disruptions of the ossicular chain in conductive losses from trauma or chronic middle ear disease. Therapeutic interventions for auditory disorders range from amplification devices to surgical and emerging biological approaches. Hearing aids amplify incoming sounds by 20-60 dB depending on the degree of loss, using digital signal processing to improve speech clarity in sensorineural hearing loss. Cochlear implants provide auditory perception in profound sensorineural loss by bypassing damaged hair cells; these devices feature 18-22 electrode arrays inserted into the scala tympani to directly stimulate surviving spiral ganglion neurons, restoring functional hearing in over 90% of adult recipients. For conductive losses where traditional aids are ineffective, bone-anchored hearing aids (BAHA) transmit sound vibrations via a titanium implant through the skull to the cochlea, bypassing the external and middle ear.
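The air- versus bone-conduction comparison used in audiometry lends itself to a compact sketch. The 20 dB HL normal cutoff and 10 dB air-bone gap criterion below are common textbook conventions, assumed here for illustration only.

```python
def classify_threshold(air_db_hl: float, bone_db_hl: float,
                       normal_cutoff: float = 20.0,
                       gap_cutoff: float = 10.0) -> str:
    """Classify hearing at one test frequency from air- and bone-conduction
    thresholds in dB HL (illustrative textbook cutoffs, not clinical advice)."""
    air_bone_gap = air_db_hl - bone_db_hl
    if air_db_hl <= normal_cutoff:
        return "normal"
    if air_bone_gap > gap_cutoff:
        # Elevated air conduction with a significant gap points to the
        # middle/outer ear; elevated bone conduction too means mixed loss.
        return "conductive" if bone_db_hl <= normal_cutoff else "mixed"
    return "sensorineural"

classify_threshold(50, 10)   # large gap, normal bone conduction
classify_threshold(50, 45)   # both elevated, no significant gap
```

Bone conduction bypasses the outer and middle ear, so an air-bone gap isolates the conductive component while matched elevated thresholds implicate the cochlea or nerve.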
In unilateral cochlear implantation, bimodal stimulation (combining the implant with a contralateral hearing aid) enhances sound localization and speech understanding by leveraging binaural cues. Emerging therapies target underlying cellular deficits. Gene therapy for hereditary hearing loss has shown promising results; for example, the DB-OTO therapy targeting OTOF mutations restored hearing in nearly all pediatric participants in a phase 3 trial as of October 2025. Optogenetics, an experimental technique, enables precise neural activation through light-sensitive proteins expressed in auditory neurons, offering potential for higher-fidelity prosthetic stimulation than current electrical methods, though it remains at the preclinical stage.

References

  1. [1]
    Neuroanatomy, Auditory Pathway - StatPearls - NCBI Bookshelf
    Oct 24, 2023 · The auditory system processes how we hear and understand sounds within the environment. Peripheral and central structures comprise this organ system.Introduction · Structure and Function · Embryology · Blood Supply and Lymphatics
  2. [2]
    Auditory System: Structure and Function (Section 2, Chapter 12 ...
    The auditory system changes a wide range of weak mechanical signals into a complex series of electrical signals in the central nervous system.
  3. [3]
    Anatomy and Physiology of the Ear
    When a sound is made outside the outer ear, the sound waves, or vibrations, travel down the external auditory canal and strike the eardrum (tympanic membrane).
  4. [4]
    Auditory System - an overview | ScienceDirect Topics
    The auditory system is defined as the sensory system responsible for the perception of sound, which involves the transmission of sound waves from the outer ...
  5. [5]
    Hearing (How Auditory Process Works) - Cleveland Clinic
    Hearing refers to the awareness of sounds and placing meaning to those sounds. It's a complex process that involves many different parts.
  6. [6]
    Basics of Sound, the Ear, and Hearing - Hearing Loss - NCBI - NIH
    In this chapter we review basic information about sound and about how the human auditory system performs the process called hearing.
  7. [7]
    How Hearing Works - BrainFacts
    Jan 10, 2020 · The hearing system picks up several qualities of sounds including pitch, loudness, duration, and location. The auditory system analyzes complex ...Missing: definition | Show results with:definition<|control11|><|separator|>
  8. [8]
    Vestibular System: Function & Anatomy - Cleveland Clinic
    Jun 19, 2024 · Your vestibular system helps you maintain your sense of balance. It includes structures inside your inner ear called otolith organs and semicircular canals ...
  9. [9]
    The auditory and non-auditory brain areas involved in tinnitus. An ...
    Tinnitus is the perception of a sound in the absence of an external sound source. It is characterized by sensory components such as the perceived loudness, ...
  10. [10]
    Acoustic startle modification as a tool for evaluating auditory function ...
    In this review, we discuss the various ways to modify the ASR, including the caveats and interactions of non-auditory factors that should be taken into account ...
  11. [11]
    The evolution of the various structures required for hearing in ...
    Sarcopterygians evolved around 415 Ma and have developed a unique set of features, including the basilar papilla and the cochlear aqueduct of the inner ear.
  12. [12]
    Evolution of the mammalian middle ear and jaw - PubMed Central
    Jun 11, 2012 · The evolution of the three-ossicle ear in mammals is thus intricately connected with the evolution of a novel jaw joint, the two structures evolving together.
  13. [13]
    Major evolutionary transitions and innovations: the tympanic middle ...
    Feb 5, 2017 · So why did mammals evolve a three ossicle middle ear rather than a single ossicle ear? This appears to revolve around changes to the mammalian ...
  14. [14]
    A Functional Perspective on the Evolution of the Cochlea - PMC - NIH
    This review summarizes paleontological data as well as studies on the morphology, function, and molecular evolution of the cochlea of living mammals.
  15. [15]
    Diversity in Fish Auditory Systems: One of the Riddles of Sensory ...
    Typically, tetrapods (amphibians, reptiles, birds, mammals) developed thin membranes on the body surface laterally of the inner ears (tympana or eardrums) to ...<|control11|><|separator|>
  16. [16]
    Parallel Evolution of Auditory Genes for Echolocation in Bats and ...
    Jun 28, 2012 · The ability of bats and toothed whales to echolocate is a remarkable case of convergent evolution. Previous genetic studies have documented parallel evolution.<|control11|><|separator|>
  17. [17]
    Anatomy, Head and Neck, Ear - StatPearls - NCBI Bookshelf
    The external ear is the visible part of the hearing apparatus. It is comprised of the auricle (pinna) and external auditory canal.Introduction · Structure and Function · Embryology · Nerves
  18. [18]
    The Role of Occlusion of the External Ear Canal in Hearing Loss - NIH
    The external auditory canal behaves as a resonator with the resonance frequency represented at a frequency range of 27 kHz, with an amplitude between 10 and 20 ...
  19. [19]
    Sound pressure transformations by the head and pinnae of the adult ...
    Propagating sound waves are filtered by the head and external ears to produce the spatial and frequency dependent acoustical cues for sound source localization.
  20. [20]
    The External Ear - Neuroscience - NCBI Bookshelf - NIH
    The external ear, which consists of the pinna, concha, and auditory meatus, gathers sound energy and focuses it on the eardrum, or tympanic membrane.Missing: localization | Show results with:localization
  21. [21]
    Cerumen Impaction Removal - StatPearls - NCBI Bookshelf - NIH
    Although excessive accumulation of cerumen is typically asymptomatic, patients should be treated if they present with hearing loss, ear fullness, pruritus, ...Introduction · Anatomy and Physiology · Indications · Contraindications
  22. [22]
    Otitis Externa - StatPearls - NCBI Bookshelf - NIH
    Jul 31, 2023 · Otitis externa (OE) is an inflammation, that can be either infectious or non-infectious, of the external auditory canal.
  23. [23]
    Physiology, Ear - StatPearls - NCBI Bookshelf
    This meatus has a tube form and extends inward to end in the tympanic membrane. Two-thirds of this canal are cartilaginous, and the last third is bone, and ...
  24. [24]
    Anatomy, Head and Neck, Ear Ossicles - StatPearls - NCBI Bookshelf
    This bone is the smallest in the human body. The stapes articulates with the inner ear's oval window.[2] The stapes forms an angle of approximately 10.7 ...
  25. [25]
    Neuroanatomy, Ear - StatPearls - NCBI Bookshelf
    Apr 3, 2023 · Endolymph and perilymph vary significantly in their concentration of ions, which is essential to the overall function of the cochlea. Endolymph ...
  26. [26]
    Human Cochlea: Anatomical Characteristics and their Relevance for ...
    Oct 8, 2012 · The outer cochlear wall had a mean length of 42.0 mm (Table 1), while the first turn was 22.6 mm (range, 20.3–24.3 mm) representing 53% of the ...
  27. [27]
    Number of inner and outer hair cells in each cochlea
    Inner hair cells (IHCs), of which there are ∼3,500 in each human cochlea, are innervated by dendrites of the auditory nerve and are considered to be the primary ...
  28. [28]
    The Inner Ear - Neuroscience - NCBI Bookshelf - NIH
    The Sweet Sound of Distortion. The motion of the traveling wave initiates sensory transduction by displacing the hair cells that sit atop the basilar membrane.
  29. [29]
    Chapter 10: Vestibular System: Structure and Function
    The membranous labyrinth of the inner ear consists of three semicircular ducts (horizontal, anterior and posterior), two otolith organs (saccule and utricle), ...
  30. [30]
    Neuroanatomy, Cranial Nerve 8 (Vestibulocochlear) - NCBI - NIH
    May 22, 2023 · The vestibular nerve is primarily responsible for maintaining body balance and eye movements, while the cochlear nerve is responsible for hearing.Missing: otoliths | Show results with:otoliths
  31. [31]
    Prestin and the Dynamic Stiffness of Cochlear Outer Hair Cells
    Oct 8, 2003 · Hearing sensitivity in mammals is enhanced by >40 dB by mechanical amplification generated by length changes (termed electromotility) of outer ...
  32. [32]
    Cochlear nuclei | Radiology Reference Article | Radiopaedia.org
    Aug 12, 2020 · Cochlear afferent fibers enter the brainstem at the pontomedullary junction lateral to the facial nerve as part of the vestibulocochlear nerve.
  33. [33]
    Analysis of the human auditory nerve - PubMed
    We found from 32,000 to 31,000 myelinated nerve fibres in the cochlear nerve of normal hearing individuals and any lower number in cases of sensory neural ...
  34. [34]
    Species differences in the organization of the ventral cochlear nucleus
    Two types of bushy cells, “globular and spherical” are recognized and these have very different response properties and connections (Warr, 1972; Rouiller and ...
  35. [35]
    Relationships between neuronal birthdates and tonotopic position in ...
    We also demonstrated that bushy cells are arranged in a dorsal to ventral gradient based on their birthdates along the tonotopic axis of the AVCN, suggesting ...
  36. [36]
    Response Classes in the Dorsal Cochlear Nucleus and Its Output ...
    Spectral time-course analysis of firing patterns in the dorsal cochlear nucleus. ... Encoding timing and intensity in the ventral cochlear nucleus of the cat. J ...<|control11|><|separator|>
  37. [37]
    Spectral Edge Sensitivity in Neural Circuits of the Dorsal Cochlear ...
    One possible function of the dorsal cochlear nucleus (DCN) is discrimination of head-related transfer functions (HRTFs), spectral cues used for vertical sound ...Missing: intensity | Show results with:intensity
  38. [38]
    Multisensory activation of ventral cochlear nucleus D‐stellate cells ...
    Dorsal cochlear nucleus fusiform cells receive spectrally relevant auditory input for sound localization. Fusiform cells integrate auditory with other ...
  39. [39]
    The Multiple Functions of T Stellate/Multipolar/Chopper Cells in the ...
    T Stellate cells deliver acoustic information to the ipsilateral dorsal cochlear nucleus (DCN), ventral nucleus of the trapezoid body (VNTB), periolivary ...
  40. [40]
    Onset Neurones in the Anteroventral Cochlear Nucleus Project to ...
    The cochlear nucleus consists of three functionally and anatomically separate divisions: the anteroventral cochlear nucleus (AVCN), the posteroventral cochlear ...
  41. [41]
    Neuroanatomy, Superior and Inferior Olivary Nucleus ... - NCBI
    Jul 24, 2023 · The inferior and superior olives are a collection of brainstem nuclei near the border of the medulla oblongata and the pons.Introduction · Structure and Function · Clinical Significance
  42. [42]
  43. [43]
  44. [44]
    Neuroanatomy, Inferior Colliculus - StatPearls - NCBI Bookshelf
    The inferior colliculus (IC; plural: colliculi) is a paired structure in the midbrain, which serves as an important relay point for auditory information.Missing: paper | Show results with:paper
  45. [45]
    Functional organization of the mammalian auditory midbrain - PMC
    Simultaneous anterograde labeling of axonal layers from lateral superior olive and dorsal cochlear nucleus in the inferior colliculus of cat. J Comp Neurol ...
  46. [46]
    Classification of frequency response areas in the inferior colliculus ...
    Frequency analysis in the cochlea gives rise to V-shaped tuning functions in auditory nerve fibres, but by the level of the inferior colliculus (IC), the ...
  47. [47]
    Sounds and beyond: multisensory and other non-auditory signals in ...
    Dec 11, 2012 · The inferior colliculus (IC) is a major processing center situated mid-way along both the ascending and descending auditory pathways of the ...Missing: paper | Show results with:paper<|control11|><|separator|>
  48. [48]
    Stimulus-frequency-dependent dominance of sound localization ...
    Stimulus-frequency-dependent dominance of sound localization cues across the cochleotopic map of the inferior colliculus. Ryan Dorkoski. Ryan Dorkoski. 1 ...Missing: motion | Show results with:motion
  49. [49]
    An influence of amplitude modulation on interaural level difference ...
    In natural listening environments, ILD and ITD values co-vary with the sound source location and thus are congruent with each other.
  50. [50]
    Adaptive Response Behavior in the Pursuit of Unpredictably Moving ...
    Both ears receive time-varying ILD and ITD because of a moving sound in the horizontal plane, and the head turning at angular velocity H ˙ . Integration of ...
  51. [51]
    Computational Models of Millisecond Level Duration Tuning in ...
    Several studies conducted in mammals have found neurons in the auditory midbrain (inferior colliculus) that are selective for signal duration. Duration ...
  52. [52]
    Diverse functions of the auditory cortico-collicular pathway - PMC
    The auditory cortico-collicular system, which connects the auditory cortex to the inferior colliculus, or auditory midbrain, has received increasing attention ...Missing: paper | Show results with:paper
  53. [53]
    Hyperexcitability of Inferior Colliculus and Acoustic Startle Reflex ...
    Mar 27, 2017 · Chronic tinnitus and hyperacusis often develop with age-related hearing loss presumably due to aberrant neural activity in the central ...
  54. [54]
    The organization and physiology of the auditory thalamus and its ...
    The main auditory-responsive portion of the thalamus is called the medial geniculate body (MGB), and it is the information bottleneck for neural representations ...
  55. [55]
    Medial Geniculate Nucleus - an overview | ScienceDirect Topics
    The Medial Geniculate Nucleus (MG) is a group of subnuclei that receive auditory input from the midbrain and hindbrain and mainly project to the auditory ...
  56. [56]
    Linking Topography to Tonotopy in the Mouse Auditory ...
    In this acute preparation, the connection between the ventral medial geniculate body (MGBv) and auditory cortex (AI) is preserved, permitting an in vitro ...
  57. [57]
  58. [58]
  59. [59]
  60. [60]
  61. [61]
  62. [62]
  63. [63]
    Auditory evoked potentials from auditory cortex, medial geniculate ...
    Conclusions: The early AEP components are not modulated by the normal sleep-wake states, and are not impaired during SWD. A strong state-dependent modulation of ...
  64. [64]
    A unified framework for the organization of the primate auditory cortex
    Apr 29, 2013 · One of the oldest and best characterized organizational features in the auditory system is its cochleotopic or tonotopic organization. Tonotopy ...
  65. [65]
    Mechanisms and streams for processing of “what” and “where” in ...
    The cortical auditory system of primates is divided into at least two processing streams, a spatial stream that originates in the caudal part of the superior ...
  66. [66]
    Hierarchical Representations in the Auditory Cortex - PMC
    The emerging picture is that auditory processing becomes increasingly multidimensional. This is expected on computational grounds because the invariant ...
  67. [67]
    Differential representation of speech sounds in the human cerebral ...
    Mar 20, 2006 · Taken together, these studies challenge the notion that the left hemisphere is specialized for speech and the right hemisphere is specialized ...
  68. [68]
    AUDITORY CORTICAL PLASTICITY: DOES IT PROVIDE ...
    The evidence on adult plasticity in auditory cortex has been complemented by substantial changes in the way auditory cortical receptive fields (RFs) are ...
  69. [69]
    Hair cell transduction, tuning and synaptic transmission in the ...
    Detection of the sound stimulus and its conversion to an equivalent electrical waveform, termed mechanoelectrical transduction, occurs in the sensory hair cells ...
  70. [70]
    Cochlear hair cells: the sound-sensing machines - PMC
    Cochlear hair cells, inner and outer, transduce sound into electrical signals. Inner hair cells detect sound, while outer hair cells amplify it.Cochlear Hair Cells: The... · Mechano-Transduction · Afferent Synapse
  71. [71]
    Mechanisms in cochlear hair cell mechano-electrical transduction ...
    This review aims to summarize the progress on the molecular and cellular mechanisms of the mechano-electrical transduction (MET) channel in the cochlear hair ...
  72. [72]
    How We Hear: The Perception and Neural Coding of Sound
    Jan 4, 2018 · This review provides an overview of selected topics pertaining to the perception and neural coding of sound, starting with the first stage of ...
  73. [73]
    Von Békésy and cochlear mechanics - PMC - NIH
    May 22, 2012 · Georg Békésy laid the foundation for cochlear mechanics, foremost by demonstrating the traveling wave that is the substrate for mammalian cochlear mechanical ...
  74. [74]
    Dynamic Range Adaptation to Sound Level Statistics in the Auditory ...
    Nov 4, 2009 · In contrast, the firing rates of most primary auditory neurons change with sound level over a range of only 20–40 dB and saturate at stimulus ...
  75. [75]
    Phase Locking of Auditory-Nerve Fibers Reveals Stereotyped ...
    May 22, 2019 · Phase locking of auditory-nerve-fiber (ANF) responses to the fine structure of acoustic stimuli is a hallmark of the auditory system's temporal precision.
  76. [76]
    Chapter 13 Neural population coding in the auditory system
    This chapter discusses neural population coding in the auditory system, where a population code comprises activity in multiple units with different ...
  77. [77]
    Mechanisms of synaptic depression at the hair cell ribbon synapse ...
    Aug 21, 2017 · We present evidence showing that depletion of rapidly releasing vesicles produces an early depression of the synaptic response.
  78. [78]
    Diversity matters — extending sound intensity coding by inner hair ...
    Oct 6, 2023 · This review discusses mechanisms and proposes novel hypotheses for how the mammalian hearing organ can encode an exceptionally wide range of sound intensities.
  79. [79]
    The Volley theory and the Spherical Cell puzzle - PMC
    Wever and Bray reasoned that single fibers can be synchronized to the stimulus waveform even if they do not fire at every stimulus cycle, and that the combined ...
  80. [80]
    Adaptation in auditory processing - PMC - PubMed Central - NIH
    Adaptation is a fundamental process in the auditory system that dynamically adjusts the responses of neurons to unchanging and recurring sounds.
  81. [81]
    Stochastic resonance in the sensory systems and its applications in ...
    In this review, we discuss a growing empirical literature that suggests that noise at the right intensity may improve the detection and processing of auditory, ...
  82. [82]
    Auditory localization: a comprehensive practical review - Frontiers
    Auditory localization is a fundamental ability that allows one to perceive the spatial location of a sound source in the environment.
  83. [83]
    A common periodic representation of interaural time differences in ...
    ITDs of ±500 μs are within, and ±1500 μs beyond, the human ethological range (±700 μs, Feddersen et al., 1957, Kuhn, 1977).
  84. [84]
    Interaural level differences and sound source localization for ...
    ILDs in the input signal are known to be large for frequencies above 1500 Hz (e.g., approximately 20 dB at 6 kHz for a 60° azimuth) and much smaller for ...
  85. [85]
    Perception and coding of high-frequency spectral notches
    This localization ability is believed to be mediated by the perception of high-frequency spectral notches generated by the filtering action of the human pinna ...
  86. [86]
    Principal neuron diversity in the murine lateral superior olive ...
    Apr 19, 2023 · The organization of frequency and binaural cues in the gerbil inferior colliculus. J. Comp. Neurol. 525, 2050–2074 (2017). Article PubMed ...
  87. [87]
    Auditory Processing of Spectral Cues for Sound Localization in the ...
    These spectral notches are a principal cue for the localization of sound source elevation. Physiological evidence suggests that the dorsal cochlear nucleus ...
  88. [88]
    Neural Time Course of Echo Suppression in Humans
    Feb 3, 2010 · This complex perceptual phenomenon (Wallach et al., 1949), known as the precedence effect or echo suppression, improves our ability to localize ...
  89. [89]
    The cocktail-party problem revisited: early processing and selection ...
    Important effects occurring at the peripheral and brainstem levels are mutual masking of sounds and “unmasking” resulting from binaural listening.
  90. [90]
    [PDF] Development of binaural and spatial hearing in infants and children
    Whether it be calibration of changing values of directional cues, utilization of sound level as a cue for distance, or weighting of leading and lagging sounds ...
  91. [91]
    Embryology, Ear - StatPearls - NCBI Bookshelf - NIH
    Aug 8, 2023 · The middle ear ossicles initially form around six weeks of development. They first appear in a cartilaginous form that arises from neural crest ...
  92. [92]
    The Key Transcription Factor Expression in the Developing ...
    This expression pattern suggests that Pax2 might have diverse roles in sensory cell development within the cochlea. Sox2 expression in the mammalian inner ear ...
  93. [93]
    Comparative assessment of Fgf's diverse roles in inner ear ...
    Apr 8, 2021 · In this review, I will discuss mechanisms by which Fgf controls key events in early otic development in zebrafish and provide direct comparisons with chick and ...
  94. [94]
    Atoh1 directs hair cell differentiation and survival in ... - PubMed - NIH
    Sep 15, 2013 · Atoh1 function is required for the earliest stages of inner ear hair cell development, which begins during the second week of gestation.
  95. [95]
    Cochlea - Voyage au centre de l'audition
    Stage 1 : First signs of differentiation (9-10 weeks of gestation (wg)) The coiling of the cochlear spiral has occurred (left). However, the sensory epithelium ...
  96. [96]
    The human auditory system: a timeline of development - PubMed
    The second trimester is a time of rapid growth and development, and by the end of this period, the cochlea has acquired a very adult-like configuration.
  97. [97]
    Branchiootorenal Spectrum Disorder - GeneReviews - NCBI - NIH
    Jun 26, 2025 · Audiologic considerations include hearing aids for individuals with mild-to-moderate sensorineural or mixed hearing loss and cochlear ...
  98. [98]
    Regulation of auditory plasticity during critical periods and following ...
    The crucial role of auditory experience for the proper development of the tonotopic map is evident during critical periods, when the quality of the acoustic ...
  99. [99]
    mechanism of production by marginal cells of stria vascularis
    We show that the cell potential is more positive than the EP+, and that the ion pump is conventional Na,K-ATPase, probably in the basolateral membrane.
  100. [100]
    Deafness and renal tubular acidosis in mice lacking the K-Cl co ...
    Our data suggest that Kcc4 is important for K(+) recycling by siphoning K(+) ions after their exit from outer hair cells into supporting Deiters' cells, where ...
  101. [101]
    The cochlea is built to last a lifetime - PMC - PubMed Central - NIH
    ... high metabolic demands, which produce protein-damaging free radicals and reactive oxygen ... These cells produce high levels of extracellular matrix proteins ...
  102. [102]
    Cerebral auditory plasticity and cochlear implants - PubMed
    Previous animal research and clinical experiences in humans suggest the existence of an auditory critical period in language acquisition.
  103. [103]
    A sensitive period for the development of the central auditory system ...
    Plasticity remains in some, but not all children until approximately age 7. After age 7, plasticity is greatly reduced. These data may be relevant to the issue ...
  104. [104]
    Visual activation of auditory cortex reflects maladaptive plasticity in ...
    Jan 9, 2012 · Cross-modal reorganization in the auditory cortex has been reported in deaf individuals. However, it is not well understood whether this ...
  105. [105]
    Visual activity predicts auditory recovery from deafness after adult ...
    Oct 17, 2013 · Our results demonstrate that the visual modality's functional level is related to the proficiency level of auditory recovery.
  106. [106]
    [PDF] High Frequency Acoustic Reflexes in Cochlea - PDXScholar
    Nov 16, 1990 · In subjects with normal hearing sensitivity, the acoustic reflex is typically elicited at a sensation level of between 85 and 100 dB for ...
  107. [107]
    Effect of Contralateral Medial Olivocochlear Feedback on Perceptual ...
    The amount of gain reduction was estimated as 4.4 dB on average, corresponding to around 18 % of the without-elicitor gain. As a result, the compression ...
  108. [108]
    Evaluating the effects of olivocochlear feedback on psychophysical ...
    A supplementary growth of masking experiment was used to determine the precursor level needed to shift signal threshold in quiet by 10–15 dB. This experiment ...
  109. [109]
    Mechanics of the mammalian cochlea - PubMed - NIH
    At the base of the cochlea, BM motion exhibits a CF-specific and level-dependent compressive nonlinearity such that responses to low-level, near-CF stimuli are ...
  110. [110]
    Quantitative evaluation of myelinated nerve fibres and hair cells in ...
    The audiogram of 7 individuals showed high-tone hearing loss, typical for sensory-neural presbycusis. The inner (IHC) and outer hair cells (OHC) and the ...
  111. [111]
    Age-Related Hearing Loss Is Dominated by Damage to Inner Ear ...
    Aug 12, 2020 · These data comprise the first quantitative survey of hair cell death in normal-aging human cochleas, and reveal unexpectedly severe hair cell ...
  112. [112]
    Acute Otitis Media - StatPearls - NCBI Bookshelf - NIH
    Approximately 80% of all children will experience a case of otitis media during their lifetime, and between 80% and 90% of all children will have otitis media ...
  113. [113]
    Otosclerosis - StatPearls - NCBI Bookshelf - NIH
    ... leads to fixation of the stapes footplate.[2] Otosclerosis causes conductive hearing loss that typically presents with a normal tympanic membrane. However ...
  114. [114]
    Prevalence of Middle Ear Infections and Associated Risk Factors in ...
    Infections of the middle ear cleft are common in children. More than half of children will have suffered at least one attack by their third birthday [1–3].
  115. [115]
    Etiology of Acute Otitis Media in Children Less Than 5 Years of Age
    Spn and nontypable Haemophilus influenzae (NTHi) have historically been the leading causes of AOM, with the former generally believed to be more associated ...
  116. [116]
    Noise-Induced Hearing Loss: Overview and Future Prospects ... - NIH
    May 21, 2025 · Additionally, chronic exposure to noise levels of 85 dB or higher for 8 h per day can cause permanent hearing loss [19]. Although hearing ...
  117. [117]
    Noise-induced loss of sensory hair cells is mediated by ROS/AMPKα ...
    Dec 14, 2019 · A main causative factor in noise-induced hearing loss (NIHL) is oxidative stress, inflicting damage on sensory hair cells [1]. In fact, the ...
  118. [118]
    Impact of Aging on the Auditory System and Related Cognitive ...
    Mar 5, 2018 · Age-related hearing loss (ARHL), presbycusis, is a chronic health condition that affects approximately one-third of the world's population.
  119. [119]
    Auditory Neuropathy Spectrum Disorders: From Diagnosis to ...
    Typical auditory patterns in ANSD include the preservation of OAEs and CM and absent or altered neural waves of the ABRs [41] by loss of neural response ...
  120. [120]
    Auditory synaptopathy, auditory neuropathy, and cochlear implantation
    Auditory neuropathy spectrum disorder (ANSD) is characterized by dysfunctional transmission of sound from the cochlea to the brain due to defective synaptic ...
  121. [121]
    Cortical deafness following bilateral temporal lobe stroke - PubMed
    Cortical deafness is an extremely rare clinical manifestation that originates mainly from bilateral cortical lesions in the primary auditory cortex.
  122. [122]
    The Neural Mechanisms of Tinnitus: A Perspective From Functional ...
    Tinnitus is defined as a phantom auditory perception without external sound stimulation. Tinnitus is common in otolaryngology, with a prevalence rate of 10–15 ...
  123. [123]
    Hyperacusis Diagnosis and Management in the United States
    Nov 2, 2023 · Hyperacusis is relatively common, with prevalence estimates ranging from 9% to 15% of adults (Andersson et al., 2002; Fabijanska et al., 1999) ...
  124. [124]
    Audiometric Characteristics of Hyperacusis Patients - PMC
    May 15, 2015 · In the literature, it has been suggested that LDLs below 100 dB HL might indicate hyperacusis (16), or LDLs below 90 dB HL at least at two ...