
Directional sound

Directional sound refers to the controlled emission and propagation of sound in a specific direction, enabling focused audio delivery unlike the omnidirectional spread of traditional sources, and is achieved through principles such as wavefront shaping, phase manipulation, and nonlinear acoustic interactions. This technology leverages the relationship between emitter size, sound wavelength, and frequency, where higher frequencies naturally exhibit greater directivity due to shorter wavelengths relative to the source dimensions. In human hearing, directional sound is also fundamental to localization, where the auditory system uses interaural time differences (ITDs) for low-frequency cues and interaural level differences (ILDs) for high-frequency cues to determine a sound's azimuth and elevation. Key principles underlying directional sound include the Huygens-Fresnel principle, which explains how monopolar sources radiate isotropically while planar or arrayed sources produce directional beams, and nonlinear self-demodulation in parametric arrays, where ultrasonic carriers generate audible sound via air nonlinearity at a virtual focus point. Parametric loudspeakers, for instance, employ two close ultrasonic frequencies (e.g., 40 kHz carriers differing by an audible frequency) to create a highly directional audible beam through self-demodulation, offering narrow beamwidths as low as 10-20 degrees and ranges up to several tens of meters, though limited by conversion efficiency (typically <1%) and weak low-frequency reproduction. Acoustic phased arrays, adapted from radar technology, use electronic phase shifts across elements to steer beams dynamically, achieving precise control over direction and focus for frequencies from tens of Hz to MHz. Notable methods for realizing directional sound encompass conventional emitters like sound domes and horn loudspeakers, which enhance directivity through geometric focusing (up to 14 dB isolation across 150 Hz–20 kHz), and advanced artificial structures such as phononic crystals and acoustic metamaterials that manipulate wave propagation via local resonances or bandgap effects for subwavelength control. Emerging approaches integrate active control systems and machine learning to optimize directivity, addressing challenges like thermoviscous losses and broadband operation. These technologies enable applications in targeted audio delivery for privacy in public spaces, underwater communication with reduced multipath interference, ultrasonic imaging for enhanced resolution, and immersive audio environments. Historically, foundational work traces to parametric arrays proposed by Westervelt in 1963, evolving with interdisciplinary advances in materials and computation.

Principles and Theory

Acoustic Fundamentals

Directional sound refers to the projection of audio signals confined to narrow beams or zones, thereby reducing spatial spillover in comparison to omnidirectional sources that radiate equally in all directions. Sound waves are longitudinal waves consisting of alternating compressions and rarefactions of the medium, typically air, where particle motion occurs parallel to the direction of wave propagation. These waves' directionality is influenced by diffraction, which causes bending around obstacles, interference from superposition of waves leading to constructive or destructive patterns, and reflection off surfaces that alters propagation paths. The directivity of a sound source is quantified through its radiation pattern, commonly visualized using polar plots that depict intensity variation with angle. The directivity index at a specific angle θ, denoted D(θ), is mathematically defined as
D(\theta) = 10 \log_{10} \left( \frac{I(\theta)}{I_{\text{avg}}} \right),
where I(\theta) represents the acoustic intensity at angle θ relative to the source's axis, and I_{\text{avg}} is the intensity averaged over a full sphere. This metric highlights how effectively a source concentrates energy in preferred directions, with higher values indicating narrower beams; for an ideal omnidirectional source, D(θ) = 0 dB.
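To make the definition concrete, the following Python sketch (a minimal illustration, with a hypothetical axisymmetric lobe pattern standing in for a measured one) numerically evaluates D(θ) by averaging intensity over the sphere:

import numpy as np

# Directivity index D(theta) for a sampled, axisymmetric radiation pattern.
# For an axisymmetric source, the spherical average reduces to
# I_avg = (1/2) * integral of I(theta) sin(theta) d(theta).
theta = np.linspace(0, np.pi, 1000)       # polar angle, radians
I = np.cos(theta / 2) ** 4                # hypothetical main-lobe pattern

dtheta = theta[1] - theta[0]
I_avg = 0.5 * np.sum(I * np.sin(theta)) * dtheta
D = 10 * np.log10(I / I_avg)              # directivity index in dB
print(f"On-axis DI: {D[0]:.1f} dB")       # ~4.8 dB for this pattern

# Sanity check: a constant (omnidirectional) pattern yields D = 0 dB.
I_omni = np.ones_like(theta)
I_omni_avg = 0.5 * np.sum(I_omni * np.sin(theta)) * dtheta
print(f"Omni DI: {10 * np.log10(1.0 / I_omni_avg):.1f} dB")

An omnidirectional pattern integrates to I_avg = I, reproducing the D(θ) = 0 dB baseline noted above.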
In parametric acoustic arrays, directionality arises from modulating an audible signal onto a high-frequency ultrasonic carrier, exploiting the medium's nonlinearity for self-demodulation along the beam path to generate a collimated audible output. The resulting audible beam forms at the difference frequency, with the beat wavenumber β given by β = k₁ - k₂, where k₁ and k₂ are the wavenumbers of the two primary ultrasonic components (k = 2πf/c, with f as frequency and c as the speed of sound). This nonlinear interaction confines the low-frequency sound to a narrow virtual array length, enhancing directivity beyond what linear transducers achieve at audible frequencies. Beamforming provides another fundamental approach to directional sound via phased arrays, where constructive interference steers energy toward a target direction. For a uniform linear array of N elements spaced by distance d, the array factor AF(θ) is expressed as
\text{AF}(\theta) = \sum_{m=1}^{N} e^{j (k d m \sin \theta + \phi_m)},
with k as the wavenumber (2π/λ), θ the angle from the array broadside, and φ_m the progressive phase shift for the m-th element to control steering. This summation yields a directional lobe whose width narrows with increasing N and optimized d (typically λ/2 to avoid grating lobes), enabling precise control over the radiation pattern.
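As an illustration of this expression, the short Python sketch below (illustrative parameters: 16 elements at half-wavelength spacing, steered to 30°) evaluates AF(θ) and verifies that the chosen progressive phase places the main lobe at the target angle:

import numpy as np

c, f = 343.0, 4000.0            # speed of sound (m/s) and frequency (Hz)
lam = c / f
k = 2 * np.pi / lam             # wavenumber
N, d = 16, lam / 2              # element count and lambda/2 spacing

theta0 = np.deg2rad(30.0)       # steering target (from broadside)
theta = np.deg2rad(np.linspace(-90, 90, 721))
m = np.arange(N)

phi = -k * d * m * np.sin(theta0)   # phase shifts cancel at theta = theta0
AF = np.abs(np.exp(1j * (k * d * np.outer(np.sin(theta), m) + phi)).sum(axis=1))
print(f"Main lobe at {np.rad2deg(theta[np.argmax(AF)]):.1f} deg")

Setting φ_m = -k d m sin θ₀ makes every term in the sum equal to unity at θ = θ₀, so the lobe peaks exactly at the steering angle; doubling N roughly halves the lobe width.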

Psychoacoustic Mechanisms

The human auditory system localizes sound sources primarily through binaural and monaural cues that exploit the acoustics of the head, torso, and pinnae to encode spatial information. Binaural cues arise from differences in the signals arriving at the two ears, while monaural cues stem from spectral alterations imposed by the listener's anatomy. These mechanisms enable precise directional hearing, particularly in the horizontal plane, and form the perceptual basis for technologies that manipulate sound directionality. Seminal work by Lord Rayleigh established the duplex theory, positing that interaural disparities in time and intensity underpin horizontal localization, with distinct frequency dependencies. Interaural time differences (ITD) provide the primary cue for localizing low-frequency sounds below approximately 1.5 kHz, where the wavelength exceeds the head's dimensions, allowing arrival-time delays between the ears to be resolved. The maximum ITD, occurring for sounds at the interaural axis (±90° azimuth), is about 0.6 ms, corresponding to the time for sound to travel the extra path around the head (roughly 21 cm) at 343 m/s. This cue is encoded in the auditory brainstem via coincidence detectors in the medial superior olive, enabling discrimination thresholds as fine as 10-20 μs for low-frequency stimuli. For higher frequencies, ITD sensitivity diminishes due to phase ambiguities, shifting reliance to other cues. Interaural level differences (ILD), or intensity disparities, become dominant for high-frequency sounds above 3 kHz, where the head acts as an acoustic baffle, attenuating the signal reaching the far ear. ILDs can reach up to 20 dB for sources at extreme azimuths, with the near ear receiving stronger stimulation due to baffle and shadowing effects. This cue is particularly effective for broadband or high-frequency noise, complementing ITD in the duplex framework, though its utility decreases at low frequencies where diffraction around the head minimizes shadowing. The auditory system integrates ITD and ILD for robust azimuthal localization, with neural processing in the superior olivary complex combining these inputs. Spectral cues, mediated by the pinnae and head, introduce direction-dependent filtering that shapes the sound's spectrum at each ear, forming the basis of head-related transfer functions (HRTFs). The pinna's convolutions create notches and peaks, such as antiresonances around 5-8 kHz for frontal sources, that disambiguate elevation and fine front-back position, resolving front-back confusions inherent in interaural cues alone. HRTFs vary individually due to anatomical differences but are broadly characterized by directional patterns, enabling localization of static sources. Experimental synthesis of HRTFs over headphones has demonstrated that these cues alone can support accurate vertical-plane localization when interaural disparities are absent. The precedence effect enhances localization in reverberant environments by prioritizing the direct sound wavefront over subsequent echoes, suppressing their influence on perceived direction if they arrive within 5-10 ms. This phenomenon, first systematically studied with paired clicks, results in a single, stable image aligned with the lead signal, preventing echo-induced blurring. Neural mechanisms likely involve adaptation or inhibition in the auditory brainstem and midbrain, with buildup and decay times varying by stimulus type, shorter for transients like speech onsets. The effect is more pronounced for ITD-driven lateralization than ILD, aiding robust perception in everyday acoustic spaces. Human sound localization accuracy in the horizontal plane is highest at the median plane (0° azimuth), with mean errors under 1° for broadband stimuli, degrading to 10-15° near ±90° due to reduced cue salience and cone-of-confusion ambiguities.
Minimum audible angles (MAAs), measuring just-noticeable directional changes, average 1-2° centrally but expand to 10-20° laterally, influenced by frequency content; pure tones yield poorer performance than broadband noise. These limits reflect the integration of ITD, ILD, and subtle spectral variations, with overall acuity supporting adaptive behaviors like orienting to threats.
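The ~0.6 ms maximum ITD cited above follows from simple geometry; a brief Python sketch using the classic spherical-head (Woodworth) approximation ITD ≈ (a/c)(θ + sin θ), with an assumed head radius of 8.75 cm, reproduces it:

import numpy as np

a, c = 0.0875, 343.0            # assumed head radius (m), speed of sound (m/s)
for az_deg in (0, 30, 60, 90):
    az = np.deg2rad(az_deg)
    itd_us = (a / c) * (az + np.sin(az)) * 1e6   # microseconds
    print(f"azimuth {az_deg:2d} deg -> ITD ~ {itd_us:4.0f} us")
# 90 deg gives ~656 us (~0.6-0.7 ms), matching the maximum ITD cited above;
# compare with the 10-20 us discrimination thresholds for central sources.

Real heads deviate from the sphere, which is one reason individualized HRTFs outperform generic ones.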

Technologies

Parametric Acoustic Arrays

Parametric acoustic arrays generate highly directional audible sound beams by exploiting the nonlinear propagation characteristics of air. In this process, an array of transducers emits two closely spaced ultrasonic primary frequencies, typically around 40 kHz and a slightly offset frequency such as 42 kHz, which interact through acoustic nonlinearity described by the Khokhlov–Zabolotskaya–Kuznetsov (KZK) equation. As these ultrasonic waves propagate, absorption and self-demodulation in the air medium produce an audible difference frequency (e.g., 2 kHz) that forms a virtual end-fire array, enabling the audible sound to emerge with extreme directivity independent of the source aperture size. This technology offers significant advantages in beam control and distance compared to conventional loudspeakers. The resulting audible beam exhibits narrow divergence angles, typically 10–20 degrees, allowing precise targeting while minimizing off-axis sound leakage and reverberation in enclosed environments. Additionally, the low-frequency audible component experiences less atmospheric absorption than the ultrasonic carriers, enabling long-range delivery up to 100 meters without substantial spreading loss in ideal conditions. A prominent example is HyperSonic Sound (HSS), developed by inventor Woody Norris in the early 2000s through American Technology Corporation (now part of Genasys). HSS employs arrays of piezoelectric transducers to modulate ultrasonic carriers above 40 kHz, demodulating them into focused audible beams for applications like targeted audio displays. Implementation typically involves transducer arrays operating at 40–50 kHz with Class-D amplifiers and digital signal processing for modulation schemes such as single-sideband amplitude modulation (SSB-AM) to optimize audible output. However, audible efficiency remains low, typically less than 1% for converting ultrasonic input to perceptible sound levels (e.g., 70–100 dB SPL at short ranges), due to energy losses in the nonlinear conversion process. Despite these benefits, parametric acoustic arrays face limitations from the inherent physics of ultrasonic propagation. High-frequency carriers suffer rapid absorption in air (approximately 0.15 dB/m at 40 kHz), restricting the virtual array length to 4–7 meters and overall range in humid or obstructed conditions. Furthermore, nonlinear interactions can introduce audible artifacts, including harmonic distortions (e.g., second-order components at 4 kHz, 40 dB below the primaries), which degrade fidelity unless mitigated by advanced preprocessing techniques.
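To illustrate the SSB-AM scheme mentioned above, the following Python sketch (illustrative values: a 2 kHz tone on a 40 kHz carrier at a 192 kHz sample rate; not any vendor's actual driver chain) builds an upper-sideband signal with the Hilbert transform, so the two primaries sit at 40 and 42 kHz as in the example:

import numpy as np
from scipy.signal import hilbert

fs = 192_000                          # sample rate, high enough for 40 kHz
t = np.arange(0, 0.05, 1 / fs)        # 50 ms of signal
fc, fa = 40_000.0, 2_000.0            # ultrasonic carrier, audible tone

audio = np.sin(2 * np.pi * fa * t)
analytic = hilbert(audio)             # audio + j * its Hilbert transform
ssb = np.real(analytic * np.exp(1j * 2 * np.pi * fc * t))   # upper sideband

spec = np.abs(np.fft.rfft(ssb * np.hanning(len(ssb))))
freqs = np.fft.rfftfreq(len(ssb), 1 / fs)
print(f"Spectral peak at {freqs[np.argmax(spec)] / 1e3:.1f} kHz")   # ~42.0

In air, nonlinearity then demodulates the envelope, recovering the 2 kHz difference tone along the beam; suppressing the lower sideband in this way is one of the preprocessing strategies used to reduce the harmonic artifacts described above.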

Phased Speaker Arrays

Phased speaker arrays utilize multiple conventional loudspeakers arranged in a specific geometry, with electronic delays applied to each driver to steer and focus audible sound beams through constructive and destructive interference. This approach, known as delay-and-sum beamforming, involves digital signal processing that introduces time delays to the input signal for each speaker m according to \tau_m = \frac{m\, d \sin \theta}{c}, where d is the inter-speaker spacing, \theta is the desired steering angle, and c is the speed of sound; these delays ensure that signals from all speakers arrive in phase at the target direction, amplifying the sound there while canceling it elsewhere. This technique enables precise control over the directionality of audible frequencies, typically from 20 Hz to 20 kHz, without relying on nonlinear acoustic effects. A key application of phased speaker arrays is in wave field synthesis (WFS), where arrays of secondary sources recreate complex virtual sound fields based on Huygens' principle, which posits that every point on a wavefront acts as a source of secondary spherical wavelets. In WFS, the reproduced pressure field p(\mathbf{r}) at a point \mathbf{r} is approximated by the superposition from array elements at positions \mathbf{r}_m: p(\mathbf{r}) \approx \sum_m A_m \frac{e^{j(\omega t - k |\mathbf{r} - \mathbf{r}_m|)}}{|\mathbf{r} - \mathbf{r}_m|}, where A_m is the driving amplitude for speaker m, \omega is the angular frequency, and k = \omega / c is the wavenumber; this formulation allows for the synthesis of virtual sources at arbitrary positions, providing immersive 3D audio with accurate localization cues. The concept was pioneered in Berkhout's 1988 work on holographic acoustic control, which laid the theoretical foundation for using loudspeaker arrays to manipulate wavefronts holophonically. Modern implementations, such as those in large-scale installations, demonstrate WFS's ability to generate multiple virtual sources simultaneously across a wide listening area. Commercial examples include the HOLOPLOT X1 Matrix Array, introduced in 2018, which employs thousands of small drivers in a modular matrix configuration to achieve 3D audio beamforming, enabling up to 12 parallel steerable beams per module for applications like concert venues and immersive exhibits; this system delivers targeted sound zones with minimal spillover, as evidenced by its deployment in the Sphere venue in Las Vegas, where it powers a 167,000-driver array for uniform coverage across 18,000 seats. Unlike parametric arrays that rely on ultrasonic demodulation, phased speaker arrays like HOLOPLOT use direct audible reproduction with DSP-driven beamforming, offering broader bandwidth and multi-beam flexibility but requiring dense driver spacing to maintain directivity. Despite these advances, phased speaker arrays face significant challenges, including high computational demands for real-time delay calculations and filtering across hundreds or thousands of channels, often necessitating powerful hardware to process signals without latency. Spatial aliasing also limits performance, occurring when frequencies exceed the spatial Nyquist limit f_N = \frac{c}{2d}, leading to unwanted grating lobes and distorted beams; for dense spacings of 1–2 cm this caps effective beamforming at roughly 8.6–17 kHz without additional mitigation like shading or irregular geometries, and coarser spacings alias proportionally lower. Common configurations include linear arrays for azimuthal steering in one plane, circular arrays for control around a point, and 2D planar or matrix arrays for full 3D shaping of sound fields, with the latter providing the most versatile coverage for immersive environments. These setups leverage scalable modular designs to adapt to venue sizes, from small exhibits to large auditoriums.
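A brief Python sketch (spacing and angle are illustrative) computes the per-driver delays τ_m from the delay-and-sum formula above and the corresponding spatial Nyquist limit:

import numpy as np

c = 343.0                        # speed of sound, m/s
d, N = 0.02, 32                  # 2 cm spacing, 32 drivers (illustrative)
theta = np.deg2rad(25.0)         # desired steering angle

m = np.arange(N)
tau = m * d * np.sin(theta) / c  # tau_m = m d sin(theta) / c
print("first delays (us):", np.round(tau[:4] * 1e6, 1))

f_N = c / (2 * d)                # spatial Nyquist limit
print(f"aliasing-free up to ~{f_N / 1e3:.1f} kHz at d = {d * 100:.0f} cm")
# d = 1 cm -> ~17.2 kHz; d = 2 cm -> ~8.6 kHz; d = 15 cm (typical WFS) -> ~1.1 kHz

The last line makes the trade-off explicit: halving the spacing doubles the aliasing-free bandwidth but, for a fixed aperture, doubles the driver count and the processing load.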

Directional Microphones in Hearing Aids

Directional microphones in hearing aids represent a major advancement in assistive technology, designed to enhance speech intelligibility in noisy environments by focusing on sounds originating from the front while attenuating those from other directions. Unlike omnidirectional microphones, which capture sound equally from all angles and provide no inherent signal-to-noise ratio (SNR) improvement over unaided hearing in reverberant settings, directional microphones employ spatial filtering to prioritize frontal signals. A common implementation uses dual ports or twin microphones spaced approximately 4–12 mm apart to create a cardioid polar pattern, where rearward sensitivity drops by more than 6 dB through phase cancellation of off-axis arrivals at the microphone outputs. This phase-based subtraction attenuates sound from behind the listener, yielding a directivity index that increases with frequency, typically ranging from 2 dB at 500 Hz to 5.5 dB at 4000 Hz in modern devices. Adaptive beamforming further refines this capability by dynamically adjusting microphone weights to null specific noise sources, such as those from the sides or rear. The Griffiths-Jim algorithm, a constrained adaptive beamformer, exemplifies this approach, using two microphones to form an output signal y = w_1 x_1 + w_2 x_2, where x_1 and x_2 are the microphone inputs and the weights \mathbf{w} = [w_1, w_2] satisfy the constraint w_1 + w_2 = 1 to preserve frontal signals while an adaptive filter cancels correlated noise via reference estimates. This method excels in low-reverberation environments with single jammers, adapting in real time to maintain target preservation and achieve modest SNR gains of a few dB, though performance degrades with misalignment or multiple noise sources. However, directional processing introduces frequency response trade-offs due to microphone spacing, which limits low-frequency directivity below approximately 500 Hz, where wavelengths exceed the array dimensions (e.g., spacing of 1–2 cm versus a 68 cm wavelength at 500 Hz), resulting in broader beamwidths and reduced sensitivity akin to omnidirectional behavior. To compensate, many hearing aids fall back to omnidirectional modes at lower frequencies or via adaptive switching, ensuring audibility for low-frequency sounds while minimizing spatial distortion effects. These systems have been implemented in commercial devices since the 1990s; for instance, Phonak's AudioZoom introduced twin-microphone directional technology, improving SNR by 3-4 dB in moderate noise, while Widex's Inteo introduced a 15-channel fully adaptive directional system, yielding SNR enhancements of 3-5 dB in realistic noisy settings like restaurants. As of 2024, advancements like Phonak's Infinio incorporate dedicated neural-network processing for up to 10 dB SNR improvement in complex noise. In bilateral fittings, binaural processing synchronizes the two hearing aids to preserve natural interaural time differences (ITD) and interaural level differences (ILD), critical for localization and spatial release from masking. High-rate wireless exchange of audio signals enables joint beamforming across devices, balancing SNR improvements (e.g., 1–2.5 dB over independent bilateral processing) with cue preservation by constraining processing at low frequencies (<2 kHz) or using hybrid modes that limit distortions in ITD/ILD. This approach enhances overall speech understanding in diffuse noise without compromising the perceptual benefits of binaural hearing.
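The cardioid behavior described above can be reproduced in a few lines of Python; this sketch (the 10 mm port spacing, 2 kHz test frequency, and function name are illustrative) implements the delay-and-subtract principle of a two-port directional microphone:

import numpy as np

c, d = 343.0, 0.01               # speed of sound (m/s), 10 mm port spacing
tau = d / c                      # internal delay matched to the port spacing

def response(f, theta):
    # Front port minus internally delayed rear port, for a plane wave from
    # angle theta (0 = front). External inter-port delay is (d/c) cos(theta).
    tau_ext = (d / c) * np.cos(theta)
    w = 2 * np.pi * f
    return np.abs(1 - np.exp(-1j * w * (tau + tau_ext)))

for deg in (0, 90, 180):
    print(f"{deg:3d} deg: {response(2000.0, np.deg2rad(deg)):.3f}")
# Output is largest at 0 deg and exactly zero at 180 deg, where the external
# delay cancels the internal one: the cardioid null behind the listener.

The same structure also exposes the low-frequency penalty noted above: the |1 - e^{-jω(τ+τ_ext)}| term shrinks as frequency falls, which is why devices revert to omnidirectional behavior below a few hundred hertz.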

History and Development

Early Theoretical Foundations

The foundational concepts of directional sound were first systematically explored in the late 19th and early 20th centuries through studies on auditory localization. In his seminal 1907 paper, Lord Rayleigh proposed the duplex theory of sound directionality, positing that listeners discern sound sources in the horizontal plane primarily via two interaural cues: the interaural time difference (ITD), which arises from the slight delay in sound arrival between the two ears for low-frequency tones, and the interaural level difference (ILD), stemming from head shadowing that attenuates higher-frequency sounds more at the far ear than the near one. This theory provided the initial theoretical framework for understanding binaural processing, emphasizing how phase and intensity disparities enable azimuthal localization without relying on spectral cues. Rayleigh's work, grounded in psychophysical experiments with simple tones, highlighted the frequency-dependent nature of these mechanisms, laying the groundwork for later acoustic engineering applications. During the 1930s, acoustic research advanced these perceptual insights through experimental investigations at Bell Laboratories, where scientists constructed artificial head models to simulate hearing and study spatial audio cues. Researchers developed dummy-head microphones to capture and reproduce binaural signals, demonstrating how ITD and ILD could create illusory spatial positioning over telephone lines. These experiments not only validated Rayleigh's cues empirically but also explored their limits in controlled listening tests, revealing challenges such as localization ambiguities in reverberant environments. Concurrently, early beamforming techniques emerged in sonar applications, with hydrophone arrays deployed for submarine detection using phased signal delays to form directive beams and enhance underwater detection. Such arrays, operational by the early 1920s in systems like ASDIC, represented the first practical implementations of acoustic beamforming through linear superposition, influencing subsequent auditory technologies. Post-World War II developments shifted focus toward nonlinear acoustics, enabling more sophisticated directional sound generation. P.J. Westervelt's 1963 analysis derived the parametric array equation, describing how nonlinear wave interactions in fluids generate difference-frequency sound along the beam, forming an end-fire radiator with minimal sidelobes. Building on this, H.O. Berktay in 1965 theorized parametric acoustic arrays for underwater transmitting applications, leveraging the medium's nonlinearity to produce virtual sources that emit low-frequency beams from high-frequency carrier interactions, achieving superior directivity without physical array scaling. By the 1970s, these concepts were adapted to air acoustics, with experiments reported in 1973 confirming diffraction-limited parametric arrays through modulated ultrasonic carriers, overcoming absorption challenges to demonstrate feasible aerial directional transmission.

Key Technological Milestones

In 2002, Holosonics Research Labs commercialized the Audio Spotlight, the first parametric speaker system designed for consumer use, revolutionizing targeted audio delivery by modulating ultrasonic carriers to generate audible sound in a focused beam up to 100 feet long. Founded by MIT Media Lab alumnus Joe Pompei, the company addressed longstanding challenges in directing sound without physical waveguides, enabling applications like museum exhibits and personal audio zones while minimizing ambient spillover. Elwood "Woody" Norris developed HyperSonic Sound (HSS) technology in the late 1990s under American Technology Corporation, utilizing ultrasonic modulation to produce highly directional audio for precise targeting, such as in advertising displays and point-of-interest announcements. Norris, a prolific inventor, received the 2005 Lemelson-MIT Prize for this innovation, which improved upon earlier concepts by enhancing efficiency and reducing distortion for commercial viability, with first products available around 2002. The 2010s brought widespread integration of directional sound into virtual reality systems, notably through Oculus (now Meta)'s adoption of head-related transfer function (HRTF)-based rendering in headsets like the Rift, starting with SDK updates around 2017 that enabled realistic spatial audio simulation by accounting for head and ear acoustics. This approach allowed sounds to appear to emanate from specific 3D locations in virtual space, significantly boosting user immersion and presence in virtual experiences. In the 2020s, AI-driven enhancements to beamforming have transformed adaptive audio delivery in consumer devices, as seen in Amazon's Echo Frames smart glasses released in 2023, which employ speaker beamforming algorithms to dynamically steer audio output toward the wearer while suppressing leakage, improving privacy and clarity in noisy environments. These systems process environmental acoustics in real time to adjust beam patterns, marking a shift toward intelligent, user-centric directional sound. Recent advancements as of 2025 include further integration of adaptive spatial audio in wearables like Apple's AirPods Pro (updated 2022) for personalized spatial audio.

Applications

Public Address Systems

In public address systems, directional sound technologies enable targeted audio delivery in large-scale environments, such as museums, transportation hubs, and retail spaces, by projecting sound beams that minimize spillover and noise pollution. These systems often leverage acoustic arrays to create focused auditory zones, allowing for precise messaging without the inefficiencies of traditional speakers. In museums and exhibits, directional speakers provide exhibit-specific audio narration directly to visitors standing in front of displays, reducing noise bleed and preserving a quiet atmosphere for surrounding areas. For example, Audfly's directional speaker systems, developed since the company's founding in 2015, use ultrasonic beaming to deliver localized sound, enabling immersive, headphone-free experiences at individual artifacts while avoiding interference with adjacent exhibits. This approach enhances visitor engagement by creating independent audio zones tailored to each installation's content needs. Transportation hubs like airports and train stations employ directional sound for announcements to direct clear messages to specific passenger areas, such as gates, platforms, or security lines, thereby improving speech intelligibility amid high ambient noise levels. Installations using systems like Brown Innovations' SonicBeam speakers at TSA checkpoints deliver targeted instructions to queued travelers, reducing the need for repeated announcements and enhancing overall communication efficiency without broadcasting broadly across the facility. Such applications focus sound within defined zones, allowing for better comprehension in acoustically challenging spaces compared to conventional public address setups. In retail and advertising contexts, "sound spotlighting" techniques project promotional audio to particular product zones or displays, capturing customer attention without overwhelming the entire environment. Sennheiser's AudioBeam technology, introduced in public installations in the 2000s, exemplifies this by beaming directional audio for targeted promotions, such as highlighting special offers in high-traffic aisles. Similarly, systems from companies like Holosonics enable precise audio messaging at point-of-sale areas, potentially increasing consumer interaction with advertised items by focusing sound like a spotlight. The primary benefits of directional sound in public address systems include greater energy efficiency, as focused beams require lower power output to achieve audible levels in targeted areas than omnidirectional alternatives that must cover broader spaces. Additionally, these systems enhance privacy by confining audio to 5-10 meter zones, preventing unintended listening and reducing overall noise pollution in shared public settings. A notable installation involves the integration of directional audio at the RAF Museum in the UK for "The Scramble Experience," an interactive exhibit where Audio Spotlight speakers provided guided narration and ambient sounds directly to visitors at specific interactive stations, eliminating the need for headphones and minimizing disruptions in the multi-user space. In September 2025, Audfly launched new ultrasonic modules for smart cities and digital signage, integrating directional sound projection with microphone arrays to enhance communication clarity in urban public spaces.

Immersive Audio Environments

Immersive audio environments utilize directional sound technologies to simulate realistic spatial acoustics in entertainment, gaming, and virtual reality, allowing users to perceive sounds as originating from specific locations in three-dimensional space. These systems enhance engagement by providing cues for depth, elevation, and azimuth, fostering a sense of presence without physical speaker arrays surrounding the listener. Binaural rendering employs head-related transfer function (HRTF) convolution to deliver 3D audio over headphones, modeling how sound waves interact with the head, torso, and pinnae to achieve accurate localization. This technique involves convolving audio signals with HRTF filters derived from averaged or individualized measurements, enabling virtual sound sources to appear positioned around the listener in games and simulations. For instance, Epic Games implemented spatial audio enhancements in Fortnite during Season 6 in 2018, incorporating HRTF-based processing for footstep and glider sounds to improve directional cues and tactical awareness in multiplayer battles. Ambisonics and object-based audio formats further advance immersive experiences by encoding sound scenes and discrete elements for flexible reproduction. Ambisonics captures full-spherical audio as a set of channels representing the sound field, which can be decoded for various playback systems, while object-based approaches treat sounds as movable entities with metadata for position and trajectory. Dolby Atmos exemplifies this in cinemas, supporting up to 118 audio objects alongside a 9.1 channel bed, rendered in real-time across overhead and surround speakers to track sounds dynamically in 3D space, creating enveloping effects like rain falling from above or vehicles circling the audience. Wave field synthesis (WFS) enables holographic soundscapes through large loudspeaker arrays, reconstructing wavefronts to position virtual sources accurately over extended listening areas. Based on Huygens' principle, WFS drives hundreds of closely spaced speakers (typically 15-20 cm apart) to synthesize plane waves or point sources, surpassing the limitations of channel-based stereo. IRCAM's projects in the early 2000s, such as the CARROUSO initiative (2001-2003), developed WFS prototypes with multi-channel equalization and virtual panning for interactive installations, demonstrating precise spatialization in concert halls and art exhibits. In virtual reality, head-tracked positional audio integrates user movement data to dynamically adjust sound rendering, enhancing immersion in systems like Meta Quest. By monitoring head orientation via inertial sensors, these platforms apply real-time HRTF spatialization to anchor audio to the virtual scene, simulating how sounds shift relative to the listener's orientation and position. This approach, supported by the Meta XR Audio SDK, processes ambisonic or object-based inputs, providing directional cues that boost perceived realism in exploratory experiences. These directional techniques collectively improve sound localization accuracy in controlled immersive setups, often achieving errors below 5° in the horizontal plane for frontal sources, compared to human auditory limits of about 1° frontally and 5° rearward. Such precision supports applications from gaming to cinematic production, where subtle directional fidelity heightens emotional and navigational impact.
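A minimal Python sketch of the HRTF convolution step (the two-tap impulse responses are toy stand-ins encoding only an interaural delay and level difference, not measured HRIRs) shows the rendering pipeline:

import numpy as np
from scipy.signal import fftconvolve

fs = 48_000
t = np.arange(0, 0.5, 1 / fs)
mono = np.sin(2 * np.pi * 440 * t)            # mono source signal

itd = int(0.0006 * fs)                        # ~0.6 ms interaural delay
hrir_right = np.zeros(64); hrir_right[0] = 1.0
hrir_left = np.zeros(64); hrir_left[itd] = 0.5   # delayed and ~6 dB quieter

left = fftconvolve(mono, hrir_left)[: len(mono)]
right = fftconvolve(mono, hrir_right)[: len(mono)]
stereo = np.stack([left, right], axis=1)      # headphone-ready binaural pair
print(stereo.shape)                           # (24000, 2)

A production renderer substitutes measured or individualized HRIR pairs selected by source azimuth and elevation, updating them as head-tracking data arrives.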

Assistive Listening Devices

Assistive listening devices leverage directional sound technologies to enhance speech clarity for individuals with hearing impairments, particularly in noisy environments. These devices include hearing aids equipped with directional microphones that focus on sounds from specific directions, reducing background noise and improving the signal-to-noise ratio (SNR). Fixed directional microphones typically employ cardioid or supercardioid patterns to prioritize front-facing speech, while adaptive versions dynamically adjust their sensitivity based on the location of the desired talker. In hearing aids, directional microphones can boost front-facing speech by improving the SNR by 6-10 dB compared to omnidirectional modes, facilitating better speech recognition in controlled settings like classrooms or conversations. For instance, adaptive systems in modern hearing aids analyze incoming sounds in real time and steer directivity toward the talker, achieving up to 7.6 dB SNR gains in multi-noise scenarios. These enhancements are particularly beneficial for school-age children with hearing loss, where moderate evidence from controlled trials shows large effect sizes (r = 0.56-0.67) in speech recognition at low SNRs. Remote microphone systems represent another key advancement, using wireless directional microphones to capture speech at the source and transmit it directly to the user's hearing aid or implant. The Roger system by Phonak, introduced in 2013 as a successor to earlier FM technologies, exemplifies this approach with its 2.4 GHz digital transmission, compatible with various hearing devices for use in classrooms or group settings. These systems overcome distance and noise barriers by placing the microphone near the talker, significantly improving speech understanding, often by 10-15 dB SNR in reverberant environments like schools. Integration of directional sound processing with cochlear implants further extends these benefits through beamforming algorithms in sound processors. Beamforming uses microphone arrays to create focused sensitivity lobes that suppress noise from non-frontal directions, enhancing speech recognition in crowded, noisy settings by 20-30 percentage points compared to omnidirectional processing. For example, beamformers in recent sound processors improve speech reception thresholds by 2.6-2.9 dB across noise types, with greater gains (up to 6.6 dB SNR) in multi-talker babble simulating crowds. Studies on bimodal users confirm these processors reduce listening effort and boost intelligibility in real-world noise, such as restaurants, by prioritizing the target signal. Looking to future trends in the 2020s, AI-driven directivity is transforming assistive devices by enabling real-time adaptation to user behavior and environments. The Oticon More, launched in 2021, incorporates a Deep Neural Network that processes sounds 500 times per second to balance speech and noise, laying groundwork for more advanced models. Subsequent devices like the Oticon Intent (2024) integrate 4D sensors to detect head and body movements, dynamically adjusting directivity, such as widening the focus during turns in conversation, to maintain speech clarity without manual intervention. In October 2025, Oticon announced an upgraded miniBTE R style for the Intent, available to professionals starting November 2025, offering more connection options and styles for active users. This AI evolution promises personalized noise suppression, with ongoing research emphasizing seamless integration for active lifestyles. Performance metrics for these assistive technologies are standardized to ensure reliability and comparability.
The ANSI/ASA S3.47-2014 standard specifies methods for evaluating hearing assistance devices, including measurements of gain, frequency response, distortion, and SNR improvements under simulated real-ear conditions. This framework guides manufacturers in optimizing directional features for individual use, focusing on output levels and directionality to support accessibility in diverse settings.
