A vocoder, short for voice coder or voice encoder, is an audio processing system that analyzes the spectral characteristics of a modulator signal—typically human speech—by dividing it into frequency bands, then synthesizes a new sound by applying those characteristics to a carrier signal, such as a synthesizer waveform or another voice, producing a robotic or harmonized vocal effect.[1][2] Developed from the late 1920s by American physicist Homer Dudley at Bell Laboratories and demonstrated in the 1930s, the vocoder was originally engineered to compress speech bandwidth for efficient long-distance telephone transmission, encoding voice signals into slowly varying parameters that could be transmitted over limited-frequency channels.[3][2]

The vocoder's foundational design relied on a source-filter model, employing a bank of bandpass filters to extract amplitude envelopes from the input speech across multiple frequency bands; these envelopes then controlled the amplitudes of corresponding filters applied to the carrier signal during reconstruction.[2] A related demonstration device, the Voder (Voice Operation Demonstrator), was unveiled by Dudley at the 1939 New York World's Fair; it allowed operators to synthesize speech sounds manually, though intelligible output required extensive training.[2] During World War II, the technology was adapted for secure military communications, notably in the U.S.
Army's SIGSALY system, which scrambled voice transmissions for high-level conferences using vocoder-based encryption over transatlantic links.[4]

In the post-war era, vocoder research advanced at institutions such as MIT's Lincoln Laboratory, where developments in the 1950s and 1960s focused on pitch detection, linear predictive coding (LPC), and real-time hardware for narrowband applications in satellite and aircraft communications, achieving low data rates while maintaining speech intelligibility.[3] By the late 1960s, the vocoder transitioned into music production, with early commercial uses by composers such as Bruce Haack and Wendy Carlos, who employed a custom Moog vocoder for the 1971 soundtrack to A Clockwork Orange.[4] Its adoption surged in electronic and popular music during the 1970s, popularized by the German band Kraftwerk on albums like Autobahn (1974) and tracks such as "The Robots" (1978), which showcased its futuristic, metallic timbre.[5] Notable subsequent applications include Herbie Hancock's jazz-funk track "I Thought It Was You" (1978), Afrika Bambaataa's pioneering hip-hop single "Planet Rock" (1982), and extensive use by Daft Punk across their discography, cementing the vocoder as a defining element in genres from synth-pop to electronic dance music.[4]
Technical Foundations
Operating Principles
A vocoder functions as an analysis-synthesis system that encodes the spectral envelope of a modulator signal—typically human speech—by extracting its time-varying amplitude characteristics across frequency bands, then applies these envelopes to a carrier signal, such as broadband noise or a periodic tone, to synthesize an output that retains the modulator's intelligibility while altering its timbre.[3] This process, originally conceptualized by Homer Dudley at Bell Laboratories, enables bandwidth-efficient transmission or creative audio manipulation by transmitting only the essential spectral shape rather than the full waveform.[6]

The core analysis begins with bandpass filtering, which divides the modulator signal m(t) into multiple contiguous frequency bands, commonly 10 to 32 channels spanning the typical speech range of 0 to 4 kHz, to isolate contributions from different formants and spectral components.[6] In each band k, the envelope amplitude E_k(t) is extracted to capture the slowly varying intensity, often via full-wave rectification followed by low-pass smoothing of the filtered output; a fundamental representation of this extraction is

E_k(t) = \left| \int m(\tau) \cdot h_k(t - \tau) \, d\tau \right|,

where h_k(t) denotes the impulse response of the bandpass filter for band k, and the absolute value approximates the instantaneous envelope before further smoothing.[3] These envelopes represent the vocal tract's filtering effect on the glottal source, preserving phonetic information with reduced data.

During synthesis, an exciter signal serving as the carrier—a buzz-like periodic waveform for voiced sounds or hiss-like noise for unvoiced ones—undergoes parallel bandpass filtering matching the analysis bands, with each channel's amplitude modulated by the corresponding extracted envelope E_k(t); the resulting band-limited signals are then summed to reconstruct speech that remains intelligible despite the carrier's dissimilarity to the
original source.[6] This modulation reconstructs the spectral shape of the modulator onto the carrier, mimicking human speech production where the vocal tract shapes glottal excitation. Unlike a talkbox, which relies on direct acoustic coupling from the speaker's mouth to an instrument via a tube to impose formants mechanically, the vocoder performs fully electronic spectral analysis and resynthesis without physical linkage.[2]
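The band-splitting, envelope-extraction, and carrier-modulation steps described above can be sketched in a few lines of Python. This is a minimal illustration, not a production design: the band count, filter orders, log-spaced band edges, and the 25 Hz envelope smoother are illustrative choices consistent with the ranges mentioned in the text, and the noise-burst "speech" stand-in is purely synthetic.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def channel_vocoder(modulator, carrier, fs, n_bands=16, f_lo=100.0, f_hi=4000.0):
    """Toy channel vocoder: impose the modulator's band envelopes on the carrier."""
    # Log-spaced band edges over the speech range (an assumed layout).
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    # Envelope smoother: ~25 Hz low-pass applied to the rectified band signal.
    env_sos = butter(2, 25.0, btype="low", fs=fs, output="sos")
    out = np.zeros(len(carrier))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        m_band = sosfilt(band_sos, modulator)    # analysis filter
        env = sosfilt(env_sos, np.abs(m_band))   # rectify + smooth -> E_k(t)
        c_band = sosfilt(band_sos, carrier)      # matching synthesis filter
        out += env * c_band                      # amplitude-modulate and sum
    return out

# Usage: impose a decaying noise burst's envelopes on a 110 Hz sawtooth.
fs = 16000
t = np.arange(fs) / fs
modulator = np.random.randn(fs) * np.exp(-3 * t)  # stand-in for speech
carrier = 2 * (110 * t % 1.0) - 1                 # buzz-like periodic carrier
robot = channel_vocoder(modulator, carrier, fs)
```

Because the modulator's energy decays over the one-second clip, the synthesized output inherits that decay even though the sawtooth carrier itself has constant amplitude, which is exactly the envelope transfer the text describes.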
Signal Analysis and Synthesis
In the analysis stage of a channel vocoder, the modulator input signal, typically human speech, is passed through a bank of parallel bandpass filters that divide the audio spectrum into multiple frequency bands, often around 10 to 20 channels covering the range of 250 to 3000 Hz.[7] Each filter's output is then processed by an envelope detector, consisting of a rectifier followed by a low-pass filter with a cutoff around 25 Hz, to extract the amplitude envelope of that band and produce a corresponding control voltage representing the spectral energy distribution.[8] These control voltages capture the slowly varying spectral envelope, separating amplitude information from the rapid oscillations of the signal.[7]

The extracted envelopes are quantized and multiplexed into a composite control signal for transmission, significantly reducing the required bandwidth by discarding phase and fine temporal details.[9] For instance, a 3 kHz speech signal can be compressed to approximately 300 Hz of control data, achieving about a 10:1 reduction while preserving intelligibility.[9] This multiplexed signal includes the envelope data from all bands, typically transmitted at rates like 500 Hz for 20 channels, along with additional parameters for pitch and voicing.[8]

In the synthesis stage, a carrier signal is generated based on the voicing decision: white noise for unvoiced segments such as fricatives, and a periodic waveform like a sawtooth or buzz from a relaxation oscillator for voiced segments.[7] This carrier is fed into a matching bank of bandpass filters, where each band's gain is modulated by the received envelope control voltages using voltage-controlled amplifiers (VCAs), one per band, to shape the spectrum.[8] The outputs from all VCAs are then summed to reconstruct the synthesized speech signal, approximating the original timbre and formants.[7]

Pitch detection is integrated via a sidechain process, where the input signal is analyzed separately using methods
such as zero-crossing counters on a low-pass filtered version (attenuating above 90 Hz) to measure the fundamental frequency and determine voicing (e.g., F₀ = 0 for unvoiced).[8] Autocorrelation techniques may also be employed in hybrid systems to refine pitch estimation by identifying periodicities, with multiple redundant detectors ensuring robust tracking.[10] The detected pitch modulates the carrier's frequency in the synthesis stage, enabling hybrid excitation that switches between noise and periodic sources.[7]

The overall block diagram flow begins with the modulator input splitting into the main analysis path—through bandpass filters, envelope detectors, and multiplexers—and a parallel sidechain for pitch extraction via zero-crossing or autocorrelation modules, producing voicing and F₀ signals.[8] These are combined and transmitted as a low-bandwidth control stream to the receiver, where the synthesis path demultiplexes the envelopes and pitch data to drive the carrier generator (noise or oscillator), the VCAs in the filter bank, and a final summer, yielding the output speech.[7] This end-to-end process maintains the essential perceptual qualities of the original signal through parametric reconstruction.[9]
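The zero-crossing sidechain described above can be approximated in code. This is a sketch under stated assumptions: the 900 Hz low-pass cutoff and the pure-tone test frame are illustrative choices of mine, and a real detector would pair the crossing count with an energy-based voicing decision (reporting F₀ = 0 for unvoiced frames) rather than report a raw estimate.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def zero_crossing_f0(frame, fs, f_cut=900.0):
    """Crude F0 estimate: low-pass the frame to suppress formant energy,
    then count sign changes (two per pitch period for a clean tone)."""
    sos = butter(4, f_cut, btype="low", fs=fs, output="sos")
    y = sosfilt(sos, frame)
    crossings = np.count_nonzero(np.signbit(y[:-1]) != np.signbit(y[1:]))
    return crossings * fs / (2.0 * len(y))

# Usage: a quarter-second 120 Hz tone as a stand-in for a voiced frame.
fs = 8000
t = np.arange(fs // 4) / fs
voiced = np.sin(2 * np.pi * 120 * t)
f0 = zero_crossing_f0(voiced, fs)   # close to 120 Hz
```

The crossing count is exact only for a single low-frequency component, which is why the sidechain filters aggressively before counting; autocorrelation-based refinement, as the text notes, is more robust on real speech.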
Historical Development
Invention and Early Uses
The vocoder was invented by Homer W. Dudley, a research physicist at Bell Laboratories, between 1936 and 1938; its name is a contraction of "voice coder".[11] The invention stemmed from efforts to address bandwidth limitations in early telephone systems, where transmitting full speech waveforms required approximately 3 kHz of channel capacity.[3] Dudley's approach focused on analyzing speech to extract essential spectral characteristics and transmitting only those elements rather than the complete waveform, enabling reconstruction at the receiving end.[12]

The primary motivation was to drastically reduce transmission bandwidth, to as little as 300 Hz, while preserving speech intelligibility, allowing more efficient use of limited telephone lines for long-distance communication.[11] Early prototypes employed a bank of 10 bandpass filters spaced 300 Hz apart, covering frequencies from 0 to 2,950 Hz, to capture the spectral envelope of the voice signal.[7] These filters separated the speech into subbands, with envelope detectors producing low-frequency control signals that could be sent over narrow channels; at the receiver, a synthesis bank modulated a buzz or noise source to regenerate the audio.[12] This method prioritized efficiency over full fidelity, transmitting pitch, amplitude, and voicing information alongside the envelopes.[3]

The vocoder was demonstrated to scientific audiences in the late 1930s, including at meetings of the Acoustical Society of America. At the 1939 New York World's Fair, the related Voder (Voice Operation Demonstrator), a manual synthesizer based on similar analysis-synthesis techniques, was unveiled, allowing trained operators to produce synthetic speech.[11][3] Key intellectual property included U.S.
Patent 2,151,091, filed by Dudley in 1935 and granted in 1939, which detailed the core system for signal transmission using variable speech characteristics over reduced bandwidth.[12]

Despite these advances, early prototypes faced significant challenges, including limited intelligibility due to the coarse 10-band filtering, which often resulted in unnatural or muffled output requiring careful speaker articulation for comprehension.[7] Mechanical components, such as electromechanical relays and filters, introduced delays of about 17 milliseconds and susceptibility to noise, further degrading quality in real-world transmission scenarios.[11] These limitations highlighted the trade-offs of prioritizing bandwidth savings over perceptual accuracy in the nascent technology.[3]
Expansion in the 20th Century
During World War II, vocoder technology was classified as secret by the U.S. military because of its application in voice scrambling for secure communications, with details remaining confidential until declassification in 1976.[13] The SIGSALY system, developed by Bell Laboratories and first deployed in 1943, exemplified this use by enabling encrypted transatlantic voice transmissions between key Allied leaders, including Winston Churchill and Franklin D. Roosevelt, for over 3,000 confidential conferences until 1946.[14] This 12-channel vocoder analyzed speech into ten spectral bands plus pitch and unvoiced-energy components, quantizing them for low-bandwidth transmission over high-frequency radio links secured by one-time pad encryption.[13]

Post-war, as restrictions lifted, vocoder adoption expanded into telecommunications for bandwidth-efficient voice coding, building on the original goal of reducing transmission requirements from continuous analog signals to discrete parameters. By the late 1940s and 1950s, declassified elements influenced civilian systems, though full public disclosure awaited the 1970s.[15] In the 1950s and 1960s, research advanced at institutions like MIT's Lincoln Laboratory, focusing on pitch detection, linear predictive coding (LPC), and real-time hardware implementations, achieving data rates as low as 2.4 kb/s while maintaining speech intelligibility for applications in satellite and aircraft communications.[3]

In the 1960s and 1970s, commercialization accelerated with dedicated hardware for studio and experimental applications. Roland introduced the VP-330 Vocoder Plus in 1979, a studio-oriented device combining vocoding with string synthesis for enhanced creative control.
This era marked a pivot toward music, with pioneers like Wendy Carlos employing custom vocoders in experimental pieces such as "Timesteps" (1969), which demonstrated synthesized vocal effects bridging electronic and human expression.[16] Kraftwerk further popularized the technology on their 1974 album Autobahn, using a custom-built vocoder to process vocals into robotic timbres that defined electronic pop aesthetics.[17]

Key hardware milestones included the EMS Vocoder 5000 (1976), featuring 22 bandpass filters for high-fidelity speech synthesis and modulation, and the Korg VC-10 (1978), a compact 20-band unit with polyphonic keyboard integration for live performance.[18][19] These devices, typically with 16 to 22 bands, improved naturalness over earlier military prototypes, fostering widespread artistic experimentation by the late 1970s.[20]
Core Applications
Telecommunications
In telecommunications, the vocoder serves primarily as a speech compression tool enabling efficient transmission over narrowband channels, where traditional pulse-code modulation (PCM) at 64 kbps for 4 kHz voice bandwidth is impractical due to limited spectrum availability. By analyzing the spectral envelope and pitch of the input speech signal, vocoders can reduce the data rate to as low as 1-2.4 kbps while preserving intelligibility, making them suitable for mobile radios and satellite links with constrained bandwidth. For instance, in digital mobile radio systems, vocoders like the Advanced Multi-Band Excitation (AMBE) coder achieve these rates by modeling voice-production parameters rather than sampling the waveform directly, allowing multiple users to share a single channel efficiently.[21][22]

Vocoders have been integrated into military and early cellular standards to support secure voice communications. In the 1980s, the U.S. Department of Defense adopted the Advanced Narrowband Digital Voice Terminal (ANDVT) system, which employs a 2.4 kbps linear predictive coding (LPC) vocoder for encrypted narrowband transmission over high-frequency radios, ensuring interoperability across services. Similarly, the VINSON family of devices, introduced in the same era, incorporated continuous variable slope delta (CVSD) modulation alongside encryption for tactical secure voice in systems like the SINCGARS radio network. In early cellular contexts, such as the transition from the analog Advanced Mobile Phone Service (AMPS) of 1983 to digital variants, vocoders facilitated bandwidth-efficient secure voice by enabling low-rate digitization compatible with emerging mobile networks.
For scrambling applications, vocoders support encryption through techniques like envelope inversion—reversing the spectral amplitude—or band shuffling, which rearranges frequency channels to obscure content; the ANDVT exemplifies this by combining LPC compression with NSA-approved Suite A algorithms for military-grade security.[23][24]

A key trade-off in vocoder design for telecommunications is the balance between bit-rate reduction and speech naturalness, often quantified by Mean Opinion Score (MOS) ratings on a 1-5 scale. At 2.4 kbps, the Mixed Excitation Linear Prediction (MELP) vocoder, standardized by the U.S. DoD in 1997 as MIL-STD-3005, achieves MOS scores of approximately 3.5 in clean channels, outperforming the earlier 4.8 kbps Code-Excited Linear Prediction (CELP) coder of Federal Standard 1016 but introducing artifacts like buzziness in noisy environments due to simplified excitation modeling. Lower rates suit bandwidth-limited scenarios, such as 1-2 kHz channels in tactical radios, but degrade perceived quality compared to higher-rate waveform coders.[25][26]

The legacy of vocoders persists in modern telecommunications, influencing low-delay codecs for Voice over IP (VoIP). The ITU-T G.728 standard, introduced in 1992, uses Low-Delay CELP (LD-CELP) at 16 kbps with a 0.625 ms algorithmic delay to minimize latency in packet-switched networks, providing MOS-equivalent quality to 32 kbps ADPCM while supporting real-time applications like video conferencing and VoIP telephony. This evolution from early narrowband vocoders underscores their foundational role in achieving scalable, secure voice transmission across diverse infrastructures.[27]
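The bit-rate arithmetic behind these figures is straightforward: a parametric coder emits a fixed-size parameter frame at a fixed frame rate. A sketch using the widely documented LPC-10 framing (54 bits every 22.5 ms) as a worked example; the comparison against toll-quality PCM is my own illustration, not part of any standard:

```python
def vocoder_bit_rate(bits_per_frame: int, frame_ms: float) -> float:
    """Bit rate (bits/s) implied by a fixed-size parameter frame."""
    return bits_per_frame / (frame_ms / 1000.0)

# LPC-10 packs pitch, voicing, gain, and filter parameters
# into 54 bits every 22.5 ms:
lpc10_rate = vocoder_bit_rate(54, 22.5)   # about 2400 bps
# Toll-quality 8 kHz, 8-bit PCM for comparison:
pcm_rate = 8000 * 8                       # 64000 bps
reduction = pcm_rate / lpc10_rate         # roughly 27:1
```

The same arithmetic explains why a 2.4 kbps channel vocoder fits where 64 kbps PCM cannot: only the slowly varying model parameters cross the channel, never the waveform itself.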
Audio Processing and Effects
Vocoders enable robotic voice effects in audio production by applying the amplitude envelopes of a human voice to a synthesizer carrier signal, yielding a synthesized output that mimics mechanical speech while retaining intelligible formants.[28] This technique, rooted in envelope modulation, produces the metallic timbre associated with artificial voices and has been employed in film and television soundtracks to portray robotic characters, such as the Cylons in the original Battlestar Galactica series.[29]

In broadcast applications, vocoders facilitate voice disguise by shifting a voice unnaturally low or high.[30] Key processing techniques include pitch shifting without formant adjustment, achieved by varying the carrier signal's frequency while holding the modulator's formants constant, which imparts an unnatural timbre well suited to sound design.[31] This contrasts with natural speech, where pitch and formant changes are coupled, and enables creative distortions in real-time effects.

Notable hardware from the era includes the Electro-Harmonix EH-0300 Vocoder, a 1970s rackmount unit designed for studio effects that supported live voice processing with adjustable band filtering for precise envelope application.[32]

Beyond entertainment, vocoders aid non-musical synthesis in speech therapy, where real-time noise-band processing modifies vowels for perceptual training and accent reduction by simulating degraded auditory environments to improve listener adaptation.[33]
Implementation Methods
Analog Approaches
Analog vocoders rely on hardware-based electronic circuits to analyze and synthesize speech signals, employing dedicated analog components for real-time processing. The core architecture consists of an analysis section that extracts spectral envelopes from the modulator signal (typically voice) and a synthesis section that applies those envelopes to a carrier signal (such as a synthesizer or noise source).[7]

The primary components include a bank of bandpass filters, often implemented with inductors and capacitors (LC circuits) for precise frequency selectivity, dividing the audio spectrum into discrete channels—for instance, 16 bands spanning 0 to 5 kHz to capture formant information essential for intelligibility.[34] Each filter channel feeds an envelope follower, commonly a diode-based precision rectifier followed by low-pass filtering, which detects amplitude variations and produces control voltages representing the modulator's dynamic spectral content.[35] These voltages then modulate voltage-controlled amplifiers (VCAs) in the synthesis filter bank—early designs used lampblack variable resistors for gain control—shaping the carrier signal, which passes through a matching set of bandpass filters.[36]

Design evolution began with Homer Dudley's original 1938 vocoder prototype, featuring a 10-band mechanical analyzer that used electromechanical relays and tuned circuits to measure energy levels across the speech spectrum.[37] By 1940, Bell Labs had advanced to a fully electronic unit with vacuum-tube-based synthesis, incorporating improved LC bandpass filters and tube amplifiers for more stable operation and reduced mechanical wear.[34] Later iterations in the mid-20th century refined these elements, transitioning to solid-state components while retaining the fixed analog topology.

A key advantage of analog approaches is their inherently low latency, enabling real-time interaction without digital buffering delays, which proved vital
for early telecommunications applications.[38] Additionally, the non-linearities of analog circuits, such as those from vacuum tubes or diodes, introduce warm harmonic distortion that enhances the perceived richness of synthesized speech.[39]

However, analog vocoders suffer from fixed band counts, limiting adaptability to varying signal complexities and often producing robotic artifacts when bands are insufficient (e.g., below 10 for basic intelligibility).[7] They are also prone to noise sensitivity from component drift and thermal effects in envelope followers and VCAs, amplifying hum or hiss in quiet passages.[40] Bulkiness remains a significant drawback, exemplified by the WWII-era SIGSALY system, which weighed over 50 tons across 40 equipment racks due to extensive vacuum tube arrays and power requirements.[14]

Notable commercial devices from the mid-1970s onward combined compact filter banks with integrated oscillators for carrier generation, bringing analog vocoding into mainstream music production.[41]
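The diode-plus-capacitor envelope detector at the heart of each analysis channel has a simple discrete-time counterpart: full-wave rectification followed by a one-pole RC smoother. A minimal sketch; the 10 ms time constant and the gated 440 Hz test tone are illustrative choices of mine, not values from any historical design:

```python
import numpy as np

def rc_envelope_follower(x, fs, tau_ms=10.0):
    """Full-wave rectifier followed by a one-pole RC smoother: a
    discrete-time stand-in for the diode-and-capacitor detector."""
    alpha = np.exp(-1.0 / (fs * tau_ms / 1000.0))  # per-sample decay
    env = np.empty(len(x))
    state = 0.0
    for i, v in enumerate(np.abs(x)):              # rectify
        state = alpha * state + (1.0 - alpha) * v  # RC smoothing
        env[i] = state                             # control "voltage"
    return env

# Usage: a 440 Hz tone gated off at 250 ms; the envelope tracks the
# burst, then decays at the RC rate once the tone stops.
fs = 8000
t = np.arange(fs // 2) / fs
burst = np.sin(2 * np.pi * 440 * t) * (t < 0.25)
env = rc_envelope_follower(burst, fs)
```

The time constant embodies the analog trade-off the section describes: too fast and audio-rate ripple leaks into the control voltage, too slow and the envelope smears transients.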
Digital Techniques
The transition to digital techniques in vocoder design marked a significant shift from analog filter banks, which served as precursors by providing bandpass decomposition of signals, to computational methods leveraging digital signal processing (DSP) for greater flexibility and efficiency. In digital implementations, the fast Fourier transform (FFT) enables spectral analysis by replacing fixed analog filters with overlapping windowed segments of the input signal, allowing precise estimation of the spectral envelope. This approach uses the short-time Fourier transform (STFT) to compute the magnitude spectrum of each frame,

|X(k)| = \left| \sum_{n=0}^{N-1} x(n) w(n) e^{-j 2\pi k n / N} \right|,

where x(n) is the input signal, w(n) is the window function, N is the frame length, and k indexes the frequency bins; the envelope is then derived from these magnitudes to modulate a carrier signal during synthesis.[42]

Linear predictive coding (LPC)-based vocoders are a cornerstone of digital speech synthesis, modeling the vocal tract as an all-pole filter that captures the resonances of speech production. In this method, each sample is predicted from past ones via

\hat{s}(n) = \sum_{k=1}^{p} a_k s(n-k),

where p is the prediction order (typically 10-12 for speech), the a_k are LPC coefficients estimated by methods such as the Levinson-Durbin recursion, and the prediction error serves as the excitation signal. LPC vocoders achieve low bit rates, such as 2400 bits per second in the LPC-10 algorithm standardized for military communications, enabling efficient transmission over narrowband channels while preserving intelligibility.[43][44]

Related digital techniques, such as the phase vocoder, use STFT analysis and synthesis to manipulate both amplitude and phase information, enabling applications like time-stretching and pitch-shifting in audio processing.
This method decomposes the signal into sinusoidal components with time-varying amplitudes and instantaneous frequencies, then resynthesizes by adjusting frame spacing and phase interpolation. While distinct from traditional channel vocoders, which emphasize amplitude envelopes for voice modulation, phase-vocoder principles inform some advanced spectral processing in modern vocoder designs.[45]

Software realizations of digital vocoders have proliferated since the 2000s, with platforms like Max/MSP allowing users to construct custom patches that implement FFT-based analysis and LPC synthesis through visual programming. Max/MSP environments, for instance, support real-time vocoding via objects for spectral processing and envelope extraction, enabling interactive audio effects in live performance. Similarly, plugins in digital audio workstations like Ableton Live, introduced in versions from the mid-2000s onward, integrate vocoder effects that combine carrier-modulator routing with adjustable band counts and dry/wet mixing for musical applications.[46]

Hybrid methods, such as waveform interpolation, enhance digital vocoders by blending parametric modeling with direct waveform preservation to achieve smoother transitions between analysis frames. In this approach, speech is decomposed into pitch-cycle waveforms that are interpolated over time, reducing discontinuities in the synthesized output while maintaining natural prosody at low bit rates around 2-4 kbps. These techniques, often combined with LPC for excitation, improve perceptual quality in speech coding by focusing on characteristic waveform shapes rather than purely spectral parameters.[47][48]

In recent years, neural vocoders have emerged as a transformative approach in digital implementation, using deep learning models to generate high-fidelity waveforms directly from acoustic features such as mel-spectrograms.
Autoregressive architectures such as WaveNet generate waveforms sample by sample with dilated convolutional networks, while GAN-based models like HiFi-GAN use adversarial training to produce natural-sounding speech with minimal artifacts, achieving real-time synthesis rates suitable for text-to-speech systems. These methods surpass traditional parametric vocoders in perceptual quality, supporting bit rates as low as those of LPC while enabling expressive prosody control, as demonstrated in systems operational as of 2025.[49][50]
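The all-pole predictor and Levinson-Durbin recursion described in this section can be demonstrated directly. A sketch of the autocorrelation method, verified on a synthetic second-order all-pole source (this illustrates the coefficient estimation only, not the LPC-10 quantization or bitstream):

```python
import numpy as np
from scipy.signal import lfilter

def lpc_coeffs(x, order=10):
    """Autocorrelation-method LPC via the Levinson-Durbin recursion.
    Returns predictor coefficients a_1..a_p, in the sense
    s_hat(n) = sum_k a_k * s(n - k), plus the residual error energy."""
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    a = np.zeros(order)
    err = r[0]
    for i in range(order):
        # Reflection coefficient for the next model order.
        acc = r[i + 1] - np.dot(a[:i], r[i:0:-1])
        k = acc / err
        a[:i] = a[:i] - k * a[:i][::-1]   # update lower-order coefficients
        a[i] = k
        err *= 1.0 - k * k                # shrink prediction-error energy
    return a, err

# Sanity check on a known all-pole source:
#   s(n) = 1.3 s(n-1) - 0.5 s(n-2) + e(n)
rng = np.random.default_rng(0)
e = rng.standard_normal(8000)
s = lfilter([1.0], [1.0, -1.3, 0.5], e)
a, err = lpc_coeffs(s, order=2)           # a close to [1.3, -0.5]
```

Because the test signal really is second-order autoregressive, an order-2 fit recovers the generating coefficients almost exactly; on real speech, orders of 10-12 are needed to capture the formant resonances, as noted above.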
Contemporary Uses and Innovations
Music and Artistic Applications
The vocoder continues to influence modern music production, particularly in electronic, pop, and hip-hop genres, where it enables stylized vocal effects and harmonic layering. In hip-hop, its legacy persists through sampling, as in Jason Derulo's 2009 hit "Whatcha Say," which interpolates the harmonized vocals of Imogen Heap's "Hide and Seek" to create a melodic hook.[51]

More recently, artists have integrated vocoders with digital production tools for innovative effects. In 2020, for instance, The Weeknd used vocoder-like processing on tracks from After Hours, blending it with auto-tune for a futuristic vocal texture in songs like "Blinding Lights."[52] Similarly, Billie Eilish employed subtle vocoder elements on her 2021 album Happier Than Ever to achieve ethereal, modulated harmonies on tracks such as "Your Power," enhancing emotional depth through synthetic vocal manipulation.[53]

Production techniques often involve software plugins and AI-assisted layering for complex arrangements. For live performance, foot-pedal-controlled devices like the TC-Helicon VoiceLive series enable real-time vocoding, letting performers adjust modulation and carrier tones onstage for dynamic, interactive sets in electronic and pop acts.[54]

The vocoder's influence extends to the evolution of digital vocal processing, paving the way for tools like auto-tune, which became prominent in 2000s-2020s pop for stylized effects.[55]
Emerging Technologies
Recent advancements in vocoder technology have been driven by the integration of artificial intelligence, particularly deep learning models that enhance speech synthesis and voice conversion. Neural vocoders such as WaveNet, introduced in 2016, represent a pivotal shift by generating raw audio waveforms directly from mel-spectrograms using autoregressive convolutional networks, significantly improving the naturalness of synthesized speech compared to traditional parametric methods.[56] Building on this, WaveGlow, proposed in 2018, employs flow-based generative networks to produce high-fidelity speech from mel-spectrograms, offering faster parallel generation while maintaining audio quality suitable for text-to-speech (TTS) systems.[57] These models have been instrumental in frameworks like Google's Tacotron 2, which combines sequence-to-sequence prediction with neural vocoding to achieve expressive, human-like TTS output.[58]

Further progress addresses efficiency challenges: HiFi-GAN, introduced in 2020, leverages generative adversarial networks (GANs) for rapid waveform synthesis from acoustic features, enabling inference up to 167 times faster than real time on GPUs while preserving perceptual quality in TTS applications. In real-time scenarios, deep learning-based vocoders power low-latency voice conversion tools, such as Voicemod's AI-driven platform launched in 2022, which modulates user input in applications like gaming and streaming with minimal delay.[59] Similar techniques enhance virtual assistants, where models convert synthesized text to speech in interactive environments, reducing artifacts and supporting diverse voices for accessibility.[60]

Vocoder innovations also extend to telecommunications, particularly bandwidth-efficient codecs for 5G and emerging 6G networks.
The Enhanced Voice Services (EVS) codec, standardized by 3GPP in 2014, incorporates vocoder principles to deliver super-wideband and full-band audio up to 20 kHz at bitrates ranging from 5.9 to 128 kbps, enabling high-quality VoLTE and VoNR while optimizing for mobile data constraints.[61] Looking ahead, vocoders are integrating with virtual reality (VR) for immersive voice modulation, where AI-driven synthesis creates spatially aware audio that enhances user presence in metaverse environments as of 2025.[62] However, these developments raise ethical concerns, including the potential misuse of deepfake audio for misinformation and identity fraud, prompting calls for detection tools and regulatory frameworks to mitigate these risks.[63]