
eSpeak

eSpeak is a compact, open-source speech synthesizer that employs formant synthesis to generate clear but synthetic-sounding speech from text in English and numerous other languages. It supports multiple platforms, including Linux and Windows, and is available as a command-line program, a shared library, or a SAPI5-compatible version for Windows applications. With a total size of approximately 2 megabytes, including support for many languages, eSpeak prioritizes efficiency and portability while producing speech at high speeds.

Originally developed by Jonathan Duddington, eSpeak traces its roots to an earlier project called "speak," created in 1995 for the Acorn/RISC OS platform, which was later enhanced and rewritten in 2007 as eSpeak with relaxed licensing and expanded language capabilities. The synthesizer uses a rule-based approach, allowing it to translate text into speech via custom dictionaries and rules, and it can output to files or integrate with tools like MBROLA for alternative voice synthesis. Key features include support for Speech Synthesis Markup Language (SSML), multiple voice variants (such as male/female or whisper modes), and the ability to edit phoneme data using the included espeakedit tool.

The original eSpeak project, hosted on SourceForge, has been largely succeeded by eSpeak NG, an active open-source fork initiated in late 2015 to modernize the codebase, add new features, and expand language support to over 100 languages and accents. eSpeak NG maintains compatibility with the original while incorporating improvements like Klatt formant synthesis, enhanced MBROLA integration, and ports to additional platforms such as Android, macOS, and BSD systems. Both versions are licensed under the GPL and continue to be used in assistive technologies, embedded systems, and text-to-speech applications worldwide.

History

Initial Development

The initial development of eSpeak traces back to 1995, when Jonathan Duddington created the "speak" program for Acorn/RISC OS computers, initially supporting British English as a compact speech synthesizer. This early version was designed with efficiency in mind, targeting the limited resources of RISC OS systems, and laid the foundation for the formant-based synthesis techniques that would define the project. In 2007, Duddington enhanced and rewrote the program, renaming it eSpeak to reflect its expanded capabilities, including relaxed memory and processing constraints that enabled broader applicability beyond RISC OS. The first public release of eSpeak occurred around this time, establishing it as an open-source, formant-based text-to-speech synthesizer initially focused on English but quickly incorporating support for other languages. Key enhancements under Duddington's solo development included porting to multiple platforms such as Linux and Windows, which broadened its accessibility, and introducing initial multi-language support through rule-based phoneme conversion. Duddington maintained active development through numerous iterations, with the version history progressing from early releases to more refined builds addressing prosody, voice variants, and language dictionaries. The last major release, eSpeak 1.48 (including subversions like 1.48.04), was issued in 2015, incorporating improvements in synthesis quality and integration before development slowed. Duddington subsequently became uncontactable, marking the end of this individual-led phase, after which the project transitioned to community-driven open-source maintenance in the form of eSpeak NG.

eSpeak NG Continuation

eSpeak NG was forked from the original eSpeak project in late 2015 to improve maintainability and enable ongoing development through a more collaborative structure. After the original developer, Jonathan Duddington, became uncontactable, the project saw increased activity from a community of volunteers. The project transitioned to a dedicated GitHub repository under the espeak-ng organization, where volunteer developers, including native speakers, contribute feedback to refine language rules and pronunciations. This open-source effort has focused on enhancing compatibility and quality through iterative contributions. Key releases include version 1.50 in December 2019, which introduced support for SSML <phoneme> tags along with nine new languages such as Bashkir. Version 1.51 in April 2022 added new voice variants, expanded language coverage with over 20 additions including Belarusian, and improved platform integrations. The 1.52 release in December 2024 added a CMake build system and stress marks in phoneme events for improved prosody, along with bug fixes and six new languages such as Tigrinya. Community-driven improvements have emphasized better prosody and integration with modern toolchains, with the CMake build system replacing the older autotools-based approach. As of 2025, eSpeak NG remains an active project, with over 500 issues resolved on GitHub and support for more than 100 languages and accents.

Overview and Features

Core Functionality

eSpeak is a compact, open-source speech synthesizer designed to convert written text into audible speech. It operates primarily through formant synthesis, a method that generates speech by modeling the resonances of the human vocal tract, resulting in a compact engine suitable for resource-constrained devices. The core distribution, including the program and data files for multiple languages, occupies less than 2 megabytes, making it lightweight and efficient for embedded systems or low-power environments. At its foundation, eSpeak provides a straightforward command-line interface for basic text-to-speech operations, allowing users to input text directly via commands such as espeak-ng "Hello, world" to produce spoken output from files, standard input, or strings. For more advanced programmatic use, it offers an API through a shared library (or DLL on Windows), enabling developers to integrate speech synthesis into applications for automated reading of text. This supports embedding eSpeak within software for tasks like screen readers or voice assistants. eSpeak incorporates support for Speech Synthesis Markup Language (SSML), which allows fine-grained control over speech attributes including pitch, speaking rate, and volume through markup tags in the input text. Output can be directed in various formats, such as generating WAV files for storage, playing audio directly through the system's sound device, or piping the synthesized speech to other command-line tools for further processing. These capabilities ensure versatility in both standalone and integrated scenarios.
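To make the command-line usage concrete, the following sketch builds an argument list for the espeak-ng CLI. The -v (voice), -s (speed in words per minute), and -w (write WAV file) flags are documented espeak-ng options; the helper function itself is illustrative, not part of eSpeak.

```python
import shlex

def build_espeak_cmd(text, voice="en", rate=175, wav_path=None):
    """Assemble an argv list for the espeak-ng CLI.

    -v selects the voice/language, -s sets the speaking rate in
    words per minute, and -w writes a WAV file instead of playing
    audio through the sound device.
    """
    cmd = ["espeak-ng", "-v", voice, "-s", str(rate)]
    if wav_path is not None:
        cmd += ["-w", wav_path]
    cmd.append(text)
    return cmd

# Example: synthesize a greeting to a WAV file at 200 wpm.
cmd = build_espeak_cmd("Hello, world", voice="en-gb", rate=200,
                       wav_path="hello.wav")
print(shlex.join(cmd))
```

A list such as this could be passed to subprocess.run() when espeak-ng is installed, or printed as a shell command for manual use.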

Advantages and Limitations

eSpeak demonstrates significant advantages in portability, supporting a wide range of operating systems including Linux, Windows, Android (version 4.0 and later), BSD, and macOS through command-line interfaces, shared libraries, and Windows SAPI5 compatibility. Its formant synthesis method enables rapid processing, achieving practical synthesis speeds of up to 500 words per minute, which facilitates efficient real-time applications. Additionally, eSpeak's compact footprint, totaling just a few megabytes, allows it to operate on resource-constrained systems with minimal computational demands. As an open-source project licensed under the GPL-3.0 or later, it is freely available and modifiable, promoting widespread adoption and customization. Despite these strengths, eSpeak's output often sounds robotic and artificial due to its formant-based approach, lacking the naturalness of neural text-to-speech systems like WaveNet, which generate speech from models trained on human recordings. It exhibits limited capabilities in emotional intonation and expressive prosody, restricting its suitability for applications requiring nuanced vocal delivery. For non-English languages, synthesis accuracy depends heavily on rule-based grapheme-to-phoneme conversion, which can result in approximations or errors for complex words, particularly in languages with intricate orthographies or tonal features. In comparisons, eSpeak is more compact than the Festival speech synthesis system, which offers greater expressiveness through diphone or unit-selection methods but at the cost of higher resource requirements. However, eSpeak falls short in naturalness and emotional range compared to modern AI-driven synthesizers, making it particularly well-suited for accessibility tools, such as screen readers, rather than voice acting or high-fidelity audio production. User feedback highlights its clear enunciation even at elevated speeds, though artifacts may arise in handling intricate phonetic sequences.
eSpeak NG's extensive multi-language support, covering over 100 languages and accents, further contributes to its versatility in diverse applications.

Synthesis Method

Text-to-Phoneme Conversion

eSpeak's text-to-phoneme conversion serves as the initial stage in its speech synthesis pipeline, transforming input text into a sequence of phonetic symbols that represent pronunciation, including markers for stress and timing. This process relies on linguistic preprocessing to standardize the text and rule-based grapheme-to-phoneme (G2P) translation to map orthographic representations to phonemes, ensuring compatibility across supported languages. Preprocessing begins with text normalization, which handles elements such as abbreviations, numbers, and punctuation to convert them into a form suitable for phonemization. For instance, numbers are processed via the TranslateNumber() function, which constructs spoken forms from fragments based on language-specific options like langopts.numbers. Abbreviations and punctuation are addressed through replacement rules in the language's _rules file, such as the .replace section that standardizes characters (e.g., replacing special letters like ô or ő in certain languages). Tokenization occurs implicitly by breaking the text into words and applying rules or dictionary lookups word-by-word, preparing sequences for G2P matching. The core of the G2P conversion uses a rule-based system combined with dictionaries for efficiency and accuracy. Rules, defined in files like en_rules for English, employ regex-like patterns to match letter sequences with context: <pre><match><post> <phonemes>, where <pre> and <post> provide surrounding context, and <match> identifies the letter sequence to replace with phonemes. These rules are scored and prioritized, with the best match selected for conversion; for example, a rule might transform "b oo k" into the phoneme [U] for the "oo" in "book". Dictionaries, stored in compiled files like en_dict, supplement rules with explicit entries for common or irregular words. The English dictionary, for instance, includes approximately 5,500 entries in its en_list file for precise pronunciations, such as "book bUk".
For unknown words, the system falls back to algorithmic rules after checking for standard prefixes or suffixes, ensuring broad coverage. Language-specific rulesets are implemented through dedicated phoneme files (e.g., ph_english) and rule files, inheriting a base set of phonemes while adding custom vowels, consonants, and translation logic tailored to the language's phonology. Phonemes are represented using 1- to 4-character mnemonics based on the Kirshenbaum ASCII scheme, allowing compact yet precise notation. Ambiguities in pronunciation, such as stress placement and syllable boundaries, are resolved using prosodic markers embedded in the output. Stress is assigned via symbols like $1 for primary stress on the first syllable or $u for unstressed words, determined by dictionary entries or rule-based heuristics that analyze word structure. Syllable boundaries are implied through these markers and stress controls from the base phoneme table, guiding the prosody for natural rhythm. The output of this stage is a stream of phonemes annotated with stress indicators and hints for pitch and duration, which feeds directly into subsequent processes to generate speech with appropriate intonation.
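The dictionary-then-rules lookup described above can be sketched in a few lines. This is a simplified stand-in, not eSpeak's actual code: the tiny exception dictionary and the (pre, match, post, phonemes) rules below are invented examples, and the scoring heuristic (longer match plus more context wins) only approximates eSpeak's rule prioritization.

```python
EXCEPTIONS = {"once": "wVns"}          # irregular words bypass the rules

# (pre-context, match, post-context, phonemes); "" means "any context"
RULES = [
    ("", "oo", "k", "U"),   # "oo" before "k" -> short [U], as in "book"
    ("", "oo", "",  "u:"),  # otherwise long [u:], as in "moon"
    ("", "b",  "",  "b"),
    ("", "k",  "",  "k"),
    ("", "m",  "",  "m"),
    ("", "n",  "",  "n"),
]

def to_phonemes(word):
    """Translate a word: exception dictionary first, then best-scoring rule."""
    if word in EXCEPTIONS:
        return EXCEPTIONS[word]
    out, i = [], 0
    while i < len(word):
        best = None
        for pre, match, post, ph in RULES:
            if not word.startswith(match, i):
                continue
            if pre and not word[:i].endswith(pre):
                continue
            if post and not word.startswith(post, i + len(match)):
                continue
            # more specific rules (longer match, more context) score higher
            score = len(match) + len(pre) + len(post)
            if best is None or score > best[0]:
                best = (score, match, ph)
        if best is None:            # no rule covers this letter: skip it
            i += 1
            continue
        out.append(best[2])
        i += len(best[1])
    return "".join(out)

print(to_phonemes("book"))   # the "oo"-before-"k" rule yields bUk
print(to_phonemes("moon"))   # the general "oo" rule yields mu:n
```

The same shape scales to real rule files: eSpeak compiles thousands of such context rules per language and consults the compiled dictionary before applying them.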

Formant Synthesis and Prosody

eSpeak utilizes formant synthesis to produce speech audio from phoneme sequences, modeling the human vocal tract through time-varying sine waves that represent the primary formants, typically the first three (F1 for vowel openness, F2 for front-back position, and F3 for additional spectral shaping), while incorporating noise sources to simulate fricatives and other unvoiced sounds. This approach enables compact representation of multiple languages, as it relies on algorithmic generation rather than stored waveforms, resulting in clear output suitable for high-speed reading of up to 500 words per minute. The core waveform creation follows a Klatt-style design, employing digital filters to shape an excitation signal (periodic pulses for voiced sounds and random noise for unvoiced ones) into resonant formants that mimic natural speech spectra. Prosody in eSpeak is generated by rules, applying intonation "tunes" to clauses determined by punctuation, such as a rising pitch contour for questions to convey interrogative intent or falling contours for statements. These contours are structured into components like the prehead (rising to the first stress), head (stressed syllables with modulated envelope), nucleus (peak stress with final pitch movement), and tail (declining unstressed endings), achieved by adjusting pitch envelopes on vowels within phoneme lists. Rhythm and emphasis are handled through duration scaling, where stressed syllables receive longer lengths than unstressed ones based on linguistic rules for syllable stress, influencing overall speech timing without altering the fundamental phoneme identities. For example, emphasis on a word increases its phoneme durations to highlight prosodic prominence. A key aspect of prosodic variation involves pitch modulation for smooth intonation transitions.
This model allows dynamic adjustment of the fundamental frequency (F0) across utterances, with voice traits customizable via parameters such as base pitch (scaled 0-99, corresponding to approximately 100-300 Hz for typical male-to-female ranges), speaking speed (80-500 words per minute), and amplitude for volume control (0-200, default 100). These settings enable users to tailor the synthetic voice for clarity or expressiveness while maintaining the synthesizer's efficiency.
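The voiced-excitation idea can be illustrated with a toy generator: each glottal pulse excites damped sinusoids at the formant frequencies, repeated at the pitch period. This is a didactic sketch, not eSpeak's implementation (which uses resonator filters); the formant values below are textbook approximations for an /a/-like vowel, and the 120 Hz bandwidth is an invented constant.

```python
import math

SAMPLE_RATE = 22050

def formant_vowel(f0, formants, duration, amps=(1.0, 0.7, 0.4)):
    """Toy formant synthesis: one damped sinusoid burst per glottal pulse.

    f0       -- fundamental frequency (pitch) in Hz
    formants -- (F1, F2, F3) resonance frequencies in Hz
    duration -- output length in seconds
    """
    n = int(duration * SAMPLE_RATE)
    period = max(1, int(SAMPLE_RATE / f0))   # samples per glottal pulse
    samples = [0.0] * n
    for start in range(0, n, period):
        for k in range(start, min(start + period, n)):
            t = (k - start) / SAMPLE_RATE    # time since the last pulse
            envelope = math.exp(-math.pi * 120.0 * t)  # ~120 Hz bandwidth decay
            samples[k] = envelope * sum(
                a * math.sin(2 * math.pi * f * t)
                for f, a in zip(formants, amps))
    peak = max(abs(s) for s in samples) or 1.0
    return [s / peak for s in samples]       # normalize to [-1, 1]

# /a/-like vowel: F1 ~ 700 Hz, F2 ~ 1200 Hz, F3 ~ 2600 Hz at 120 Hz pitch
vowel = formant_vowel(120, (700, 1200, 2600), 0.3)
```

Raising f0 shifts the perceived pitch while the formant frequencies, and hence the vowel identity, stay fixed, which is precisely the separation formant synthesizers exploit.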

Language Support

Coverage and Accents

eSpeak NG provides support for over 100 languages and accents. This extensive coverage includes major world languages such as English (with variants including en-us, en, Caribbean en-029, and Scottish en-gb-scotland), Mandarin Chinese (cmn), Spanish (Spain es and Latin American es-419), French (France fr, Belgian fr-be, and Swiss fr-ch), Arabic (ar), and Hindi (hi). Regional accents and voices are achieved through customized phoneme mappings and prosody adjustments tailored to specific dialects, enabling variations like Brazilian Portuguese (pt-br) alongside European Portuguese (pt). The rule-based synthesis method facilitates this multi-language support in a compact form. Language data primarily derives from community-contributed rules developed by native speakers and contributors, which has enabled broad but uneven coverage across global linguistic families. Speech quality varies across languages, depending on the maturity of their rules and dictionaries. eSpeak NG continues to expand through ongoing community efforts, including enhancements to underrepresented languages such as additional African dialects in recent releases.
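Accent codes like en-gb-scotland follow a prefix structure, so a natural lookup strategy is to fall back to progressively shorter prefixes when an exact voice is unavailable. The sketch below illustrates that idea; the voice set is a small sample of real eSpeak NG identifiers, but the resolver is a simplified stand-in, not eSpeak NG's actual voice-selection code.

```python
# Sample of eSpeak NG voice identifiers (not the full list of 100+).
AVAILABLE = {"en", "en-gb", "en-gb-scotland", "en-us", "en-029",
             "pt", "pt-br", "es", "es-419", "fr", "cmn", "ar", "hi"}

def resolve_voice(code):
    """Return the best available voice for a code, dropping the most
    specific subtag at each step (en-gb-scotland -> en-gb -> en)."""
    parts = code.lower().split("-")
    while parts:
        candidate = "-".join(parts)
        if candidate in AVAILABLE:
            return candidate
        parts.pop()               # drop the most specific subtag
    return None

print(resolve_voice("en-gb-scotland"))  # exact match
print(resolve_voice("pt-pt"))           # falls back to "pt"
print(resolve_voice("de"))              # not in this sample set: None
```

This prefix fallback mirrors how applications commonly map user locale strings onto the closest available synthesizer voice.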

Customization and Extension

eSpeak NG allows users and developers to modify existing voices and add support for new languages or accents by editing plaintext data files, enabling customization without altering the core source code. Voices can be adjusted by editing phoneme definitions in files located in the phsource/ directory, such as inheriting from a base phoneme table and specifying custom sounds for vowels, consonants, and stress patterns. Dictionaries, which handle text-to-phoneme (G2P) conversion, are modified in the dictsource/ directory through rule files (e.g., lang_rules) for general pronunciation patterns and exception lists (e.g., lang_list) for irregular words, allowing fine-tuning of accents like regional variations in English or French. To add a new language, contributors create a voice file in espeak-ng-data/voices/ or espeak-ng-data/lang/ defining parameters such as pitch, speed, and prosody rules, alongside new phoneme and dictionary files tailored to the language's phonology and orthography. For tonal languages or those with unique prosody, additional rules in the voice file adjust intonation and rhythm. The espeak-ng --compile=lang command compiles these changes into usable formats like phontab for phoneme data and lang_dict for dictionaries; the functionality of the older espeakedit utility has been integrated into the espeak-ng program itself for command-line access. Integration with external dictionaries is possible by referencing custom rule sets during compilation. Best practices for adding a language emphasize starting with a rough draft based on similar existing languages, followed by iterative refinement through native speaker feedback to ensure natural pronunciation and intonation. Testing involves running synthesized audio against sample texts using tools like make check or direct playback, with new tests added to the tests/ directory to verify stability across updates. Contributions, including modified voices or new languages, are submitted via pull requests to the eSpeak NG repository, where maintainers review and integrate them into official releases.
Community efforts have extended eSpeak NG to constructed languages such as Esperanto through custom phoneme tables and rules capturing their phonetic regularity. Similarly, user-contributed improvements for Welsh include refined G2P rules for its initial consonant mutations, while ongoing work on other languages has enhanced rendering via updated prosody parameters. These extensions demonstrate how the modular file structure facilitates broad participation in expanding eSpeak NG's capabilities.
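A voice file of the kind described above is a short plaintext document. The following minimal sketch uses attribute names (name, language, gender, pitch) from the eSpeak NG voice-file documentation; the identifier and values are invented for illustration, and real voice files may set many more attributes.

```
name en-example
language en
gender male
pitch 82 118
```

Here pitch takes a base value and a range on eSpeak's 0-99-style internal scale; dropping the range flattens intonation, which is how variants like monotone voices are built.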

Integrations and Applications

Platform Compatibility

eSpeak NG demonstrates broad platform compatibility as a lightweight, open-source speech synthesizer, supporting major desktop operating systems through standard build processes and audio interfaces. On Linux, it integrates seamlessly with audio systems such as ALSA and PulseAudio via the pcaudiolib library, enabling output on distributions like Ubuntu and Fedora. For Windows (requiring the Windows 8.1 SDK or later), eSpeak NG utilizes DirectSound and WaveOut for audio playback, facilitating integration with screen readers and accessibility tools such as NVDA. On macOS, it is supported through ports available via package managers like MacPorts and Homebrew, using the same POSIX-compliant build tools as Linux. In mobile and embedded environments, eSpeak NG extends its reach to resource-constrained devices. Android support begins with version 4.0 and is available through native applications on the Google Play Store, as well as in terminal emulators like Termux for command-line usage. For iOS, an official eSpeak NG app has been available on the App Store since September 2023, providing text-to-speech functionality on iPhone and iPad devices. On embedded platforms like the Raspberry Pi, it operates natively under Raspbian or other Linux variants, making it suitable for IoT applications due to its low resource footprint. Building eSpeak NG requires a C99-compliant compiler such as GCC or Clang, along with an autotools- or CMake-based build system; Windows builds use Visual Studio or MSBuild. The latest stable release is 1.52.0, issued in December 2024. Cross-compilation is supported for embedded targets, allowing deployment on diverse hardware architectures, including ARM64. The synthesizer maintains minimal dependencies, relying solely on standard C libraries without external machine learning frameworks, which contributes to its portability across platforms. Optional libraries like pcaudiolib and sonic handle audio output and speed adjustments, but the core engine operates independently. This design ensures compatibility with a wide range of hardware and software configurations, from desktops to low-power devices.

Notable Use Cases

eSpeak serves as the primary text-to-speech engine bundled with the NVDA screen reader for Windows, providing essential audio feedback for visually impaired users since its integration in early versions of the software. This bundling enables NVDA to deliver multilingual speech without requiring additional installations, supporting over 100 languages and accents directly within the application. Similarly, eSpeak functions as the default synthesizer for the Orca screen reader on Linux and Solaris systems, facilitating accessible navigation of graphical interfaces through synthetic speech output. In telephony applications, eSpeak integrates with the open-source Asterisk PBX via dedicated modules, allowing text-to-speech rendering for interactive voice responses and automated announcements in communication systems. For software development, Python bindings such as python-espeak enable developers to incorporate eSpeak into chatbots and virtual assistants, supporting offline voice output in custom applications. In educational contexts, eSpeak supports language learning tools by providing pronunciation guidance across numerous languages, aiding users in practicing pronunciation and accents through its compact, formant-based synthesis. It is also embedded in e-book readers like FBReader, which leverages system-level text-to-speech capabilities to read content aloud on Android and other platforms, enhancing accessibility for extended reading sessions. Recent adoptions include its continued use in ChromeOS for Chromebooks, with updates to the eSpeak NG port ensuring compatibility and performance improvements as of 2024 releases. Additionally, open-source AI assistants like Mycroft can use eSpeak NG for offline text-to-speech, allowing customizable voice parameters such as pitch and speed in privacy-focused environments. Overall, eSpeak's lightweight design and broad language support have significant impact in low-resource settings, where its minimal computational requirements enable speech access for visually impaired individuals in regions where high-end synthesizers are impractical.
