Google Translate
Google Translate is a free online machine translation service developed by Google, launched in April 2006, that provides instant translations of text, documents, websites, images, and speech across more than 240 languages using advanced artificial intelligence, including neural machine translation (NMT) technology introduced in 2016.[1][2][3] Initially built on statistical machine translation methods trained on vast corpora such as United Nations and European Parliament documents, Google Translate has evolved significantly, expanding from supporting just a few languages at launch to 243 by 2024 through major AI-driven updates, including the addition of 110 new languages in June 2024 using the PaLM 2 large language model.[1][4][2] The service translates over 100 billion words daily and serves hundreds of millions of users worldwide, facilitating cross-cultural communication in education, travel, business, and everyday interactions.[1] Key features include real-time conversation translation in over 70 languages, camera-based instant translation for signs and menus, offline mode with downloadable language packs, handwriting and voice input, and document translation that preserves original formatting.[5] In 2025, enhancements powered by Gemini AI models introduced live back-and-forth audio translations that adapt to accents and pauses, along with personalized language learning tools for practicing speaking and listening in select language pairs like English-Spanish and French-Portuguese.[6] These advancements, part of Google's broader 1,000 Languages Initiative, aim to support underrepresented languages and improve translation accuracy and fluency, though the service remains best supplemented by human review in nuanced or professional contexts.[4][6]
History
Launch and early development
Google Translate was developed in 2006 by Google engineers Franz Och and Ashish Venugopal, who led the creation of a statistical machine translation service designed to leverage large-scale data for language translation.[7] Och, as the principal researcher, emphasized a data-driven approach to overcome the limitations of earlier rule-based systems, drawing on probabilistic models to analyze patterns in bilingual texts.[7] This foundational work built on prior statistical methods but scaled them using Google's computational resources to enable faster and more accurate translations. The service initially relied on parallel corpora from official multilingual sources, including United Nations documents translated into its six official languages, to train its models and capture linguistic alignments.[7] These documents provided high-quality, aligned sentence pairs essential for the statistical algorithms to learn translation probabilities without manual rule creation. Google Translate launched publicly on April 28, 2006, initially supporting translations between English and a core set of languages that quickly expanded to 12, including Arabic, Chinese, French, German, Italian, Japanese, Korean, Portuguese, Russian, Spanish, and Turkish.[1][8] This debut marked a shift from Google's earlier use of third-party tools to its proprietary statistical system, prioritizing scalability and web accessibility. 
In its early operational phase, the service rapidly grew, reaching support for 25 languages by 2007 through iterative improvements in model training and data incorporation.[9] In 2009, Google integrated Translate directly into its search engine, allowing users to input queries in non-English languages and receive results translated into their preferred tongue, thereby enhancing global search usability.[10] These expansions laid the groundwork for broader adoption, focusing on practical utility in everyday online interactions while maintaining a commitment to statistical methods for core functionality.
Key milestones and expansions
In 2010, Google Translate expanded its language support to 52 languages, marking a significant growth in its coverage and accessibility for users worldwide.[7] This expansion built on the service's statistical machine translation foundation, enabling translations for a broader array of web pages, documents, and text inputs.[11] A key advancement came in 2013 with the introduction of offline translation capabilities for the Android app, allowing users to download language packs for text and speech translations without an internet connection.[12] Initially supporting 50 languages, this feature addressed connectivity challenges in travel and remote areas, enhancing the app's utility for on-the-go users.[13] In 2016, Google announced a major shift to neural machine translation (NMT), replacing the previous statistical approach with a system that processes entire sentences for improved fluency and accuracy.[14] The rollout began with eight language pairs involving English and languages such as French, German, Spanish, Portuguese, Hindi, Russian, and Chinese, reducing translation errors by up to 60% in those pairs.[15] This update also enhanced existing features like conversation mode, which had been introduced earlier but benefited from the neural improvements for more natural bilingual dialogues.[16] In June 2019, instant camera translation was upgraded with neural models for real-time text recognition and translation in images, supporting recognition of text in 88 languages translated into more than 100 target languages and improving accuracy for low-light or complex visuals.[17] These updates expanded the feature's scope, making it a more reliable tool for translating signs, menus, and documents via mobile devices.[18] The service continued its expansion in 2024 by adding 110 new languages, its largest single update, powered by the PaLM 2 large language model, bringing the total to 243 supported languages as of June 2024.[4] Examples include Cantonese, Tok Pisin, N'Ko, and Tamazight, representing over
614 million speakers and prioritizing low-resource languages to promote global inclusivity.[19]
Core Functions
Text translation
Google Translate's text translation feature enables users to convert written content between 249 languages as of November 2025 using neural machine translation models. This core function processes input text to generate accurate translations while preserving context and nuance where possible.[5] Users can input text by typing directly into the interface, pasting from external sources, or uploading compatible files, with a limit of up to 5,000 characters per request for direct text entry. Supported file formats for upload include .docx, .pdf, .pptx, and .xlsx, with documents capped at 10 MB and PDFs limited to 300 pages to ensure efficient processing. This allows for seamless translation of entire documents without manual copying, maintaining original formatting in the output where feasible.[20][21] Output options provide flexibility, including translations of individual sentences or phrases for quick reference, full-page or document renders for comprehensive results, and automatic language detection to identify the source language without user specification. For instance, users can paste a paragraph and receive an immediate translated version, or upload a multi-page report to obtain a downloadable translated file in the target language.[20][21] Additionally, the website translation feature permits users to enter a URL, after which Google Translate renders the entire webpage in the selected target language, enabling easy navigation of foreign-language sites while toggling back to the original as needed; this feature is accessible directly through the Translate interface.[21]
Multimodal translation options
Google Translate offers several multimodal translation options that extend beyond traditional text input, enabling users to interact with the service through visual, audio, and gestural inputs for more dynamic and accessible translation experiences. Enhancements powered by Gemini AI models, introduced in 2025, improve real-time processing, including adaptations to accents and pauses in audio features.[6] The camera translation feature, introduced in January 2015 following Google's acquisition of Word Lens technology, allows users to point their device's camera at printed text in images or live video feeds, instantly detecting and overlaying translated text using augmented reality.[18] Initially supporting seven languages, the feature expanded to 27 by July 2015 and has been enhanced with neural machine translation models since 2019, improving translation accuracy and naturalness for real-time overlays in over 100 languages as of 2025.[22] This AR-based approach is particularly useful for translating signs, menus, or documents on the go without manual capture or typing. Conversation mode, first launched experimentally in the Android app in October 2011, supports real-time bilingual dialogues by capturing spoken input via the microphone, translating it across languages, and outputting the result as synthesized speech or on-screen text for seamless back-and-forth exchanges.[23] Originally limited to pairs like English and Spanish, it now accommodates over 70 language combinations as of August 2025, automatically detecting the speaker's language and facilitating natural interactions without requiring users to pass the device.[5][6] This mode leverages speech recognition and synthesis technologies to minimize latency, making it ideal for in-person conversations in multilingual settings.
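The conversation-mode flow described above (capture speech, detect the speaker's language, translate toward the other party, then synthesize audio) can be sketched structurally. The `recognize`, `translate`, and `synthesize` functions below are hypothetical stand-ins for the real speech and translation components, not Google APIs:

```python
# Structural sketch of one turn in a bilingual conversation loop: detect
# which of the two session languages was spoken, translate toward the
# other, and "speak" the result. All three components are toy stubs.

def recognize(audio: bytes) -> tuple[str, str]:
    """Return (transcript, detected_language). Toy encoding: b"text|lang"."""
    text, lang = audio.decode("utf-8").rsplit("|", 1)
    return text, lang

def translate(text: str, source: str, target: str) -> str:
    """Stub translator: a tiny lookup table stands in for the NMT model."""
    table = {("hello", "en", "es"): "hola", ("gracias", "es", "en"): "thank you"}
    return table.get((text, source, target), text)

def synthesize(text: str, lang: str) -> bytes:
    """Stub TTS: returns bytes a real system would play as audio."""
    return f"{lang}:{text}".encode("utf-8")

def conversation_turn(audio: bytes, lang_a: str, lang_b: str) -> bytes:
    # Auto-detect the speaker's language, then translate toward the other
    # party, so neither speaker has to select a direction or pass the device.
    text, detected = recognize(audio)
    target = lang_b if detected == lang_a else lang_a
    return synthesize(translate(text, detected, target), target)
```

The key design point mirrored here is that language detection drives the translation direction per utterance, which is what allows a back-and-forth exchange with no manual switching.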
Handwriting input recognition, added to the Android app in December 2012 and extended to the web interface in July 2013, enables users to draw characters or words directly on the screen for translation, accommodating printed, cursive, or non-Latin scripts that are challenging to type.[24] Supporting 97 languages, including complex systems like Chinese, Arabic, and Hindi, this feature uses optical character recognition adapted for freehand input, allowing translation even for unfamiliar alphabets or when keyboards lack necessary symbols.[25] To support these multimodal functions in low-connectivity environments, Google Translate provides downloadable offline language packs for over 100 languages, which include models for text, camera, and conversation translations; these packs are periodically updated via the app to reflect improvements in accuracy and coverage.[26][27]
User Interfaces
Web interface
The web interface of Google Translate provides a straightforward, browser-based platform for text translation, accessible via translate.google.com. The layout features two prominent text boxes: the left one for entering source text (up to 5,000 characters) and the right one displaying the translated output in real-time. At the top, users select source and target languages from dropdown menus supporting 249 options as of November 2025, with an auto-detection toggle for the source language that analyzes input to identify it automatically, enhancing usability for multilingual users.[28] Integrated tools support deeper linguistic exploration. The built-in dictionary activates via a "Look up details" option beneath translations, offering word-level breakdowns including definitions, synonyms, usage examples, and alternative translations for select language pairs, aiding learners in contextual understanding.[28] The phrasebook feature allows users to save frequently used translations for quick reference. By clicking a star icon under a translation result, phrases are stored and accessible via a dedicated icon in the upper-right corner of the interface; users can filter by language pair, search entries, and even listen to pronunciations, with saved items syncing across devices when signed in.[29][30] For seamless integration within browsing, Google offers a Chrome extension that enables right-click translation of selected text or full-page rendering. Users can highlight text on any webpage, right-click, and select "Translate to [language]" for instant results, or translate entire sites via the extension's toolbar icon; additionally, Chrome's built-in translation tool, powered by Google Translate, offers similar full-page translation from the address bar.[31][32]
Mobile applications
The Google Translate mobile applications are available for both Android and iOS platforms, providing on-the-go translation capabilities optimized for touch interfaces. The Android version launched in January 2010, while the iOS version launched on February 8, 2011, via the App Store, enabling widespread mobile access to translation services.[18][33] These apps have achieved over 1.1 billion all-time downloads as of 2025, reflecting their popularity among users worldwide for portable language support.[34] A standout mobile-exclusive feature is the touch-optimized camera translation, which allows users to point their device's camera at text such as signs or menus for instant, real-time translation overlaid via augmented reality (AR). This functionality supports quick scanning and interpretation without manual input, making it ideal for travel and immediate comprehension scenarios. Additionally, the apps include a home screen widget on Android devices, enabling users to perform quick translations directly from the lock or home screen without opening the full application.[22][5][35] Split-screen mode support on Android further enhances usability, allowing simultaneous translation alongside other apps like messaging or browsers for multitasking.[36] Recent updates as of 2025 include the ability to set Google Translate as the default translation app on iOS and iPadOS 18.4 and later, integrating it more seamlessly into the system, and dual-screen conversation mode for foldable devices like the Pixel Fold and Galaxy Z Fold series, utilizing both displays for real-time interpretations.[37][38] The apps emphasize efficiency through a battery-conscious offline mode, where users can download language packs in advance to translate text, speech, or images without an internet connection, reducing data usage and power consumption compared to online operations. 
This mode supports over 100 languages and is particularly valuable in areas with limited connectivity, ensuring reliable performance on mobile devices.[37][39] While sharing core text and multimodal options with the web interface, the mobile versions prioritize sensor integration and portability for seamless, device-native experiences.[5]
Integrations and Accessibility
API and developer access
Google Translate provides developer access through the Cloud Translation API, a service within Google Cloud that enables programmatic integration of machine translation capabilities into applications, websites, and services. The API is available in two editions: Basic (version 2) and Advanced (version 3), each tailored to different use cases. The Basic edition utilizes Google's pre-trained Neural Machine Translation (NMT) model for straightforward text translation and language detection, supporting over 100 languages.[40] In contrast, the Advanced edition (v3) extends these features with customization options, making it suitable for enterprise-level applications requiring higher control and scalability.[40] The Advanced edition supports three model types: the standard NMT model for general-purpose translation, advanced NMT for optimized performance on long-form content, and AutoML Translation models for custom-trained translations using user-provided datasets. Developers can select models via API parameters to balance accuracy, speed, and cost based on specific needs, such as domain-specific terminology in legal or technical documents.[41] Pricing for the API is tiered by usage volume and edition. For standard translation in both editions, the first 500,000 characters per month are free (up to a $10 monthly credit), with subsequent usage charged at $20 per million characters; higher volumes qualify for discounted rates upon contacting sales. Custom AutoML models in the Advanced edition incur additional costs starting at $80 per million characters, plus $45 per hour for training. Document translation is priced separately at $0.08 per page for NMT.[42] Key endpoints in the Advanced edition facilitate efficient large-scale operations. 
Batch processing allows asynchronous translation of up to 100 million characters per request by uploading input files to Cloud Storage and receiving outputs in the same bucket, ideal for bulk tasks like translating datasets or content libraries.[43] Glossary creation enables consistent terminology by defining custom term mappings that the API applies during translation, supporting bidirectional equivalence for domain-specific vocabulary in fields like medicine or finance; the glossary input file is limited to 10,485,760 UTF-8 bytes across all terms.[44] These features integrate via REST or gRPC protocols, with authentication handled through Google Cloud IAM roles and service accounts. Legacy statistical machine translation endpoints have been phased out in favor of neural-based models since the API's evolution to NMT, ensuring modern, high-quality outputs.[45]
Voice and device integrations
Google Translate integrates seamlessly with Google Assistant, enabling hands-free translation through voice commands such as "Hey Google, translate 'hello' to French," which provides instant spoken and text output in the target language.[46] Translating single words or phrases remains supported, though real-time conversation mode was discontinued in 2025. On smart displays like the Nest Hub, Google Translate supports visual and audio translations of single phrases via voice commands, displaying text on the screen for enhanced understanding.[46] This integration combines the display's touchscreen for language selection with spoken output.[47] Google Translate maintains compatibility with Android Auto for in-car use, where Google Assistant or Gemini facilitates on-the-fly translation of directions, messages, or queries to keep drivers focused without handling devices. As of November 2025, Gemini AI integration enables real-time translation of messages and queries in over 40 languages during drives.[48][49] Similarly, on Wear OS devices, translation occurs via voice commands to Assistant, converting phrases or words directly on the wrist for quick reference during travel or workouts.[49] These hardware ties build on the mobile app's core capabilities, extending translation to wearable and automotive contexts.[50] Real-time earbud translation debuted with the Pixel Buds in 2017, allowing users to hear translations whispered in their ear during conversations by tapping the earbud to activate Google Translate's conversation mode. The feature expanded significantly with Live Translate in 2023, integrated into Pixel Buds Pro and later models, enabling seamless, low-latency translation across more than 40 languages without interrupting the flow of speech.[51] Powered by on-device AI, it supports modes like Conversation Mode for direct talk and Transcribe Mode for lectures or announcements, preserving natural voice tones where possible.[52]
Language Support
Coverage and statistics
As of June 2024, Google Translate supports 243 languages for text translation, enabling users to translate between a vast array of linguistic pairs across the globe.[4] Translation is bidirectional for the majority of these language pairs, allowing seamless conversion in both directions without restrictions on primary input languages. Additionally, 24 languages benefit from full zero-shot capabilities, where translations between them are generated without direct parallel training data, leveraging advanced neural models for indirect inference.[53] Google has placed particular emphasis on low-resource languages (those with limited digital data) through AI scaling techniques, such as large language models that extrapolate from high-resource languages to support underrepresented ones.[4] Historically, Google Translate launched in 2006 with support for 12 languages and has since expanded dramatically through crowdsourcing contributions from users to refine translation quality and data partnerships with organizations to acquire diverse linguistic corpora. This growth reflects ongoing investments in machine learning to bridge global communication gaps.[1][54]
Specialized features by language
Google Translate offers text-to-speech (TTS) capabilities across its 243 supported languages, enabling users to listen to translated text in the source or target language. Natural-sounding voices, generated using WaveNet technology from DeepMind, are available for more than 40 languages, providing expressive intonation that approximates human speech for improved comprehension.[55][56] Voice dictation, or speech-to-text input for translations, is supported in over 70 languages, allowing users to speak phrases directly into the app for real-time conversion and translation, particularly useful in mobile environments.[6] The platform accommodates right-to-left (RTL) scripts, such as those used in Arabic and Hebrew, by automatically adjusting text directionality, layout mirroring, and cursor alignment to maintain readability and cultural appropriateness in the user interface.[57][58] For tonal languages like Mandarin Chinese, Google Translate integrates tone diacritics in pinyin transliterations and employs neural models trained to preserve phonetic nuances, ensuring accurate pronunciation of tones during TTS output.[4][59] Support for constructed languages includes Esperanto, integrated since 2012 with models achieving notably high translation quality due to its regular grammar.[60] Through partnerships, such as with the Living Tongues Institute, Google Translate has added experimental support for endangered languages, including recent expansions incorporating over 100 low-resource varieties like Afar and Tamazight to aid preservation efforts. In late 2024, voice AI support was extended to 15 additional African languages as part of efforts to enhance accessibility for underrepresented linguistic communities.[61][4][62]
Translation Technology
Statistical machine translation
Google Translate's early translation engine was built on statistical machine translation (SMT), a probabilistic framework that generates translations by modeling the likelihood of target language outputs given source inputs, drawing from vast bilingual data. This approach, pioneered in the 1990s, treats translation as a noisy channel problem where the source sentence is "transmitted" through a translation model, and fluency is enforced via a target language model.[63] The system launched in 2006 with support for Arabic-English pairs, utilizing data-driven SMT for broader scalability.[64] Central to this method was the reliance on parallel corpora—collections of aligned sentence pairs in two languages—to train probabilistic models for phrase alignment and translation. Word-level alignments were derived using the IBM Models 1-5, a series of increasingly sophisticated algorithms that estimate translation probabilities and alignment links via the expectation-maximization (EM) algorithm; for instance, Model 1 assumes uniform alignment probabilities, while later models incorporate fertility and distortion to handle variable word counts and positions.[63] These alignments enabled the extraction of phrase tables, capturing multi-word units that preserve local context better than word-by-word translation. 
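The EM-based alignment training described above can be illustrated with IBM Model 1 on a toy parallel corpus. This minimal sketch (uniform initialization, a NULL source token, alternating expectation and renormalization steps) shows the shape of the algorithm rather than any production pipeline:

```python
from collections import defaultdict
from itertools import product

# Minimal IBM Model 1 sketch: EM estimation of word translation
# probabilities t(f|e) from a toy English-Spanish parallel corpus.
# A NULL token on the source side absorbs unaligned target words.
corpus = [
    (["the", "house"], ["la", "casa"]),
    (["the", "book"], ["el", "libro"]),
    (["a", "book"], ["un", "libro"]),
]

e_vocab = {e for es, _ in corpus for e in es} | {"NULL"}
f_vocab = {f for _, fs in corpus for f in fs}

# Uniform initialization of t(f|e).
t = {(f, e): 1.0 / len(f_vocab) for f, e in product(f_vocab, e_vocab)}

for _ in range(20):  # EM iterations
    count = defaultdict(float)  # expected co-occurrence counts
    total = defaultdict(float)  # per-source-word normalizers
    for es, fs in corpus:
        es = es + ["NULL"]
        for f in fs:
            norm = sum(t[(f, e)] for e in es)
            for e in es:  # E-step: fractional alignment counts
                frac = t[(f, e)] / norm
                count[(f, e)] += frac
                total[e] += frac
    for f, e in product(f_vocab, e_vocab):  # M-step: renormalize
        t[(f, e)] = count[(f, e)] / total[e]

# After training, "libro" should dominate the distribution t(.|"book").
best = max(f_vocab, key=lambda f: t[(f, "book")])
print(best, round(t[("libro", "book")], 3))
```

Even on three sentence pairs, EM concentrates probability mass on co-occurring words, which is exactly the signal the phrase-extraction stage then builds on.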
Scoring of translation hypotheses then employed log-linear models, which combine weighted features such as bidirectional phrase translation probabilities, lexical reordering indicators, and n-gram language model scores to select the highest-probability output: \hat{e} = \arg\max_e \exp\left(\sum_{k=1}^{K} \lambda_k h_k(e,f,a)\right), where the h_k are feature functions over a candidate translation e, source sentence f, and alignment a, and the \lambda_k are weights learned by tuning on held-out data.[65] Training involved processing billions of words from diverse sources, including aligned human-translated documents and monolingual text mined from web crawls, supplemented by public datasets like those from the Linguistic Data Consortium (LDC) exceeding 150 million words for key language pairs.[64] This data allowed the system to generalize across domains, though coverage was stronger for high-resource languages like English-Arabic due to abundant parallel resources from news and official sites. Over time, Google expanded to dozens of languages by continuously harvesting web-scale data, emphasizing phrase-based extraction to mitigate sparsity in rare word translations.[64] A key limitation of phrase-based SMT was its restricted context window, typically limited to short phrases of 3-7 words, which hindered capturing long-range dependencies and led to frequent word-order errors, especially in languages with syntactic structures divergent from English, such as Japanese or German.[66][67] This often resulted in unnatural reordering or incomplete sentence coverage, as the model prioritized local fluency over global coherence.[66]
Neural machine translation and AI advancements
Google Translate marked a significant shift toward neural machine translation (NMT) with the rollout of Google Neural Machine Translation (GNMT) in 2016. GNMT employed long short-term memory (LSTM) networks within a sequence-to-sequence learning framework, enabling the system to process entire sentences as cohesive units rather than fragmented phrases. This approach addressed limitations in prior statistical methods by capturing contextual dependencies more effectively, resulting in translation error reductions of 55% to 85% across major language pairs such as English-Chinese and English-Spanish, as measured by human evaluations on diverse corpora.[3][68] In 2020, Google Translate adopted a hybrid architecture incorporating the Transformer model, which replaced the earlier RNN-based GNMT system. The Transformer, known for its self-attention mechanisms, served as the encoder to enhance contextual understanding, paired with an optimized RNN decoder for output generation. This upgrade improved translation accuracy, yielding an average BLEU score increase of 5 points across over 100 languages and up to 7 points for low-resource ones, while reducing latency for faster real-time performance.[69] By 2024, integration of the PaLM 2 large language model expanded Google Translate's capabilities, adding support for 110 new languages, many of which are low-resource and spoken by over 614 million people globally. PaLM 2 facilitates zero-shot translation for these languages by leveraging multilingual pre-training on vast datasets, allowing effective handling of linguistic variations—such as dialects related to Hindi or French creoles—without requiring parallel training data for each pair. 
This advancement prioritizes underrepresented languages, including Indigenous and African tongues like Fon and Kikongo, to broaden accessibility.[4] In 2025, further enhancements powered by Gemini AI models were introduced, including live back-and-forth audio translations that adapt to accents and pauses in over 70 languages, and a model picker allowing users to select between "Fast" and "Advanced" modes for optimized speed versus accuracy. These Gemini integrations build on prior NMT frameworks to improve fluency and contextual understanding, particularly for real-time and multimodal applications, as part of Google's ongoing 1,000 Languages Initiative.[6] Ongoing AI developments in Google Translate include multimodal NMT features that combine image recognition with text translation, enabling users to translate overlaid text in photos or live camera views for practical applications like signage or menus. Additionally, ethical AI filters address biases, particularly gender-related ones, through scalable post-editing techniques that rewrite translations to provide neutral or dual-gender options, achieving over 90% bias reduction in languages like Turkish, Finnish, and Persian while maintaining high precision.[5][70]
Performance and Accuracy
Evaluation methods
Google Translate's translation quality is assessed through a combination of human and automated evaluation methods, each designed to capture different aspects of performance such as fluency, adequacy, and overall accuracy. Human evaluation remains a cornerstone, particularly for nuanced assessments that automated metrics may overlook. Bilingual experts typically rate translations on standardized scales, such as 1-5 for fluency—which measures how natural and grammatically correct the output reads in the target language, independent of semantic fidelity—and adequacy, which evaluates the extent to which the translation preserves the source material's meaning, even if the phrasing is awkward. These ratings are often collected via side-by-side comparisons of machine outputs against human references or competing systems, with inter-annotator agreement ensured through guidelines from frameworks like those developed in early machine translation evaluation efforts. For instance, in evaluating systems like Google Neural Machine Translation (GNMT), human evaluators conducted pairwise comparisons on isolated sentences, revealing significant error reductions compared to prior phrase-based models.[71][72][73] Automated metrics provide scalable alternatives, enabling rapid iteration without exhaustive human involvement. The Bilingual Evaluation Understudy (BLEU) score is a widely adopted automatic metric for Google Translate and other systems, calculating n-gram precision (typically up to 4-grams) between the machine-generated translation and multiple human reference translations, adjusted by a brevity penalty to account for length discrepancies. BLEU emphasizes surface-level overlap, offering a quick proxy for quality that correlates strongly with human judgments, though it has limitations in capturing semantic nuances or fluency. 
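A minimal, sentence-level version of the BLEU computation just described (modified n-gram precision up to 4-grams combined with a brevity penalty) can be sketched as follows; production evaluations instead aggregate counts over whole test corpora and typically use multiple references:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU with a single reference, for illustration only."""
    cand, ref = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_counts, ref_counts = ngrams(cand, n), ngrams(ref, n)
        # Modified precision: each candidate n-gram is credited at most as
        # many times as it appears in the reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        log_precisions.append(math.log(max(overlap, 1e-9) / total))  # smoothed
    # Brevity penalty: exp(1 - r/c) when the candidate is shorter than the
    # reference, 1 otherwise.
    brevity = min(1.0, math.exp(1 - len(ref) / len(cand)))
    return brevity * math.exp(sum(log_precisions) / max_n)

print(round(bleu("the cat sat on the mat", "the cat sat on the mat"), 3))
```

The clipping step is what penalizes degenerate outputs that repeat high-frequency words, while the brevity penalty prevents gaming the precision terms with very short candidates.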
Google Translate's development incorporates BLEU alongside other metrics during training and testing phases to benchmark improvements.[74][75] Standardized benchmarks like those from the annual Workshop on Machine Translation (WMT) play a key role in external validation, where Google Translate submissions are rigorously tested on diverse language pairs. WMT evaluations combine automated scores (e.g., BLEU, chrF) with human assessments, often focusing on direct assessment scales for holistic quality. Google has consistently participated, leading in English-centric pairs such as English-to-French and English-to-German in earlier iterations like WMT'14. In recent cycles as of WMT 2025, Google's research submissions like GEMTRANS, fine-tuned with Gemma 3 for enhanced fluency, have achieved strong performance, while production systems remain competitive but mid-tier in preliminary rankings based on automatic metrics. These benchmarks ensure comparability across systems and highlight progress in low-resource languages.[73][76][77] Internally, Google employs A/B testing to refine Google Translate, deploying variant models to subsets of users and measuring real-world performance via engagement metrics and error rates. This process integrates user feedback loops, historically through features like the "Contribute Translation" tool that allowed corrections to improve algorithms, though such direct input mechanisms have evolved with larger-scale data practices. These iterative tests, informed by production traffic, enable ongoing enhancements by prioritizing variants that reduce post-editing needs or boost user satisfaction.[78][79]
Comparative benchmarks
Google Translate's performance in comparative benchmarks is frequently assessed using the Bilingual Evaluation Understudy (BLEU) score, a standard metric that quantifies the overlap between machine translations and human references on a scale of 0 to 100. For high-resource language pairs, such as English-Spanish, BLEU scores typically averaged 30-40 as of 2019, reflecting strong alignment with human translations due to abundant training data. For instance, evaluations indicate a BLEU score of 38 for English-Spanish as of 2019, highlighting Google Translate's effectiveness for widely used languages, though scores have improved with subsequent AI advancements.[80][69] Performance varies by context, with lower scores in low-resource languages due to limited data; for example, English-Swahili translations achieved around 25 BLEU as of 2019.[81] In comparisons with competitors like DeepL, Google Translate leads in processing speed, handling large-scale translations rapidly thanks to its optimized infrastructure, which suits real-time applications; DeepL, by contrast, is rated higher by human evaluators for idiomatic expression and contextual fidelity, especially in literary texts.[82][83] User satisfaction surveys in 2025 reveal 85% approval for casual translations, such as everyday conversations or simple documents, owing to reliable fluency in common scenarios; satisfaction is also around 85% for technical content as of 2025, though specialized terminology often leads to inaccuracies requiring post-editing.[84][85]

| Language Pair | Resource Level | Approximate BLEU Score (as of 2019) | Source |
|---|---|---|---|
| English-Spanish | High | 38 | 2019 Evaluation Update |
| English-Swahili | Low | 25 | Empirical Evaluation |
| Average High-Resource | High | 30-40 | Google Research |