
Google Neural Machine Translation

Google Neural Machine Translation (GNMT) is an end-to-end system developed by Google and introduced in September 2016 that leverages deep neural networks to produce translations approaching human quality by modeling entire sentences rather than isolated phrases. At its core, GNMT employs an encoder-decoder architecture based on long short-term memory (LSTM) networks with eight layers each, augmented by an attention mechanism to align input and output sequences effectively. This design addresses limitations of prior systems, such as handling rare words through a wordpiece tokenization scheme that breaks vocabulary into subword units, enabling better generalization across languages. Upon deployment in Google Translate, GNMT initially powered translations for the Chinese-to-English language pair, processing over 18 million sentences daily, and demonstrated a 55% to 85% reduction in errors compared to Google's previous phrase-based system, as measured by human side-by-side evaluations scoring translations from 0 (nonsense) to 6 (perfect). The system was trained on massive parallel corpora and used techniques such as residual connections and beam search with coverage penalties, achieving competitive results on benchmarks such as WMT'14 for English-to-French and English-to-German translation. GNMT's innovations extended to multilingual modeling: by jointly learning from multiple languages in a single model, it supported zero-shot translation—translating between language pairs not explicitly trained on—while improving efficiency and fluency for major language pairs in Google Translate, which supported over 100 languages overall by late 2016. GNMT laid the foundation for neural machine translation in Google Translate, which has since evolved to advanced Transformer-based models supporting 249 languages as of November 2025.

Introduction

Definition and Purpose

Google Neural Machine Translation (GNMT) is an end-to-end system developed by Google for automated language translation, employing deep neural networks to process and generate translations for entire sentences rather than breaking them into individual phrases or words. This approach allows the model to learn direct mappings from source language inputs to target language outputs, leveraging vast amounts of bilingual data to produce more natural and contextually aware results. The core purpose of GNMT is to bridge the gap between machine-generated translations and human-level fluency and accuracy, addressing limitations in prior methods by better capturing syntactic and semantic relationships across sentences, which minimizes errors in idiomatic expressions, ambiguities, and long-range dependencies. By focusing on contextual understanding, GNMT aims to deliver translations that are not only precise but also idiomatic, while supporting efficient inference for real-time applications such as text and voice translation. At a high level, GNMT operates through three main stages: input processing to represent the source text in a continuous vector space, translation generation to produce the target sequence probabilistically, and output refinement to select the most coherent and fluent rendition. Launched in November 2016 for eight major language pairs in Google Translate—English to and from French, German, Spanish, Portuguese, Chinese, Japanese, Korean, and Turkish—with the platform supporting over 100 languages overall, GNMT marked a significant advancement in scalable, multilingual translation capabilities.

Evolution from Statistical Machine Translation

Machine translation originated in the mid-20th century with rule-based systems, which dominated the field from the 1950s through the 1980s. These early approaches relied on hand-crafted linguistic rules, bilingual dictionaries, and structural analyses to map source language structures onto target languages, often involving direct word-for-word substitution or intermediate representations like an interlingua. Pioneering efforts, such as those during the postwar era, aimed to automate translation using computational methods, but they were labor-intensive and limited by the need for extensive manual rule development. The late 1980s and 1990s marked a shift to statistical machine translation (SMT), which leveraged probabilistic models trained on large parallel corpora to generate translations without explicit linguistic rules. By the early 2000s, phrase-based SMT emerged as the dominant paradigm, treating multi-word phrases as translation units to capture local context and improve fluency over word-based models. Google Translate, launched in 2006, adopted phrase-based SMT as its core technology, enabling scalable translation across multiple languages by estimating phrase probabilities from data. This era, spanning the mid-2000s to mid-2010s, saw SMT power most commercial systems due to its data-driven efficiency and ability to handle diverse language pairs. Despite these advances, phrase-based SMT exhibited significant limitations that hindered translation quality. It struggled with long-range dependencies, as models operated on short phrases and reordering mechanisms often failed to capture distant syntactic relationships, leading to errors in complex sentences. Word-order differences between languages posed another challenge, with limited reordering capabilities resulting in unnatural alignments, particularly for language pairs with divergent syntax such as English and Japanese. Additionally, the reliance on local phrase contexts produced translations lacking global coherence, often yielding fluent but semantically inaccurate or awkward outputs. The transition to neural approaches was driven by breakthroughs in deep learning, particularly the introduction of sequence-to-sequence (seq2seq) models in 2014, which enabled end-to-end learning of translation mappings using recurrent neural networks. These models addressed SMT's shortcomings by processing entire sequences and incorporating mechanisms like attention to handle dependencies more effectively. Google's Neural Machine Translation (GNMT) system, developed starting in early 2015, represented a key implementation of this paradigm shift, integrating deep LSTMs and attention to produce more natural translations directly from raw text.

Technical Architecture

Encoder-Decoder Model

The encoder-decoder model forms the core architecture of Google Neural Machine Translation (GNMT), employing a sequence-to-sequence framework to transform source language input into target language output. The encoder processes the input sequence X = x_1, x_2, \dots, x_M through a stack of long short-term memory (LSTM) layers, converting it into a sequence of hidden states that encapsulate semantic and syntactic features of the source text. Specifically, GNMT utilizes eight LSTM layers in the encoder: the bottom layer is bidirectional, capturing both left-to-right and right-to-left dependencies, while the upper seven layers are unidirectional. This structure produces a sequence of fixed-dimensional hidden vectors, often referred to as a context vector or list of vectors, which summarizes the input for the decoder. The decoder, also comprising eight LSTM layers, operates autoregressively to generate the output Y = y_1, y_2, \dots, y_N one symbol at a time, conditioned on the previously generated symbols and the encoder's representations. It begins with a start-of-sentence symbol and continues until an end-of-sentence symbol (EOS) is produced, applying a softmax over the vocabulary to predict the probability distribution for each output symbol. To handle variable-length inputs and outputs effectively, the decoder integrates an attention mechanism that dynamically aligns source and target elements during generation. The attention mechanism in GNMT computes soft-alignment weights to focus the decoder on relevant parts of the input sequence, addressing limitations of fixed context vectors in earlier sequence-to-sequence models. For each output position i, the attention context a_i is calculated as a_i = \sum_{t=1}^M \alpha_{it} \cdot h_t, where h_t are the encoder hidden states and \alpha_{it} are the alignment weights derived via softmax: \alpha_{it} = \frac{\exp(e_{it})}{\sum_{k=1}^M \exp(e_{ik})} Here, e_{it} represents the raw alignment score between the previous decoder output y_{i-1} and input position t, computed using a feed-forward network. This soft alignment enables the model to weigh input elements proportionally to their relevance, improving translation quality for long sentences. To facilitate training of these deep networks on large-scale data, GNMT incorporates residual connections, which add the input of each LSTM layer (from the third layer onward) to its output, mitigating vanishing gradients and allowing information to flow directly through the network: y_l = f(x_l) + x_l, where f is the layer transformation and l denotes the layer index. This technique enhances the model's capacity to capture complex linguistic patterns.
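To make these two mechanisms concrete, the following minimal Python sketch (using NumPy) illustrates the soft-alignment computation and the residual rule y_l = f(x_l) + x_l described above. The parameter names W_d, W_e, and v for the small feed-forward scoring network are hypothetical, and the sketch is an illustrative approximation rather than GNMT's production implementation.

    import numpy as np

    def softmax(x):
        # Numerically stable softmax over the last axis.
        z = x - x.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def attention_context(dec_state, enc_states, W_d, W_e, v):
        # dec_state:  previous decoder hidden state, shape (d,)
        # enc_states: encoder hidden states h_1..h_M, shape (M, d)
        # W_d, W_e, v: parameters of the scoring network (hypothetical names)
        scores = np.tanh(dec_state @ W_d + enc_states @ W_e) @ v  # e_it, shape (M,)
        alpha = softmax(scores)                                   # alignment weights
        return alpha @ enc_states, alpha                          # a_i = sum_t alpha_it * h_t

    def residual_lstm_layer(x, lstm_fn):
        # Residual connection, applied from the third stacked layer onward:
        # y_l = f(x_l) + x_l, letting gradients flow past the transformation.
        return lstm_fn(x) + x

The additive scoring form here follows the Bahdanau-style attention that GNMT's feed-forward alignment network resembles; production systems batch these operations over whole sentences and beams.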

Training and Optimization Techniques

The training of Google Neural Machine Translation (GNMT) models relies on massive parallel corpora comprising billions of sentence pairs, primarily sourced from web crawls including Wikipedia articles and news websites. These datasets are substantially larger than public benchmarks; for instance, Google's internal corpora are two to three orders of magnitude bigger than the WMT'14 English-French dataset of 36 million sentence pairs. Preprocessing these corpora involves tokenization into subword units using the WordPiece model, which deterministically breaks words into units drawn from vocabularies of 8,000 to 32,000 wordpieces (e.g., "_J et" for "Jet") to manage rare words, reduce vocabulary size, and improve handling of morphologically rich languages without explicit character fallback. This subword approach, akin to Byte-Pair Encoding, balances vocabulary coverage and model efficiency while avoiding the out-of-vocabulary issues common in full-word tokenization. The core training paradigm for GNMT is end-to-end learning on parallel data, where the model directly optimizes sequence-to-sequence mappings without intermediate phrase-based alignments. Teacher forcing is applied during training, providing the ground-truth preceding target tokens as input when predicting the next token, which accelerates convergence and stabilizes gradient flow compared to scheduled-sampling alternatives. The primary objective function is maximum likelihood via the cross-entropy loss, formulated for a target sequence as: L = -\sum_{t=1}^T \log P(y_t \mid y_{<t}, x) where y = (y_1, \dots, y_T) is the target sequence, y_{<t} denotes the preceding target tokens, and x is the source sequence. This loss is aggregated over the training batch to update model parameters, often augmented with reinforcement learning components for fine-tuning translation quality in production settings. Optimization proceeds with the Adam algorithm for the initial 60,000 steps at a learning rate of 0.0002, transitioning to plain stochastic gradient descent (SGD) with an initial learning rate of 0.5 that decays over time, enabling stable convergence on large-scale data. In multilingual GNMT variants, large vocabularies spanning multiple languages are managed efficiently through shared embeddings, utilizing a unified WordPiece vocabulary of approximately 32,000 units for both source and target languages, which minimizes parameter overhead and promotes cross-lingual transfer. Scalability is achieved via distributed training, such as asynchronous data parallelism across 12 replicas with model sharding over 8 GPUs per replica, allowing efficient processing of billion-scale corpora in weeks. Google Cloud TPUs further enhance this by supporting low-precision arithmetic and synchronous all-reduce operations for faster iterations, as demonstrated in optimized GNMT training runs that achieve up to 4x speedup on TPU clusters. Additionally, zero-shot translation for unseen language pairs emerges from transfer learning in multilingual setups, where a single model trained on multiple supervised pairs (e.g., English-centric) generalizes to novel combinations like Japanese-to-Korean via shared representations, reportedly yielding improvements of 5-10 points over bilingual baselines.
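As a rough illustration of the teacher-forced maximum-likelihood objective above, the sketch below computes L = -\sum_t \log P(y_t \mid y_{<t}, x) from per-position vocabulary logits. The function name and array shapes are assumptions for exposition, not GNMT's actual training code.

    import numpy as np

    def teacher_forced_nll(logits, target_ids):
        # logits: (T, V) scores over the vocabulary at each target position,
        # produced with the ground-truth prefix y_<t fed to the decoder
        # (teacher forcing). target_ids: (T,) gold token ids y_1..y_T.
        z = logits - logits.max(axis=-1, keepdims=True)              # stable log-softmax
        log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
        picked = log_probs[np.arange(target_ids.shape[0]), target_ids]
        return -picked.sum()                                         # L = -sum_t log P(y_t | y_<t, x)

In production training this per-sentence loss is averaged over large batches and minimized first with Adam and then with decaying SGD, following the schedule described above.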

Development History

Initial Announcement and Launch

The development of Google Neural Machine Translation (GNMT) originated from research conducted by Google scientists between 2015 and 2016, culminating in the seminal paper "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation" by Yonghui Wu and colleagues. This work introduced GNMT as an end-to-end model designed to surpass the limitations of prior systems, with initial testing focused on high-resource language pairs such as English-to-Japanese and English-to-Chinese. The research emphasized innovations like attention mechanisms and residual connections to improve translation fluency and accuracy, addressing key challenges in sequence-to-sequence learning for machine translation. Google publicly announced GNMT on September 27, 2016, through a research blog post titled "A Neural Network for Machine Translation, at Production Scale," marking a pivotal shift from the company's longstanding phrase-based approach to a fully neural one. This announcement highlighted GNMT's potential to produce more natural and context-aware translations by modeling entire sentences rather than isolated phrases. The system was integrated into Google Translate starting in November 2016, initially supporting eight language pairs: English with French, German, Spanish, Portuguese, Turkish, Chinese, Japanese, and Korean. At launch, GNMT achieved a landmark improvement, reducing translation errors by approximately 60% compared to Google's previous phrase-based system on several major language pairs, including English-French, English-German, English-Spanish, English-Chinese, and English-Japanese, as evaluated through human assessments and automated metrics like BLEU scores. This error reduction demonstrated GNMT's superior handling of syntactic and semantic nuances, bringing machine translation quality closer to human levels for select scenarios. The initial rollout was limited to these pairs to ensure scalability, with plans for rapid expansion to additional languages based on ongoing training refinements. A major hurdle in deploying GNMT was the immense computational requirement of training deep LSTM-based networks on billions of sentence pairs, which Google overcame by leveraging low-precision arithmetic for faster computations and deploying custom Tensor Processing Units (TPUs)—specialized hardware accelerators announced earlier that year—to enable efficient large-scale training and inference. These optimizations reduced training time significantly while maintaining model performance, allowing GNMT to operate at production scale within Translate's infrastructure.

Key Updates and Advancements

Following its initial launch, Google Neural Machine Translation (GNMT) underwent significant expansions, particularly in supporting multilingual capabilities and improving efficiency. The multilingual model introduced in 2016 was expanded to cover over 100 languages, facilitating zero-shot translation—where the system translates between language pairs not explicitly trained on—by learning shared representations in a single model. Support for 70 additional languages was subsequently added to the Neural Machine Translation model in the Google Cloud Translation API, broadening accessibility for diverse user bases. In 2020, Google Translate integrated the Transformer architecture into its translation systems through a hybrid model using a Transformer encoder and a recurrent neural network (RNN) decoder, enhancing translation speed and quality across supported languages. This shift, building on the Transformer architecture proposed by Google researchers in 2017, enabled parallelized training and faster processing of large datasets, while multilingual capabilities continued to expand concurrently. From 2020 to 2023, advancements focused on leveraging pre-trained language models and techniques for handling scarce data resources. Google incorporated BERT-like pretraining strategies, such as those in the multilingual mT5 model—a text-to-text Transformer pre-trained on data from 101 languages—to capture richer contextual understanding in translations, improving coherence in longer texts and nuanced expressions. For low-resource languages, back-translation was integrated into training pipelines, where monolingual target-language data is automatically translated into the source language to augment parallel training corpora, boosting performance on under-resourced pairs (see the sketch below). This approach enabled the addition of 24 new low-resource languages in 2022, using synthetic data generation to achieve viable translation quality without extensive parallel corpora. In 2024 and 2025, integrations with Google's Gemini family of models marked a pivotal evolution, extending translation beyond text to handle speech, images, and combined inputs for more natural interactions. In August 2025, updates added AI-powered live translation and language-learning tools using Gemini's capabilities. In November 2025, Google Translate introduced Gemini-assisted translations, offering an "Advanced" mode for improved accuracy in select languages. Gemini's native multimodality allows real-time translation of conversations, visual content, and audio, as seen in updates to the Google Translate app that incorporate Gemini for contextual fluency in live scenarios. These enhancements have improved performance on benchmarks like WMT, particularly for complex, context-dependent translations, while prioritizing ethical considerations such as bias mitigation through diverse training data and fairness audits aligned with Google's AI Principles. Efforts to reduce cultural and gender biases in outputs have been emphasized, ensuring more equitable representations across languages. Ongoing research emphasizes adaptations for real-time applications and specialized domains. Developments continue in low-latency translation for mobile and wearable devices, enhancing features like live captioning and conversation mode. Domain-specific tuning, such as for medical translations, leverages customizable models in Google Cloud Translation AI, where fine-tuning on sector-specific terminology improves precision in healthcare contexts without compromising general performance.
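The back-translation step mentioned above can be sketched as follows. The model interface (a translate method on a reverse, target-to-source model) is hypothetical, and real pipelines add filtering and sampling noise; this is only a minimal illustration of the data-augmentation idea.

    def back_translate(mono_target_sentences, reverse_model, parallel_corpus):
        # reverse_model: a target->source translator (hypothetical interface).
        # Each monolingual target sentence gets a synthetic source side; the
        # target side remains genuine human text, which is what improves
        # fluency when the augmented corpus trains the forward model.
        synthetic_pairs = [(reverse_model.translate(t), t)
                           for t in mono_target_sentences]
        return parallel_corpus + synthetic_pairs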

Performance and Evaluation

Benchmarking Metrics

The primary metric used to benchmark Google Neural Machine Translation (GNMT) is the Bilingual Evaluation Understudy (BLEU) score, which quantifies translation quality by measuring n-gram overlap between the machine-generated output and human reference translations, adjusted by a brevity penalty to penalize overly short translations. The formula is given by: BLEU = BP \cdot \exp\left(\sum_{n=1}^N w_n \log p_n \right) where BP is the brevity penalty, p_n is the modified n-gram precision for n up to N (typically 4), and w_n are uniform weights (usually 1/N). In GNMT evaluations, scores were computed using tokenized references via the multi-bleu.pl script on standard benchmarks, with representative results including 41.16 for English-to-French on the WMT'14 newstest2014 dataset using an ensemble model, a significant improvement over prior statistical systems. Complementary automatic metrics include METEOR, which computes a harmonic mean of unigram precision and recall, incorporating synonymy, stemming, and word-order penalties for better correlation with human judgments; TER, which measures the number of edits (insertions, deletions, substitutions, and shifts) needed to match a reference translation, reflecting post-editing effort; and chrF, a character n-gram F-score that captures morphological variation. These metrics provide a more nuanced assessment beyond BLEU's focus on exact matches, though GNMT's primary reporting emphasized BLEU for consistency with Workshop on Machine Translation (WMT) standards. Human evaluations remain essential for validating automatic metrics, typically employing direct assessment scales where annotators rate translations on fluency (naturalness of output) and adequacy (fidelity to source meaning) from 0 to 6, or pairwise comparisons to determine preferences. In GNMT's case, human side-by-side evaluations on WMT'14 English-to-French data showed the system scoring 4.44, outperforming phrase-based baselines (3.87) but falling short of professional human translations (4.82), indicating near-parity in controlled news domains. Error analysis often categorizes issues by morphological accuracy, lexical choice, and syntactic structure, revealing GNMT's strengths in handling long-range dependencies while noting persistent challenges with rare morphological forms. Testing protocols for GNMT leverage WMT datasets for high-resource language pairs like English-French (36 million sentence pairs for training, newstest2014 for evaluation) to ensure standardized comparisons, while low-resource pairs rely on custom internal corpora derived from web-crawled data such as Wikipedia and news sites, often augmented with back-translation for robustness. Subsequent advancements in neural machine translation, building on GNMT, reported human parity for major language pairs like English-German and English-French in news translation by 2019.
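A compact, single-reference, sentence-level rendering of the BLEU formula in Python is shown below. Official GNMT numbers came from the corpus-level multi-bleu.pl script with support for tokenized references, so this sketch only illustrates the brevity penalty and the clipped (modified) n-gram precisions.

    import math
    from collections import Counter

    def ngram_counts(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    def bleu(candidate, reference, N=4):
        # candidate, reference: lists of tokens (single reference for simplicity).
        if not candidate:
            return 0.0
        log_precisions = []
        for n in range(1, N + 1):
            cand, ref = ngram_counts(candidate, n), ngram_counts(reference, n)
            clipped = sum((cand & ref).values())      # clipped n-gram matches
            total = max(sum(cand.values()), 1)
            if clipped == 0:
                return 0.0                            # any zero p_n zeroes BLEU
            log_precisions.append(math.log(clipped / total))
        c, r = len(candidate), len(reference)
        bp = 1.0 if c > r else math.exp(1 - r / c)    # brevity penalty BP
        return bp * math.exp(sum(log_precisions) / N) # uniform weights w_n = 1/N

Corpus-level BLEU instead sums the clipped matches and totals over all sentences before taking logarithms, which avoids the zero-precision issue this sentence-level version exhibits on short outputs.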

Comparisons with Other Systems

Google Neural Machine Translation (GNMT) marked a substantial advancement over statistical machine translation (SMT) systems, primarily through its ability to capture long-range dependencies and contextual nuances more effectively. According to the seminal study by Google's research team, GNMT reduced translation errors by 55% to 85% compared to prior phrase-based models across major language pairs, with human evaluators preferring GNMT outputs in side-by-side comparisons. This improvement stemmed from GNMT's end-to-end neural architecture, which handled sentence-level context holistically, unlike SMT's reliance on fragmented phrase alignments, leading to error reductions on real-world corpora such as Wikipedia and news sources. In comparisons with other neural machine translation systems, GNMT demonstrated strengths in low-resource scenarios and multilingual scalability. Early assessments showed GNMT providing effective zero-shot translation for under-resourced languages through its multilingual model. Against DeepL, an NMT system optimized for European languages, GNMT offered broader multilingual coverage, supporting over 100 languages compared to DeepL's roughly 36, though DeepL was noted for higher naturalness in some European pairs. Microsoft Translator, with around 100 languages, provided solid performance but was considered less versatile for global applications than GNMT.

Applications and Coverage

Supported Language Pairs

Neural machine translation systems, building on Google Neural Machine Translation (GNMT), power translations across over 240 languages as of 2025, enabling over 58,000 directed language pairs through multilingual modeling techniques that let the system handle translations without requiring parallel data for every specific pair. In June 2024, Google added 110 new languages using the PaLM 2 large language model, many of them low-resource, expanding coverage significantly. This expansive coverage is achieved by training shared encoder-decoder architectures on diverse datasets, facilitating both direct and indirect translation paths across languages. High-resource language pairs, such as English to and from major languages like French, Spanish, and Chinese, receive full bidirectional support with dedicated parallel training data exceeding millions of sentence pairs, ensuring high-fidelity translations for these widely used combinations. These pairs benefit from extensive optimization, resulting in robust performance for everyday and professional applications. For low-resource and zero-shot scenarios, the system supports over 50 under-resourced languages by leveraging transfer learning from high-resource languages and monolingual data to generate synthetic parallel corpora. This approach enables zero-shot translation, where the model translates between language pairs never directly trained on, such as Japanese to Korean, via shared representations in the multilingual model (see the sketch below). The system accommodates special cases, including right-to-left scripts such as Hebrew through specialized text processing and rendering, and tonal languages through subword tokenization that preserves phonetic nuances. It also addresses dialect variations distinguished in recent expansions, allowing more accurate handling of regional linguistic differences.
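Mechanically, the multilingual model enables these indirect paths with a simple input convention, sketched below: an artificial token naming the desired target language is prepended to the source sentence, so a single shared model can be steered toward any supported output language, including zero-shot directions. The helper function here is illustrative; the token format follows Google's multilingual NMT work.

    def add_target_language_token(source_tokens, target_lang_code):
        # e.g. target_lang_code "es" yields the artificial token "<2es>",
        # which tells the shared model to emit Spanish output.
        return ["<2" + target_lang_code + ">"] + source_tokens

    # Translating into Spanish, regardless of the source language:
    # add_target_language_token(["Hello", "world"], "es")
    # -> ['<2es>', 'Hello', 'world']

Because every training pair carries such a token, the model learns language-agnostic intermediate representations, which is what allows it to bridge pairs it never saw together during training.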

Integration in Google Services

Neural machine translation systems, building on GNMT, have been the core engine powering the Google Translate app and website since the rollout began in November 2016, enabling high-quality text translations across multiple languages by processing entire sentences for improved fluency and context. Initially deployed for eight major language pairs involving English, the system quickly expanded to support over 100 languages, handling more than a third of global translation queries through features like text input on the website and app. This integration also extends to voice and camera-based translations within the app, where users can speak or point a device camera at text for instant neural-powered conversions, facilitating seamless multilingual communication in everyday scenarios. Beyond Translate, these systems underpin real-time translation features in other Google services, enhancing collaborative and informational tools. In Google Meet, a speech translation feature introduced in 2025—building on GNMT's neural foundations and powered by AI—drives live captions and speech translation, allowing participants to receive near-real-time captions and audio in their preferred language during video calls, starting with English-Spanish pairs and expanding to additional languages for broader accessibility in global meetings. For Google Search, neural machine translation enables multilingual query processing by translating user inputs and results on the fly, supporting seamless searches across languages without requiring manual switching and covering the linguistic scope of pairs supported in Translate. Similarly, in Google Docs, the document-translation function leverages the same translation engine to convert entire documents into target languages while preserving formatting, aiding collaborative editing for international teams since its integration with the Translate API. In 2025, enhancements for on-device applications, particularly on Google Pixel devices, upgraded offline translation capabilities for live scenarios like phone calls and conversations. The Pixel 10 series introduced real-time voice translation during calls, using on-device neural models based on Gemini Nano—building on GNMT principles—to process speech without internet connectivity, supporting 11 languages and maintaining natural voice intonation for more intuitive interactions. This builds on earlier offline NMT deployments, reducing latency and enabling private, always-available translation in remote or travel settings. Additionally, integrations with Gemini AI in tools like Gmail and Google Assistant provide contextual refinements, where neural machine translation outputs are enhanced by Gemini's understanding of email threads or voice queries to suggest culturally nuanced or intent-aware adjustments, improving accuracy in professional correspondence and virtual assistance. Building on neural translation principles from GNMT, Google's accessibility work has extended to experimental prototypes for sign language support, promoting inclusivity for Deaf and hard-of-hearing users worldwide. In May 2025, Google unveiled SignGemma, an on-device AI model that converts American Sign Language (ASL) gestures into spoken text or audio in real time, available via developer previews for integration into apps and devices. This prototype, trained on diverse sign data, aims to bridge communication gaps in education and daily interactions, with plans for multilingual sign support to align with Translate's broad language coverage. Such efforts underscore the foundational impact of neural machine translation on equitable global communication tools.
