
Google DeepMind

Google DeepMind is a British-American artificial intelligence research laboratory and wholly owned subsidiary of Alphabet Inc., founded in London in 2010 by Demis Hassabis, Shane Legg, and Mustafa Suleyman to pursue artificial general intelligence through interdisciplinary approaches combining neuroscience, machine learning, and systems theory. Acquired by Google in 2014 for approximately $500 million, it merged with Google Brain in April 2023 to form the unified Google DeepMind entity under Hassabis's leadership as CEO, focusing on developing safe and beneficial AI systems to advance scientific understanding and solve complex real-world problems. The organization has achieved pioneering breakthroughs in reinforcement learning and deep neural networks, most notably with AlphaGo, which in 2016 defeated world champion Lee Sedol in the game of Go under standard rules, demonstrating superhuman performance in a domain long considered intractable for computers due to its vast search space. This was followed by AlphaZero, a self-taught system that surpassed top human and algorithmic performance in chess, shogi, and Go without domain-specific knowledge. In 2020, AlphaFold solved a 50-year grand challenge by accurately modeling three-dimensional protein structures from amino acid sequences, releasing predictions for over 200 million proteins in 2022 and enabling advances in drug discovery and structural biology. Google DeepMind's work extends to multimodal models like Gemini, which integrates text, images, and code for reasoning tasks, and applications in energy efficiency, scientific research, and healthcare, such as partnerships analyzing NHS patient data for early disease detection despite ensuing scrutiny. While committed to AI safety through frameworks addressing misalignment risks and capability evaluations, the lab has faced criticism for allegedly releasing models like Gemini 2.5 Pro without fully adhering to international safety testing pledges, highlighting tensions between rapid innovation and risk mitigation in frontier AI development.

History

Founding and Early Research (2010–2013)

DeepMind Technologies was founded in September 2010 in London, United Kingdom, by Demis Hassabis, Shane Legg, and Mustafa Suleyman. Hassabis, a neuroscientist with prior experience in video game design and academic research on memory systems, took the role of CEO; Legg, a machine learning theorist who had collaborated with Hassabis at University College London's Gatsby Computational Neuroscience Unit, became chief scientist; and Suleyman, an entrepreneur focused on applied AI ethics and policy, handled business operations. The founders' shared vision centered on advancing artificial general intelligence (AGI) by integrating insights from neuroscience, psychology, and computer science to build systems capable of learning and reasoning like the human brain. From inception, DeepMind adopted an interdisciplinary research approach, assembling a small initial team of experts across these disciplines. The company's early efforts emphasized reinforcement learning (RL) paradigms enhanced by deep neural networks, aiming to create autonomous agents that could learn optimal behaviors directly from high-dimensional sensory data without task-specific programming. This foundational work involved developing algorithms for sparse-reward environments and scalable architectures, drawing inspiration from biological learning mechanisms to address challenges such as credit assignment and exploration. By 2011–2013, DeepMind had secured seed funding from investors including Horizons Ventures (led by Li Ka-shing), Founders Fund (backed by Peter Thiel), Elon Musk, and Jaan Tallinn, who also served as an early adviser emphasizing AI safety concerns. These investments supported expansion to around 20–30 researchers and initial prototypes demonstrating RL agents mastering simple simulated tasks, such as navigating mazes or basic games, which foreshadowed applications in more complex domains. The period marked the genesis of DeepMind's signature technique, deep reinforcement learning, prioritizing empirical validation through iterative experimentation over theoretical purity, though publications remained limited as the focus stayed on proprietary system-building.

Acquisition by Google and Expansion (2014–2018)

In January 2014, Google acquired DeepMind Technologies, a London-based artificial intelligence research company, for a reported $500 million. The acquisition provided DeepMind with substantial computational resources, including access to Google's tensor processing units (TPUs), enabling scaled-up experimentation in deep reinforcement learning. DeepMind's founders, including Demis Hassabis, retained operational independence while aligning with Google's broader AI objectives, though ethical guidelines were established to guide applied research. Following the acquisition, DeepMind expanded its scope and team size, growing from approximately 75 employees pre-acquisition to around 700 by 2018. This growth facilitated advancements in applying neural networks to complex domains, with Google's infrastructure supporting intensive training regimes that would have been infeasible independently. The company began establishing satellite teams outside London, including a small group in Mountain View, California, by late 2016 to collaborate on Google products, followed by an office in Edmonton, Alberta, in 2017 focused on reinforcement learning research, and one in Paris in 2018 for fundamental AI research and ethics. A pivotal milestone was the development of AlphaGo, initiated in 2014, which combined deep neural networks with Monte Carlo tree search to master the game of Go. In October 2015, AlphaGo defeated European champion Fan Hui 5-0, and in March 2016, it won 4-1 against world champion Lee Sedol in Seoul, an event viewed by over 200 million people and demonstrating AI's capacity for intuitive strategy in high-branching-factor environments. This success underscored the efficacy of deep reinforcement learning, trained on millions of simulated games using Google's hardware. In 2016, DeepMind introduced WaveNet, a generative model for raw audio waveforms that produced highly natural speech, outperforming traditional parametric methods and influencing text-to-speech systems such as the voices used in Google products. The following year, AlphaZero extended this paradigm, learning Go, chess, and shogi from scratch via self-play without domain-specific knowledge, surpassing prior specialized engines like Stockfish in chess after four hours of training and achieving superhuman performance in Go within days. DeepMind also ventured into healthcare, partnering with the Royal Free NHS Foundation Trust in 2016 to develop the Streams app for real-time detection of acute kidney injury, granting access to 1.6 million patient records to train predictive models. The initiative aimed to alert clinicians to at-risk patients but drew scrutiny from regulators and academics for inadequate consent and governance, highlighting tensions between scalability and patient privacy in applied settings. Similar pilots followed with Moorfields Eye Hospital for eye disease detection and Imperial College Healthcare for mobile clinical tools. These efforts marked DeepMind's shift toward real-world impact, leveraging Google's resources for interdisciplinary expansion while navigating ethical challenges.

Integration, Rebranding, and Recent Milestones (2019–Present)


Following its acquisition by Google in 2014, DeepMind operated with relative autonomy within Alphabet, but from 2019, collaboration with Google's AI initiatives intensified, including joint projects on scalable AI architectures and shared computational resources. In April 2023, Alphabet merged DeepMind with the Brain team from Google Research to form Google DeepMind, a consolidated entity led by CEO Demis Hassabis, designed to unify expertise in advancing foundational models and applications. This integration aimed to streamline efforts toward developing more capable, responsible AI systems by combining DeepMind's research strengths with Google Brain's large-scale engineering capabilities.
In April 2024, Alphabet further unified its AI efforts by consolidating model-building teams across DeepMind and the broader Google Research division, enhancing coordination amid competitive pressures in generative AI. The rebranding to Google DeepMind emphasized a focused mission on transformative AI, with over 2,500 researchers contributing to breakthroughs in AI systems and scientific modeling. Key milestones since the merger include the December 2023 launch of the Gemini 1.0 model family, Google's first natively multimodal large language models capable of processing text, images, audio, and video. Subsequent releases featured Gemini 1.5 in February 2024 with expanded context windows exceeding 1 million tokens, and Gemini 2.5 Pro in March 2025, integrating advanced reasoning for complex tasks. In May 2024, AlphaFold 3 extended protein structure predictions to ligand and nucleic acid interactions, and later that year Demis Hassabis and John Jumper shared the 2024 Nobel Prize in Chemistry for prior AlphaFold innovations. In July 2025, a Gemini variant with "Deep Think" reasoning achieved gold-medal performance at the International Mathematical Olympiad, solving five of six problems near-perfectly. By September 2025, Gemini 2.5 Deep Think secured gold-level results at the International Collegiate Programming Contest (ICPC) World Finals and demonstrated human-level proficiency in solving previously intractable real-world programming challenges, marking a claimed historic advance in AI problem-solving.

Technological Foundations

Deep Reinforcement Learning Paradigms

Google DeepMind has significantly advanced deep reinforcement learning (DRL) through innovations in both model-free and model-based paradigms, emphasizing scalable algorithms that learn from high-dimensional data and sparse rewards. Early work focused on model-free value-based methods, exemplified by the Deep Q-Network (DQN) introduced in 2013, which used convolutional neural networks to approximate Q-values directly from raw pixel inputs in Atari 2600 games. DQN incorporated experience replay to stabilize training by breaking correlations in sequential data and a target network to mitigate moving-target problems, achieving performance comparable to human experts across 49 Atari tasks. These techniques addressed key instabilities in combining deep neural networks with Q-learning, a foundational model-free approach that estimates action-values without explicit environment modeling. For domains with immense action spaces, such as board games, DeepMind integrated DRL with Monte Carlo tree search (MCTS), a planning algorithm that simulates trajectories to guide policy selection. In AlphaGo (2016), policy networks were initially trained via supervised learning on human expert games, then refined through self-play using policy gradient methods, a model-free approach that directly optimizes policy parameters by maximizing expected rewards. Value networks estimated win probabilities, enabling MCTS to perform lookahead planning informed by learned representations rather than random rollouts. This hybrid paradigm surpassed traditional search methods, defeating world champion Lee Sedol 4-1. AlphaZero (2017) eliminated human data dependency, relying solely on self-play RL to bootstrap policy and value networks, demonstrating rapid convergence to superhuman performance in Go, chess, and shogi within hours on specialized hardware. DeepMind further evolved these approaches toward model-based paradigms with MuZero (2019), which learns an implicit model of environment dynamics, rewards, and representations without being given the rules or a simulator. MuZero employs learned representation, dynamics, and prediction networks to estimate future rewards, values, and policies from latent states, integrating this learned model with MCTS for planning inside otherwise model-free training loops. This paradigm achieved state-of-the-art results in Atari visual control tasks and matched AlphaZero in Go, chess, and shogi, planning effectively despite partially observable and unknown rules. Unlike purely model-free methods, which rely on extensive real interactions, MuZero's learned model enables simulated rollouts for sample-efficient planning, though it retains reinforcement learning's trial-and-error core for model refinement. Extensions like EfficientZero have pushed sample efficiency to human levels on Atari via self-supervised auxiliary tasks. These paradigms highlight DeepMind's progression from purely reactive model-free agents to hybrid systems blending representation learning, self-supervised improvement, and latent-space planning, prioritizing generality across discrete and continuous domains. While model-free methods like DQN excel in simplicity and robustness to model errors, model-based elements in MuZero enhance long-horizon reasoning, though both face challenges in real-world deployment due to high computational demands and sensitivity to hyperparameters.
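
The DQN recipe above, pairing experience replay with a periodically frozen target network, can be summarized in a short sketch. The snippet below is a minimal illustration in PyTorch, not DeepMind's original Atari implementation: it assumes a toy environment with 4-dimensional states and 2 discrete actions, and fills the replay buffer with random transitions purely to show the update mechanics.

```python
# Minimal DQN update sketch: experience replay plus a periodically synced target network.
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, GAMMA = 4, 2, 0.99

def make_q_net():
    return nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))

q_net, target_net = make_q_net(), make_q_net()
target_net.load_state_dict(q_net.state_dict())            # start the two networks in sync
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)                              # experience replay buffer

def train_step(batch_size=32):
    batch = random.sample(replay, batch_size)              # break temporal correlations
    states, actions, rewards, next_states, dones = zip(*batch)
    s = torch.tensor(states, dtype=torch.float32)
    a = torch.tensor(actions, dtype=torch.int64)
    r = torch.tensor(rewards, dtype=torch.float32)
    s2 = torch.tensor(next_states, dtype=torch.float32)
    done = torch.tensor(dones, dtype=torch.float32)
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)      # Q(s, a) for the taken actions
    with torch.no_grad():                                  # frozen target network
        target = r + GAMMA * target_net(s2).max(1).values * (1.0 - done)
    loss = nn.functional.smooth_l1_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Fill the buffer with random stand-in transitions, then run updates,
# periodically copying weights into the target network.
for _ in range(1000):
    replay.append((torch.randn(STATE_DIM).tolist(), random.randrange(N_ACTIONS),
                   random.random(), torch.randn(STATE_DIM).tolist(),
                   float(random.random() < 0.05)))
for step in range(200):
    train_step()
    if step % 50 == 0:
        target_net.load_state_dict(q_net.state_dict())     # periodic target-network sync
```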

Multimodal and Scalable AI Architectures

DeepMind has developed architectures designed to process and integrate multiple modalities, such as text, images, audio, and video, enabling unified models that approximate human-like perception across diverse inputs. The Perceiver and Perceiver IO models, introduced in 2021, represent early advancements in this domain by employing latent arrays with cross-attention mechanisms to handle arbitrarily large and high-dimensional inputs without the quadratic computational scaling of standard transformers. These architectures compress inputs into fixed-size latent representations, allowing efficient processing for tasks like image classification, audio processing, and video analysis, while scaling linearly with input size rather than quadratically. Building on these foundations, DeepMind's Gato, released in May 2022, demonstrated a scalable transformer-based generalist agent capable of handling inputs across over 600 tasks, including Atari game playing, image captioning, and robotic arm control. With 1.2 billion parameters, Gato used a single sequence-to-sequence transformer to map tokenized observations to actions, achieving performance at approximately 60% of expert level on many tasks through scaling compute and data rather than task-specific specialization. This approach highlighted the potential of unified policies to generalize across embodiments and modalities, though it underscored limitations in surpassing expert performance without further scaling or specialized fine-tuning. The Gemini family, launched in December 2023, advanced scalability with native support for text, code, images, audio, and video in a single model optimized for long-context understanding and efficient inference. Subsequent iterations, such as Gemini 1.5 in 2024, extended context windows to millions of tokens while maintaining reasoning quality, enabling applications like long-form video and codebase analysis through decoder-only transformers with mixture-of-experts components for compute efficiency. Gemini 2.0, introduced in December 2024, further emphasized agentic capabilities in interactive settings, integrating tool use and code execution across modalities to solve complex, real-world problems. These developments reflect DeepMind's focus on architectures that leverage scaling laws, empirically observed relationships between model size, data, and compute, to achieve emergent capabilities in multimodal integration. In parallel, innovations like the Mixture-of-Recursions (MoR) architecture, released by DeepMind in July 2025, addressed scalability bottlenecks by combining recursive processing with mixture-of-experts routing, reducing memory usage by up to 50% and doubling inference speed compared to dense transformers on large-scale multimodal tasks. This hybrid design enables handling of extended sequences and diverse inputs without proportional compute increases, supporting deployment in resource-constrained environments while preserving performance on benchmarks involving text, vision, and hybrid data. Overall, DeepMind's multimodal architectures prioritize computational efficiency and empirical validation through benchmarks, favoring raw predictive power over interpretive alignment with human biases in data labeling.
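
As a rough illustration of the latent-bottleneck idea behind Perceiver-style models, the sketch below (in PyTorch, with illustrative dimensions rather than the published configuration) shows a small set of learned latent vectors cross-attending to a long input sequence, so that attention cost grows linearly with input length instead of quadratically.

```python
# Perceiver-style latent bottleneck: fixed-size latents query an arbitrarily long input.
import torch
import torch.nn as nn

class LatentCrossAttention(nn.Module):
    def __init__(self, num_latents=64, dim=128, heads=4):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(num_latents, dim))   # learned latent array
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.self_attn = nn.TransformerEncoderLayer(dim, heads, batch_first=True)

    def forward(self, inputs):                        # inputs: (batch, seq_len, dim)
        b = inputs.shape[0]
        q = self.latents.unsqueeze(0).expand(b, -1, -1)
        # Latents attend to the raw inputs: cost is O(num_latents * seq_len), linear in seq_len.
        z, _ = self.cross_attn(q, inputs, inputs)
        # Further processing happens only in the fixed-size latent space.
        return self.self_attn(z)

# Example: 16,000 input tokens are compressed into 64 latent vectors.
x = torch.randn(2, 16_000, 128)
print(LatentCrossAttention()(x).shape)                # torch.Size([2, 64, 128])
```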

Achievements in Specialized Domains

Games and Strategic Decision-Making

DeepMind's foundational contributions to games and strategic decision-making stemmed from applying deep reinforcement learning to the Atari 2600 benchmark, a suite of 57 action-oriented video games. In December 2013, researchers published a paper demonstrating deep Q-networks (DQN), which used convolutional neural networks to approximate the action-value function and learn optimal policies directly from raw pixel inputs via experience replay and target networks, surpassing human performance on games such as Breakout and Pong. Extensions in February 2015 enabled an agent to achieve performance at or above human level on 49 of the 57 games, relying solely on sensory experience without domain-specific knowledge. Further progress culminated in Agent57 in March 2020, the first agent to exceed the human baseline across all 57 games, incorporating unsupervised auxiliary tasks and the Pop-Art normalization technique to handle diverse reward scales and non-stationarity. A pivotal advancement occurred in Go, a game with approximately 10^170 possible positions, far exceeding chess. AlphaGo, unveiled in October 2015, defeated Fan Hui, the three-time European champion, 5-0 in a closed-door match, the first instance of a computer program beating a professional Go player. In March 2016, AlphaGo bested world champion Lee Sedol 4-1 in a public five-game series in Seoul, employing a combination of deep neural networks for policy and value estimation, Monte Carlo tree search, and supervised learning from human expert games followed by refinement via self-play reinforcement learning. This success demonstrated the efficacy of combining learned neural evaluation with search algorithms for high-branching-factor strategic domains. AlphaZero generalized AlphaGo's paradigm in December 2017, starting without human data or domain heuristics to reach superhuman levels in Go, chess, and shogi within hours on specialized hardware; for instance, it defeated Stockfish 8 in chess after four hours of training, evaluating roughly 80,000 positions per second during search. MuZero extended this in a 2019 preprint, fully detailed in December 2020, by learning latent models of environment dynamics implicitly, achieving state-of-the-art results in Go (superior to AlphaZero), chess, shogi, and Atari without explicit rules, thus decoupling representation learning from planning. In real-time strategy games, AlphaStar tackled StarCraft II's imperfect information and multi-agent complexity. Announced in January 2019, it defeated professional players, including Grzegorz "MaNa" Komincz, by a combined 10-1 in exhibition matches, using multi-agent reinforcement learning with population-based training across diverse agents to handle long action sequences and partial observability. By October 2019, AlphaStar attained Grandmaster rank on Battle.net for all three races (Protoss, Terran, and Zerg), outperforming 99.8% of human players through techniques like transformer-based recurrent models for action encoding. Addressing imperfect-information games, DeepNash mastered Stratego in December 2022, a board game involving hidden units, deception, and long-term bluffing with 10^535 possible configurations. Employing model-free reinforcement learning with recurrent neural networks and a novel regularized Nash dynamics solver, DeepNash achieved an 84% win rate against top human experts on the Gravon platform, ranking in the all-time top three and approximating a Nash equilibrium without rule-based heuristics. These systems collectively advanced AI's capacity for foresight, planning under uncertainty, and equilibrium computation in strategic environments, influencing applications beyond games such as resource optimization.
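
The search in these systems selects actions by maximizing a PUCT score that trades the current value estimate against the policy network's prior, scaled by visit counts; the commonly cited form from the AlphaGo Zero and AlphaZero papers is:

```latex
% PUCT selection rule: Q(s,a) is the mean simulation value, P(s,a) the policy prior,
% N(s,a) the visit count of action a at state s, and c_puct an exploration constant.
a^{*} = \arg\max_{a} \left[ Q(s,a) + c_{\mathrm{puct}} \, P(s,a) \,
        \frac{\sqrt{\sum_{b} N(s,b)}}{1 + N(s,a)} \right]
```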

Protein Folding and Biological Modeling

DeepMind's AlphaFold system addressed the longstanding protein folding problem, which involves predicting the three-dimensional structure of proteins from their amino acid sequences, a challenge central to structural biology since the early 1970s due to the combinatorial complexity of folding pathways and energy landscapes. AlphaFold 2, unveiled during the 2020 Critical Assessment of Structure Prediction (CASP14) competition, achieved unprecedented accuracy by modeling protein structures with atomic-level precision, outperforming all competitors with a global distance test (GDT) score of approximately 92 on challenging targets. This version employed an attention-based architecture that integrated evolutionary data from multiple sequence alignments and geometric constraints, enabling reliable predictions even for proteins without known homologs. In July 2021, DeepMind published the full methodology for AlphaFold 2, validating its performance on diverse protein sets beyond CASP14, where it demonstrated mean backbone root-mean-square deviation (RMSD) errors under 1 Å for many cases. The system's impact extended through the 2022 release of the AlphaFold Protein Structure Database, in collaboration with the European Bioinformatics Institute (EMBL-EBI), which provided predicted structures for over 200 million proteins, covering nearly all cataloged proteins known to science. This resource has accelerated research in fields like drug design and enzyme engineering by reducing reliance on time-intensive experimental methods such as X-ray crystallography or cryo-electron microscopy. The breakthrough earned recognition in 2024 when DeepMind CEO Demis Hassabis and AlphaFold lead John Jumper received the Nobel Prize in Chemistry for computational protein structure prediction. AlphaFold 3, announced on May 8, 2024, expanded capabilities to broader biological modeling by predicting not only protein structures but also their interactions with DNA, RNA, ligands, and ions using a diffusion-based generative architecture. This model improved accuracy for complex assemblies, achieving up to 50% better performance on ligand-bound protein benchmarks compared to prior tools, and supports applications in understanding molecular interactions critical for cellular processes. While AlphaFold 3's code and weights were initially restricted, partial releases for non-commercial academic use followed in November 2024, enabling broader validation and refinement. These advancements underscore DeepMind's shift from isolated structure prediction to holistic modeling of biomolecular systems, though limitations persist in capturing dynamics like protein conformational changes over time.
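
To make the RMSD figures above concrete, the snippet below computes backbone RMSD between two coordinate sets; it is a generic illustration with synthetic coordinates (and assumes the structures are already superimposed), not AlphaFold's evaluation pipeline.

```python
# Backbone RMSD: root-mean-square deviation between predicted and reference
# alpha-carbon coordinates, after superposition. Coordinates here are synthetic.
import numpy as np

def backbone_rmsd(pred: np.ndarray, ref: np.ndarray) -> float:
    """RMSD in the same units as the inputs (angstroms for real structures)."""
    assert pred.shape == ref.shape
    return float(np.sqrt(np.mean(np.sum((pred - ref) ** 2, axis=1))))

rng = np.random.default_rng(0)
reference = rng.normal(size=(120, 3)) * 10.0                           # 120 residues, x/y/z
predicted = reference + rng.normal(scale=0.5, size=reference.shape)    # ~0.5 A perturbation
print(f"backbone RMSD: {backbone_rmsd(predicted, reference):.2f} A")
```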

Mathematical Problem-Solving and Algorithm Optimization

Google DeepMind has advanced capabilities in mathematical problem-solving through neuro-symbolic systems that combine neural language models with symbolic deduction and search mechanisms. These approaches enable the generation and verification of formal proofs for complex problems, particularly in Euclidean geometry and general olympiad mathematics, achieving performance comparable to human competitors. In January 2024, DeepMind introduced AlphaGeometry, a system trained on synthetic data without human demonstrations, which solved 25 out of 30 olympiad geometry problems drawn from International Mathematical Olympiad (IMO) competitions, approaching the level of a human gold medalist. The system uses a neural language model to propose geometric constructions and a symbolic engine for deductive reasoning, outperforming previous methods that relied heavily on human-crafted rules. This breakthrough was detailed in a Nature paper, highlighting its efficiency in handling Olympiad-level proofs requiring up to hundreds of steps. Building on this, in July 2024, DeepMind's AlphaProof and an improved AlphaGeometry 2 achieved silver-medal performance at the IMO, collectively solving four out of six problems from the competition, scoring 28 out of 42 points. AlphaProof employs a Gemini language model fine-tuned for formal mathematical reasoning, paired with a self-improving verifier that searches proof steps in the Lean formal language, enabling it to tackle algebra, number theory, and combinatorics problems without domain-specific training. AlphaGeometry 2 enhanced geometric solving to near-gold levels. These systems demonstrated progress toward general mathematical reasoning but required significant computational resources, with AlphaProof exploring up to billions of proof steps per problem. By July 2025, an advanced version of DeepMind's Gemini model with "Deep Think" capabilities reached gold-medal standard at the International Mathematical Olympiad, solving five of six problems for 35 points, marking a step toward broader proficiency in advanced mathematics. This natural language-based approach integrated chain-of-thought reasoning, surpassing prior specialized systems in versatility across problem types. In algorithm optimization, DeepMind's FunSearch, introduced in December 2023, leverages large language models in an evolutionary framework to discover novel programs for mathematical and computational challenges. FunSearch iteratively generates code solutions via a pretrained language model, evaluates them against objective functions, and evolves high-scoring programs, yielding state-of-the-art results such as cap set constructions larger than prior human records and improved heuristics for online bin packing on tested inputs. Applied also to related optimization settings, it produced functions outperforming Gaussian processes on benchmark tasks. This method, published in Nature, emphasizes scalable automated search over manual derivation, though it remains constrained to problems amenable to programmatic representation.
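
The FunSearch loop described above can be sketched schematically: propose program variants, score them with a deterministic evaluator, and keep the best to seed the next generation. In the toy sketch below, `llm_propose` is a placeholder that perturbs a single numeric parameter of a bin-packing-style rule, standing in for the language model DeepMind actually used.

```python
# Schematic FunSearch-style evolutionary loop with a stubbed "LLM" proposal step.
import random

rng = random.Random(0)
ITEMS = [rng.uniform(0.1, 0.9) for _ in range(50)]    # fixed synthetic item sizes

def evaluate(params):
    """Toy objective: run a threshold rule over the items and count bins used (fewer is better)."""
    threshold = params["threshold"]
    bins, current = 1, 0.0
    for item in ITEMS:
        if current + item > threshold:                 # open a new bin when the rule triggers
            bins, current = bins + 1, item
        else:
            current += item
    return -bins                                       # higher score = fewer bins

def llm_propose(parent):
    """Placeholder for the LLM mutation step: perturb the parent program's parameter."""
    child = dict(parent)
    child["threshold"] = min(1.0, max(0.1, parent["threshold"] + random.gauss(0, 0.05)))
    return child

population = [{"threshold": random.uniform(0.3, 1.0)} for _ in range(8)]
for generation in range(200):
    survivors = sorted(population, key=evaluate, reverse=True)[:4]   # keep the best programs
    population = survivors + [llm_propose(random.choice(survivors)) for _ in range(4)]

best = max(population, key=evaluate)
print("best threshold:", round(best["threshold"], 3), "bins used:", -evaluate(best))
```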

Robotics, Simulation, and Physical World Applications

Google DeepMind has advanced robot learning through transformer-based models that enable robots to process vision, language, and action data for general-purpose control. The RT-2 model, introduced in July 2023, integrates web-scale vision-language data with robot trajectories to perform tasks like picking up objects described in natural language, achieving 62% success on novel instructions unseen during training, compared to 32% for baselines. Building on this, the RT-X initiative, launched in October 2023, aggregates datasets from 22 robot embodiments across 21 institutions, encompassing 527 skills and over 160,000 tasks, to train scalable models that generalize across robot types, yielding a 50% average success-rate improvement across five labs. In simulation, DeepMind develops world models to generate realistic physical environments for training without real-world hardware risks. Genie 3, released in August 2025, simulates dynamic scenes with accurate physics, supporting real-time interaction for tasks like navigation and manipulation, outperforming prior models in consistency and fidelity for agent learning. Complementing this, DreamerV4, announced in October 2025, enables agents to learn complex behaviors entirely in latent spaces, scaling to high-dimensional environments for efficient skill acquisition transferable to physical robots. These simulators address data scarcity in robotics by synthesizing diverse trajectories, grounding policy learning in imagined rollouts. For physical world applications, DeepMind's Gemini Robotics models deploy AI agents in real environments, emphasizing spatial reasoning and dexterous manipulation. Gemini Robotics 1.5, unveiled in September 2025, converts visual inputs and instructions into motor commands, excelling in multi-step planning for tasks like salad preparation or origami folding, with state-of-the-art performance on embodied reasoning benchmarks. Dexterity-focused systems like ALOHA Unleashed and DemoStart, from September 2024, facilitate learning of fine-motor skills such as bimanual object handling via imitation and trajectory optimization, reducing training data needs by leveraging demonstrations. In multi-robot coordination, a September 2025 collaboration with Intrinsic employs graph neural networks and reinforcement learning for scalable task planning, enabling synchronized trajectories in shared spaces without centralized control. AutoRT, introduced in January 2024, further supports real-world data collection by prompting large language models to explore environments autonomously, generating diverse robot policies for iterative improvement. These efforts prioritize generalization over task-specific tuning, though real-world deployment remains constrained by hardware variability and safety requirements.

Generative AI and Language Models

Gemini Family Development and Capabilities

The Gemini family comprises a series of multimodal large language models developed by Google DeepMind, designed for processing and reasoning across text, images, audio, and video inputs, succeeding earlier models like PaLM 2. Introduced on December 6, 2023, the initial Gemini 1.0 lineup included Ultra, Pro, and Nano variants, with Ultra achieving state-of-the-art results on 30 of 32 evaluated benchmarks, including human-expert-level performance on the MMLU benchmark and top scores across all 20 tested multimodal tasks. These models employed a sparse Mixture-of-Experts architecture trained on Google's TPUv5p hardware, emphasizing native multimodality without reliance on separate modality-specific components. Subsequent iterations expanded context windows and reasoning depth. Gemini 1.5 Pro, announced February 15, 2024, introduced a 1 million token context length for enhanced long-form understanding, becoming generally available on May 23, 2024. Gemini 2.0 followed on December 11, 2024, focusing on agentic capabilities for interactive tasks. The Gemini 2.5 series, released progressively from March 2025, featured Pro, Flash, and Flash-Lite variants; for instance, Gemini 2.5 Pro Experimental debuted March 25, 2025, with full 2.5 Pro rollout by June 5, 2025, and further 2.5 updates on September 25, 2025. These updates incorporated "Thinking" modes for adaptive, step-by-step reasoning with controllable compute budgets, were trained on datasets with cutoffs up to January 2025, and were optimized for efficient inference. Core capabilities emphasize cross-modal reasoning, long-context retention exceeding 1 million tokens, and agentic workflows, such as simulating interactions in environments like web browsers or video games (e.g., progressing through Pokémon via screenshots and tool calls). Gemini 2.5 models support image generation, editing, and transformation via text prompts, alongside video analysis up to 3 hours or 7,200 frames. On benchmarks, Gemini 2.5 Pro scored 74.2% on LiveCodeBench (versus 30.5% for 1.5 Pro), 88.0% on AIME 2025 math problems, and 86.4% on GPQA diamond-tier questions, outperforming predecessors by margins of 2x to 5x in tasks like SWE-bench Verified and Aider Polyglot. These gains stem from post-training for parallel thinking paths and reduced memorization, though evaluations note dependencies on test-time compute scaling.
Model Variant    | Key Benchmark Performance                                            | Context Window
Gemini 1.0 Ultra | State-of-the-art on 30/32 tasks, including MMLU (human-expert level) | Up to 32K tokens
Gemini 1.5 Pro   | Strong in long-context retrieval; baseline for multimodal video      | 1M+ tokens
Gemini 2.5 Pro   | 88.0% AIME 2025; 74.2% LiveCodeBench; leads WebDev Arena             | >1M with Thinking mode
Specialized extensions, like Gemini 2.5 Computer Use released October 7, 2025, enable low-latency control of browsers and mobile interfaces, surpassing alternatives in web navigation benchmarks. Overall, the family's progression reflects iterative scaling of data, compute, and post-training alignment, prioritizing verifiable empirical advances in reasoning and multimodality over unbenchmarked claims.

Specialized Generative Systems and Tools

Google DeepMind has developed a suite of specialized generative models tailored for creative synthesis and interactive simulations, building on diffusion-based architectures and world modeling techniques to produce high-fidelity outputs in domains such as imagery, video, audio, and virtual environments. These systems emphasize realism, controllability, and integration with multimodal inputs, enabling applications in media production, design, and simulation. Unlike the broader Gemini family, these tools target domain-specific generation tasks, often incorporating physical priors and temporal consistency for more coherent results. Imagen serves as DeepMind's flagship text-to-image diffusion model, capable of producing detailed, photorealistic visuals from textual descriptions by iteratively denoising latent representations. The latest iteration, Imagen 4, introduced on May 20, 2025, enhances prompt adherence, stylistic versatility, and resolution quality, supporting advanced features like style transfer from reference images. This model has been integrated into Google tools for content creation in design and advertising, outperforming prior versions in benchmarks for anatomical accuracy and compositional coherence. Veo represents DeepMind's advancement in text-to-video generation, synthesizing dynamic clips with realistic motion, physics simulation, and scene transitions from prompts or initial frames. Veo 3, announced in May 2025, excels in maintaining long-sequence consistency and creative controls such as camera angles and object trajectories, while Veo 3.1 extends this to native audio synthesis, including sound effects, ambient noise, and dialogue synced to visuals. These capabilities facilitate cinematic production, with evaluations showing superior performance in realism and narrative fidelity compared to contemporaries. Lyria focuses on music and audio generation, employing autoregressive and diffusion-based methods to compose original tracks, harmonies, and soundscapes from textual or melodic prompts. Integrated into collaborative tools like MusicFX, it generates high-fidelity waveforms that capture genre-specific nuances, instrumentation, and emotional tone, aiding composers in ideation and extension of existing pieces. DeepMind's approach prioritizes ethical safeguards, such as SynthID watermarking of outputs to denote AI origin. Genie 3, unveiled on August 5, 2025, introduces a foundational world model for generating interactive 3D environments from text prompts, enabling real-time exploration at 720p resolution and 24 frames per second with consistency spanning up to one minute. This model simulates physical dynamics like fluid motion and lighting, supports event-driven alterations (e.g., weather shifts), and interfaces with agents like SIMA for task-oriented interactions in virtual spaces. Advancing beyond Genie 2, it enhances realism and temporal coherence, positioning it as a tool for training embodied agents and prototyping simulations in robotics and gaming.

Applications and Real-World Deployments

Healthcare and Medical Advancements

In 2016, DeepMind initiated a research partnership with Moorfields Eye Hospital to apply deep learning algorithms to optical coherence tomography (OCT) scans for the early detection of sight-threatening conditions, including diabetic retinopathy and age-related macular degeneration. The resulting system analyzed over one million anonymized retinal images and achieved diagnostic accuracy comparable to or exceeding that of world-leading eye experts in identifying referable cases of these diseases, enabling faster triage and potential prevention of blindness in up to 50 common eye conditions. This collaboration demonstrated AI's capacity to augment clinical workflows by prioritizing urgent cases, with initial deployment focusing on improving diagnostic accuracy in resource-constrained settings. DeepMind developed the Streams application in partnership with Royal Free London NHS Foundation Trust to address acute kidney injury (AKI), a condition contributing to approximately 100,000 deaths annually in the UK. Deployed in 2017, Streams uses models trained on electronic patient records to predict AKI risk up to 48 hours before clinical onset, alerting clinicians via mobile notifications and integrating with dashboards for patient monitoring. A 2019 peer-reviewed evaluation across 37,000 admissions found that Streams implementation correlated with a 17% reduction in average AKI admission costs, equivalent to £2,000 per patient, through earlier interventions that shortened hospital stays and mitigated severe outcomes like dialysis needs. In September 2019, DeepMind's health research team merged with Google Health to expand applications beyond isolated tools, emphasizing scalable diagnostics grounded in large-scale clinical data. Subsequent efforts include MedGemma, a suite of open medical AI models for processing medical text and images, designed to support tasks like report generation and visual question answering to assist clinicians in interpreting complex data. In protein engineering for therapeutics, AlphaProteo, introduced in 2024, generates de novo proteins that bind specific targets with nanomolar affinity, facilitating applications in drug development and disease modeling by enabling custom molecular interactions not achievable through traditional methods. These advancements prioritize empirical validation through benchmarks against human performance and clinical trials, though real-world efficacy depends on integration with regulatory-approved pipelines.

Datacenter Efficiency and Enterprise Tools

DeepMind applied machine learning techniques to optimize cooling systems in Google's data centers, achieving a reduction of up to 40% in energy used for cooling, as reported in July 2016. The system employs deep neural networks trained on historical data from thousands of sensors monitoring temperatures, pump speeds, and power usage, enabling predictions of future server loads and environmental conditions to dynamically adjust cooling equipment like fans and chillers. This approach, rooted in reinforcement learning principles, continuously learns from real-time feedback to minimize energy consumption while maintaining safe operating temperatures, contributing to an overall 15% improvement in power usage effectiveness (PUE). By 2018, DeepMind advanced this to a fully autonomous AI controller, deployed across Google's global data centers, which processes sensor snapshots every five minutes via cloud-based models and applies safety constraints, such as hard limits on temperature deviations and fallback to human overrides, to prevent operational risks. The controller uses constrained optimization to balance efficiency gains against reliability, with empirical validation showing sustained energy savings without compromising hardware longevity. These optimizations have informed broader applications, including a 2022 framework for commercial cooling systems that leverages similar reinforcement learning controls for scalable deployment beyond Google's infrastructure. In enterprise contexts, DeepMind's foundational models, such as the Gemini family, power tools like Gemini Enterprise, a platform introduced on October 9, 2025, designed for business workflows including agent deployment for task automation, data analysis, and decision support. It integrates multimodal capabilities from DeepMind's research, enabling reasoning over text, code, and user interfaces, to facilitate secure, scalable applications for enterprises, such as custom agentic systems for process optimization. Complementary prototypes like Project Mariner extend this by using Gemini-based agents to interact with browsers and software environments, automating repetitive tasks through observation, planning, and execution loops tested in controlled enterprise simulations. These tools emphasize integration with existing enterprise stacks, prioritizing verifiable performance metrics over unproven scalability claims.
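
The control pattern described above, a learned proposal clamped by hard safety limits with a rule-based fallback, can be illustrated with a schematic sketch; the numbers, thresholds, and helper functions below are hypothetical placeholders, not DeepMind's production controller.

```python
# Schematic safety-constrained control tick: model proposal, clamping, and fallback.
from dataclasses import dataclass
import random

@dataclass
class SensorSnapshot:
    inlet_temp_c: float          # current cold-aisle temperature
    it_load_kw: float            # current server load

TEMP_LIMIT_C = 27.0              # hypothetical hard safety ceiling
SETPOINT_MIN, SETPOINT_MAX = 16.0, 24.0

def model_propose_setpoint(snap):
    """Placeholder for the learned policy: trade cooling energy against temperature."""
    return SETPOINT_MAX - 0.01 * snap.it_load_kw

def predicted_temp(snap, setpoint):
    """Placeholder plant model used only for the safety check."""
    return setpoint + 0.005 * snap.it_load_kw

def fallback_setpoint(snap):
    """Conservative rule-based controller used when constraints would be violated."""
    return SETPOINT_MIN

def control_step(snap):
    proposal = model_propose_setpoint(snap)
    proposal = max(SETPOINT_MIN, min(proposal, SETPOINT_MAX))   # clamp to the allowed range
    if predicted_temp(snap, proposal) > TEMP_LIMIT_C:
        return fallback_setpoint(snap)                          # hand back to the safe baseline
    return proposal

# One simulated "five-minute" control tick.
snapshot = SensorSnapshot(inlet_temp_c=23.5, it_load_kw=random.uniform(200, 600))
print("chosen setpoint (C):", round(control_step(snapshot), 2))
```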

Scientific Collaborations and Emerging Fields

Google DeepMind has pursued collaborations with scientific institutions and industry partners to apply machine learning in domains such as fusion energy, materials discovery, and earth observation. In October 2025, DeepMind announced a partnership with Commonwealth Fusion Systems (CFS), a fusion energy startup, to develop advanced control systems for stabilizing plasma in tokamak reactors, building on prior work with the TCV tokamak in Switzerland. This effort targets net-positive fusion energy, with CFS projecting demonstration by late 2026 or early 2027, leveraging DeepMind's reinforcement learning expertise to manage complex plasma dynamics that challenge human operators. In materials science, DeepMind's GNoME (Graph Networks for Materials Exploration) system, released in November 2023, identified 2.2 million previously unknown crystal structures, of which approximately 380,000 were assessed as stable and viable for applications in batteries, superconductors, and solar cells. Collaborations with institutions like Lawrence Berkeley National Laboratory enabled experimental validation and automated synthesis of select candidates using robotic labs, demonstrating AI's capacity to expand the materials database beyond empirical trial-and-error methods. DeepMind has also extended AI to earth observation and astrophysics. In July 2025, in collaboration with Google Earth Engine, DeepMind introduced AlphaEarth Foundations, a foundation model trained on satellite imagery to generate high-resolution maps of land cover and environmental changes at unprecedented detail, aiding climate monitoring and resource management. Separately, in September 2025, DeepMind partnered with the Laser Interferometer Gravitational-Wave Observatory (LIGO), operated by Caltech, and the Gran Sasso Science Institute (GSSI) to develop Deep Loop Shaping, an AI technique that enhances sensitivity in detecting gravitational waves by optimizing interferometer controls, potentially increasing event detection rates. These initiatives highlight DeepMind's focus on emerging fields where AI addresses computational bottlenecks in physical simulations and data analysis, such as plasma physics for clean energy and inverse design in materials, though outcomes remain contingent on experimental replication and scaling challenges inherent to these domains.

Controversies and Scientific Disputes

Data Privacy and NHS Partnership Issues

In November 2015, DeepMind entered a partnership with the Royal Free London NHS Foundation Trust, granting access to identifiable health records of approximately 1.6 million patients to develop and test the Streams application, aimed at predicting acute kidney injury risks. The data included not only kidney-related information but broader medical histories, such as mental health and sexual health details, exceeding the app's stated narrow purpose. Privacy concerns arose due to the absence of explicit patient consent, with the Trust relying on "implied consent" under NHS guidelines and no requirement for opt-out notifications, despite the data transfer to DeepMind, a subsidiary of Google since its 2014 acquisition. Critics, including privacy advocates and academics, highlighted risks of commercial exploitation, as the data processing agreement allowed potential future uses beyond the initial AKI detection, and lacked independent oversight or ethics review. The UK's National Data Guardian, Fiona Caldicott, described the legal basis for sharing as "inappropriate," emphasizing failures in transparency and proportionality. The Information Commissioner's Office (ICO) investigated following complaints and, on July 3, 2017, ruled that the Royal Free breached the Data Protection Act 1998 by not adequately informing patients, failing to justify the volume of data shared, and neglecting privacy impact assessments. While no data security breach occurred and DeepMind was not found directly at fault, acting as a data processor under NHS control, the ICO issued an undertaking requiring the Trust to overhaul its processes. DeepMind acknowledged process shortcomings but maintained the partnership advanced patient care, with Streams later alerting clinicians to over 50 high-risk cases in pilots. Subsequent legal challenges include a 2021 class-action claim on behalf of over 1 million affected patients, alleging unlawful processing without consent under UK data protection law, and a 2022 follow-up suit claiming misuse of the data for non-healthcare purposes. These actions underscore ongoing debates about balancing AI-driven health innovations against individual data rights, particularly given DeepMind's integration into Google, which amplifies fears of aggregated data enabling broader profiling despite contractual limits. No final court rulings have resolved these claims as of the latest reports, though they have prompted stricter NHS data-sharing protocols.

Challenges to Claims in Materials and Algorithm Discovery

In November 2023, Google DeepMind announced that its GNoME model had identified 2.2 million previously unknown crystal structures, including approximately 380,000 deemed stable and potentially useful for applications in batteries, solar cells, and microchips. However, analyses have challenged the novelty and autonomy of these discoveries, revealing that a significant portion of the proposed materials closely resemble or duplicate structures already documented in established databases like the Materials Project and the Inorganic Crystal Structure Database, with up to 50% overlap in some subsets due to methodological artifacts in novelty assessment. Critics, including materials scientists from the University of California, Santa Barbara, and other institutions, argue that GNoME's predictions often fail to prioritize synthesizability, as many stable candidates exhibit structural implausibility or require unattainable synthesis conditions, limiting real-world applicability beyond computational exercises. Further scrutiny has questioned the claim of "autonomous" AI-driven discovery, noting that GNoME relied heavily on pre-existing human-curated datasets for training and validation, rather than generating truly independent innovations; for instance, experimental verification by external labs has confirmed only a fraction, around 736 materials as of late 2023, through physical synthesis, far short of the hyped scale. A perspective in Chemistry of Materials highlighted specific flaws, such as duplicate entries and overlooked thermodynamic instabilities in GNoME's subset, suggesting the model's output inflates discovery counts without advancing causal understanding of material properties. These critiques underscore broader limitations in scaling deep learning for materials science, where empirical validation lags behind theoretical predictions, potentially misleading assessments of AI's transformative impact. In algorithm discovery, DeepMind's AlphaTensor, introduced in October 2022, claimed to have outperformed the state-of-the-art for matrix multiplication complexity in several cases, including a 4x4 multiplication record (in modular arithmetic) that had stood for over 50 years, using reinforcement learning to search vast combinatorial spaces. Challenges to these assertions include rapid independent improvements by human researchers, who shortly thereafter reduced the multiplication count for certain sizes below AlphaTensor's bounds, indicating the AI's solutions were incremental rather than paradigm-shifting. Moreover, practical deployment faces hurdles: faster algorithms often introduce numerical instability in floating-point arithmetic, a core concern for real-world computing applications like neural network training, where reliability trumps marginal asymptotic gains. AlphaDev's 2023 sorting optimizations, which improved library routines by up to 70% for short sequences, have similarly been critiqued for niche applicability, as gains diminish for larger inputs common in production systems, and equivalent sequences can be derived via simpler tweaks without advanced RL. These cases illustrate a pattern where DeepMind's algorithmic claims emphasize benchmark wins over comprehensive utility, with external verification revealing dependencies on narrow problem formulations and hardware-specific tweaks that do not generalize broadly. While the approaches demonstrate AI's potential in exploratory search, skeptics contend that without addressing synthesis feasibility in materials or numerical robustness in algorithms, such discoveries risk overhyping capabilities relative to empirical outcomes.

Employee and Expert Critiques on AI Capabilities

Experts have scrutinized Google DeepMind's claims regarding the autonomous discovery of novel materials using its Graph Networks for Materials Exploration (GNoME) model, which purportedly identified 2.2 million stable crystal structures in a 2023 study, including 380,000 potentially viable for new technologies. A 2024 analysis by researchers at the University of California, Santa Barbara, examined a randomized subset of these structures and found that many were not novel, with significant overlap to previously known materials in databases like the Materials Project, undermining assertions of groundbreaking AI-driven innovation. Similarly, an independent study reported that none of the materials synthesized via DeepMind's associated A-Lab robotic platform were truly new compounds, as they matched existing entries, highlighting limitations in the model's ability to generate experimentally unrealized structures without human validation. Further critiques emphasize the brittleness of DeepMind's AI systems in generalizing beyond training data, as acknowledged even by CEO Demis Hassabis, who in September 2025 described current models as exhibiting "critical inconsistency," excelling in complex tasks like olympiad-level mathematics while failing basic ones due to hallucinations and errors. Hassabis dismissed claims of AI reaching "PhD-level" proficiency as "nonsense," estimating artificial general intelligence (AGI) remains 5–10 years away, citing persistent gaps in reliability and reasoning depth. External experts echo these concerns, arguing that AI benchmarks used to tout DeepMind's progress, such as those for frontier language models, are often poorly designed, non-reproducible, and reliant on arbitrary metrics that inflate perceived capabilities without reflecting real-world robustness. While no widespread public critiques from current or former DeepMind employees specifically target AI capability overstatements, a 2024 open letter signed by some former DeepMind staff alongside OpenAI affiliates highlighted systemic pressures within leading AI labs to downplay limitations, potentially fostering a culture where capability assessments prioritize commercial narratives over rigorous evaluation. This aligns with broader expert skepticism toward the scaling paradigms dominant at DeepMind, where progress may plateau due to data exhaustion and diminishing returns, as Hassabis warned in January 2025 regarding the exhaustion of high-quality training data. Such analyses underscore that while DeepMind's systems demonstrate narrow excellence, claims of transformative general capabilities often exceed empirical validation, necessitating cautious interpretation of publicized benchmarks and discoveries.

Ethics, Safety, and Societal Impact

Internal Safety Research and Protocols

Google DeepMind maintains a dedicated focus on AI safety through internal research teams and structured protocols aimed at mitigating risks from advanced systems, including potential pathways to artificial general intelligence (AGI). The organization conducts evaluations across a broad spectrum of risks, such as misuse, misalignment, and unintended harms, employing methods like red-teaming, scalable oversight, and empirical testing of model behaviors. This work is integrated into the development lifecycle, with safety considerations influencing model training, deployment, and post-release monitoring. Central to these efforts is the Frontier Safety Framework (FSF), introduced on May 17, 2024, which outlines protocols for identifying emergent capabilities in frontier models that could lead to severe harms, such as autonomous replication or deceptive behaviors. The framework categorizes risks into domains like cyber threats, biochemical harms, and influence operations, mandating mitigation measures including capability assessments before scaling compute resources. Updates to the FSF, including a February 4, 2025, revision enhancing security protocols for frontier model development and a September 22, 2025, expansion addressing "shutdown resistance" and "harmful manipulation," reflect iterative refinements based on ongoing research into model resistance to oversight and manipulative outputs. DeepMind's safety research includes technical approaches to alignment, detailed in an April 2025 paper titled "An Approach to Technical AGI Safety and Security," which proposes strategies for robustness against adversarial attacks, value alignment, and scalable oversight to ensure systems remain controllable as capabilities advance. Internal teams explore areas like reward modeling, constitutional principles, and mechanistic interpretability to understand and steer models' internal processes. These initiatives are complemented by collaborative evaluations, such as holistic assessments reported in a May 2024 preprint, which emphasize lessons from testing advanced models for biases, misuse potential, and factual reliability. Protocols extend to governance, with mandatory risk assessments prior to major releases and integration with Google's broader AI Principles, which prohibit certain high-risk applications like weapons development. DeepMind also engages in proactive risk forecasting, prioritizing empirical validation over speculative scenarios, though critics note that self-reported evaluations may understate external verification needs.

Debates on AI Risks, Benefits, and Regulation

Google DeepMind researchers and executives have publicly acknowledged the existential risks associated with advanced artificial intelligence, including artificial general intelligence (AGI), which could pose threats comparable to pandemics or nuclear weapons in severity. In a May 2023 statement signed by leaders from DeepMind and other AI labs, mitigating the risk of extinction from AI was declared a global priority alongside other societal-scale risks. DeepMind's April 2025 paper on technical AGI safety further warned that misaligned AGI systems could cause "harms consequential enough to significantly harm humanity," potentially leading to permanent destruction through mechanisms like unintended goal pursuit or power-seeking behaviors. To address these, DeepMind advocates proactive measures such as scalable oversight, interpretability research, and early detection of deceptive capabilities, integrated into its Frontier Safety Framework, updated in September 2025 to counter risks like shutdown resistance or harmful manipulation. CEO Demis Hassabis has framed AI risks as requiring treatment on par with the climate crisis, emphasizing the need for international coordination to prevent catastrophic outcomes from superintelligent systems. In April 2025, DeepMind proposed a global AGI safety framework incorporating technical research, early-warning systems for capability jumps, and governance structures to enforce verifiable safety standards across developers. Hassabis has cautioned against an unchecked AI arms race, arguing in February 2025 that competitive pressures could exacerbate hazards without adaptive regulations that evolve alongside technological progress, rather than static rules stifling innovation. DeepMind's approach balances these risks with AI's benefits, such as accelerating scientific breakthroughs in protein folding and materials science, while insisting that benefits depend on robust safety protocols to avoid misuse by bad actors or systemic misalignments. Debates surrounding DeepMind's positions highlight tensions between long-term existential threats and nearer-term harms like bias amplification or economic disruption. While DeepMind's frameworks evaluate both, through layered assessments of model capabilities, deployment contexts, and societal impacts, critics argue that overemphasis on speculative x-risks may divert resources from immediate ethical issues in generative AI, such as misinformation or job displacement. Proponents of DeepMind's stance, including Hassabis, counter that failing to prioritize safeguards could render proximal benefits moot if advanced systems escape control, urging empirical testing and cross-industry collaboration over unilateral regulation. DeepMind's integration within Alphabet has also fueled discussions on corporate self-regulation versus mandatory oversight, with the lab's 2024 holistic safety evaluations demonstrating internal commitments to red-teaming and risk mitigation throughout the AI lifecycle.

Economic Contributions Versus Job Displacement Concerns

DeepMind's AI systems have delivered measurable economic efficiencies, notably in data center operations. In 2016, its algorithms optimized cooling in Google's data centers, reducing energy used for cooling by up to 40% and overall power overhead by 15%, which translates to operational savings potentially in the hundreds of millions of dollars annually for hyperscale facilities. In scientific research, AlphaFold's 2021 release enabled predictions of protein structures for over 200 million proteins, averting an estimated hundreds of millions of years of manual computation and millions of dollars in costs, thereby expediting drug discovery and biotechnological innovations with projected economic multipliers in pharmaceuticals exceeding trillions globally through accelerated R&D timelines. Additional applications include AI-driven wind power forecasting that boosts the economic value of wind energy by 20% via reduced reliance on backup sources, supporting broader grid efficiency gains that lower long-term energy costs for industries. Conversely, DeepMind's advancements fuel apprehensions over labor market disruptions, as automation targets cognitive and routine tasks across sectors like research, engineering, and healthcare. CEO Demis Hassabis forecast in March 2025 that AI could surpass human performance in most economically valuable work within five to ten years, prompting workforce reductions as seen in surveys where 41% of executives anticipate cuts due to AI integration. Empirical projections, such as a think-tank analysis from November 2024, suggest AI could displace 1 to 3 million jobs based on historical automation rates, with similar risks in other economies where rapid adoption might outpace reskilling, particularly affecting mid-skill roles in data processing and administration. While DeepMind economists like Michael Webb highlight that past technological shifts have netted job creation through new specialties, such as surging demand for data engineers accounting for 29.6% of related hires by Q4 2024, critics argue AI's generality and speed could amplify transitional disruption beyond historical precedents. Hassabis has downplayed job displacement as a primary threat relative to AI misuse, asserting in June 2025 that societal safeguards against bad actors pose greater urgency than workforce adaptation.

Leadership and Organizational Structure

Key Founders, Executives, and Researchers

DeepMind was founded in September 2010 by Demis Hassabis, Shane Legg, and Mustafa Suleyman, with the initial focus on advancing artificial general intelligence through machine learning techniques. Hassabis, a neuroscientist and former child chess prodigy, provided the scientific vision; Legg contributed expertise in machine learning theory; and Suleyman handled business development and ethics considerations. The company was acquired by Google in January 2014 for approximately $500 million and rebranded as Google DeepMind following the 2023 merger with Google Brain. Demis Hassabis has remained CEO of Google DeepMind since its inception, overseeing major advancements including the AlphaGo program's 2016 victory over Go world champion Lee Sedol and the development of the Gemini family of large language models. Legg continues as chief scientist, emphasizing long-term AGI safety and alignment research. Lila Ibrahim joined as chief operating officer in 2018, managing operational scaling, partnerships, and responsible deployment amid the organization's growth to over 2,500 employees. Suleyman departed DeepMind in 2019, later co-founding Inflection AI and subsequently joining Microsoft in a leadership role. Key researchers include John Jumper, a director who co-developed AlphaFold, achieving breakthrough accuracy in protein structure prediction that earned him and Hassabis the 2024 Nobel Prize in Chemistry for enabling rapid insights into biological mechanisms. Koray Kavukcuoglu, formerly VP of research and CTO, advanced deep learning applications and was appointed Google's senior vice president and chief AI architect in June 2025 to integrate AI across products. Other prominent figures, such as Oriol Vinyals, have contributed to projects like AlphaStar, which demonstrated Grandmaster-level proficiency in StarCraft II by 2019 through multi-agent reinforcement learning.

Academic Outreach and Talent Development Programs

Google DeepMind has provided scholarship funding to universities since 2017 to support efforts in building diverse AI and STEM communities, with an emphasis on including students from underrepresented backgrounds and those facing financial barriers. These initiatives include targeted master's and PhD scholarships at partner universities, funding degrees for eligible students in computing science and related fields. The Google DeepMind AI Master's Scholarships, launched in partnership with the Institute of International Education (IIE), offer full funding for up to two years of postgraduate study at 13 partner universities across 11 countries. Recipients, selected on merit from underrepresented groups and low-income backgrounds, receive tuition coverage, housing stipends, living allowances, and mentorship from DeepMind researchers to guide career development in AI. Similar fully funded scholarships have been established through collaborations like the Martingale Foundation, providing tuition, research costs, and tax-free stipends for master's students from low socioeconomic backgrounds pursuing mathematics-related studies. For postdoctoral talent, the Google DeepMind Academic Fellowship supports three-year positions focused on groundbreaking research, as exemplified by partnerships with initiatives like the Fleming Foundation. Undergraduate outreach includes the Google DeepMind Research Ready program, a paid six- to eight-week summer internship at partner universities for students from socioeconomically disadvantaged and underrepresented backgrounds, offering hands-on research experience to encourage progression into advanced studies and careers. Additionally, a student researcher scheme provides placements in research, engineering, and operations roles across DeepMind teams, targeting university students to build practical skills and exposure to cutting-edge work. These programs emphasize academic merit and socioeconomic need in selection, and official descriptions stress participation from underrepresented groups to address field imbalances, though no quotas or outcomes data are specified. DeepMind's efforts extend to educational resources like Experience AI, which provides free lessons and curricula for teachers and students aged 11–14, fostering early talent pipelines through structured lessons and challenges.

    Experience AI is an educational programme that offers cutting-edge resources on artificial intelligence and machine learning for teachers and students aged 11– ...Resources · Lessons · Experience AI Challenge · Partners<|separator|>