Artificial brain
An artificial brain denotes an engineered system of software and hardware intended to replicate the structural organization, computational dynamics, and cognitive faculties of a biological brain, such as that of humans or other animals.[1] These systems pursue brain-like intelligence through approaches like large-scale neural simulations or neuromorphic architectures that emulate neuronal and synaptic behaviors at varying levels of biological fidelity.[2] Key objectives encompass advancing artificial general intelligence, probing neuroscience mechanisms, and potentially enabling mind uploading or enhanced human cognition, though empirical progress remains constrained by incomplete brain mapping and prohibitive computational scales.[3]
Prominent initiatives include the Blue Brain Project, which utilized supercomputing to model rat neocortical columns with detailed biophysical simulations of neurons and synapses, achieving reconstructions of microcircuits that exhibit emergent oscillatory patterns akin to in vivo activity.[4] Similarly, the European Human Brain Project integrated multiscale data to construct brain atlases and simulation platforms, fostering tools for hypothesis testing in neurology despite criticisms over its scope and deliverables.[3] Complementary hardware efforts, such as atomically thin memristor-based artificial neurons, have demonstrated capabilities in processing multimodal signals and exhibiting adaptive plasticity, hinting at energy-efficient alternatives to conventional von Neumann architectures.[5]
Significant hurdles persist, including the sheer scale required—simulating a human brain's approximately 86 billion neurons and 100 trillion synapses demands computing at or beyond current exascale capabilities—alongside modeling the intricate, nonlinear interactions of glial cells, neuromodulators, and developmental plasticity.[6] Complexity arises from the brain's non-modular, massively parallel operations, which defy reduction to silicon equivalents without sacrificing biological realism, while real-time emulation necessitates speeds orders of magnitude beyond today's processors.[7] Ethical controversies emerge regarding potential consciousness in advanced simulations, raising dilemmas over rights and suffering in silicon substrates, though no verified instances of qualia have arisen.[8] Despite these barriers, incremental achievements in partial emulations, like thalamocortical models with millions of neurons, underscore viable pathways toward hybrid neuro-AI systems.[2]
Definition and Scope
Core Definition
An artificial brain denotes a synthetic system of hardware and software engineered to replicate the structural organization, neural dynamics, and cognitive functionalities of a biological brain, such as those in mammals or humans. This replication targets the brain's core elements, including approximately 86 billion neurons interconnected by around 100 trillion synapses in the human case, to achieve information processing via spiking signals, synaptic plasticity, and network-level computations rather than abstracted algorithmic models.[9][10]
Central methodologies encompass whole brain emulation, which involves scanning a preserved brain to map its connectome at synaptic resolution and simulate its electrophysiological activity on digital substrates, and neuromorphic computing, which employs specialized chips to mimic neuronal firing and synaptic weights through analog or mixed-signal circuits for energy-efficient, brain-like parallelism. These approaches prioritize fidelity to biological causality over optimization for narrow tasks, potentially enabling emergent properties like adaptive learning and sensory integration. However, no complete artificial human brain exists as of 2025, with current efforts limited to partial simulations of small neural circuits or insect-scale systems due to computational and scanning barriers.[11][12][13][14]
The concept diverges from general artificial intelligence by emphasizing biomimetic replication grounded in empirical neuroscience data, such as detailed anatomical mappings from electron microscopy, to infer functional mechanisms from first principles of cellular and network physiology. Proponents argue this bottom-up strategy could yield robust general intelligence, contrasting with top-down symbolic or statistical methods prone to brittleness in novel contexts, though skeptics highlight unresolved challenges in capturing non-computable aspects like qualia or glial cell roles. Empirical validation relies on matching simulated outputs to biological benchmarks, such as cortical column responses in rodent models.[15][16]
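To make the spiking, synapse-level computation described above concrete, the sketch below simulates a single leaky integrate-and-fire neuron, the simplest spiking abstraction between binary threshold units and biophysically detailed models; the parameters are generic textbook values chosen for illustration rather than figures from any project discussed in this article.
```python
import numpy as np

def simulate_lif(i_input, dt=0.1, tau_m=20.0, v_rest=-70.0,
                 v_reset=-75.0, v_thresh=-54.0, r_m=10.0):
    """Leaky integrate-and-fire neuron: tau_m * dV/dt = -(V - v_rest) + R*I.

    i_input : array of injected current (nA) per 0.1 ms time step
    Returns the membrane-voltage trace (mV) and indices of emitted spikes.
    """
    v = np.full(len(i_input), v_rest)
    spikes = []
    for t in range(1, len(i_input)):
        dv = (-(v[t - 1] - v_rest) + r_m * i_input[t]) * (dt / tau_m)
        v[t] = v[t - 1] + dv
        if v[t] >= v_thresh:          # threshold crossing emits a spike
            spikes.append(t)
            v[t] = v_reset            # membrane resets after each spike
    return v, spikes

# A constant 2 nA current drives regular spiking in this toy parameterization.
trace, spike_times = simulate_lif(np.full(5000, 2.0))
print(f"{len(spike_times)} spikes in 500 ms")
```
Emulation and neuromorphic approaches differ chiefly in how many such units they instantiate, at what level of biological detail, and on what physical substrate.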
Distinctions from AI and Related Concepts
An artificial brain refers to computational systems designed to replicate the structural and functional organization of biological brains, particularly emphasizing neural connectivity, synaptic plasticity, and hierarchical processing akin to those observed in mammalian cortex.[17] Unlike broader artificial intelligence (AI), which encompasses diverse paradigms such as rule-based expert systems, statistical machine learning, and transformer-based large language models, an artificial brain prioritizes causal mechanisms derived from neuroscience over abstract functional approximations.[18] For instance, while AI often achieves task performance through gradient descent on massive datasets—requiring thousands of exposures to learn patterns the human brain grasps from single instances—artificial brain approaches simulate neuron-level dynamics to emulate innate learning efficiencies.[18][19]
This distinction underscores a substrate divergence: biological brains operate on wetware networks of specialized neurons with heterogeneous morphologies and electrochemical signaling, whereas conventional AI relies on uniform, digital silicon architectures lacking such biophysical constraints.[20] Artificial brains, by contrast, incorporate neuromorphic elements like spiking neural networks that mimic temporal firing patterns and energy-efficient sparse activation, aiming for causal realism in computation rather than mere behavioral mimicry.[21] Peer-reviewed analyses highlight that AI's backpropagation training, while effective for pattern recognition, diverges from cortical Hebbian learning rules, where synaptic changes arise from correlated pre- and post-synaptic activity without global error signals.[22][23]
Artificial general intelligence (AGI), often conflated in popular discourse, targets versatile human-like reasoning across domains but does not mandate brain-like implementation; current AGI pursuits favor scalable data-driven models over emulation.[24] In emulation-focused artificial brains, success metrics include fidelity to verified neural circuit behaviors, such as those from connectomics data, rather than benchmark scores on abstract tasks.[25] Related concepts like cognitive architectures (e.g., ACT-R) model high-level psychological processes symbolically, bypassing low-level neural simulation, whereas artificial brains integrate multiscale modeling from ion channels to macro-circuits for predictive validity.[26] These approaches reveal systemic biases in AI literature, where functional equivalence is overstated without empirical validation against brain ablation studies or lesion data, privileging hype over verifiable causal pathways.[22]
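The locality contrast drawn above can be illustrated with a minimal sketch: the update below is Oja's stabilized variant of the Hebbian rule (the stabilization is a choice made here only to keep weights bounded), and it adjusts a synapse using nothing but correlated pre- and post-synaptic activity, extracting the dominant input correlation with no labels and no backpropagated error signal; the data distribution and learning rate are illustrative.
```python
import numpy as np

rng = np.random.default_rng(0)

# Inputs drawn from a correlated 2-D distribution; purely local, Hebbian-style
# learning extracts the dominant correlation (the first principal component)
# from co-activity alone, with no labels and no global error signal.
cov = np.array([[3.0, 1.2], [1.2, 1.0]])
x_data = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=5000)

w = rng.normal(scale=0.1, size=2)   # synaptic weights onto one output unit
lr = 0.005

for x in x_data:
    y = w @ x                       # post-synaptic activity (linear rate unit)
    w += lr * y * (x - y * w)       # Oja's stabilized Hebbian update (local)

principal = np.linalg.eigh(cov)[1][:, -1]
print("learned weights:", np.round(w, 3))
print("alignment with top eigenvector:",
      round(abs(w @ principal) / np.linalg.norm(w), 3))
```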
Historical Development
Early Concepts and Thought Experiments
In the 17th century, mechanistic philosophies began conceptualizing the mind as arising from physical processes amenable to replication. Thomas Hobbes, in his 1651 work Leviathan, described reasoning as "computation," equating mental operations to mechanical reckoning with symbols, thereby suggesting that cognitive faculties could be instantiated in non-biological substrates if the appropriate causal mechanisms were duplicated.[27] This materialist stance rejected immaterial souls, positing sense perception and thought as motions propagated through the body to the brain.
The most explicit early articulation of human cognition as fully mechanical appeared in Julien Offray de La Mettrie's 1748 treatise L'Homme Machine. A physician influenced by anatomical observations and Newtonian physics, La Mettrie argued that humans are intricate self-organizing machines, with intelligence emerging solely from the brain's material structure and sensitivity rather than any supernatural essence; he likened the brain to a clock whose workings could be discerned through empirical dissection and replication.[28] La Mettrie's rejection of dualism implied that an artificial brain, constructed to mimic neural organization, could exhibit equivalent mental capacities, though he focused on natural rather than engineered replication.[29]
These ideas informed later thought experiments probing the sufficiency of structural or functional simulation for mentality. In the mid-20th century, precursors to computational neuroscience, such as Warren McCulloch and Walter Pitts' 1943 model of neurons as logical gates, demonstrated how brain-like computation could be formalized mathematically, treating neural firing as binary operations capable of universal computation.[30] However, philosophical challenges emerged: Gottfried Wilhelm Leibniz's 1714 analogy of the brain to a mill—where inspecting gears reveals no perception—highlighted the explanatory gap between physical parts and subjective experience, cautioning that mere emulation might fail to capture qualia or intentionality.[27]
By the 1970s, functionalist thought experiments tested emulation feasibility. Lawrence Davis in 1974 envisioned simulating a brain via a network of human-staffed offices connected by telephone lines, each mimicking neural states; if functional equivalence held, this "artificial" system should think, yet its distributed nature intuitively lacked unified consciousness.[31] Similarly, Ned Block's 1978 "China brain" scenario proposed the population of China, coordinated via radio, emulating one person's neural firings; Block used this to argue that behavioral mimicry alone does not guarantee intrinsic mentality, underscoring debates on whether substrate matters for cognition.[31] These experiments, while critiquing strong functionalism, presupposed that precise brain emulation was conceivable in principle, advancing causal realism about mind-brain identity.
20th Century Foundations
In 1943, Warren S. McCulloch and Walter Pitts published "A Logical Calculus of the Ideas Immanent in Nervous Activity," proposing a simplified model of neurons as binary threshold logic units that could collectively perform any finite logical computation, thus establishing a computational theory of neural networks as universal function approximators.[32] This abstract framework shifted focus from purely biological descriptions to mathematical simulations of brain-like processing, influencing subsequent artificial intelligence and neuroscience research despite its simplifications, such as ignoring temporal dynamics and plasticity.[30]
Advancing biophysical realism, Alan Hodgkin and Andrew Huxley developed in 1952 a set of differential equations describing action potential generation in the squid giant axon through voltage-dependent sodium and potassium conductances, enabling the first digital simulations of neuronal excitability on early computers.[33] Their model, validated against experimental voltage-clamp data, quantified ionic currents with parameters like sodium activation (m) and inactivation (h) gates, providing a predictive tool for spike initiation and propagation that remains foundational for detailed neuronal simulations.[34] This work earned them the 1963 Nobel Prize in Physiology or Medicine and bridged experimental electrophysiology with computational methods.[35]
Synaptic plasticity emerged as a key concept with Donald O. Hebb's 1949 formulation in The Organization of Behavior, articulating that coincident pre- and postsynaptic firing strengthens connections ("cells that fire together wire together"), offering a biological mechanism for learning and memory implementable in network models.[36] Hebb's postulate, derived from observations of neural assembly dynamics, inspired algorithms for weight adjustment in artificial systems, though empirical verification awaited later long-term potentiation studies.[37]
Hardware-oriented progress culminated in Frank Rosenblatt's 1958 Perceptron, a single-layer neural network trained via error-driven weight updates to classify patterns, with the Mark I Perceptron hardware processing 400 inputs at up to 1,000 decisions per second using potentiometers for weights.[38] Building on McCulloch-Pitts logic and Hebbian principles, it demonstrated supervised learning for binary classification but was limited to linearly separable problems, as later critiqued.[39]
Dendritic and network modeling advanced in 1959 when Wilfrid Rall introduced multicompartment representations of neuronal morphology, simulating passive cable properties and active conductances on the IBM 650 to reveal nonlinear signal integration in branching dendrites.[40] These efforts, constrained by 1960s computing (e.g., simulations of tens of compartments), established protocols for scaling from single cells to microcircuits, presaging whole-brain emulation by emphasizing structural fidelity.[41] By century's end, such foundations enabled small-scale simulations like 100-neuron hippocampal networks in 1982, probing emergent rhythms tied to phenomena like epilepsy.[30]
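As a concrete rendering of the Hodgkin-Huxley formalism described above, the sketch below integrates the original 1952 squid-axon equations with a simple forward-Euler scheme; the gating-rate expressions, conductances, and reversal potentials are the standard textbook parameterization with voltage measured relative to rest, not a fit to any new dataset.
```python
import numpy as np

# Original Hodgkin-Huxley (1952) squid-axon model, voltage expressed as the
# displacement from rest in mV (standard textbook parameterization).
C_M = 1.0                               # membrane capacitance, uF/cm^2
G_NA, G_K, G_L = 120.0, 36.0, 0.3       # peak conductances, mS/cm^2
E_NA, E_K, E_L = 115.0, -12.0, 10.6     # reversal potentials, mV

def alpha_beta(v):
    """Voltage-dependent opening/closing rates for the n, m and h gates."""
    a_n = 0.01 * (10.0 - v) / (np.exp((10.0 - v) / 10.0) - 1.0)
    b_n = 0.125 * np.exp(-v / 80.0)
    a_m = 0.1 * (25.0 - v) / (np.exp((25.0 - v) / 10.0) - 1.0)
    b_m = 4.0 * np.exp(-v / 18.0)
    a_h = 0.07 * np.exp(-v / 20.0)
    b_h = 1.0 / (np.exp((30.0 - v) / 10.0) + 1.0)
    return (a_n, b_n), (a_m, b_m), (a_h, b_h)

def simulate(i_ext=10.0, t_max=50.0, dt=0.01):
    """Forward-Euler integration; returns the membrane-potential trace."""
    steps = int(t_max / dt)
    v, n, m, h = 0.0, 0.318, 0.053, 0.596   # approximate resting steady state
    trace = np.empty(steps)
    for t in range(steps):
        (a_n, b_n), (a_m, b_m), (a_h, b_h) = alpha_beta(v)
        n += dt * (a_n * (1.0 - n) - b_n * n)
        m += dt * (a_m * (1.0 - m) - b_m * m)
        h += dt * (a_h * (1.0 - h) - b_h * h)
        i_ion = (G_NA * m**3 * h * (v - E_NA)
                 + G_K * n**4 * (v - E_K)
                 + G_L * (v - E_L))
        v += dt * (i_ext - i_ion) / C_M
        trace[t] = v
    return trace

# Count upward crossings of +50 mV (relative to rest) as action potentials.
v_trace = simulate()
upward = np.flatnonzero(np.diff((v_trace > 50.0).astype(int)) == 1)
print(f"action potentials in 50 ms at 10 uA/cm^2: {len(upward)}")
```
Detailed compartmental simulators apply essentially these equations to many coupled compartments per neuron, which is where the computational costs discussed later in this article originate.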
21st Century Initiatives
The Blue Brain Project, launched in 2005 at the École Polytechnique Fédérale de Lausanne (EPFL) under Henry Markram, pioneered large-scale digital reconstruction and simulation of mammalian brain tissue, beginning with a rat neocortical column comprising approximately 10,000 neurons and 10 million synapses.[42][4] This initiative utilized supercomputing resources, including IBM Blue Gene systems, to model neuronal morphology, electrophysiology, and synaptic plasticity at biologically detailed levels, achieving milestones such as the first simulation of a neocortical column in 2006.[43] The project evolved into efforts toward multi-scale brain atlases and continued until 2024, emphasizing reverse-engineering principles to validate models against empirical data from neuroscience experiments.[44]
Building on this foundation, the European Human Brain Project (HBP), initiated in 2013 with €1 billion in funding from the European Commission, aimed to integrate neuroscience data into a unified simulation platform for understanding brain function and dysfunction, involving over 500 researchers across 140 institutions.[45][46] It developed the EBRAINS research infrastructure for high-performance computing-based brain modeling, data analytics, and neuromorphic hardware, though it faced criticism for prioritizing simulation infrastructure over experimental validation and for management issues leading to mid-project restructuring in 2015.[47] The HBP concluded in 2023, transitioning resources to EBRAINS for ongoing multiscale brain research.[46]
Parallel national efforts emerged, including the U.S. BRAIN Initiative announced by President Obama on April 2, 2013, with initial $100 million funding to develop innovative neurotechnologies for mapping neural circuits and understanding dynamic brain activity, supporting computational modeling as part of broader goals to address disorders like Alzheimer's and Parkinson's.[48][49] Japan's Brain/MINDS project, launched in 2014 by the Ministry of Education, Culture, Sports, Science, and Technology, focused on creating a multiscale marmoset brain atlas to elucidate higher cognitive functions, incorporating transgenic models and advanced imaging for circuit-level insights translatable to humans.[50][51] In China, the China Brain Project, approved in 2016 as a 15-year initiative under the 13th Five-Year Plan, targeted neural mechanisms of cognition, brain-inspired artificial intelligence, and interventions for neurological diseases through integrated basic and applied research.[52][53] These programs reflect a global push toward data-driven brain emulation, though progress remains constrained by computational demands and incomplete biological knowledge.
Technical Approaches
Whole Brain Emulation
Whole brain emulation (WBE) constitutes a proposed method for replicating the functions of a biological brain through high-resolution scanning followed by computational simulation on digital hardware. This approach seeks to capture the brain's neural architecture, synaptic connections, and dynamic processes at a level sufficient to reproduce information processing, behavioral outputs, and potentially subjective mental states equivalent to the original.[54] The concept presupposes that mental phenomena arise from computable physical processes in the brain, enabling transfer to a non-biological substrate without loss of fidelity.[54] The emulation process involves two principal stages: scanning and simulation. Scanning requires non-invasive or destructive techniques to acquire structural and functional data, such as neuron morphologies, synapse locations, ion channel distributions, and electrophysiological properties, typically at resolutions of 5 nanometers or finer to resolve molecular details. Destructive methods, including fixation, ultrathin sectioning, and electron microscopy imaging, predominate due to current technological constraints on non-destructive high-throughput scanning.[54] Simulation then translates this data into software models, employing frameworks like spiking neural networks or compartment models to mimic neural firing, synaptic plasticity, and network dynamics in real-time. Validation demands behavioral matching and internal consistency checks against biological benchmarks.[54] Computational demands for human-scale WBE at the spiking neural network level are estimated at approximately 10^18 floating-point operations per second (FLOPS) for real-time simulation, comparable to projected supercomputer capacities around 2019, though scaling to full fidelity including molecular processes could exceed 10^43 FLOPS. Storage requirements similarly escalate, reaching thousands of terabytes for neural-level data and up to 10^14 terabytes for molecular simulations, necessitating advances in data compression and hardware efficiency.[54] Optimistic timelines from 2008 projections suggested feasibility before mid-century for electrophysiological models, contingent on exponential improvements in imaging, sectioning automation (e.g., via automated tape-collecting ultramicrotome devices), and neuroinformatics.[54] Progress toward WBE remains incremental, with efforts focused on simpler organisms as proof-of-concept. The OpenWorm project, initiated in 2010, aims to simulate the 302-neuron nervous system of Caenorhabditis elegans, yet after over a decade, it has not achieved full behavioral emulation despite mapping the worm's connectome, highlighting gaps in integrating anatomical, physiological, and environmental data.[55] Larger-scale initiatives, such as partial mammalian simulations, project mouse whole-brain cellular-level emulation around 2034, contingent on enhanced connectomics and simulation software.[56] Organizations like the Carboncopies Foundation advance research into scanning and emulation protocols, emphasizing prerequisites like high-throughput electron microscopy arrays.[57] Key challenges include achieving nanoscale imaging speed for petavoxel datasets without artifacts, inferring latent parameters like synaptic weights from static scans, and simulating underappreciated elements such as glial cells, neuromodulators, and ephaptic effects. 
Uncertainty persists regarding necessary simulation granularity—whether synaptic-level models suffice or if molecular/quantum processes prove essential—potentially inflating requirements beyond current extrapolations. Validation of emulated consciousness or fidelity lacks empirical standards, and destructive scanning limits in vivo testing, underscoring WBE's speculative status despite foundational roadmaps.[54] Empirical hurdles, evidenced by stalled simple-organism emulations, suggest timelines may extend beyond initial estimates, prioritizing incremental milestones like cortical column simulations over full human brains.[58]
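The order-of-magnitude figures quoted in this section can be reproduced with a back-of-envelope calculation; the firing rate, per-update operation counts, and bytes per synapse below are illustrative assumptions, included only to show how neuron and synapse counts translate into FLOPS and storage estimates of the kind cited above.
```python
# Back-of-envelope reproduction of the order-of-magnitude estimates quoted
# above.  The per-update operation counts, firing rate, and bytes per synapse
# are illustrative assumptions, not measured values.
NEURONS   = 8.6e10      # approximate human neuron count
SYNAPSES  = 1.0e14      # approximate human synapse count
FIRING_HZ = 10          # assumed mean firing rate
DT_PER_S  = 1.0 / 1e-4  # 0.1 ms integration steps -> 10,000 updates/s

OPS_PER_SYNAPTIC_EVENT = 10     # assumed arithmetic per spike delivery
OPS_PER_NEURON_UPDATE  = 100    # assumed arithmetic per neuron state update
BYTES_PER_SYNAPSE      = 8      # assumed weight + delay + target index

flops = (SYNAPSES * FIRING_HZ * OPS_PER_SYNAPTIC_EVENT
         + NEURONS * DT_PER_S * OPS_PER_NEURON_UPDATE)
storage_tb = SYNAPSES * BYTES_PER_SYNAPSE / 1e12

print(f"real-time compute  ~ {flops:.1e} FLOPS")     # on the order of 1e17
print(f"connectome storage ~ {storage_tb:.0f} TB")   # hundreds of terabytes
```
Raising the modeled detail from spiking networks to subcellular or molecular processes multiplies both figures by many orders of magnitude, which is how the larger estimates cited above arise.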
Neuromorphic Computing
Neuromorphic computing encompasses hardware architectures and algorithms designed to replicate the neural and synaptic structures of biological brains, enabling event-driven, asynchronous processing through spiking neural networks.[59] These systems depart from conventional von Neumann designs by colocating computation and memory within neuron-like elements, thereby minimizing energy-intensive data shuttling between separate processing and storage units.[60] This brain-inspired paradigm supports sparse, parallel operations akin to biological neural firing, where activity occurs only upon relevant stimuli, contrasting with the constant clock-driven cycles of traditional processors.[61]
The foundational principles trace to Carver Mead's work in the 1980s, which pioneered analog very-large-scale integration (VLSI) circuits to model sensory and neural processing with subthreshold transistor behaviors mimicking ion channels.[62] Early efforts focused on silicon retinas and cochleas for efficient sensory emulation, evolving into scalable digital-analog hybrids. By the 2010s, major implementations included IBM's TrueNorth chip, unveiled in 2014, which integrates 4096 neurosynaptic cores to simulate 1 million neurons and 256 million programmable synapses while consuming under 100 milliwatts in operation.[63] Similarly, Intel's Loihi chip, released in 2017, incorporates on-chip plasticity rules for spike-timing-dependent learning, facilitating adaptive neural models on a single die with 128 neuromorphic cores.[13]
In pursuit of artificial brains, neuromorphic computing addresses key scalability hurdles in brain emulation by enabling real-time simulation of millions to billions of neurons at watt-level power budgets, far below the megawatt power draw of exaflop-scale conventional supercomputers needed for equivalent fidelity.[64] For instance, TrueNorth has demonstrated pattern recognition tasks with energy efficiencies orders of magnitude superior to graphics processing units (GPUs) for sparse data workloads, such as olfactory or visual processing in neural models.[59] Loihi extends this to hybrid learning scenarios, supporting reinforcement and supervised paradigms directly in hardware to explore emergent behaviors in large-scale network simulations.[13] Initiatives like the Human Brain Project have integrated such platforms to prototype cortical columns, revealing insights into spatiotemporal dynamics unattainable via software-only emulation on von Neumann systems.[64]
Despite progress, neuromorphic approaches face constraints in mapping full brain-scale connectivity—exceeding 10^14 synapses—due to current fabrication limits and the need for standardized spiking protocols across devices.[61] Ongoing research emphasizes memristive devices and 2D materials to enhance synaptic density and analog precision, potentially bridging gaps toward functional brain replicas.[65] Empirical benchmarks indicate these systems excel in edge computing for bio-inspired tasks, with TrueNorth performing real-time image classification at roughly 70 milliwatts, orders of magnitude less power than comparable deep networks running on conventional CPUs or GPUs.[63] This positions neuromorphic hardware as a complementary pathway to digital emulation, prioritizing causal efficiency over raw throughput for realistic neural modeling.[60]
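A minimal sketch of the event-driven style described above: work is performed only when a spike arrives, and a neuron's state is updated lazily at that moment, in contrast to clock-driven simulation that touches every unit on every tick. The three-neuron topology, weights, and thresholds are arbitrary illustrative values, not the programming model of any particular chip.
```python
import heapq
from collections import defaultdict

# Minimal event-driven spiking-network loop: computation happens only when a
# spike (event) is delivered, rather than on every clock tick for every unit.
fan_out = {0: [(1, 0.6), (2, 0.7)], 1: [(2, 0.5)], 2: []}  # (target, weight)
threshold, decay_per_ms = 1.0, 0.1

potential = defaultdict(float)
last_update = defaultdict(float)
events = [(0.0, 0), (1.0, 0)]      # (time_ms, spiking neuron): external input
heapq.heapify(events)

while events:
    t, src = heapq.heappop(events)
    for tgt, w in fan_out[src]:
        # Lazily decay the target's potential only when an event touches it.
        elapsed = t - last_update[tgt]
        potential[tgt] = max(0.0, potential[tgt] - decay_per_ms * elapsed) + w
        last_update[tgt] = t
        if potential[tgt] >= threshold:
            potential[tgt] = 0.0
            heapq.heappush(events, (t + 1.0, tgt))   # 1 ms transmission delay
            print(f"t={t:.1f} ms: neuron {tgt} spikes")
```
Because quiet neurons generate no events, the cost of such a loop scales with spike traffic rather than with network size, which is the source of the sparsity-driven efficiency claims cited above.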
Hybrid Biological-Computational Methods
Hybrid biological-computational methods integrate living neural tissue, such as cultured neurons or three-dimensional brain organoids derived from stem cells, with electronic hardware like silicon chips and multi-electrode arrays (MEAs) to enable computation, learning, and adaptive processing. These approaches seek to harness the energy efficiency, plasticity, and parallel processing of biological systems while leveraging the precision and scalability of computational interfaces for input-output control and data handling. Unlike purely digital neuromorphic systems, hybrids rely on real-time electrophysiological interactions, where electrical stimuli from electrodes modulate neural activity, and neuronal firing patterns are recorded to inform algorithmic feedback loops.[66]
A foundational example is the DishBrain system developed by researchers at Cortical Labs and collaborators, reported in 2022, which interfaced in vitro networks of human and rodent cortical neurons—cultured on high-density MEAs—with a simulated environment to perform goal-directed tasks. The neurons received sensory inputs as electrical patterns representing spatial data (e.g., positions in a Pong game) and modulated motor outputs via frequency-encoded predictions, adapting through closed-loop feedback stimulation that rewarded accurate predictions, achieving paddle control in under 5 minutes of training across 2.5 million trials. This demonstrated emergent self-organization and predictive coding akin to biological sensory-motor loops, with the hybrid setup consuming approximately 1 million times less power than equivalent deep learning models for similar tasks.[67]
Organoid intelligence (OI), an emerging paradigm formalized in 2023, extends this by employing 3D cerebral organoids—self-organizing clusters of up to 10^6 neurons mimicking early brain structures—as computational substrates interfaced with microfluidic chambers and advanced MEAs for nutrient delivery and signal transduction. OI systems aim to perform reservoir computing, where organoids process nonlinear dynamics via intrinsic recurrent connectivity, outperforming silicon in tasks like pattern recognition due to biological adaptability; for instance, a 2023 Brainoware prototype integrated a brain organoid with an AI system to classify speech signals with 78% accuracy after brief training, while using orders of magnitude less energy than conventional algorithms. Recent validations, such as 2025 studies from Johns Hopkins showing organoids exhibiting spike-timing-dependent plasticity for memory formation, underscore their potential for scalable biocomputing, though limited by organoid viability (typically weeks to months).[66][68][69]
Commercialization efforts, like Cortical Labs' CL1 platform launched in March 2025, package mature neuron-silicon hybrids into accessible biocomputers, enabling cloud-based training of synthetic biological intelligence for applications in drug discovery and adaptive AI, with neurons forming fluid networks that evolve via sub-millisecond feedback. These methods face biophysical constraints, including signal attenuation in dense tissue and ethical considerations for using human-derived cells, yet offer causal insights into neural computation by directly probing living systems rather than abstracted models.[70][71][72]
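The closed loop reported for DishBrain can be sketched schematically as follows; every hardware call here (stimulate, read_spike_counts) is a hypothetical placeholder, and the encoding and feedback parameters are illustrative stand-ins rather than the published protocol values.
```python
import random

# Schematic of a DishBrain-style closed loop: sensory electrodes encode game
# state, recorded motor-region activity is decoded into paddle movement, and
# feedback is delivered as structured stimulation after a hit versus
# unstructured stimulation after a miss.  All hardware calls are placeholders.

def stimulate(pattern):            # placeholder for MEA stimulation
    pass

def read_spike_counts():           # placeholder for MEA recording
    return {"motor_up": random.random(), "motor_down": random.random()}

def encode_ball_position(ball_y, paddle_y):
    # Illustrative place-and-rate code for the ball's position.
    return {"site": int(8 * ball_y), "rate_hz": 4 + 40 * abs(ball_y - paddle_y)}

paddle_y = 0.5
for step in range(1000):
    ball_y = 0.5 + 0.4 * random.uniform(-1, 1)
    stimulate(encode_ball_position(ball_y, paddle_y))       # sensory input
    counts = read_spike_counts()                            # motor readout
    paddle_y += 0.05 if counts["motor_up"] > counts["motor_down"] else -0.05
    paddle_y = min(1.0, max(0.0, paddle_y))
    if abs(paddle_y - ball_y) < 0.1:                        # hit: structured feedback
        stimulate({"site": "all", "pattern": "predictable"})
    else:                                                   # miss: noisy feedback
        stimulate({"site": "random", "pattern": "noise"})
```
The essential design point, per the source description, is that learning is driven by the statistics of the feedback the culture receives rather than by any explicit error gradient computed in software.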
Major Projects and Initiatives
Blue Brain Project
The Blue Brain Project, initiated in July 2005 at the École Polytechnique Fédérale de Lausanne (EPFL) under the direction of neuroscientist Henry Markram, sought to advance simulation neuroscience by creating biologically detailed digital reconstructions of mammalian brain structures, beginning with the rodent neocortex.[42][73] Funded initially by a Swiss government grant and supported by IBM's Blue Gene supercomputer, the project aimed to reverse-engineer neural circuits at the cellular and subcellular levels to test hypotheses about brain function that could complement experimental methods.[74][75]
Early efforts focused on reconstructing a neocortical column (NCC), a basic functional unit of the cortex containing approximately 10,000 to 100,000 neurons. By 2006, simulations achieved cellular-level modeling of an NCC with up to 100,000 neurons, incorporating detailed morphologies, ion channels, and synaptic dynamics derived from experimental data.[74] This involved data-driven approaches to parameterize neuron types, connectivity patterns, and electrophysiological properties, enabling in silico validation against biological recordings. Subsequent milestones included developing a digital library of neuron models, establishing probabilistic atlases of cell distributions, and producing a 3D reconstruction of juvenile rat somatosensory cortex microcircuitry, published as a landmark draft in 2015.[76][4]
Key achievements encompassed predictive discoveries, such as identifying multi-dimensional geometric structures in neural networks (2017) and refining connectivity rules for mouse neocortical layers using integrated datasets (2021), which revealed energy-efficient wiring principles aligning with observed biological efficiency.[77][4] The project's tools, built on the NEURON simulator and morphometric databases, facilitated scalable simulations on supercomputers, contributing to broader efforts like the Human Brain Project (HBP), which absorbed Blue Brain resources starting in 2013 with €1 billion in EU funding.[42] However, ambitious timelines proposed by Markram—such as full mouse brain simulation by the early 2020s—faced scrutiny for underestimating data gaps and computational demands, leading to debates on feasibility despite methodological innovations.[75]
The initiative concluded core operations in December 2024 after two decades, transitioning to an independent not-for-profit foundation in 2025 to sustain open-access resources via platforms like the Blue Brain Nexus for data sharing and simulation.[42] This legacy includes EBRAINS infrastructure for multiscale modeling, though critics note persistent challenges in achieving whole-brain fidelity due to incomplete synaptic and plasticity data.[4] The project's empirical emphasis on verifiable reconstructions has influenced global neuroscience, prioritizing causal mechanisms over abstract theorizing.[42]
Human Brain Project
The Human Brain Project (HBP) was a large-scale European research initiative launched in 2013 as a Future and Emerging Technologies (FET) Flagship program funded by the European Commission, running until September 2023 with a total investment exceeding €600 million.[46][3] The project aimed to accelerate understanding of the human brain's structure and function through information and communications technology (ICT), including multiscale brain modeling, simulation on high-performance computing platforms, and development of brain-inspired computing paradigms.[45] It built on earlier efforts like the Blue Brain Project, emphasizing a bottom-up approach to integrate experimental data into digital reconstructions of neural systems, with the ultimate goal of creating a collaborative platform for neuroscience research.[78] Key components included constructing detailed brain atlases, fostering data-sharing infrastructures, and advancing neuromorphic hardware to mimic neural efficiency.[79]
The HBP coordinated over 500 scientists across more than 100 partner institutions in a phased structure, with ramp-up (2013–2016), core phases (SGA1–SGA3 until 2023), and parallel infrastructure grants totaling around €406 million in core EU contributions.[80] By its conclusion, the project produced EBRAINS, a sustainable digital research infrastructure providing access to brain datasets, atlases, simulation tools, and high-performance computing resources for ongoing neuroscience.[81][47] Achievements encompassed over 3,000 peer-reviewed publications, more than 160 digital tools and applications, and innovations such as a high-resolution 3D atlas of the human brain integrating structural, functional, and connectivity data across scales.[82] Practical outcomes included personalized brain network models for simulating epilepsy treatments and contributions to neuromorphic computing, enabling energy-efficient simulations of cortical columns with millions of neurons.[83] An independent expert review in 2024 affirmed the HBP's transformative impact on neuroscience, highlighting its role in promoting multidisciplinary collaboration and open-access resources that continue via EBRAINS.[81]
The project faced significant criticisms, particularly for its ambitious scope and top-down management, which some neuroscientists argued diverted funds from bottom-up experimental research and overstated simulation feasibility given gaps in neuronal diversity and dynamics knowledge.[84][46] In 2014, over 800 European researchers signed an open letter threatening boycott, citing "substantial failures" in governance, unrealistic timelines for whole-brain emulation, and inadequate focus on core neuroscience questions.[85] These concerns led to leadership changes, including the replacement of founder Henry Markram, and scaled-back goals away from full human brain simulation toward infrastructural tools.[46] Despite such pushback, which reflected broader debates on simulation-driven versus data-driven neuroscience, the HBP's legacy persists in EBRAINS' operational services supporting global brain research as of 2024.[86]
Other Key Efforts
The SpiNNaker project, initiated in 2009 by researchers at the University of Manchester, developed a massively parallel neuromorphic computing platform using ARM-based processors to simulate spiking neural networks asynchronously, mimicking the brain's event-driven communication via address-event representation. By 2018, the full-scale SpiNNaker-1 system with one million cores was operational, enabling real-time simulations of networks up to 150-180 million neurons, as demonstrated in applications modeling cortical microcircuits and contributing to neuroscience research on brain dynamics.[87][88] SpiNNaker-2, an upgraded architecture with enhanced scalability, was activated in the early 2020s, supporting event-based machine learning and larger brain-inspired models without traditional GPUs or storage, though it remains constrained by power efficiency compared to digital alternatives.[89]
The BrainScaleS project, originating from Heidelberg University in the early 2010s as part of EU-funded neuromorphic research, employs mixed-signal analog-digital hardware to emulate biophysical neuron and synapse models at accelerated timescales—up to 10,000 times faster than biological speeds—for efficient exploration of neural network behaviors. The first-generation BrainScaleS-1 system featured wafer-scale integration with millions of plastic synapses and hundreds of thousands of neurons, while BrainScaleS-2, deployed in the 2020s, incorporates 512 leaky-integrate-and-fire neurons per chip with on-chip hybrid plasticity for adaptive learning, facilitating studies in computational neuroscience such as spike-timing-dependent plasticity and closed-loop experiments with robotics.[90][91] These systems prioritize physical emulation over software simulation to capture sub-millisecond dynamics, though challenges persist in scaling to mammalian brain volumes due to analog noise and calibration demands.[92]
Other notable initiatives include IBM's SyNAPSE program, launched in 2008 as a DARPA collaboration to pioneer neurosynaptic computing chips, culminating in the 2014 TrueNorth processor that integrated 1 million neurons and 256 million synapses on a single low-power CMOS chip for cognitive tasks like pattern recognition, influencing subsequent neuromorphic hardware despite limitations in synaptic density relative to biological brains.[93] Stanford's Neurogrid, developed by Kwabena Boahen since the mid-2000s, utilized custom VLSI chips to simulate up to 1 million neurons in real time with biologically realistic conductance-based models, enabling energy-efficient emulation of retinal and cortical circuits as validated in rodent vision studies. Smaller-scale emulation efforts, such as OpenWorm's simulation of the C. elegans nematode's 302-neuron connectome since 2011, have produced connectome-driven whole-organism models in software that reproduce aspects of locomotion, providing proofs-of-concept for scalable brain emulation pipelines while highlighting data gaps in subcellular dynamics.[93] These projects collectively advance hardware and software tools for brain-like computation, though none has yet replicated full mammalian cortex fidelity.
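A sketch of the address-event representation mentioned above, under assumed field widths: only the address of a neuron that fired, plus a timestamp, is transmitted, so silent neurons consume no bandwidth and physical wiring is time-multiplexed across many logical connections. The 64-bit packing and routing table below are illustrative, not the actual SpiNNaker packet format.
```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SpikeEvent:
    timestamp_us: int
    neuron_id: int

def encode(event: SpikeEvent) -> int:
    """Pack a spike into one 64-bit word: high 32 bits time, low 32 bits id."""
    return (event.timestamp_us << 32) | (event.neuron_id & 0xFFFFFFFF)

def decode(word: int) -> SpikeEvent:
    return SpikeEvent(timestamp_us=word >> 32, neuron_id=word & 0xFFFFFFFF)

# Routing: a table maps each source address to the cores that subscribe to it,
# so a single physical link carries spikes for many logical connections.
routing_table = {17: ["core_a", "core_c"], 42: ["core_b"]}

for word in (encode(SpikeEvent(1000, 17)), encode(SpikeEvent(1250, 42))):
    ev = decode(word)
    print(f"deliver spike from neuron {ev.neuron_id} at {ev.timestamp_us} us "
          f"to {routing_table.get(ev.neuron_id, [])}")
```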
Challenges and Limitations
Computational and Scalability Barriers
Simulating a human brain at sufficient fidelity for whole brain emulation demands computational resources far exceeding current capabilities, with estimates for required floating-point operations per second (FLOPS) ranging from 10^15 to 10^18 or higher, depending on the level of biological detail modeled. The human brain comprises approximately 86 billion neurons and 100 trillion synapses, necessitating simulations that capture spiking activity, synaptic dynamics, and potentially subcellular processes to achieve functional equivalence.[56] At the spike-network level, requirements may approach 10^18 FLOPS for real-time operation, comparable to or exceeding the peak performance of leading supercomputers like Frontier, which reached 1.102 exaFLOPS in 2022, though emulation software inefficiencies inflate effective needs.[94]
Scalability barriers arise from the quadratic growth in connectivity and communication overheads in distributed computing systems, where simulating larger networks increases not only raw compute but also data transfer latencies and memory bandwidth constraints.[95] Peer-reviewed analyses indicate that human-scale simulations require two orders of magnitude more computational power than primate models like the marmoset brain, projecting feasibility beyond 2044 even with optimistic hardware advances.[56] Memory demands for storing connectomes and dynamic states can exceed petabytes, with synapse-level representations alone estimated at hundreds of terabytes, straining storage hierarchies in high-performance computing environments.[96]
Energy efficiency poses a fundamental mismatch, as the human brain operates on 12-20 watts while digital emulations consume orders of magnitude more; extrapolations from smaller-scale models suggest gigawatt-level power for human brain simulation, limited by thermodynamic bounds like Landauer's principle rather than raw transistor counts.[97] Neuromorphic hardware aims to mitigate this by mimicking analog neural efficiency, yet scaling to brain-sized systems remains hindered by fabrication yields and interconnect densities, with current prototypes handling only fractions of cortical columns.[98] These barriers underscore that while partial simulations of rodent or insect brains are feasible today, full artificial brain replication demands breakthroughs in algorithms, hardware architecture, and parallelization to overcome exponential resource scaling.[99]
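The energy mismatch can be made explicit with a simple calculation; the assumed firing rate and the per-event energy cost of a conventional digital simulation are illustrative figures (real costs are dominated by memory traffic and vary widely with model detail), shown only to indicate how gigawatt-scale extrapolations arise from the brain's roughly 20-watt budget.
```python
# Why energy extrapolations diverge: the biological budget versus a digital
# simulation costed per synaptic event.  The per-event energy figure is an
# illustrative assumption, not a measured benchmark.
SYNAPSES          = 1.0e14   # human brain, approximate
MEAN_RATE_HZ      = 10       # assumed average firing rate
EVENTS_PER_SECOND = SYNAPSES * MEAN_RATE_HZ          # ~1e15 synaptic events/s

BRAIN_POWER_W = 20.0
ENERGY_PER_EVENT_DIGITAL_J = 1e-6   # assumed ~1 uJ per event incl. memory access

digital_power_w = EVENTS_PER_SECOND * ENERGY_PER_EVENT_DIGITAL_J
print(f"biological budget : {BRAIN_POWER_W:.0f} W "
      f"(~{BRAIN_POWER_W / EVENTS_PER_SECOND:.1e} J per synaptic event)")
print(f"digital estimate  : {digital_power_w:.1e} W at the assumed cost/event")
print(f"efficiency gap    : ~{digital_power_w / BRAIN_POWER_W:.0e}x")
```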
Biological Fidelity and Data Gaps
Artificial brain models, whether through whole brain emulation, neuromorphic hardware, or hybrid approaches, face significant hurdles in replicating the intricate biological details of neural tissue, including synaptic connectivity, subcellular dynamics, and multiscale interactions. Biological fidelity requires not only structural accuracy—such as precise mapping of neuronal morphologies and synaptic strengths—but also functional realism in capturing transient processes like synaptic plasticity, neuromodulation, and glial contributions, which current simulations often approximate or omit due to computational constraints and empirical uncertainties. For instance, compartmental neuron models, while detailed, are frequently ad hoc constructions that perform poorly when extrapolated beyond narrowly tested scenarios, failing to generalize to broader biological contexts.[100]
A primary data gap lies in connectomics, where comprehensive synaptic-level wiring diagrams remain unavailable for mammalian brains at scale. The human brain comprises approximately 86 billion neurons and up to 10^15 synapses, but non-destructive mapping techniques like diffusion MRI yield only coarse structural estimates, with tractography methods exhibiting systematic inaccuracies in resolving fine axonal pathways, as demonstrated in international challenges where no algorithm fully reconstructed ground-truth tracts from simulated data.[101] Efforts such as the Human Connectome Project have advanced large-scale functional and anatomical mapping using multimodal imaging on over 1,200 subjects, yet these fall short of electron-microscopy-resolved connectomes, which have been achieved only for tiny organisms like C. elegans (302 neurons) or partial insect brains.[102] This incompleteness precludes faithful emulation of network-level emergent properties, as variability in individual connectomes and undiscovered microcircuits introduces irreducible uncertainty.[103]
Further gaps persist at molecular and subcellular levels, including incomplete catalogs of ion channels, receptor subtypes, and protein interactions that govern neuronal excitability and plasticity. Multiscale modeling exacerbates these issues, as integrating biophysical details (e.g., Hodgkin-Huxley-type dynamics) with network activity demands hidden parameters whose values are empirically underconstrained, leading to models that prioritize either microscopic detail or macroscopic function at the expense of holistic fidelity.[103] Digital twin brain initiatives, for example, acknowledge deficits in incorporating genetic influences, myelin sheath integrity, and receptor kinetics, while struggling to simulate multimodal sensory integration or cross-scale spatiotemporal dynamics without relying on simplified abstractions.[104] These omissions risk propagating errors in predictions of brain disorders or cognitive processes, underscoring that while task-driven approximations suffice for narrow applications, true biological realism demands vast, unresolved datasets from advanced techniques like high-throughput electron microscopy or in vivo optogenetics.[104][103]
Criticisms of Feasibility and Overhype
The Human Brain Project, launched in 2013 with a budget exceeding €1 billion, faced significant backlash for promising a functional simulation of the entire human brain within a decade, a goal articulated by its founder Henry Markram in a 2009 TED talk but unmet by 2019.[105] Critics, including over 800 European neuroscientists in 2014, accused the project of substantial governance failures, opacity, and an overly narrow focus on bottom-up simulation that diverted resources from broader neuroscience research.[85] A mediation panel convened by the European Commission confirmed issues with scientific strategy and leadership, leading to Markram's removal from the project's leadership and a restructuring, yet the project concluded in 2023 without achieving whole-brain simulation.[75] Such initiatives exemplify overhype in "big science" ventures, where ambitious timelines secure funding but overlook the incremental nature of neuroscientific discovery, resulting in perceived waste and eroded trust.[106]
Feasibility concerns center on insurmountable computational demands; as of 2024, simulating the human brain's approximately 86 billion neurons and 100 trillion synapses at sufficient fidelity remains unattainable due to inadequate hardware performance and incomplete brain mapping data.[56] Even modest benchmarks, such as emulating the 302-neuron C. elegans worm, have shown no substantial progress after over a decade of effort, underscoring that biological neurons exhibit far greater complexity—incorporating dynamic biochemical signaling, plasticity, and non-linear dynamics—than the simplistic threshold models assumed in many simulations.[107] Whole brain emulation proponents often invoke computationalism, positing that mind states are substrate-independent, but detractors argue this ignores causal dependencies on wetware-specific processes like molecular diffusion and ion channel stochasticity, which defy efficient digital abstraction without exponential resource scaling.[108]
Neuromorphic computing, intended to mimic neural architectures for efficiency, encounters parallel hurdles: while prototypes like IBM's TrueNorth demonstrate low-power spiking networks, they suffer from high latency in processing and difficulties in training at scale, limiting scalability to brain-like generality.[109] Critics contend that these systems replicate superficial topologies but fail to capture the brain's adaptive, hierarchical integration of sensory-motor loops and glial-neuronal interactions, rendering claims of "brain-like" intelligence premature amid persistent energy and algorithmic bottlenecks.[110] Hybrid approaches, blending biological tissue with silicon, amplify ethical and technical risks without resolving core emulation gaps, as evidenced by stalled progress in projects like the Blue Brain initiative, which overpromised cortical column simulations but delivered limited, non-generalizable models.[86] Overall, these criticisms highlight a pattern where institutional incentives—particularly in grant-dependent academia—prioritize speculative roadmaps over rigorous validation, fostering timelines decoupled from empirical realities.
Philosophical and Theoretical Implications
Simulation Hypothesis and Mind Uploading
The simulation hypothesis proposes that advanced civilizations capable of running vast numbers of detailed ancestor simulations would make it probable that observed reality is simulated rather than base-level, as articulated by philosopher Nick Bostrom in his 2003 paper.[111] This trilemma asserts that at least one of the following holds: nearly all civilizations go extinct before achieving simulation-running technology; posthuman societies have little interest in creating such simulations; or we are almost certainly living in a simulation.[111] In the context of artificial brain development, the hypothesis hinges on the premise that high-fidelity emulation of biological brains—replicating neural dynamics, synaptic plasticity, and potentially biochemical processes—would enable conscious simulated agents indistinguishable from humans, allowing scalable historical recreations.[111]
Critics, particularly physicists, contend that the hypothesis violates fundamental physical constraints, such as the Bekenstein bound on information density within a given volume of space, which limits the computational capacity of any simulating system to far below what is needed for a universe-scale simulation including quantum phenomena.[112] Astrophysical analyses further argue that simulating even a Hubble-volume observable universe would require processing power equivalent to converting planetary masses into computronium, yet thermodynamic inefficiencies and error correction demands render it implausible under general relativity and quantum field theory.[113] These objections underscore that while small-scale brain simulations (e.g., cortical columns) are advancing, extrapolating to full-reality simulation ignores causal barriers like the no-cloning theorem preventing perfect quantum state copies, casting doubt on the technological feasibility assumed by the argument.[112]
Mind uploading, closely tied to whole brain emulation (WBE), envisions digitizing a human mind by scanning neural structure and activity at synaptic or molecular resolution, then instantiating it on computational hardware to achieve substrate-independent continuity.[114] Feasibility assessments, such as those by Anders Sandberg and colleagues, estimate that emulation requires on the order of 10^16 to 10^18 floating-point operations per second at spiking-network fidelity, and many orders of magnitude more once ion channels, neurotransmitters, and plasticity are modeled explicitly, placing full-fidelity emulation beyond 2025 hardware, with scanning technologies like electron microscopy limited by tissue damage and incomplete dynamic data capture.[114] Proponents link WBE success to the simulation hypothesis by positing that uploaded minds could recursively simulate progenitors, amplifying the probability of nested realities, yet empirical gaps persist: functional equivalence demands verifying subjective experience, which current models (e.g., connectome-based) fail to guarantee without resolving debates over whether computation alone suffices for qualia under physicalist assumptions.[115]
Philosophical interconnections highlight risks of overreliance on computationalism; if consciousness emerges from specific biological wetware rather than abstract information processing, both uploading and simulation arguments falter, as evidenced by persistent failures to replicate even invertebrate behaviors in purely digital models without hybrid biological elements.[114] As of 2025, no peer-reviewed demonstration of uploaded consciousness exists, and scalability analyses predict timelines exceeding a century barring breakthroughs in non-von Neumann architectures or nanoscale scanning, tempering optimism with the recognition that these concepts, while provocative, lack direct empirical validation beyond toy simulations.[115]
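For reference, the bound invoked above can be stated compactly in its standard form, relating the maximum information content of a bounded physical system to its size and energy; nothing in this statement is specific to the simulation argument.
```latex
% Bekenstein bound: maximum information I (in bits) that can be contained in
% a region of radius R holding total energy E.
I \le \frac{2\pi R E}{\hbar c \ln 2}
```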
Consciousness in Artificial Systems
The emergence of consciousness in artificial systems, such as computationally emulated brains, lacks empirical validation and hinges on unresolved debates about the nature of subjective experience. Prominent theories like global workspace theory, which posits consciousness as arising from broadcasted information integration across neural modules, and higher-order thought theory, emphasizing meta-representations of mental states, have been evaluated against AI architectures but yield no positive indicators for phenomenal qualia in existing models. Integrated information theory quantifies consciousness via Φ, a measure of irreducible causal integration, suggesting that systems with high Φ could be conscious regardless of substrate; however, computations of Φ in AI networks reveal values far below those inferred for human brains, undermining claims of machine sentience.[116][117]
Opposing views, rooted in biological naturalism, assert that consciousness depends on the specific biochemical causal powers of neural tissue, not mere functional simulation. Philosopher John Searle contends that digital computations manipulate symbols without intrinsic understanding or first-person ontology, as illustrated by his Chinese room argument, where syntactic rule-following fails to produce semantics or experience; thus, even perfect emulations of brain algorithms on silicon would mimic behavior without genuine consciousness. This substrate-dependent stance implies that artificial brains, lacking wetware biochemistry, cannot replicate the neurobiological processes essential for qualia.[118][119]
For whole brain emulation specifically, functionalists argue that atomic-level fidelity to neural dynamics would transfer consciousness via computational identity, potentially enabling mind uploading. Yet, proposed no-go theorems argue that non-biological substrates like chips cannot entangle information in the causally efficacious manner required for consciousness, as they lack the holistic, neuron-specific integration observed in vivo. Quantum effects proposed by Roger Penrose and Stuart Hameroff in orchestrated objective reduction further challenge digital emulation, suggesting microtubule computations in neurons underpin non-computable aspects of awareness.[120]
As of 2025, empirical assessments of large language models and neural simulations report behavioral sophistication—such as self-reflection or emotional simulation—but no verifiable markers of consciousness, with surveys indicating median expert estimates of 25% probability for conscious AI by 2034. Claims of sentience in systems like GPT variants stem from anthropomorphic illusions rather than causal evidence, and ethical risks of mistaking mimicry for reality persist absent rigorous tests.[121][122]
Ethical and Societal Considerations
Moral Status of Emulated Minds
The moral status of emulated minds, particularly those achieved through whole brain emulation (WBE), depends on unresolved debates about whether digital simulations can instantiate consciousness or sentience equivalent to biological counterparts. Proponents of functionalism argue that if an emulation faithfully replicates the functional organization and causal dynamics of a human brain—encompassing approximately 86 billion neurons and hundreds of trillions of synapses—it would possess subjective experience, thereby warranting moral consideration akin to human rights against suffering, deletion, or exploitation.[123][124] This view aligns with substrate independence, the hypothesis that mental states are not tied to specific physical materials but to computational patterns, supported by roughly 33% of philosophers in surveys on mind-body theories.[125]
Opposing perspectives emphasize biological prerequisites for consciousness, contending that emulations, running on silicon substrates, cannot generate qualia or genuine emotions due to the absence of organic processes like electrochemical signaling or evolutionary adaptations for integrated information processing.[123] Political philosopher Francis Fukuyama has claimed that emulations lack true consciousness, rendering them morally equivalent to sophisticated software without standing.[126] Critics of functionalism further note that current assessments of sentience, such as self-reports from systems like emulated nematodes in projects akin to OpenWorm, remain unreliable indicators, as simulations may mimic behavior without underlying phenomenology.[123][127]
Even absent definitive proof of consciousness, emulated minds exhibit vulnerability to harm, including engineering-induced pain or scalable exploitation, amplifying ethical risks if moral status is underestimated.[128] For lower-fidelity emulations resembling animal cognition, moral status might parallel that of non-human animals, invoking principles of analogy for welfare protections, though communication barriers complicate enforcement.[128] Precautionary approaches advocate institutional safeguards, such as polycentric legal frameworks granting emulations or proxies standing to prevent abuse, prioritizing avoidance of potential mass suffering over uncertainty.[128] These considerations underscore the need for empirical tests of fidelity in WBE to resolve ontological questions through evidence rather than speculation.
Risks and Resource Allocation Debates
The development of artificial brains via whole brain emulation (WBE) has sparked debates over existential risks, including the potential for emulated minds to trigger an intelligence explosion through rapid copying and computational speedup, evading human oversight and leading to misaligned outcomes.[129] Such scenarios could amplify human-like flaws at superhuman scales, as emulations might inherit drives for resource competition or self-preservation without built-in safeguards.[130] Critics, including AI safety researchers, note that high-fidelity emulations of accelerated cognition could exhibit instability, drawing parallels to elevated psychological disorder rates observed in high-IQ human populations, undermining assumptions of inherent safety.[130][126]
Ethical vulnerabilities further complicate risks, as creating emulated minds introduces duties toward potentially sentient entities that could suffer in simulated environments or be exploited for labor, raising questions of moral hazard in deployment.[128] While some ethicists propose mitigation through gradual scaling and verification protocols, the causal pathway from emulation fidelity to behavioral predictability remains empirically untested, with no guaranteed alignment despite biological origins.[131] Proponents counter that WBE's structural mimicry of human neuroscience may yield more interpretable systems than de novo architectures, potentially reducing opaque failure modes, though this optimism hinges on unresolved assumptions about neural determinism.[11]
Resource allocation debates center on WBE's prohibitive demands, estimated to require exascale computing for simulation and nanoscale brain scanning technologies not yet viable at human scale, contrasting with lower-barrier advances in machine learning paradigms.[129] Opponents argue that diverting public and private funds—such as the multimillion-dollar grants awarded to emulation projects since the early 2010s—risks opportunity costs, sidelining parallel investments in AI governance or hybrid neuro-AI interfaces that could yield nearer-term benefits with fewer unknowns.[132] Advocates, including futurists like Robin Hanson, maintain that WBE's path to scalable intelligence justifies prioritization, positing economic transformations from emulation-driven productivity as outweighing upfront costs, provided risks are managed via international coordination.[133] These tensions reflect broader causal trade-offs: emulation's fidelity promises value inheritance but at the expense of agility, fueling calls for evidence-based funding tied to verifiable milestones rather than speculative timelines.[11][131]
Recent Developments (2023-2025)
Advances in Brain Modeling and Simulation
In 2023 and 2024, the EBRAINS research infrastructure, successor to the Human Brain Project, advanced multiscale brain simulation platforms capable of modeling neural activity from molecular to whole-brain levels, integrating data from rodent and human atlases to simulate disease states and cognitive processes.[81] These platforms enabled simulations of cortical microcircuits with biophysical detail, incorporating over 10,000 neuron models validated against experimental electrophysiology data.[81]

An October 2024 study from the University of Surrey developed a computational model simulating neuron growth during brain development, replicating dendritic branching and synaptic pruning patterns observed in embryonic mouse brains through diffusion-based algorithms that mimic cytoskeletal dynamics.[134] The model predicted growth trajectories with 85% accuracy against in vivo imaging, offering insights into neurodevelopmental disorders such as autism by varying parameters that represent genetic mutations.[134]

In computational neuroscience, large-scale mechanistic models of brain circuits emerged in 2024, featuring up to 100,000 biophysically detailed neurons per simulation integrated with synaptic plasticity rules to replicate oscillatory rhythms in the hippocampus and cortex.[135] These models, constrained by patch-clamp and optogenetic data, demonstrated emergent behaviors such as theta-gamma coupling without those rhythms being imposed by construction, advancing beyond abstract neural networks toward causal explanations of circuit function.[135]

By April 2025, whole-brain modeling combined intraoperative electrical stimulation mapping with EEG data from 20 patients, revealing cortical excitability gradients in which higher-order association areas exhibited 2-3 times stronger evoked responses than sensory regions; these gradients were parameterized in generative models for predictive simulation.[136] Such approaches used Bayesian inference to estimate network connectivity, improving simulation fidelity for personalized neurosurgery planning.[136]

Foundation models trained on neuroimaging datasets progressed in 2025, with AI-driven simulations replicating ventral stream hierarchies for object recognition and achieving 70-80% alignment with fMRI patterns in human visual cortex tasks.[137] These hybrid models bridged machine learning and neuroscience by reverse-engineering brain algorithms, though they remained limited to modular rather than holistic brain emulation due to data sparsity.[137] Despite these gains, full mammalian whole-brain simulations at cellular resolution remained infeasible as of mid-2025, constrained by compute requirements that exceed current hardware by orders of magnitude.[56]
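To make the shape of such circuit simulations concrete, the following is a minimal sketch of a leaky integrate-and-fire (LIF) spiking network of the general kind these platforms scale up, written in plain NumPy. All values here (population sizes, time constants, connection probabilities, weights, external drive) are illustrative assumptions and are not drawn from EBRAINS or any of the cited models.

```python
# Minimal leaky integrate-and-fire (LIF) network sketch. Population sizes,
# time constants, weights, and drive are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

N_EXC, N_INH = 400, 100              # excitatory / inhibitory neuron counts
N = N_EXC + N_INH
DT = 0.1e-3                          # time step: 0.1 ms
T_STEPS = 5000                       # 0.5 s of simulated time
TAU_M = 20e-3                        # membrane time constant (s)
V_REST, V_THRESH, V_RESET = -70e-3, -50e-3, -65e-3

# Sparse random connectivity; inhibitory synapses carry negative weight.
W = (rng.random((N, N)) < 0.1).astype(float)
W[:, :N_EXC] *= 0.4e-3               # excitatory weight (V per spike)
W[:, N_EXC:] *= -1.6e-3              # inhibitory weight (V per spike)

V = np.full(N, V_REST)
pop_rate = np.zeros(T_STEPS)

for t in range(T_STEPS):
    spiked = V >= V_THRESH
    V[spiked] = V_RESET                     # reset neurons that crossed threshold
    recurrent = W @ spiked.astype(float)    # input from this step's spikes
    external = 0.4e-3 * rng.random(N)       # noisy external drive (V)
    # Leaky integration toward rest plus recurrent and external input.
    V += DT / TAU_M * (V_REST - V) + recurrent + external
    pop_rate[t] = spiked.sum()

# Rhythmic structure in the population-rate trace is the kind of emergent
# oscillation that published circuit models validate against recordings.
print("mean firing rate (Hz):", pop_rate.sum() / (N * T_STEPS * DT))
```

Published models differ from this sketch chiefly in fidelity: multi-compartment neuron morphologies, conductance-based synapses, and plasticity rules fitted to patch-clamp and optogenetic data, which is where most of the computational cost arises.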
Neuromorphic Hardware Progress
Neuromorphic hardware, designed to emulate the brain's parallel, event-driven processing through spiking neural networks and analog or digital circuits, has seen significant scaling and efficiency improvements in recent years. Intel's Loihi 2 chip, released in 2021 and iteratively enhanced since, supports up to 128 neuromorphic cores per chip with on-chip learning and asynchronous spiking, enabling applications in sparse data processing.[138][139] In April 2024, Intel deployed Hala Point, the largest neuromorphic system to date, comprising 1,152 Loihi 2 processors with 1.15 billion neurons and 128 billion synapses, delivering up to 20 quadrillion operations per second at a maximum power draw of roughly 2,600 watts for sustained AI inference tasks and demonstrating a path toward scalable neuromorphic computing in energy-constrained environments.[140]

IBM's NorthPole architecture, introduced in 2023, integrates deep neural network processing directly into memory to eliminate data-movement bottlenecks, achieving up to 14 times faster inference and 77% lower latency than GPU baselines on image recognition benchmarks, with a focus on scalable in-memory computing for AI accelerators.[141] Ongoing IBM research emphasizes phase-change memory devices for synaptic emulation, targeting sub-10 pJ per operation in hybrid analog-digital systems.[142]

BrainChip's Akida platform advanced with the Akida 2.0 release in 2024, incorporating temporal event-based neural networks and vision transformers for edge AI, supporting up to 1.2 million neurons per processor with sparsity-driven power reductions reported to exceed a factor of 10,000 relative to traditional DNNs on object detection tasks.[143] By mid-2025, Akida IP had been licensed for space applications, enabling radiation-tolerant neuromorphic processing at microjoule-per-event energies for satellite autonomy.[144][145]

Broader ecosystem progress includes a shift toward digital and hybrid digital-analog designs for manufacturability: digital neuromorphic circuits in devices such as Loihi and Akida avoid the device variability that plagues analog memristors, easing commercialization.[146] In October 2024, Mercedes-Benz integrated Loihi 2 into radar systems for real-time adaptive beamforming, improving detection accuracy by 20% in dynamic environments while consuming under 1 watt.[147] These developments highlight neuromorphic hardware's advantage in energy use, often 100 to 1,000 times lower than von Neumann architectures on sparse workloads, but scaling to human-brain levels remains constrained by interconnect density and programming complexity.[148][149]
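The efficiency advantage cited above comes from event-driven operation: energy is spent only when spikes actually occur, whereas a clock-driven accelerator evaluates every synapse on every update. The back-of-envelope sketch below illustrates that arithmetic; every figure in it (network size, firing rate, activity fraction, per-operation energies) is an assumed illustrative value, not a specification of any of the chips named above.

```python
# Back-of-envelope comparison of dense (clock-driven) versus event-driven
# (spike-driven) inference energy. The per-operation energies and network
# statistics below are illustrative assumptions, not measured chip values.

def dense_energy_j_per_s(n_synapses: float, updates_per_s: float,
                         energy_per_mac_j: float) -> float:
    """Every synapse is evaluated on every update, regardless of activity."""
    return n_synapses * updates_per_s * energy_per_mac_j

def event_energy_j_per_s(n_synapses: float, spike_rate_hz: float,
                         active_fraction: float,
                         energy_per_event_j: float) -> float:
    """Only synapses downstream of neurons that actually spike cost energy."""
    return n_synapses * active_fraction * spike_rate_hz * energy_per_event_j

N_SYNAPSES = 1e9        # synapses in the model (assumed)
UPDATES_PER_S = 100.0   # dense evaluation rate (assumed)
SPIKE_RATE_HZ = 10.0    # mean firing rate of active neurons (assumed)
ACTIVE_FRACTION = 0.02  # fraction of synapses active at any time (assumed)
E_MAC = 1e-12           # ~1 pJ per dense multiply-accumulate (assumed)
E_EVENT = 5e-12         # ~5 pJ per routed spike event (assumed)

dense = dense_energy_j_per_s(N_SYNAPSES, UPDATES_PER_S, E_MAC)
sparse = event_energy_j_per_s(N_SYNAPSES, SPIKE_RATE_HZ, ACTIVE_FRACTION, E_EVENT)
print(f"dense: {dense:.3f} J/s, event-driven: {sparse:.6f} J/s, "
      f"ratio: {dense / sparse:.0f}x")
```

Under these assumptions the event-driven path comes out roughly 100 times cheaper, the low end of the range quoted above; the ratio scales directly with how sparse the activity is, which is why the advantage shrinks on dense workloads.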
Future Prospects
Projected Timelines for Milestones
Projections for milestones in artificial brain development, particularly whole brain emulation (WBE), vary widely owing to uncertainties in scanning resolution, neural modeling fidelity, and computational scaling. A 2024 analysis of hardware trends and biological complexity estimates that cellular-level simulation of a mouse brain could become feasible around 2034, enabling validation against biological data.[150] Marmoset brain emulation at similar resolution is projected around 2044, serving as an intermediate step toward primate-scale systems.[150] Human WBE is anticipated later, potentially beyond mid-century, contingent on advances in non-invasive imaging and exascale-plus computing.[150]

Expert forecasts reflect this extended horizon. Anders Sandberg, co-author of the seminal 2008 WBE roadmap, predicts a median date of 2064 for the technology enabling human brain emulation, based on surveys of neuroscientists and engineers.[151] Earlier estimates in the 2008 report posited feasibility by the 2030s to 2040s under optimistic scanning and simulation assumptions, but subsequent progress in connectomics, such as the 2024 fruit fly brain wiring diagram, has highlighted persistent bottlenecks in modeling synaptic dynamics and plasticity.[152]

Key intermediate milestones include emulating simpler nervous systems for behavioral validation. Projects like OpenWorm have simulated C. elegans at the cellular level since 2014, though achieving worm-like behavioral fidelity has required continued refinement into the 2020s; scaling to insect brains may occur by 2030, based on ongoing efforts in fly connectome simulation.[153] Neuromorphic hardware roadmaps, building on chips such as Intel's Loihi, project mouse-scale integration by the early 2030s, though full emulation demands orders-of-magnitude increases in synaptic update rates.[154] The table summarizes these projections; an illustrative compute-scaling calculation in the same spirit follows the table.

| Milestone | Projected Timeline | Source |
|---|---|---|
| C. elegans cellular-level emulation | Achieved (behavioral gaps persist) | OpenWorm project updates[153] |
| Insect (fly) brain connectome simulation | ~2030 | Extrapolated from 2024 fly mapping[153] |
| Mouse cellular WBE | ~2034 | Hardware-biology scaling model[150] |
| Marmoset WBE | ~2044 | Hardware-biology scaling model[150] |
| Human WBE availability | Median 2064 | Sandberg expert survey[151] |
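As noted above, the following is a minimal sketch of the kind of compute-scaling extrapolation that underlies such projections: compare the sustained compute demanded by cellular-resolution, real-time emulation of each brain against an exponentially growing compute supply. Every constant in the sketch (cost per neuron, baseline supply, growth rate) is an illustrative assumption chosen only so the outputs land in roughly the same decades as the table; it is not the parameterization of the cited scaling model or the expert survey.

```python
# Illustrative feasibility-year extrapolation: the year an exponentially
# growing compute supply first meets the sustained compute demanded by
# cellular-resolution, real-time emulation of each brain. All constants are
# assumptions for illustration, not values from the cited analyses.
import math

FLOPS_PER_NEURON = 2e9     # assumed cost of one biophysical neuron in real time
BASE_YEAR = 2024
BASE_SUPPLY_FLOPS = 1e16   # assumed sustained compute available to a project
ANNUAL_GROWTH = 1.3        # assumed ~30% effective yearly growth in supply

NEURON_COUNTS = {          # approximate neuron counts per brain
    "fruit fly": 1.4e5,
    "mouse": 7.0e7,
    "marmoset": 6.0e8,
    "human": 8.6e10,
}

def feasibility_year(neurons: float) -> int:
    """First year the assumed supply meets the assumed emulation demand."""
    demand = neurons * FLOPS_PER_NEURON
    years_needed = math.log(demand / BASE_SUPPLY_FLOPS, ANNUAL_GROWTH)
    return BASE_YEAR + max(0, math.ceil(years_needed))

for name, n in NEURON_COUNTS.items():
    print(f"{name:>10}: demand ~{n * FLOPS_PER_NEURON:.1e} FLOPS, "
          f"feasible ~{feasibility_year(n)}")
```

Under these assumed constants the sketch yields roughly the mid-2030s for mouse, the mid-2040s for marmoset, and the early 2060s for human emulation, broadly echoing the table; published analyses differ mainly in how carefully demand (model fidelity, plasticity, scanning throughput) and supply (hardware economics) are estimated.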