
SpiNNaker

SpiNNaker (Spiking Neural Network Architecture) is a massively parallel, brain-inspired computing platform designed for real-time simulation of large-scale spiking neural networks, enabling the modeling of brain-like computations at biological speeds with low power consumption. Developed to challenge conventional architectures by emulating the brain's parallel processing and event-driven communication, it supports applications in neuroscience, robotics, and artificial intelligence by simulating up to billions of neurons and trillions of synapses. The project was initiated in 2006 at the University of Manchester, led by computer engineer Steve Furber, with the goal of creating a scalable system for neuromorphic simulations that could handle the complexity of biological neural networks without the inefficiencies of traditional supercomputers. As a key component of the European Human Brain Project, SpiNNaker's first-generation system was completed and operational by 2018, comprising 57,600 custom chips mounted on 1,200 boards, delivering over 1 million processor cores and 7 terabytes of RAM. This scale allows for real-time modeling of entire brain regions, such as a mouse brain with approximately 70 million neurons, a feat previously requiring massive conventional resources. At its core, the architecture features chips with 18 low-power ARM968 cores each (16 dedicated to neural simulation and two reserved for system management), interconnected via a packet-switched fabric that mimics synaptic signaling with 40- or 72-bit asynchronous packets routed at up to 250 megabits per second. Each core can model up to 1,000 neurons and handle millions of incoming synapses, with the system's design emphasizing fault tolerance, energy efficiency (around 1 watt per chip), and scalability. Software tools like PyNN and sPyNNaker facilitate configuration and execution, interoperating with simulators such as NEST for seamless workflows. SpiNNaker's applications span neuroscience simulations for studying brain dynamics, neuro-robotic systems for real-time sensory-motor control, and event-based machine learning for efficient inference in edge devices like autonomous vehicles.
The second-generation SpiNNaker 2, developed in collaboration with TU Dresden since 2013 and operational in prototypes by 2021, advances this design with 152 ARM Cortex-M4F cores per chip fabricated in a 22 nm process, targeting 10 million cores overall for 10 times the neural simulation capacity, and includes dedicated accelerators for neural network operations. Deployed in systems like the SpiNNcloud supercomputer at TU Dresden since 2025, it supports up to 5 billion neurons and continues to drive innovations in hybrid brain-inspired computing.

Background and Development

Origins and Motivation

The SpiNNaker project was conceived in the late 1990s by Steve Furber at the University of Manchester, driven by the need for an efficient platform to simulate large-scale spiking neural networks (SNNs) that replicate the parallel and asynchronous processing of biological brain functions. This initiative emerged from earlier explorations into VLSI architectures for associative memories, funded modestly by an EPSRC grant in 1998, with the goal of advancing computing through biologically inspired hardware. Furber's vision was to create a system capable of modeling up to one billion neurons in real time, approximating 1% of the human brain's scale, to facilitate deeper insights into neural information processing. A primary motivation was to overcome the inefficiencies of conventional architectures, which struggle with the asynchronous, event-driven nature of neural computations due to their sequential processing and the von Neumann bottleneck that separates memory from computation. Traditional supercomputers, while powerful for general tasks, incur high energy and time costs when simulating sparse, irregular spiking activity, making brain-scale modeling impractical. SpiNNaker aimed to address these limitations by drawing on neuromorphic principles, prioritizing biological realism in simulation while also informing the design of more efficient future computers. In 2005, Furber outlined the architecture in a technical note, leading to successful EPSRC funding in 2006 under grants EP/D07908X/1 and EP/G015740/1, totalling over £3.3 million, to develop a system based on low-power ARM processors. The core design philosophy emphasized asynchronous, packet-switched communication to mimic the propagation of neural spikes via Address Event Representation (AER), ensuring low-latency event routing across the network. Energy efficiency was a central tenet, targeting power consumption akin to mobile processors to enable sustainable, large-scale deployments without excessive heat or resource demands. Later, the project gained support from the European Human Brain Project to further its applications.

Key Milestones

The project officially launched in 2005, supported by initial funding from the UK's Engineering and Physical Sciences Research Council (EPSRC) to develop prototypes for large-scale neural simulations. This marked the formal beginning of efforts at the University of Manchester to build a neuromorphic platform inspired by asynchronous brain-like processing, with design work accelerating following EPSRC grants awarded in 2006. From 2010 to 2013, early prototypes, including single-chip versions and small-board systems, were rigorously tested, successfully demonstrating real-time simulation of spiking neural networks (SNNs) and validating the architecture's potential for parallel neural computation. These tests involved production chips delivered in late 2010 and full SpiNNaker1 chips in 2011, which supported initial applications in neuroscience research. In 2018, the full million-core SpiNNaker system, comprising 1,036,800 ARM cores, was announced by the Human Brain Project (HBP) on October 14 and became operational on November 2 as a key component of the HBP, enabling real-time simulations of up to 1 billion neurons and advancing collaborative brain modeling efforts. By 2019, €8 million in EU funding was awarded to TU Dresden, announced by the HBP on September 24, for SpiNNaker2 development, signaling the transition to enhanced second-generation hardware. SpiNNaker's integration into the HBP's neuromorphic platform facilitated widespread deployments for research by 2020, supporting tools like PyNN for scalable SNN experiments across international teams.

Hardware Architecture

Processor Chips

The SpiNNaker chip serves as the core computational element in the original SpiNNaker architecture, optimized for massively parallel simulation of spiking neural networks with a focus on biological realism and energy efficiency. Fabricated by United Microelectronics Corporation (UMC) on a 130 nm process, each chip integrates 18 ARM968E-S cores clocked at approximately 200 MHz, where one core functions as a monitor processor for system management and fault handling, while the other 17 are dedicated to neuron simulation tasks. This design allows the chip to emulate sparse, large-scale neural models by distributing computational load across the cores, with each simulation core capable of managing up to around 1,000 neurons depending on model complexity and connectivity. Memory resources on the chip are tailored to support neural simulation requirements, featuring 128 MB of off-chip mobile DDR SDRAM shared across all cores for storing parameters, synaptic weights, and history queues, alongside per-core tightly coupled memories (32 KB instruction Tightly Coupled Memory or ITCM and 64 KB data Tightly Coupled Memory or DTCM) for fast local access during computation. An additional 32 KB of system SRAM provides scratchpad space for temporary data. Neuron models implemented on these cores utilize fixed-point arithmetic to balance precision and performance, enabling simulations of both deterministic models (such as integrate-and-fire variants) and stochastic models (incorporating randomness via techniques like stochastic rounding) at biological timescales. This arithmetic approach reduces computational overhead, allowing each core to process state updates and synaptic events efficiently without floating-point units. Spike processing is handled through a dedicated on-chip multicast router that receives incoming Address Event Representation (AER) packets, which represent neural spikes, and directs them to the appropriate simulation cores with low latency (around 0.1 µs per hop).
Upon receipt, a spike triggers synaptic updates via direct memory access (DMA) transfers, where the core issues a DMA request to retrieve the relevant row of synaptic weights and parameters from SDRAM into local memory for rapid processing, minimizing CPU intervention and enabling high-throughput event-driven computation. The chip's overall power dissipation is approximately 1 W under full load, achieved through globally asynchronous locally synchronous (GALS) clocking and voltage scaling, which facilitates deployment in large-scale systems without excessive energy demands.
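As a rough illustration of the fixed-point arithmetic described above, the sketch below multiplies two values in a 16.15 fixed-point format with stochastic rounding; the helper names and parameter choices are illustrative, not SpiNNaker's actual library routines.

```python
import random

FRAC_BITS = 15  # 16.15 fixed-point: integer bits plus 15 fraction bits

def to_fix(x):
    """Convert a float to 16.15 fixed-point representation."""
    return int(round(x * (1 << FRAC_BITS)))

def from_fix(x):
    """Convert a 16.15 fixed-point value back to a float."""
    return x / (1 << FRAC_BITS)

def fix_mul_stochastic(a, b, rng):
    """Multiply two 16.15 values and stochastically round the result:
    the raw product has 30 fraction bits, and the 15 discarded bits set
    the probability of rounding up, making the rounding unbiased."""
    raw = a * b
    floor = raw >> FRAC_BITS
    remainder = raw & ((1 << FRAC_BITS) - 1)
    if rng.randrange(1 << FRAC_BITS) < remainder:
        floor += 1
    return floor

rng = random.Random(42)
a, b = to_fix(0.1), to_fix(0.3)
# averaging many stochastically rounded products recovers the true value
avg = sum(fix_mul_stochastic(a, b, rng) for _ in range(1000)) / 1000
result = from_fix(avg)   # close to 0.03
```

Stochastic rounding of this kind has been shown to reduce the accumulation of systematic rounding error in long fixed-point neural simulations compared with truncation.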

Interconnection Network

The SpiNNaker interconnection network employs a two-dimensional triangular mesh, where each chip connects to six neighboring chips via bidirectional links, enabling efficient communication across the system. This hexagonal arrangement provides redundancy and isotropic routing paths, with wraparound links at the boundaries to form a torus, supporting scalability up to 65,536 chips in a 256×256 array. The links operate at 250 Mbps using asynchronous 2-of-7 non-return-to-zero (NRZ) encoding off-chip, facilitating high aggregate bandwidth while minimizing power consumption. Communication occurs via packet-based protocols optimized for event-driven workloads, primarily using 40-bit or 72-bit packets to transmit spike events between neurons. The 40-bit format consists of an 8-bit header and a 32-bit routing key, while the 72-bit version adds an optional 32-bit payload; spikes typically use the shorter format for efficiency, allowing multicast to multiple destinations with low overhead. End-to-end latency for packet transmission remains under 50 μs in worst-case scenarios across the full machine, with nominal router traversal at approximately 0.2 μs per hop, ensuring real-time performance for biological simulations. Each link can handle up to several million packets per second in practice, though traffic is managed to sustain around 250,000 packets per second per link under typical loads. Routing leverages dimension-ordered routing (DOR) for point-to-point traffic along the mesh dimensions, combined with ternary content-addressable memory (TCAM)-based multicast trees for efficient one-to-many spike distribution, using up to 1,024 preconfigured entries per router. This approach minimizes path lengths and supports population-level addressing via masks in the routing keys, enabling a spike from a single source to reach thousands of targets without excessive replication. The design is fully asynchronous, lacking a global clock to align with irregular biological timing, and employs flow control in which packets advance as buffer space allows, with on-chip buffering limited to reduce latency and area.
To mitigate scalability challenges in large topologies, the network handles congestion implicitly through packet prioritization and emergency routing, alongside a timeout mechanism that drops stalled packets after a few cycles to prevent deadlocks without dedicated escape paths. This fault-tolerant strategy, including two-hop bypasses for link failures, ensures reliable operation across millions of cores while maintaining aggregate bandwidth of 480 Gbps for a million-core configuration.
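The TCAM-style multicast lookup described above can be sketched in a few lines; the key, mask, and port names below are hypothetical examples, not real routing-table contents.

```python
def route_packet(key, table):
    """Return the output-port set of the first matching table entry.

    Each entry is (match_key, mask, ports): a packet matches when
    (key & mask) == match_key, mimicking the router's ternary lookup
    in which masked bits act as don't-cares."""
    for match_key, mask, ports in table:
        if key & mask == match_key:
            return ports
    # unmatched packets are "default routed" straight through the chip
    return {"opposite-link"}

# hypothetical table: keys 0x2100xxxx fan out to two links and a local core
table = [
    (0x21000000, 0xFFFF0000, {"link-0", "link-3", "core-5"}),
]
assert route_packet(0x21000042, table) == {"link-0", "link-3", "core-5"}
assert route_packet(0x99000001, table) == {"opposite-link"}
```

Because one entry covers a whole masked key range, an entire neuron population can share a single routing entry, which is how the 1,024-entry table scales to millions of sources.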

System Scalability

The SpiNNaker system aggregates individual chips into larger units through a hierarchical structure, beginning at the board level. Each SpiNN-5 board houses 48 SpiNNaker chips arranged in a hexagonal array, providing a compact mesh interconnect for intra-board communication, while Ethernet interfaces on the board enable external input/output connectivity to host systems. At the cabinet level, 120 boards are integrated per cabinet (arranged as 24 boards per subrack across 5 subracks), resulting in 5,760 chips, with power and cooling systems supporting sustained operation. The full-scale SpiNNaker machine comprises 10 cabinets, totaling 1,200 boards, 57,600 chips, 1,036,800 processor cores, and 7 TB of memory distributed across the chips' SDRAM. This configuration consumes approximately 90 kW of power under full load, making it compatible with standard cooling infrastructure. The system's scalability enables real-time simulation of up to 1 billion neurons and 1 trillion synapses, facilitating large-scale neuromorphic modeling without excessive latency. The complete SpiNNaker platform is hosted by the Advanced Processor Technologies group at the University of Manchester, with remote access provided through the EBRAINS neuromorphic computing platform for collaborative research (as of 2025).
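The totals above compose multiplicatively, which a quick calculation confirms:

```python
chips_per_board = 48
boards_per_cabinet = 120   # 24 boards per subrack x 5 subracks
cabinets = 10

boards = boards_per_cabinet * cabinets          # 1,200 boards
chips = boards * chips_per_board                # 57,600 chips
cores = chips * 18                              # 1,036,800 cores
sdram_tib = chips * 128 / 1024 / 1024           # 128 MB per chip, in TiB

print(boards, chips, cores, round(sdram_tib, 2))
# 1200 57600 1036800 7.03
```

The per-chip 128 MB SDRAM thus accounts for the "7 TB" figure quoted for the full machine.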

Software Ecosystem

Simulation Frameworks

sPyNNaker is the primary software package for running spiking neural network (SNN) simulations on the platform, implementing the PyNN application programming interface (API) to enable Python-based descriptions of neural models. It allows users to define populations of neurons, specify connection rules such as all-to-all or fixed-probability topologies, and configure synaptic parameters, abstracting the underlying hardware complexities of the machine. Through this abstraction, simulations can scale to large networks comprising up to 10^9 neurons and 10^12 synapses while targeting real-time performance. The package supports a range of neuron models, including the leaky integrate-and-fire (LIF) model, the Izhikevich model, and custom models implemented in C code. For the LIF model, the membrane potential dynamics follow the differential equation \frac{dv}{dt} = \frac{I - v}{\tau}, where v is the membrane potential, I is the input current, and \tau is the membrane time constant, discretized for digital execution using methods like the exponential Euler approach. The Izhikevich model, suitable for capturing diverse firing patterns, is integrated with variants handling current-based or conductance-based inputs, while custom C models extend functionality for specialized behaviors not covered by built-in options. Network partitioning in sPyNNaker automatically divides the SNN into sub-populations, mapping up to 255 neurons per core to optimize resource usage and minimize inter-core communication overhead across SpiNNaker's multi-chip architecture. This process generates a machine graph that is placed and routed on the hardware, ensuring efficient spike transmission via the on-chip network. Simulations operate in a time-stepped manner with a default timestep of 1 ms, where neuron states are updated synchronously and spikes are processed in an event-driven fashion to meet real-time constraints enforced by SpiNNaker's hardware timers. Delays in connections are supported up to 144 timesteps, accommodating synaptic latencies on biological timescales.
sPyNNaker maintains interoperability with other simulators like NEST through the standardized PyNN API, facilitating hybrid workflows that combine SpiNNaker's neuromorphic execution with CPU- or GPU-based components for validation and extended modeling.
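The exponential-Euler discretization of the LIF equation above can be sketched as follows; the threshold, reset, and current values are arbitrary illustrative parameters, not sPyNNaker defaults.

```python
import math

def lif_step(v, i_in, tau=20.0, dt=1.0, v_thresh=10.0, v_reset=0.0):
    """One exponential-Euler update of dv/dt = (I - v)/tau.

    With I held constant over the step, the update is exact:
    v(t+dt) = I + (v - I) * exp(-dt/tau). Returns (new_v, spiked)."""
    v = i_in + (v - i_in) * math.exp(-dt / tau)
    if v >= v_thresh:
        return v_reset, True
    return v, False

# drive one neuron with a constant suprathreshold current for 100 steps (100 ms)
v, n_spikes = 0.0, 0
for _ in range(100):
    v, fired = lif_step(v, i_in=12.0)
    n_spikes += fired
print(n_spikes)   # the neuron fires regularly, roughly every 36 ms
```

Because the update is the closed-form solution of the membrane equation for piecewise-constant input, it stays stable at the 1 ms default timestep where a naive forward-Euler step could drift.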

Development and Runtime Tools

The SpiNNaker Application Runtime Kernel (SARK) serves as the foundational runtime environment for individual cores on SpiNNaker chips. It is linked directly with application code and manages essential low-level operations, including dynamic memory allocation from system RAM (SysRAM) and SDRAM, direct memory access (DMA) transfers for efficient spike packet handling, and inter-core messaging via the SpiNNaker Datagram Protocol (SDP). SARK imposes a structured memory layout to ensure consistency across cores, with SysRAM divided into fixed regions for heaps, tables, and shared variables, while SDRAM supports larger application data stores. This kernel enables reliable execution of parallel tasks by providing APIs for memory management, event handling, and fault recovery, such as emergency routing during link failures. Complementing SARK, the SpiNNaker Control and Monitor Processor (SC&MP) functions as the system's operating-system-like layer, primarily running on monitor cores to oversee boot processes, communication routing, and system monitoring. SC&MP generates and loads routing tables to facilitate efficient spike dissemination across the machine, using algorithms that compute minimal-hop paths for multicast keys while avoiding congested links. It also handles SDP packet encapsulation, embedding commands for core-to-core and external Ethernet communications, and supports low-level commands via the SpiNNaker Command Protocol (SCP) for tasks like memory inspection and application loading. This setup ensures scalable management over multi-chip systems, with SC&MP maintaining system-wide consistency during operation. Development for SpiNNaker relies on a GCC-based toolchain tailored for the ARM cores, enabling C/C++ application builds with optimizations for low-power, real-time execution. The spinnaker_tools package provides Makefiles and utilities that cross-compile code using GNU Arm Embedded GCC (version 9.2.1 or compatible), incorporating flags for thumb-mode instructions, debug symbols, and linkage with the Spin1 API for event-driven execution.
Compiled binaries are loaded onto chips via the Board Management Processor (BMP) firmware, which resides on each board and handles Ethernet-based board control, power management, and application deployment through tools like ybug. This workflow supports iterative development, with ybug allowing direct execution and configuration without full recompilation. Software tools are being adapted for SpiNNaker2, with ongoing work to support its advanced cores and accelerators. Debugging and performance analysis are facilitated by integrated tools within the low-level stack, including ybug for memory inspection, core state examination, and runtime intervention across the system. For network visualization and monitoring, the SpiNNakerGraphFrontEnd (SGFE) provides a Python-based interface to map application graphs onto the machine, retrieve placement information, and extract performance metrics such as available core counts and provenance data for analysis. While SGFE focuses on graph deployment, it enables assessment of spike transmission efficiency and core utilization by querying models post-execution, often in conjunction with PyNN scripts for higher-level oversight. These tools support fault diagnosis, such as verifying routing paths or monitoring throughput, essential for optimizing large-scale deployments. Key releases in the runtime ecosystem include sPyNNaker 4.0.0 from 2018, which stabilized low-level integrations for PyNN-based simulations and laid groundwork for Human Brain Project (HBP) collaborations by enhancing compatibility with neuromorphic workflows. Later versions, such as 6.0.0, further improved model support, performance, and routing for brain-scale models, with ongoing updates as of 2025 refining stability for HBP and other platforms.

Applications

Neuroscience Simulations

SpiNNaker has been extensively utilized for simulating detailed cortical microcircuits, enabling researchers to model biologically realistic neural dynamics at scales relevant to neuroscience. A prominent example is the simulation of an 80,000-neuron model representing 1 mm² of somatosensory cortex, incorporating approximately 300 million synapses with leaky integrate-and-fire neurons and sparse connectivity patterns derived from experimental data. This model, based on the Potjans-Diesmann framework, achieves real-time performance on a small number of SpiNNaker boards, demonstrating efficient parallelization of synaptic events and spike routing. Validation against electrophysiological recordings from sensory cortical areas shows close agreement in population firing rates (around 4-10 Hz for excitatory neurons) and activity patterns, confirming the platform's fidelity for hypothesis testing in cortical dynamics. At larger scales, SpiNNaker supports emulation of substantial portions of mammalian brains in real time, facilitating investigations into plasticity, learning, and emergent behaviors. The full million-core system can simulate up to 1 billion neurons and 10¹² synapses, equivalent to approximately 1% of the human brain's neural scale, allowing for biologically plausible models that incorporate synaptic delays and asynchronous updates. For instance, configurations have emulated mouse-brain-scale networks of around 70 million neurons, enabling exploration of whole-brain interactions and mechanisms without the latency issues of conventional supercomputers. These simulations support targeted studies on plasticity rules, such as spike-timing-dependent plasticity (STDP), and their role in learning and memory across neural populations. Within the Human Brain Project (HBP), SpiNNaker integrates into multiscale simulation pipelines, bridging cellular-level details, like ion channel dynamics and multi-compartment models, with network and whole-brain representations.
This allows for hybrid workflows where detailed microcircuits are embedded within larger anatomical models, using standardized interfaces like PyNN for model portability across scales. Such capabilities have advanced understanding of cross-level interactions, from subcellular signaling to global brain states, in a unified computational environment. A key demonstration of SpiNNaker's utility in plasticity research is the 2018 simulation of vestibulo-ocular reflex (VOR) adaptation, where STDP mechanisms in a cerebellar model adjusted eye movements to compensate for sensory-motor misalignments in a closed-loop setup. This work, extended in subsequent real-time implementations on SpiNNaker, validated adaptive learning in spiking networks against experimental benchmarks, achieving convergence to stable gaze stabilization within biologically relevant timescales. SpiNNaker's architecture excels in such applications through its low-latency spike processing, typically under 1 ms per event, which supports closed-loop experiments integrating simulated brains with real-time sensory inputs or robotic effectors for dynamic testing. This real-time responsiveness, combined with power efficiency (around 10 nJ per synaptic operation), enables prolonged simulations of adaptive processes without compromising biological fidelity.
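The pair-based STDP rule mentioned above can be sketched as a simple function of the spike-time difference; the amplitude and time-constant values below are illustrative, not those of any published cerebellar model.

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for dt = t_post - t_pre (ms).

    If the presynaptic spike precedes the postsynaptic one (dt > 0) the
    synapse is potentiated; the reverse ordering depresses it, with both
    effects decaying exponentially as the spikes grow further apart."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_plus)
    return -a_minus * math.exp(dt_ms / tau_minus)

# pre 5 ms before post -> potentiation; post 5 ms before pre -> depression
assert stdp_dw(5.0) > 0
assert stdp_dw(-5.0) < 0
# widely separated spikes barely change the weight
assert abs(stdp_dw(200.0)) < 1e-4
```

In an event-driven implementation, this function is evaluated only when a spike arrives, using stored timestamps of recent pre- and postsynaptic spikes, which matches SpiNNaker's deferred, DMA-driven synaptic processing.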

Robotics and AI Integration

SpiNNaker enables real-time robotic control through spiking neural networks (SNNs) that process sensorimotor tasks with low latency, leveraging its asynchronous architecture for efficient event-driven computation. In integrations with the iCub humanoid robot, SNN models for visual-motor coordination have been deployed on SpiNNaker to handle visuomotor tasks, such as attention-based object tracking, achieving sub-millisecond processing delays suitable for dynamic environments. This setup allows the robot to integrate sensory inputs from cameras with motor outputs, demonstrating improved real-time performance over software-based simulations. For event-based AI, SpiNNaker processes data from neuromorphic sensors like Dynamic Vision Sensor (DVS) cameras, enabling low-power tasks such as edge detection and object recognition in robotic applications. These sensors output asynchronous events representing intensity changes, which SpiNNaker handles via SNNs to perform real-time feature extraction, as shown in systems for visual tracking and obstacle avoidance on mobile robots. Such integrations reduce energy consumption compared to frame-based processing, with demonstrations on autonomous platforms achieving efficient processing of sparse event streams for tasks like coastline detection from DVS inputs. Hybrid models combining SNNs with traditional deep learning techniques on SpiNNaker support energy-efficient inference for edge devices in robotics. By mapping artificial neural networks (ANNs) alongside SNNs, these hybrids handle sequential vision tasks, yielding up to 1.87 times energy savings while maintaining accuracy in applications such as drone navigation. This approach exploits SpiNNaker's flexibility to blend the event-driven sparsity of SNNs with the robustness of ANNs, facilitating deployment on resource-constrained robotic systems. In practical examples, SpiNNaker has powered adaptive locomotion in hexapod robots using SNN-based central pattern generators (CPGs) trained via biologically inspired learning principles, enabling real-time gait adjustments to terrain variations around 2020.
These systems embed SNNs on SpiNNaker to generate rhythmic motor patterns responsive to sensory feedback, as validated on physical hexapod platforms for stable walking over uneven surfaces. Beyond neural tasks, SpiNNaker extends to broader AI workloads through SNN implementations of graph algorithms and constraint satisfaction problems, solving NP-hard issues like Sudoku or vertex coloring via stochastic spiking search. This capability leverages the platform's massive parallelism for non-biological applications, such as optimization in robotic planning, where noisy neural solvers approximate solutions efficiently on distributed cores.
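The stochastic spiking search idea can be caricatured as a noisy local-search coloring solver; this is a plain-Python analogy to the approach, not SpiNNaker code, and the noise rate and iteration budget are arbitrary.

```python
import random

def color_graph(edges, n_vertices, n_colors, seed=1, max_iters=10_000):
    """Noisy local search for graph coloring: each step a random vertex
    re-picks the color minimizing conflicts with its neighbors, with
    occasional random jumps (like stray spikes) to escape local minima."""
    rng = random.Random(seed)
    colors = [rng.randrange(n_colors) for _ in range(n_vertices)]
    neigh = [[] for _ in range(n_vertices)]
    for u, v in edges:
        neigh[u].append(v)
        neigh[v].append(u)

    def conflicts():
        return sum(colors[u] == colors[v] for u, v in edges)

    for _ in range(max_iters):
        if conflicts() == 0:
            return colors
        v = rng.randrange(n_vertices)
        if rng.random() < 0.1:          # noise: random jump
            colors[v] = rng.randrange(n_colors)
        else:                           # greedy: least-conflicting color
            colors[v] = min(range(n_colors),
                            key=lambda c: sum(colors[u] == c for u in neigh[v]))
    return None

# a 5-cycle is 3-colorable
cycle = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
solution = color_graph(cycle, 5, 3)
assert solution is not None
assert all(solution[u] != solution[v] for u, v in cycle)
```

On SpiNNaker, each "vertex" would instead be a small noisy neural population whose firing encodes its current color, with inhibitory connections penalizing conflicting neighbors; the stochastic dynamics perform the same escape-from-local-minima role as the random jumps here.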

SpiNNaker2 Advancements

Design Enhancements

The SpiNNaker2 platform represents a significant advancement in process technology, shifting from the 130 nm node used in the original chip to a 22 nm fully depleted silicon-on-insulator (FDSOI) process, which enables greater density, reduced leakage currents, and improved energy efficiency through features like adaptive body biasing and dynamic voltage/frequency scaling (DVFS). This fabrication upgrade allows for more compact integration of components while maintaining low-power operation suitable for large-scale neuromorphic systems. Each SpiNNaker2 chip integrates 152 ARM Cortex-M4F processing elements (PEs), plus one additional system core for a total of 153 cores, organized into 38 quad-processing elements (QPEs) to optimize resource sharing and communication. The chip provides 19 MB of on-chip SRAM (128 kB per PE) and supports up to 2 GB of off-chip LPDDR4 memory, enabling the simulation of approximately 152,000 neurons and 152 million synapses per chip. To facilitate hybrid workloads combining spiking neural networks (SNNs) and deep neural networks (DNNs), dedicated accelerators are incorporated, including a 16x4 array of 8-bit multiply-accumulate (MAC) units for efficient convolutions and matrix operations in DNNs, as well as fixed-point units for exponential, logarithmic, and random-number functions critical for SNN spike processing and routing. The interconnection network in SpiNNaker2 features an enhanced network-on-chip (NoC) with a 192-bit flit size operating at up to 400 MHz, supporting low-latency routing for spike events and enabling seamless communication across PEs and chips in a hexagonal topology with six bidirectional inter-chip links per chip. This design improves bandwidth and scalability over the original SpiNNaker, targeting systems with over 10 million cores while preserving the asynchronous, event-driven paradigm for brain-like computation.
Power efficiency is a central enhancement, with the chip consuming approximately 250 mW in baseline operation and achieving roughly 10 times the simulation capacity of SpiNNaker1 at comparable or lower power levels through near-threshold voltage operation and DVFS.
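The role of the 8-bit MAC array can be illustrated with a minimal dot-product sketch; this is a functional analogy to what such an accelerator computes, not the hardware's actual datapath.

```python
def mac_dot(weights, activations):
    """8-bit multiply-accumulate into a wide accumulator, as a
    SpiNNaker2-style MAC array would compute one output of a DNN layer.

    Each int8 x int8 product fits in 16 bits; accumulating many of them
    requires a 32-bit register, so no intermediate rounding is needed."""
    assert all(-128 <= x <= 127 for x in weights + activations), "operands must fit int8"
    acc = 0
    for w, a in zip(weights, activations):
        acc += w * a
    return acc

weights = [12, -7, 33, 90]
activations = [5, 8, -2, 1]
print(mac_dot(weights, activations))   # 12*5 - 7*8 - 33*2 + 90*1 = 28
```

A 16x4 array of such units evaluates 64 of these products per cycle, which is what lets the chip run DNN layers alongside event-driven SNN code on the same PEs.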

Performance and Deployment

The SpiNNaker2 platform significantly enhances simulation capacity compared to its predecessor, with a single chip capable of simulating approximately 10 times more neurons due to its increased core count and optimized memory hierarchy. Full-scale systems target up to 10 million cores, enabling the simulation of around 10 billion neurons, which approaches the scale of small mammalian brains. This capacity supports complex, large-scale spiking neural networks (SNNs) while maintaining energy efficiency through event-based processing. Benchmarks demonstrate the SpiNNaker platform's effectiveness in event-based machine learning tasks. For instance, a 2024 study on event-driven coastline detection using a spiking neural network achieved 98.33% accuracy with just 18,040 neurons, highlighting efficient sparse processing for vision applications. In asynchronous workloads, such as classification of dynamic vision sensor data, earlier work achieved 96.6% accuracy on MNIST-like tasks with minimal memory usage (64 kB) and supported real-time operation with low-latency spike transmission, enabling sub-millisecond responses in event-driven scenarios. These results underscore the system's ability to handle hybrid asynchronous computations, outperforming traditional GPUs in energy efficiency for sparse models. Deployments of SpiNNaker2 began in 2024 with the commercial launch by SpiNNcloud Systems, providing accessible neuromorphic supercomputing for research and industry. A notable installation occurred in 2025 at Sandia National Laboratories, where a SpiNNaker2 system simulates 175 million neurons to advance national security research and deterrence modeling, integrating neuromorphic efficiency with high-performance computing workflows. In November 2025, SpiNNcloud delivered a SpiNNaker2 system to the THOR initiative, establishing a landmark neuromorphic commons in the United States. Software support has evolved with updates to sPyNNaker, now extended as py-spinnaker2, facilitating hybrid SNN/DNN models through Python-based interfaces that map networks directly to hardware.
Integration with the EBRAINS research infrastructure provides cloud-based access, allowing users to run simulations without local hardware via standardized APIs. Looking ahead, SpiNNaker2's scalable architecture positions it for exascale neuromorphic computing, particularly for sparse AI models that benefit from its asynchronous, brain-inspired design, potentially revolutionizing energy-efficient processing in edge and data-center environments.

References

  1. [1]
    SpiNNaker - Human Brain Project
    A massively-parallel brain-inspired neuromorphic computer for large-scale real-time brain modelling applications.
  2. [2]
    Overview of the SpiNNaker System Architecture - IEEE Xplore
    Jun 26, 2012 · SpiNNaker is a million-core computing engine using ARM9 cores and a custom interconnect fabric, designed to simulate up to a billion neurons in ...
  3. [3]
    A Look at SpiNNaker 2 - University of Dresden - Neuromorphic Chip
    The SpiNNaker project was initiated at the University of Manchester in 2006, with the goal of designing a massively parallel system optimized for simulating ...
  4. [4]
    (PDF) Overview of the SpiNNaker System Architecture - ResearchGate
    Aug 7, 2025 · It consists of an array of ARM9 cores, communicating via packets carried by a custom interconnect fabric. The packets are small (40 or 72 bits), ...
  5. [5]
    [PDF] Overview of the SpiNNaker system architecture
    Abstract—SpiNNaker (a contraction of Spiking Neural Network Architecture) is a million-core computing engine whose flagship goal is to be able to simulate the ...
  6. [6]
    sPyNNaker: A Software Package for Running PyNN Simulations on ...
    Nov 19, 2018 · This paper gives a complete overview of the current state-of-the art in SpiNNaker software pertaining to modeling spiking neural networks. As ...<|separator|>
  7. [7]
    [1911.02385] SpiNNaker 2: A 10 Million Core Processor System for ...
    Nov 6, 2019 · Authors:Christian Mayr, Sebastian Hoeppner, Steve Furber. View a PDF of the paper titled SpiNNaker 2: A 10 Million Core Processor System for ...Missing: original | Show results with:original
  8. [8]
    Milestone for energy-efficient AI systems: TUD launches SpiNNcloud ...
    Apr 14, 2025 · The supercomputer SpiNNcloud, developed by Prof. Christian Mayr, Chair of Highly-Parallel VLSI Systems and Neuro-Microelectronics at TUD, goes into operation.
  9. [9]
    [PDF] SPINNAKER - OAPEN Home
    Furber and P. Bogdan. Suggested citation: Steve Furber and Petrut Bogdan ... The SpiNNaker compiler first takes a high-level description of the network ...
  10. [10]
    (PDF) SpiNNaker - Programming model - ResearchGate
    Aug 7, 2025 · [1][2] [3] Commonplace von Neumann computers have limitations in simulating spiking neuronal dynamics 4 because of their sequential ...
  11. [11]
    [PDF] Impact case study (REF3) Page 1 Institution - Research Explorer
    This research has been funded through EPSRC grants (EP/D07908X/1 and EP/G015740/1, totalling over GBP3,300,000), and is part of the EU H2020 Future Emerging ...<|separator|>
  12. [12]
    'Human brain' supercomputer with 1 million processors switched on ...
    Nov 2, 2018 · ... starting way back in 2006. The project was initially funded by the EPSRC and is now supported by the European Human Brain Project. It is ...
  13. [13]
    Major Neuromorphic Computing projects - Conscium
    Jun 7, 2024 · Conceived and led by Professor Steve Furber, one of the original ... A key part of SpiNNaker's funding came from the EU's Human Brain ...
  14. [14]
    (PDF) The SpiNNaker project - ResearchGate
    The spiking neural network architecture (SpiNNaker) project aims to deliver a massively parallel million-core computer whose interconnect architecture is ...Missing: original | Show results with:original
  15. [15]
    Second Generation SpiNNaker Neuromorphic Supercomputer to be ...
    Sep 24, 2019 · Saxon Science Ministry delivers 8 Mio Euro to TU Dresden for second generation SpiNNaker machine, to be called “SpiNNcloud".
  16. [16]
    [PDF] Overview of the SpiNNaker system architecture - ePrints Soton
    SpiNNaker is a million-core system with up to 1,036,800 ARM9 cores, 7TB RAM, and 57K nodes, each with 18 cores and 128MB RAM.
  17. [17]
    (PDF) SpiNNaker: A 1-W 18-Core System-on-Chip for Massively ...
    Aug 9, 2025 · The basic building block is the SpiNNaker Chip Multiprocessor (CMP), which is a custom-designed globally asynchronous locally synchronous (GALS) ...
  18. [18]
    [PDF] SpiNNaker datasheet version 2.02 6 January 2011
    Jan 6, 2011 · Interface to 128Mbyte (nominal) Mobile DDR SDRAM. • over 1 Gbyte/s sustained block transfer rate; • optionally incorporated within the same ...
  19. [19]
    Stochastic rounding and reduced-precision fixed-point arithmetic for ...
    Jan 20, 2020 · SpiNNaker is based on a digital 18-core chip designed primarily to simulate sparsely connected large-scale neural networks with weighted ...
  20. [20]
    Breaking the millisecond barrier on SpiNNaker - PubMed Central - NIH
    In order to investigate questions about required temporal precision in neural networks, we introduce a novel programming framework for SpiNNaker (Furber et ...
  21. [21]
    SpiNNaker: A multi-core System-on-Chip for massively-parallel ...
    The MPSoC contains 100 million transistors in a 102 mm2 die, provides a peak performance of 3.96 GIPS and has a power consumption of 1W at 1.2V when all ...
  22. [22]
    (PDF) Understanding the interconnection network of SpiNNaker
    Jun 21, 2025 · Once the 3D Mesh topology is ready, we are going to set up the routing scheme that provides the minimum number of routers and the minimum ...
  23. [23]
    SpiNNaker: Enhanced multicast routing - ScienceDirect.com
    This paper investigates how best to generate multicast routes for SpiNNaker, a purpose-built, low-power, massively-parallel architecture.
  24. [24]
    [PDF] This presentation is to provide a quick overview of the hardware and ...
    Each SpiNN-5 board (shown on the left) has 48 SpiNNaker chips and three FPGAs, ... Each subrack (on the right) can hold up to 24 boards (1152 chips or 20K cores).
  25. [25]
    SpiNNTools: The Execution Engine for the SpiNNaker Platform
    This work introduces a software suite called SpiNNTools that can map a computational problem described as a graph into the required set of executables.
  26. [26]
    [PDF] An Improved Interconnection Network for the Next Generation of ...
    24 boards per rack. 5 racks per cabinet, 10 cabinets. Figure 1: Construction ... a host PC using spare HSS connections on SpiNNaker boards. This system ...
  27. [27]
    Advanced processor technologies - Department of Computer Science
    SpiNNaker is the world's largest neuromorphic computing platform, and we support an open service under the auspices of the European Human Brain Project, with ...
  28. [28]
    sPyNNaker Models, Limitations and Extensions
    sPyNNaker8 implements a subset of the PyNN 0.9 API. We recommend using PyNN 0.9 for new work. Neuron Models. sPyNNaker currently supports the following model ...
  29. [29]
    Performance Comparison of the Digital Neuromorphic Hardware ...
    May 22, 2018 · The digital neuromorphic hardware SpiNNaker has been developed with the aim of enabling large-scale neural network simulations in real time and with low power ...
  30. [30]
    [PDF] SARK - SpiNNaker Application Runtime Kernel
    SARK is the lowest level of software which runs on a SpiNNaker core (CPU). It is linked together with the application code and performs three main functions. • ...
  31. [31]
    spinnaker_tools: SpiNNaker API, SARK, SC&MP, and Spin1 API
    The SpiNNaker Application Runtime Kernel (see sark.h). This can be considered to be the C library of SpiNNaker. SC&MP — The SpiNNaker ...
  32. [32]
    spinnaker_tools: spinnaker_tools: SpiNNaker API, SARK, SC&MP ...
    The SpiNNaker Control and Monitor Processor (see scamp.h). This can be considered to be the operating system of SpiNNaker. Spin1 API — The SpiNNaker API ...
  33. [33]
    [PDF] AppNote 5 - Spinnaker Command Protocol (SCP) Specification
    SARK responds to a small set of commands while SC&MP responds to a larger set. A command is directed to a particular core by means of the addressing field ...
  34. [34]
    [PDF] AppNote 4 - SpiNNaker Datagram Protocol (SDP) Specification
    Dec 7, 2011 · The IPTag table in the first SC&MP implementation contains 16 entries of which the first 4 are reserved for permanent IPTags. Embedding SDP in ...
  35. [35]
    SpiNNaker API, sark, sc&mp, bmp firmware and build tools - GitHub
    Installation and Setup. Edit the setup file so that it points to your installations of ARM and/or GNU software development tools.
  36. [36]
    C Compiler for SpiNNaker
    To build programs on SpiNNaker, you will primarily need to install a C compiler that is compatible with SpiNNaker. At present, we recommend using gcc for ...
  37. [37]
    [PDF] ybug - System Control Tool for SpiNNaker
    There are a number of low-level debugging features such as the ability to inspect and change memory in any SpiNNaker chip in the system. ybug communicates with ...
  38. [38]
  39. [39]
    PyNN on SpiNNaker Software 4.0.0 - Zenodo
    Sep 25, 2017 · A release of the software which enables the execution of PyNN 0.7 and PyNN 0.8 scripts on the SpiNNaker Neuromorphic hardware platform.
  40. [40]
    sPyNNaker version 4.0.0 - Software for SpiNNaker
    The version described here is no longer supported. · License Agreement · PyNN on SpiNNaker Installation Guide · PyNN on SpiNNaker Support and Limitations ...
  41. [41]
  42. [42]
  43. [43]
    SpiNNaker | Science and Industry Museum
    Oct 14, 2022 · SpiNNaker is modelled on the human brain and designed to mimic the way neurons in our brain send information through electrochemical 'spikes'.
  44. [44]
    Simulations - Human Brain Project
    Network models described using the PyNN API can be simulated on the BrainScaleS and SpiNNaker platforms, either interactively via Jupyter notebooks, or in batch ...
  45. [45]
    Tools and services documentation - HBP Wiki - EBRAINS
    Sep 28, 2023 · Simulate multi-scale brain network models with TVB and NEST. NEST ... Allows the NRP to use SpiNNaker as a brain for robotic simulations.
  46. [46]
  47. [47]
    Using Stochastic Spiking Neural Networks on SpiNNaker to Solve ...
    Dec 18, 2017 · We provide a software framework for the implementation of such noisy neural solvers on the SpiNNaker massively parallel neuromorphic hardware.
  48. [48]
    [PDF] Learning Visual-Motor Cell Assemblies for the iCub Robot using a ...
    The robot's neural control architecture learned to integrate visual, motor and linguistic representations through Hebbian- linked Kohonen maps. In [12] ...
  49. [49]
    ATIS + SpiNNaker: a Fully Event-based Visual Tracking Demonstration
    Dec 3, 2019 · We aim to show the hardware and software architecture that integrates the ATIS and SpiNNaker together in a robot middle-ware that makes ...
  50. [50]
    [PDF] Neuromorphic Event-based Line Detection on SpiNNaker - HAL
    Jan 14, 2025 · Our architecture relies on SNN intrinsic dynamics and ensures the accurate detection of moving lines recorded by an event-based camera with no ...
  51. [51]
    Event-driven nearshore and shoreline coastline detection on ...
    The proposed method involves utilising FPGA implementation for interfacing the DVS camera with the SpiNNaker board, which, according to literature [6, 41], ...
  52. [52]
    Neuromorphic computing for robotic vision: algorithms to hardware ...
    Aug 13, 2025 · This perspective article analyzes recent advances and future directions, advocating a system design approach that integrates specialized sensing ...
  53. [53]
    Neuropod: A real-time neuromorphic spiking CPG applied to robotics
    Mar 14, 2020 · A Spiking Neural Network was designed and implemented on SpiNNaker. The network models a complex, online-change capable Central Pattern ...
  54. [54]
    [PDF] Design, Implementation and Validation of SCPGs - CORE
    To validate our designs, we have implemented them on the SpiNNaker board using PyNN and we have embedded it on a hexapod robot. The system includes a Dynamic ...
  55. [55]
    [PDF] Neuromorphic Hardware - A System Perspective - NHR@FAU
    Apr 15, 2025 · SpiNNaker2 Chip Architecture. Characteristics. • Optimized for minimum baseline power: ~250mW. • Enabled by Racyics ABB 0.5V IP. • Performance ...
  56. [56]
    [PDF] SpiNNaker 2: A 10 Million Core Processor System for Brain ... - arXiv
    56 Chips x 25 Boards x 5 Racks x 10 Cabinets → ≈ 10 Million Processors. SpiNNaker1 Chip (18 cores). Figure 1. First and second generation of the SpiNNaker ...
  57. [57]
    SpiNNcloud Systems Launches SpiNNaker2 to Advance ... - HPCwire
    May 8, 2024 · Additionally, the SpiNNaker2 architecture highlights through its flexibility, allowing the native implementation of not only Deep Neural ...
  58. [58]
    SpiNNaker2 Architecture Summary
  59. [59]
    SpiNNcloud Systems Announces First Commercially Available ...
    May 8, 2024 · Today, in advance of ISC High Performance 2024, SpiNNcloud Systems announced the commercial availability of its SpiNNaker2 platform, ...
  60. [60]
    Sandia Deploys SpiNNaker2 Neuromorphic System
    Jun 16, 2025 · 24 Boards With 48 Chips Each. What the Sandia scientists stood up is a highly parallel architecture with 24 boards, each of which holds 48 ...
  61. [61]
    py-spinnaker2 - GitLab
    Jun 14, 2022 · SNN Python Interface. High-level users can define spiking neural networks (SNN) and hybrid DNN/SNN models in py-spinnaker2. The networks are ...
  62. [62]
    Brain-Tech in action: German Spinncloud computing startup secures ...
    Jul 25, 2025 · It is based on the SpiNNaker2-chip technology, which was developed under the leadership of Prof. Christian Mayr in Dresden as part of the Human ...