
Spiking neural network

A spiking neural network (SNN) is a class of artificial neural network that emulates the spike-based communication of biological neurons, where information is transmitted via discrete, event-driven action potentials rather than continuous scalar values, enabling efficient processing of spatiotemporal data. Unlike traditional artificial neural networks (ANNs), which rely on rate-coded activations, SNNs incorporate temporal dynamics, allowing neurons to integrate incoming spikes over time until reaching a firing threshold, after which they reset and propagate spikes to connected neurons. This biologically inspired approach, often termed the third generation of neural networks, draws from foundational neuroscience models to achieve greater energy efficiency, particularly on neuromorphic hardware that mimics brain-like sparsity and parallelism.

The conceptual roots of SNNs trace back to early models, including the McCulloch-Pitts formalization of logical neurons in 1943 and the Hodgkin-Huxley biophysical model of 1952, which described spike generation through ion channel dynamics. Subsequent developments, such as Donald Hebb's learning rule in 1949 and the integrate-and-fire models dating from the early 20th century, laid the groundwork for spike-timing-dependent learning mechanisms like STDP (spike-timing-dependent plasticity). Modern SNN architectures gained prominence with efficient neuron models, including the Leaky Integrate-and-Fire (LIF) model, which simplifies dynamics with leakage and threshold-based firing, and the Izhikevich model introduced in 2003, which balances biological realism and computational tractability by capturing diverse spiking patterns such as tonic spiking and bursting using just two differential equations.

SNNs excel in applications requiring low-power, real-time processing, such as sensory tasks like vision and speech recognition, as well as robotics for navigation and control, where their event-driven nature reduces computational overhead compared to ANNs. Training SNNs typically involves adaptations of gradient-based methods like backpropagation or bio-inspired rules like STDP, though challenges persist in scalability and direct training from data due to the non-differentiable nature of spikes; recent advances in surrogate gradients and software frameworks like Lava have helped address these issues. Neuromorphic platforms, including IBM's TrueNorth (2014), Intel's Loihi (2018), and Loihi 2 (2021), have accelerated SNN adoption by providing hardware optimized for spike propagation and synaptic updates, fostering advancements in robotics and brain-machine interfaces.

Introduction

Definition and Principles

Spiking neural networks (SNNs) are the third generation of artificial neural network models, designed to emulate biological neurons by encoding and transmitting information through discrete action potentials, known as spikes, in continuous time rather than via the continuous activation values typical of earlier generations. This biologically inspired approach allows SNNs to capture the temporal dynamics of neural signaling, where the precise timing of spikes plays a central role in information processing. Originating as an extension of traditional artificial neural networks in the 1990s, SNNs were formalized to incorporate spike timing as a fundamental mechanism for information processing, building on earlier models like integrate-and-fire neurons while addressing limitations in handling temporal information. The seminal work by Maass (1997) established SNNs as computationally powerful models capable of universal approximation with fewer neurons than rate-based networks, highlighting their potential for efficient, brain-like computation.

At their core, SNNs operate on three key principles: temporal coding, in which the relative timing and patterns of spikes convey information; event-driven computation, where neural state updates occur asynchronously and only upon spike arrival, minimizing unnecessary calculations; and sparse activity, featuring low average firing rates (often below 100 Hz) that mirror biological efficiency and reduce energy consumption. These principles enable SNNs to process dynamic, time-varying inputs more naturally than traditional models.

The basic architecture of an SNN comprises a finite set of spiking neurons connected by synapses, each with associated weights and delays, where inputs are provided as spike trains to source neurons and outputs are read from target neurons' firing patterns over time. This layered, synaptic structure supports event-based propagation of spikes, facilitating computations that align closely with observed neural dynamics in the brain.

Advantages Over Traditional Neural Networks

Spiking neural networks (SNNs) provide substantial advantages over traditional artificial neural networks (ANNs) due to their event-driven, sparse spiking activity, which only activates computations when spikes occur, contrasting with the continuous, dense activations in ANNs. This asynchronous processing mimics the brain's low firing rates of 10-100 Hz, reducing unnecessary operations and power consumption, particularly on neuromorphic hardware like Intel's Loihi chip. For instance, SNNs have demonstrated up to 280 times lower energy use than ANNs for certain inference tasks, and approximately 100 times less energy for simultaneous localization and mapping (SLAM) applications. When spike rates remain below 6.4% over 5-10 timesteps, SNNs outperform quantized ANNs in energy efficiency by leveraging binary spikes that eliminate multiplications in favor of additions.

In temporal processing, SNNs excel at handling time-series data such as speech, video, or event-based inputs by encoding information through precise spike timings, enabling more nuanced representation of temporal dynamics than the static, rate-based computations of ANNs. This capability is evident in tasks like optical flow estimation, where SNN models such as Spike-FlowNet achieve high accuracy on datasets like MVSEC while maintaining low energy consumption. Such temporal sensitivity allows SNNs to process spatiotemporal patterns efficiently, making them suitable for real-time applications in robotics and autonomous systems.

The biological plausibility of SNNs offers a key advantage, as their spiking mechanisms closely align with real neuronal dynamics, including integrate-and-fire behavior and synaptic plasticity, facilitating integration with neuroscience research and potential bio-AI systems. Furthermore, SNNs exhibit enhanced robustness to noise and perturbations in dynamic environments, outperforming ANNs in scenarios with noisy inputs, as shown in SNN models that maintain performance under synaptic variations. Studies on neuromorphic hardware report 10-100 times lower power consumption for equivalent tasks, underscoring the practical impact of these advantages.

Biological Basis

Neuronal Firing Mechanisms

Neuronal firing in biological systems is characterized by the generation of action potentials, which are rapid, all-or-nothing electrical impulses that propagate along the neuron's axon to transmit signals. These spikes occur when the membrane potential depolarizes to a threshold, typically around -55 mV, triggering the opening of voltage-gated sodium (Na+) channels that allow a massive influx of Na+ ions, rapidly reversing the membrane potential to positive values near +40 mV. This is followed by the activation of voltage-gated potassium (K+) channels, leading to K+ efflux that repolarizes the membrane and often results in a brief hyperpolarization. The foundational description of these dynamics was provided by Hodgkin and Huxley's experiments on the squid giant axon in 1952, which quantitatively modeled the ionic currents responsible for action potential initiation and propagation.

Following an action potential, neurons enter a refractory period that temporarily prevents re-firing, ensuring unidirectional signal propagation and limiting maximum firing rates. The absolute refractory period, lasting approximately 1-2 ms, corresponds to the time when sodium channels are inactivated and cannot reopen, rendering the neuron completely inexcitable regardless of stimulus strength. This is succeeded by the relative refractory period, which extends for a few additional milliseconds during the hyperpolarized afterpotential, where a stronger-than-normal stimulus is required to reach threshold due to partial recovery of sodium channel states. These phases collectively constrain neuronal excitability and contribute to the timing precision of spike trains.

Neurons exhibit diverse firing patterns in response to sustained input currents, reflecting intrinsic properties of their ion channels and membrane conductances. Tonic firing produces regular, single spikes at a steady rate, common in many cortical and sensory neurons under moderate stimulation. Bursting involves clusters of 2-5 rapid spikes followed by quiescence, often driven by calcium-activated currents that promote rhythmic activity in thalamic or hippocampal cells. Adapting firing begins with high-frequency spikes that decrease over time due to slow activation of potassium conductances, as observed in some pyramidal neurons. These patterns arise from the interplay of voltage-gated channels described in Hodgkin-Huxley frameworks and variations thereof.

Typical firing rates for biological neurons range from 1 to 100 Hz, varying by cell type, brain region, and stimulus conditions; for instance, cortical pyramidal neurons often operate around 5-20 Hz during active behavior, while fast-spiking interneurons can reach up to 200-300 Hz transiently. These rates are limited by the refractory period and energy constraints, with average population rates in cortex around 4-5 Hz. In terms of neural coding, neurons primarily use rate coding, where signal intensity is represented by firing frequency, or temporal coding, which encodes information through the precise timing of spikes relative to stimuli or network oscillations. Both mechanisms coexist, with temporal coding enhancing discriminability in sensory processing. Synaptic integration across dendrites summates excitatory and inhibitory inputs to modulate the likelihood of reaching firing threshold.

Synaptic Dynamics

Synapses serve as the primary sites for communication between neurons in biological neural networks, enabling the transmission of signals through two main mechanisms: chemical and electrical synaptic transmission. Chemical synapses, which predominate in the mammalian nervous system, involve the release of neurotransmitters such as glutamate for excitatory transmission or GABA for inhibitory transmission from the presynaptic terminal into the synaptic cleft, where they bind to receptors on the postsynaptic membrane. This process introduces a synaptic delay of approximately 0.5 to 5 ms due to the time required for vesicle fusion, neurotransmitter diffusion, and receptor activation. In contrast, electrical synapses facilitate direct current flow through gap junctions, allowing rapid, bidirectional signaling with minimal delay (less than 1 ms) and supporting synchronized activity among neuronal groups, though they are less common in mammals.

Upon neurotransmitter binding, chemical synapses generate postsynaptic potentials that alter the membrane potential of the receiving neuron. Excitatory postsynaptic potentials (EPSPs) result from glutamate-activated receptors, causing depolarization by influx of sodium or calcium ions, which brings the membrane potential closer to the firing threshold. Inhibitory postsynaptic potentials (IPSPs), mediated by GABA or glycine, lead to hyperpolarization through chloride influx or potassium efflux, moving the potential away from threshold and reducing the likelihood of firing. These potentials summate spatially (from multiple synapses) and temporally (from repeated inputs) at the axon hillock, where the integrated signal determines whether an action potential is initiated if the threshold is reached.

Synaptic strength is dynamically regulated by plasticity mechanisms that adapt based on recent neural activity. Short-term plasticity includes facilitation, where repeated presynaptic firing increases neurotransmitter release probability due to residual calcium accumulation, enhancing subsequent responses over hundreds of milliseconds to seconds; and depression, caused by depletion of the readily releasable vesicle pool, which reduces transmission efficacy following high-frequency activity. Long-term plasticity manifests as long-term potentiation (LTP), a persistent strengthening of synapses lasting hours or more, or long-term depression (LTD), a weakening, both underpinning memory formation through Hebbian principles where "cells that fire together wire together" via correlated pre- and postsynaptic activity.

Neuromodulators like dopamine and serotonin further tune synaptic dynamics by altering transmission efficacy and plasticity. Dopamine, acting through D1-like receptors, enhances LTP by boosting cyclic AMP levels and receptor function, particularly in reward-related circuits such as the striatum and hippocampus. Serotonin modulates synaptic strength via 5-HT receptors, influencing both LTP and LTD in regions like the hippocampus to regulate emotional and cognitive processing.

Computational Models

Integrate-and-Fire Neurons

The integrate-and-fire (IF) model represents one of the earliest and simplest mathematical descriptions of neuronal dynamics, originally proposed by Louis Lapicque in 1907 as an analogy to the electrical properties of nerve cells based on stimulation experiments with frog nerves. Lapicque modeled the neuron as an electrical circuit in which the membrane acts as a capacitor that charges in response to input currents and discharges through a resistor, leading to a firing event when a threshold is reached; this framework laid the foundation for subsequent spiking models in computational neuroscience.

The basic leaky integrate-and-fire (LIF) neuron, an extension of Lapicque's original idea, describes the evolution of the subthreshold membrane potential V(t) through a first-order differential equation that incorporates both excitatory input integration and passive leakage across the membrane. The governing equation is \frac{dV}{dt} = -\frac{V}{\tau} + \frac{I(t)}{C}, where \tau = RC is the membrane time constant representing the leakage rate, I(t) is the total synaptic input current, and C is the membrane capacitance; when V(t) reaches a firing threshold \theta, the neuron emits a spike and resets V to a lower value, typically the resting potential. This model captures essential aspects of biological firing by treating spikes as instantaneous events rather than detailed voltage traces, making it computationally efficient for simulating large-scale spiking neural networks.

Variants of the IF model adjust the basic formulation to better approximate diverse neuronal behaviors observed in biology. The perfect integrate-and-fire (PIF) neuron simplifies the LIF by omitting the leakage term, resulting in dV/dt = I(t)/C, which leads to linear potential accumulation without decay and is useful for analyzing threshold-crossing times in constant-current scenarios. The adaptive integrate-and-fire (AIF) model introduces a dynamic threshold that increases after each spike to mimic adaptation and firing-rate deceleration, as in the adaptive exponential integrate-and-fire (AdEx) formulation where an adaptation current or variable threshold \theta(t) evolves alongside V(t). The quadratic integrate-and-fire (QIF) model replaces the linear integration with a quadratic term, dV/dt = (V - V_r)(V - V_t) + I(t)/C, enabling the reproduction of bursting patterns and type I excitability near a saddle-node bifurcation, which aligns with certain cortical neuron dynamics.

Typical parameter values for the LIF model in simulations draw from biological measurements, with the membrane time constant \tau ranging from 10 to 20 ms to reflect passive decay in mammalian neurons, and the threshold \theta often set near -50 mV relative to a resting potential of -70 mV. The output of an IF neuron is represented as a spike train consisting of Dirac delta functions \delta(t - t_i) at firing times t_i, which facilitates event-based computation in spiking networks without modeling the detailed action potential waveform.
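
The discrete-time update used by most SNN simulators follows directly from the LIF equation above. The following minimal NumPy sketch is illustrative only: it integrates the membrane potential with forward Euler, writes the leak relative to the resting potential (equivalent to the equation above up to a shift of variables), and uses assumed parameter values (20 ms time constant, -50 mV threshold, -70 mV rest) within the biological range quoted above.

```python
import numpy as np

def simulate_lif(I, dt=1e-4, tau=0.02, C=200e-12, v_rest=-0.070,
                 v_thresh=-0.050, v_reset=-0.070):
    """Forward-Euler simulation of a leaky integrate-and-fire neuron.

    I      : array of input currents (A), one value per time step
    dt     : time step (s); tau, C: membrane time constant (s) and capacitance (F)
    Returns the membrane-potential trace (V) and the spike-time indices.
    """
    v = np.full(len(I), v_rest)
    spikes = []
    for t in range(1, len(I)):
        # dV/dt = -(V - V_rest)/tau + I/C, integrated with forward Euler
        dv = (-(v[t - 1] - v_rest) / tau + I[t - 1] / C) * dt
        v[t] = v[t - 1] + dv
        if v[t] >= v_thresh:      # threshold crossing emits a spike
            spikes.append(t)
            v[t] = v_reset        # reset toward the resting potential
    return v, spikes

# A constant 300 pA input for 100 ms produces regular tonic firing.
current = np.full(1000, 300e-12)
trace, spike_times = simulate_lif(current)
print(f"{len(spike_times)} spikes in 100 ms")
```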

Conductance-Based Models

Conductance-based models of spiking neurons provide a biophysically detailed framework for simulating neuronal dynamics by incorporating voltage-dependent conductances, offering greater fidelity to biological processes compared to simpler integrate-and-fire approximations. These models describe how the membrane potential evolves through the interplay of ionic currents mediated by sodium (Na⁺), potassium (K⁺), and leak conductances, enabling the reproduction of complex firing patterns observed in real neurons.

The foundational conductance-based model is the Hodgkin-Huxley (HH) model, developed from voltage-clamp experiments on the giant axon of the squid Loligo forbesi. In this model, the membrane potential V evolves according to: C \frac{dV}{dt} = I - g_\mathrm{Na} m^3 h (V - E_\mathrm{Na}) - g_\mathrm{K} n^4 (V - E_\mathrm{K}) - g_\mathrm{L} (V - E_\mathrm{L}), where C is the membrane capacitance, I is the applied current, g_\mathrm{Na}, g_\mathrm{K}, and g_\mathrm{L} are the maximum conductances for sodium, potassium, and leak channels, respectively, and E_\mathrm{Na}, E_\mathrm{K}, and E_\mathrm{L} are the corresponding reversal potentials. The voltage-dependent gating variables m (sodium activation), h (sodium inactivation), and n (potassium activation) evolve according to first-order kinetics: \frac{dx}{dt} = \alpha_x(V)(1 - x) - \beta_x(V) x, \quad x \in \{m, h, n\}, with rate functions \alpha_x and \beta_x derived empirically from experimental data. Spikes are generated when the membrane potential reaches threshold due to rapid Na⁺ influx, followed by repolarization via K⁺ efflux, capturing the characteristic action potential shape. This work earned Alan Hodgkin and Andrew Huxley the 1963 Nobel Prize in Physiology or Medicine for elucidating the ionic basis of nerve conduction.

Conductance-based models like the HH model excel in replicating biologically realistic phenomena, including refractoriness, subthreshold oscillations, and diverse firing patterns, which are essential for understanding neuronal behavior. They are widely employed in large-scale simulations of cortical networks to study emergent properties such as oscillations and information processing. However, their computational demands are substantially higher than those of leaky integrate-and-fire models, requiring approximately 100 times more floating-point operations per millisecond of simulation due to the multiple coupled equations.

To address the efficiency limitations of the four-dimensional HH model while retaining comparable biological plausibility, extensions such as the Izhikevich model offer a two-dimensional simplification that balances detail and computational tractability. Introduced in 2003, it uses the equations: \frac{dv}{dt} = 0.04 v^2 + 5 v + 140 - u + I, \quad \frac{du}{dt} = a (b v - u), with an auxiliary after-spike reset: if v \geq 30 mV, then v \leftarrow c and u \leftarrow u + d, where the parameters a, b, c, and d tune the firing dynamics. This model reproduces the diverse spiking behaviors of cortical neurons at far lower cost (about 13 floating-point operations per millisecond), enabling simulations of up to 100,000 interconnected neurons in real time, compared to only tens of neurons for the full HH model.
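
As a concrete illustration of the efficiency argument above, the following sketch transcribes the Izhikevich equations into NumPy; the parameter set (a = 0.02, b = 0.2, c = -65, d = 8) corresponds to the regular-spiking regime described in the 2003 paper, while the time step and input current are arbitrary illustrative choices.

```python
import numpy as np

def izhikevich(I, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Simulate one Izhikevich neuron with 'regular spiking' parameters.

    I  : array of input currents, one value per time step
    dt : time step in ms
    Returns the membrane trace v (mV) and spike-time indices.
    """
    v, u = c, b * c                  # membrane potential and recovery variable
    v_trace, spikes = [], []
    for t, i_t in enumerate(I):
        # dv/dt = 0.04 v^2 + 5 v + 140 - u + I ;  du/dt = a (b v - u)
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_t)
        u += dt * a * (b * v - u)
        if v >= 30.0:                # after-spike reset
            spikes.append(t)
            v, u = c, u + d
        v_trace.append(v)
    return np.array(v_trace), spikes

# 1 s of simulation (2000 steps of 0.5 ms) with a constant step current.
v, spike_times = izhikevich(np.full(2000, 10.0))
print(len(spike_times), "spikes")
```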

Learning Algorithms

Spike-Timing-Dependent Plasticity

Spike-timing-dependent plasticity (STDP) serves as an unsupervised learning rule in spiking neural networks (SNNs), modulating synaptic weights according to the precise relative timing of pre-synaptic and post-synaptic action potentials. This rule captures the temporal asymmetry of Hebbian learning, where "neurons that fire together wire together" is refined by the order and interval of spikes. The core mechanism defines the timing difference as Δt = t_post - t_pre, where t_post and t_pre are the times of post-synaptic and pre-synaptic spikes, respectively. When Δt > 0 (pre-synaptic spike precedes post-synaptic), the synapse strengthens via long-term potentiation (LTP); when Δt < 0 (post-synaptic precedes pre-synaptic), it weakens via long-term depression (LTD). This bidirectional adjustment promotes causal associations in spike trains, fostering network stability and functionality.

The canonical formulation of STDP, derived from experimental data, quantifies the synaptic weight change Δw for a spike pair as follows:

\Delta w = \begin{cases} A_{+} \exp\left(-\frac{\Delta t}{\tau_{+}}\right) & \text{if } \Delta t > 0 \\ -A_{-} \exp\left(\frac{\Delta t}{\tau_{-}}\right) & \text{if } \Delta t < 0 \end{cases}

Here, A_{+} and A_{-} represent the maximum amplitudes for potentiation and depression (typically A_{+} ≈ A_{-} ≈ 0.01 in normalized models), while τ_{+} and τ_{-} are decay time constants (often ≈ 20 ms, with τ_{-} slightly larger to reflect broader LTD windows). These parameters ensure that changes decay exponentially with increasing |Δt|, limiting plasticity to millisecond-scale correlations relevant to neural signaling.

STDP exhibits several variants to align with diverse experimental observations and computational needs. In additive STDP, Δw is independent of the current weight, leading to strong synaptic competition but requiring bounds to avoid instability. Multiplicative (or weight-dependent) variants scale Δw by factors like (1 - w) for LTP or w for LTD, promoting weight stability and the bounded dynamics observed in biological synapses. Triplet STDP further refines this by incorporating third-order spike interactions (e.g., two pre-synaptic and one post-synaptic spike), capturing nonlinear effects in higher-frequency regimes and fitting data from cortical slices better than pair-based models.

The biological foundation of STDP traces to studies on cultured hippocampal neurons, where Bi and Poo demonstrated that synaptic modifications depend critically on spike order and interval, mirroring LTP/LTD induction via calcium dynamics and receptor activation. In SNNs, STDP facilitates feature learning by self-organizing synaptic maps, such as extracting oriented edges from visual inputs or adapting to temporal patterns in auditory signals, thereby enabling efficient unsupervised learning without labeled data.
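
The pair-based rule above maps directly onto a simple weight-update routine. The sketch below is a minimal illustration rather than a reference implementation: it evaluates the exponential STDP window for individual spike pairs and accumulates the changes additively with hard bounds, using the normalized parameter values quoted above (A+ = A- = 0.01, τ = 20 ms).

```python
import math

def stdp_delta_w(t_pre, t_post, a_plus=0.01, a_minus=0.01,
                 tau_plus=0.020, tau_minus=0.020):
    """Pair-based STDP weight change for one pre/post spike pair (times in s)."""
    dt = t_post - t_pre
    if dt > 0:        # pre before post: long-term potentiation
        return a_plus * math.exp(-dt / tau_plus)
    elif dt < 0:      # post before pre: long-term depression
        return -a_minus * math.exp(dt / tau_minus)
    return 0.0

def apply_stdp(w, pre_spikes, post_spikes, w_min=0.0, w_max=1.0):
    """Accumulate additive STDP over all spike pairs and clip the weight."""
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            w += stdp_delta_w(t_pre, t_post)
    return min(max(w, w_min), w_max)

# A causal pairing (pre spike 5 ms before the post spike) strengthens the synapse.
print(apply_stdp(0.5, pre_spikes=[0.010], post_spikes=[0.015]))
```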

Supervised and Unsupervised Methods

Unsupervised learning methods in spiking neural networks (SNNs) extend beyond spike-timing-dependent plasticity (STDP) to include mechanisms like homeostatic plasticity, which maintains firing rates through adaptive adjustments to neuronal excitability, ensuring stable network activity during learning. This rate regulation helps prevent runaway excitation or silencing in unsupervised settings, promoting balanced representations of input patterns. Competitive learning via winner-take-all (WTA) spiking further refines feature extraction, where inhibitory connections suppress non-dominant neurons, enabling sparse, selective responses to stimuli in the layers of an SNN.

Supervised learning in SNNs faces a primary challenge from the non-differentiable nature of spike generation, typically modeled by a Heaviside step function, which disrupts gradient-based optimization like backpropagation. To address this, surrogate gradient methods approximate the derivative with a smooth function, such as a sigmoid, allowing error signals to propagate through spiking layers while preserving temporal dynamics. Event-based backpropagation through time (BPTT) variants compute exact gradients by treating spikes as discrete events and unfolding the network temporally, enabling precise weight updates in multi-layer SNNs without full simulation overhead. Alternative approaches include reinforcement learning with spike-based rewards, where temporal differences in spike trains encode value functions, facilitating policy optimization in dynamic environments. Evolutionary algorithms optimize SNN topologies by evolving structures and parameters, bypassing differentiability issues to discover efficient architectures for specific tasks. Spike-time error functions, as in the SpikeProp algorithm, minimize differences between actual and target spike timings using gradient descent on a quadratic error metric, providing an early supervised framework for temporal coding.

Recent advances since 2020 in ANN-to-SNN conversion (ANN2SNN) map pre-trained artificial neural networks to SNNs using rate coding, where ANN activations correspond to spike rates, often preserving over 90% accuracy on standard image-classification benchmarks while enabling energy-efficient inference. As of 2025, advanced methods enable one-timestep conversions achieving 88.8% top-1 accuracy on ImageNet-1K, further enhancing efficiency. These methods mitigate training challenges by leveraging established ANN performance, with optimizations like threshold balancing to reduce conversion errors and latency.
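
To make the surrogate-gradient idea concrete, the sketch below defines a custom PyTorch autograd function whose forward pass is the Heaviside step and whose backward pass substitutes the derivative of a scaled sigmoid; the slope of 25 is an arbitrary illustrative choice rather than a standard value.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, sigmoid-derivative surrogate in the backward pass."""

    @staticmethod
    def forward(ctx, membrane_potential, slope=25.0):
        ctx.save_for_backward(membrane_potential)
        ctx.slope = slope
        # Spike if the (threshold-shifted) membrane potential exceeds zero.
        return (membrane_potential > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (u,) = ctx.saved_tensors
        sig = torch.sigmoid(ctx.slope * u)
        surrogate = ctx.slope * sig * (1.0 - sig)   # smooth stand-in for the step's derivative
        return grad_output * surrogate, None

# Gradients now flow through the spiking nonlinearity during backpropagation.
u = torch.randn(4, requires_grad=True)
spikes = SurrogateSpike.apply(u)
spikes.sum().backward()
print(u.grad)
```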

Applications

Neuromorphic Sensing and Processing

Spiking neural networks (SNNs) integrate seamlessly with dynamic vision sensors (DVS), which capture visual changes as asynchronous events encoded directly into spikes, enabling sparse and temporally precise processing of sensory data. This event-based paradigm contrasts with traditional frame-based vision by only activating computations in response to motion or changes, reducing redundant data handling and facilitating real-time applications. In visual classification, SNNs trained on spike-encoded datasets like N-MNIST demonstrate high performance, achieving accuracies around 99% on static-like tasks and over 90% on dynamic gesture benchmarks such as DVS128 Gesture, leveraging the temporal dynamics inherent in spike trains for feature extraction.

In auditory processing, SNNs excel at handling variable speech rates through their spike-timing mechanisms, which naturally encode temporal variations in sound signals better than rate-based artificial neural networks (ANNs). Spike-based models convert audio waveforms into spike trains via temporal encoding schemes, allowing robust recognition of phonemes and words despite fluctuating tempos or noise. For example, deep SNN architectures have attained up to 93.75% accuracy on the TIDIGITS dataset for digit recognition, outperforming traditional methods in scenarios with rate variability by exploiting precise spike timings for phonetic discrimination.

For olfactory and tactile sensing, bio-inspired SNNs mimic biological sensory pathways, using spike-timing-dependent plasticity (STDP) to classify odors and textures from sparse sensor inputs. In odor classification, STDP-trained SNNs process spike patterns from electronic noses, achieving accuracies exceeding 95% on multi-class datasets by learning temporal correlations in volatile compound responses. Similarly, for tactile processing, SNNs trained with STDP on spike-encoded touch data from neuromorphic sensors enable discrimination of surface textures with high accuracy, capitalizing on the event-driven nature to handle dynamic contact forces efficiently.

Key demonstrations include the TrueNorth chip, which supports real-time object recognition in visual streams using SNNs, processing up to 1 million neurons at 65 mW for low-power edge deployment. This contrasts with convolutional neural networks (CNNs), which consume watts on similar mobile tasks, yielding energy savings of several orders of magnitude for SNNs (often in the milliwatt range versus watts for CNNs) due to sparse spiking activity. These advantages extend to low-latency processing in robotics, where SNNs enable sub-millisecond responses to sensory events, surpassing ANN delays for time-critical navigation and obstacle avoidance.
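
Feeding conventional sensor readings to an SNN requires a spike-encoding step; the sketch below illustrates one common option, Poisson rate coding, in which each pixel's normalized intensity sets its per-timestep firing probability (the 100-step window and maximum rate are arbitrary illustrative choices).

```python
import numpy as np

def poisson_encode(image, num_steps=100, max_rate=0.5, seed=0):
    """Convert a 2-D array of intensities in [0, 1] into a Poisson spike train.

    Returns a binary array of shape (num_steps, *image.shape); at each timestep
    a pixel fires with probability proportional to its intensity.
    """
    rng = np.random.default_rng(seed)
    probs = np.clip(image, 0.0, 1.0) * max_rate
    return (rng.random((num_steps,) + image.shape) < probs).astype(np.uint8)

# A bright pixel (0.9) spikes far more often than a dim one (0.1).
frame = np.array([[0.9, 0.1]])
spike_train = poisson_encode(frame)
print(spike_train.sum(axis=0))   # approximate spike counts per pixel
```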

Robotics and Control Systems

Spiking neural networks (SNNs) have been applied to motor control tasks in robotics, particularly through spike-based reinforcement learning algorithms that enable adaptive and autonomous behavior. In one seminal study, an SNN was trained to control the 4-degree-of-freedom arm of the iCub humanoid robot, demonstrating autonomous learning of reaching and grasping behaviors by processing sensory inputs as spike trains and generating motor commands via temporal coding. This approach leverages the temporal dynamics of spikes to achieve faster adaptation compared to traditional rate-based methods, with studies showing improved convergence speed for continuous control tasks in multi-joint robotic arms. Such spike-based frameworks, including spiking actor-critic methods, have since been extended to locomotion, where SNNs optimize gait patterns in legged robots by rewarding spike timings that align with stable movement trajectories.

In autonomous navigation, SNNs facilitate real-time obstacle avoidance by encoding sensor data as asynchronous events, allowing the network to learn collision-free paths through mechanisms like spike-timing-dependent plasticity (STDP). For instance, SNN controllers trained on LIDAR point clouds have enabled mobile robots to navigate dynamic environments, where STDP modulates synaptic weights based on the temporal correlation between incoming spikes from obstacle detections and corrective motor outputs, resulting in robust path planning without explicit programming. This integration of STDP with LIDAR data supports online learning during exploration, as demonstrated in setups where robots achieve 2%-5% higher success rates (lower collision rates) than baselines in cluttered arenas after brief training episodes. Recent advancements combine these with deep reinforcement learning, using SNNs to process spatiotemporal features for end-to-end policies that outperform conventional artificial neural networks in energy efficiency and response latency.

SNNs also play a critical role in prosthetics via brain-machine interfaces (BMIs), where they decode neural signals from implants to generate precise limb control signals. A key example involves SNN decoders that translate multi-electrode spike trains into continuous movement trajectories for prosthetic arms, achieving high correlation with intended movements in primate experiments. These networks exploit the sparse, event-driven nature of neural activity to filter noise and predict kinematics in real time, enabling users to perform reaching tasks with sub-second latency. In upper-limb prosthetics, SNNs further enhance control by integrating tactile feedback as spike trains, allowing for grip adjustment and slippage detection during manipulation.

The DARPA SyNAPSE program in the 2010s significantly advanced SNN applications in robotics by funding neuromorphic hardware prototypes capable of executing spike-based control algorithms for perception-action loops in autonomous systems. Building on this, integrations since 2023 have incorporated spiking actor-critic frameworks into robotic control, where the actor SNN generates action policies via modulated STDP and the critic evaluates the resulting rewards, yielding significant energy savings over non-spiking baselines in manipulation tasks. As of 2025, SNNs continue to advance in AIoT applications for energy-efficient robotic sensing and control.

Despite these successes, real-time deployment of SNNs in robotics faces challenges from the computational latency of simulating complex neuron dynamics, often exceeding 10 ms per control cycle on standard processors. Hardware acceleration addresses this by mapping SNNs onto field-programmable gate arrays (FPGAs) or neuromorphic chips, reducing latency to microseconds while preserving fidelity for closed-loop control.

Implementation

Software Frameworks

Several open-source software frameworks facilitate the simulation and training of spiking neural networks (SNNs) on general-purpose hardware such as CPUs and GPUs. These tools enable researchers to prototype, train, and deploy SNN models ranging from simple integrate-and-fire neurons to more complex conductance-based models, supporting features like spike-timing-dependent plasticity (STDP) and surrogate gradient methods for learning.

Brian2 is a Python-based simulator designed for flexible modeling of SNNs, allowing users to define custom neuron dynamics through systems of differential equations. It employs code generation to simulate networks efficiently and includes built-in solvers, such as exponential Euler integrators, for handling detailed models like Hodgkin-Huxley (HH) neurons. Brian2 supports rapid prototyping of leaky integrate-and-fire (LIF) networks and larger simulations on standard hardware, making it suitable for exploratory research in computational neuroscience.

NEST serves as a simulator optimized for large-scale SNNs, emphasizing network dynamics, structure, and scalability to brain-like sizes of up to 10^8 neurons and 10^12 synapses. Written primarily in C++ with Python bindings, it handles simulations of populations with point neurons or multi-compartment models on multi-core CPUs, facilitating studies of emergent behaviors in extensive networks without focusing on detailed individual neuron morphology.

For deep SNN development and training, snnTorch extends PyTorch to incorporate spiking layers, enabling gradient-based optimization of SNNs via surrogate gradients to approximate non-differentiable spike functions during backpropagation. It leverages PyTorch's GPU acceleration for efficient training of deep SNNs and includes utilities for encoding inputs into spikes, supporting both direct training and ANN-to-SNN conversion techniques. A March 2024 release enhanced compatibility with advanced conversion methods, allowing seamless transfer of pre-trained artificial network weights to spiking architectures for tasks like image classification; a subsequent February 2025 release further refactored neuron models for easier integration of custom neurons.

Lava, developed by Intel, provides a modular framework for building neuromorphic applications, including SNN simulations on CPUs through its process-based architecture. It supports STDP for unsupervised learning and, via the Lava-DL extension, surrogate gradients for supervised training of deep SNNs, with tools for mapping models to general-purpose processors before potential hardware deployment. Lava's modular design promotes reusability, enabling users to construct and test SNN pipelines from LIF-based prototypes to complex event-driven systems.
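
As a brief illustration of the equation-oriented modeling style described above, the following Brian2 sketch defines a small group of leaky integrate-and-fire neurons driven by a constant input and records their spikes; the parameter values are illustrative choices, not library defaults.

```python
from brian2 import NeuronGroup, SpikeMonitor, ms, mV, run

tau = 10 * ms
eqs = """
dv/dt = (v_rest - v + v_drive) / tau : volt
v_rest : volt
v_drive : volt
"""
# Ten LIF neurons with a -50 mV threshold and reset to -70 mV.
group = NeuronGroup(10, eqs, threshold="v > -50*mV", reset="v = -70*mV",
                    method="euler")
group.v = -70 * mV
group.v_rest = -70 * mV
group.v_drive = 25 * mV        # constant drive expressed as an equivalent voltage

spikes = SpikeMonitor(group)
run(100 * ms)
print(f"Mean firing rate: {spikes.num_spikes / 10 / 0.1:.1f} Hz")
```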

Hardware Platforms

Spiking neural networks (SNNs) require specialized hardware platforms to achieve efficient, low-power execution that mimics biological neural processing, leveraging event-driven computation and fine-grained parallelism to handle sparse spike activity. These platforms range from digital neuromorphic chips that simulate spiking dynamics asynchronously to hybrid and emerging analog systems designed for scalability and real-time simulation. Key advancements focus on integrating on-chip learning mechanisms, such as spike-timing-dependent plasticity (STDP), while addressing power constraints inherent in traditional architectures.

Digital neuromorphic hardware uses discrete asynchronous logic to emulate neuronal firing and synaptic integration. Intel's Loihi chip, introduced in 2017, features 128 neuromorphic cores on a 60 mm² die fabricated in a 14 nm process, enabling asynchronous spiking with in-memory computing for spike routing and on-chip STDP learning. Similarly, IBM's TrueNorth, released in 2014, integrates 1 million digital neurons and 256 million synapses across 4096 cores in a 65 mW asynchronous design, where each core supports 256 leaky integrate-and-fire neurons with event-driven communication via an on-chip mesh network. Loihi's successor, Loihi 2, released in 2021, scales to 1 million neurons and 120 million synapses across 128 cores, with systems like Hala Point (2024) combining 1,152 Loihi 2 processors to reach 1.15 billion neurons for large-scale edge AI research. These digital systems prioritize energy efficiency for edge applications, consuming microwatts per synaptic operation compared to milliwatts in conventional processors.

Hybrid and scalable platforms extend capabilities by combining digital processing with configurable elements for diverse workloads. The SpiNNaker system, based on ARM processors, uses 18-core chips to simulate up to 1 million neurons in real time across a multi-chip board, employing packet-switched communication to distribute spike events asynchronously; its 2024 successor, SpiNNaker2, improves to 152,000 neurons per chip in a 22 nm process for enhanced efficiency. The Tianjic chip, developed in 2019 and refined by 2020, adopts a hybrid many-core architecture with reconfigurable cores that unify spiking and rate-based neural processing, supporting mixed workloads for brain-inspired tasks. Emerging memristor-based implementations, such as resistive random-access memory (RRAM) arrays for synaptic weights, enable low-power STDP by exploiting analog conductance states to update weights based on spike timing, as demonstrated in one-transistor/one-memristor structures that achieve online learning with sub-picojoule energy per synaptic update.

Despite these advances, scaling neuromorphic hardware to brain-like sizes, approaching 10^11 neurons, remains challenging due to issues like interconnect bottlenecks, process variability in analog components, and the need for fault-tolerant architectures to maintain sparsity and asynchrony at exascale.

Evaluation and Challenges

Benchmarking Datasets

Benchmarking datasets for spiking neural networks (SNNs) provide standardized neuromorphic inputs to evaluate models on tasks like classification and gesture recognition, enabling fair comparisons across algorithms and hardware. These datasets typically convert traditional data into spike trains or use event-based sensors to capture temporal dynamics, addressing the unique spatio-temporal processing of SNNs. Unlike static image datasets for artificial neural networks (ANNs), SNN benchmarks emphasize low-latency, energy-efficient inference on asynchronous events.

The N-MNIST dataset is a seminal neuromorphic benchmark, converting the classic MNIST handwritten digits into event-based spikes using a dynamic vision sensor (DVS) moved in a saccade-like pattern. It includes 60,000 training samples and 10,000 test samples, each represented as events over 34x34 pixels spanning approximately 300 ms. Similarly, S-MNIST simulates spike trains from MNIST pixels via rate encoding, generating Poisson-distributed inputs over a fixed time window (e.g., 100-500 timesteps) to evaluate SNNs without event-based hardware. The DVS Gesture dataset extends this to spatio-temporal video, recording 11 hand gesture classes (e.g., hand clap, arm rotation) from 29 subjects using a DVS128 camera, yielding 1,176 training and 288 testing sequences of asynchronous events for action recognition tasks. Audio benchmarks like the Spiking Heidelberg Digits (SHD) dataset adapt spoken digits (0-9 in English and German) into spike trains via an artificial cochlea model, providing 10,420 samples across 700 channels for auditory classification. These datasets support diverse modalities, from vision to audition, and are widely used to assess SNN robustness to noise and variability in event timing.

The NeuroBench framework standardizes evaluation across neuromorphic tasks, incorporating datasets like N-MNIST, DVS Gesture, and SHD to measure end-to-end performance of algorithms and hardware. Post-2020 efforts by the neuromorphic community, including NeuroBench, have driven dataset curation and protocol alignment to bridge simulation and deployment gaps.

Key evaluation metrics for SNNs focus on temporal and efficiency aspects beyond ANN accuracy. Classification accuracy is computed from spike-based outputs, often matching ANN levels (e.g., SNNs reach 98% on N-MNIST compared to 99% for ANNs on MNIST) while achieving up to 10x lower power consumption due to sparse spiking. Latency is quantified as time-to-first-spike for decision-making, typically in milliseconds for event-driven inputs. Energy efficiency is assessed via spikes per inference or synaptic operations per event, emphasizing sparse computation. Throughput measures inferences or synaptic operations per second, highlighting scalability for real-time applications. Recent benchmarks as of 2025 incorporate federated learning scenarios, testing SNNs on distributed neuromorphic data like partitioned N-MNIST or SHD subsets to evaluate privacy-preserving, energy-efficient training across devices.
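
Several of the metrics listed above can be read directly off an output-layer spike raster. The sketch below is a minimal illustration under the assumption that the raster is a binary array of shape (timesteps, classes): it derives the rate-based prediction, the spike count as a proxy for energy per inference, and time-to-first-spike latency.

```python
import numpy as np

def snn_metrics(output_spikes, dt_ms=1.0):
    """Compute simple SNN evaluation metrics from an output-layer spike raster.

    output_spikes : binary array of shape (timesteps, num_classes)
    Returns the predicted class (by spike count), total spikes used for the
    decision, and latency as the time of the first output spike.
    """
    counts = output_spikes.sum(axis=0)
    predicted = int(np.argmax(counts))            # rate-based readout
    total_spikes = int(output_spikes.sum())       # proxy for energy per inference
    spike_steps = np.flatnonzero(output_spikes.any(axis=1))
    latency_ms = float(spike_steps[0] * dt_ms) if spike_steps.size else float("inf")
    return predicted, total_spikes, latency_ms

raster = np.zeros((100, 10), dtype=np.uint8)
raster[12, 3] = 1          # class 3 fires first at t = 12 ms
raster[20:90:10, 3] = 1    # and keeps firing afterwards
print(snn_metrics(raster))   # -> (3, 8, 12.0)
```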

Limitations and Future Directions

Despite their promise, spiking neural networks (SNNs) encounter significant limitations in scalability, as training large-scale models proves more challenging than for artificial neural networks (ANNs) due to the non-differentiable spike dynamics and temporal dependencies that complicate backpropagation. This issue is exacerbated by the discrete, event-driven nature of spikes, which demands specialized learning algorithms like surrogate gradients or spike-timing-dependent plasticity, often leading to slower convergence and reduced accuracy on complex tasks compared to ANNs. Furthermore, the field suffers from a lack of standardized libraries and frameworks, hindering reproducible research and widespread adoption, unlike the robust ecosystems available for ANNs such as TensorFlow or PyTorch. Interpretability of spike codes poses another barrier, as the asynchronous and sparse spiking patterns make it difficult to discern how temporal information contributes to decisions, limiting explainability and trust in SNN deployments.

Hardware constraints compound these software challenges. Neuromorphic chips, essential for efficient SNN execution, face high fabrication costs stemming from specialized analog or mixed-signal designs that deviate from standard CMOS processes, hindering scaling to mass production. Compatibility with existing ecosystems remains poor, as neuromorphic hardware often requires custom interfaces and lacks seamless integration with conventional GPU-based training pipelines, impeding hybrid workflows.

Future directions aim to address these gaps through innovative architectures and applications. Hybrid ANN-SNN systems, which convert pre-trained ANNs to SNNs or co-optimize both, offer a pathway to combine ANN trainability with SNN efficiency, achieving up to 10x reductions in energy use on inference tasks while maintaining accuracy. Quantum-inspired SNNs, drawing from superposition principles, enhance adaptability for dynamic environments like reversal learning, potentially outperforming classical SNNs in handling ambiguous or reversed inputs. In sustainable AI, SNNs offer potential to significantly reduce energy demands through sparse, event-based processing that aligns with low-power neuromorphic hardware. Ongoing research from 2023 to 2025 explores fault-tolerant SNNs for space applications, leveraging their radiation resilience for autonomous systems in harsh cosmic conditions. Projected hardware advances further promise brain-scale SNN simulations, with simulations of 10^11 neurons anticipated by the late 2020s to uncover emergent neural behaviors.

Ethical considerations are increasingly prominent, particularly regarding bias in bio-inspired models, where training data reflecting societal inequities can amplify discriminatory outcomes in SNN applications like healthcare diagnostics. Energy equity in neuromorphic technology emphasizes democratizing access to efficient computing resources, ensuring that low-power SNN deployments mitigate global digital divides rather than exacerbating them through uneven adoption.
