
Cellular neural network

A cellular neural network (CNN), also known as a cellular nonlinear network, is a computational paradigm consisting of a finite rectangular array of identical continuous-time nonlinear dynamical systems, called cells, locally interconnected only to their nearest neighbors via a uniform cloning template that defines the strength and nature of these interconnections. Each cell processes inputs from its neighbors through a nonlinear activation function, evolving its state according to a first-order differential equation and enabling parallel real-time computation whose speed is independent of array size. Introduced in 1988 by Leon O. Chua and Lin Yang, CNNs bridge concepts from cellular automata and traditional neural networks, emphasizing analog VLSI implementability for high-speed image and signal processing.

The core structure of a CNN involves a two-dimensional grid of cells, where the state equation for the cell at position (i,j) is given by \dot{x}_{ij} = -x_{ij} + \sum A_{kl} y_{i+k,j+l} + \sum B_{kl} u_{i+k,j+l} + z, with output y_{ij} = f(x_{ij}), piecewise-linear activation f(\cdot), feedback template A, control template B, and bias z. This local connectivity and template-based uniformity allow CNNs to exhibit rich nonlinear dynamics, including fixed points, limit cycles, and chaos, while ensuring complete stability under certain conditions such as symmetric or non-negative feedback templates. Unlike fully connected neural networks, CNNs avoid global wiring, facilitating massive parallelism and scalability for hardware realization.

CNNs have found extensive applications in image and video processing, including edge detection, segmentation, noise removal, and feature extraction, often achieving processing times that scale sub-linearly with problem size. They power vision systems such as the CNN Universal Machine and dedicated focal-plane chips for real-time tasks in robotics, surveillance, and target tracking. Extensions like discrete-time and fuzzy CNNs have broadened their use to optimization problems, associative memory, and spatiotemporal pattern formation, influencing fields from computational biology to control systems.

Fundamentals

Definition and Basic Principles

A cellular neural network (CNN) is a massively parallel computing paradigm consisting of a two-dimensional array of identical cells, each serving as a basic processing unit with local interconnections to neighboring cells, and operating in either continuous or discrete time. This architecture enables simultaneous computation across all cells, mimicking natural spatiotemporal processes while avoiding the global connectivity typical of traditional neural networks. Invented by Leon O. Chua and Lin Yang in 1988, the CNN model emphasizes locality and uniformity to facilitate efficient hardware realizations, particularly in analog VLSI implementations.

The core components of a CNN include the cells arranged in an M \times N grid, where each cell C_{i,j} interacts only within a defined neighborhood, typically a square region with radius r=1 encompassing a 3×3 grid (or larger, such as 5×5 for r=2). Each cell maintains a state variable x_{i,j} that evolves dynamically based on its input u_{i,j} (an external signal, often an image pixel value) and the outputs y_{k,l} of neighboring cells. The output y_{i,j} is derived nonlinearly from the state, commonly via a piecewise-linear activation function that saturates between -1 and 1, enabling binary-like decisions in applications. These elements collectively support the network's hallmark of local rule-based interactions, where global behavior emerges from simple, uniform cell dynamics.

Unlike convolutional neural networks (CNNs) in deep learning, which are multi-layered digital architectures with trainable convolutional kernels optimized via backpropagation for tasks like image classification, cellular CNNs operate as single-layer, analog or continuous-time systems using fixed interaction templates that are not learned during operation. This fundamental difference positions cellular CNNs for real-time, hardware-efficient processing of spatiotemporal data, such as video streams, rather than offline training on large datasets. The principles of massive parallelism and locality allow all cells to update synchronously, achieving sub-millisecond response times in analog chips, making them ideal for problems involving diffusion-like propagation or wave phenomena.

A simple example of cell behavior in a CNN is edge detection in images, where local rules amplify intensity differences between a central pixel and its neighbors, producing a binary output map that highlights boundaries without requiring sequential scanning. Such operations demonstrate the network's suitability for early vision tasks, where the collective response of the grid simulates natural edge enhancement through thresholded interactions.

Mathematical Formulation

The mathematical formulation of cellular neural networks (CNNs) is grounded in a set of nonlinear differential equations that govern the dynamics of an array of interconnected cells. In the continuous-time CNN (CT-CNN), proposed by Chua and Yang, each cell (i,j) in an M \times N grid evolves according to the state equation \frac{dx_{ij}(t)}{dt} = -x_{ij}(t) + \sum_{(k,l) \in \mathcal{N}_{r}(i,j)} A_{i,j;k,l} \, y_{kl}(t) + \sum_{(k,l) \in \mathcal{N}_{r}(i,j)} B_{i,j;k,l} \, u_{kl} + z_{ij}, where x_{ij}(t) is the state variable of cell (i,j) at time t, u_{kl} is the input to cell (k,l), y_{kl}(t) is the output, \mathcal{N}_{r}(i,j) denotes the r-neighborhood of cell (i,j) (typically a square region of radius r, such as r=1 for a 3×3 area), A = (A_{i,j;k,l}) is the feedback template matrix encoding interactions among outputs, B = (B_{i,j;k,l}) is the control template matrix for inputs, and z_{ij} is a bias or threshold term. This formulation captures local, weighted summations that drive the state evolution, with the negative feedback term -x_{ij}(t) ensuring damping toward equilibrium.

The output y_{ij}(t) is obtained by applying a nonlinear function to the state: y_{ij}(t) = f(x_{ij}(t)) = \frac{1}{2} \left( |x_{ij}(t) + 1| - |x_{ij}(t) - 1| \right), which is a piecewise-linear saturation function that maps the state to the range [-1, 1]. This function introduces the nonlinearity essential for CNNs to perform threshold-based computations, such as in image processing.

Boundary conditions for the CNN array are specified to handle cells at the edges of the grid. Common types include non-periodic (or fixed-boundary) conditions, where boundary cells have constant inputs (e.g., zero or a fixed value); periodic (or toroidal) conditions, where the array wraps around such that opposite edges connect; and zero-flux conditions, where boundary cell neighborhoods are truncated without replication. These choices influence the global dynamics but do not alter the core cell equations.

A discrete-time variant (DT-CNN) approximates the continuous dynamics through iterative updates, defined as x_{ij}(t+1) = f\left( \sum_{(k,l) \in \mathcal{N}_{r}(i,j)} A_{i,j;k,l} \, y_{kl}(t) + \sum_{(k,l) \in \mathcal{N}_{r}(i,j)} B_{i,j;k,l} \, u_{kl} + z_{ij} \right), with y_{ij}(t) = f(x_{ij}(t)), suitable for digital implementations. This Euler-discretized form preserves the essential nonlinear interactions while enabling synchronous updates across the array.

Regarding equilibrium points, a CT-CNN reaches a steady state when \frac{dx_{ij}}{dt} = 0 for all cells, yielding x_{ij} = \sum A_{i,j;k,l} \, y_{kl} + \sum B_{i,j;k,l} \, u_{kl} + z_{ij}, with outputs fixed thereafter. Stability analysis relies on Lyapunov functions, such as the energy function E = -\frac{1}{2} \sum_{i,j} \sum_{(k,l) \in \mathcal{N}_r} A_{kl} y_{ij} y_{i+k,j+l} - \sum_{i,j} y_{ij} \left( \sum_{(k,l) \in \mathcal{N}_r} B_{kl} u_{i+k,j+l} + z_{ij} \right), whose time derivative is non-positive under conditions such as a symmetric feedback template A, ensuring convergence to an equilibrium from any initial state via the LaSalle invariance principle. Similar results hold for DT-CNNs under conditions like row-sum boundedness of templates.

A key consequence of space-invariance is that, due to translational invariance, every cell obeys an identical state equation, parameterized solely by the templates A, B, and z.
For a neighborhood of radius r=1, this reduces to 9 scalar parameters for the feedback template A, 9 for the control template B, and 1 for the bias z, totaling 19 parameters, enabling efficient realization through template cloning across the array.
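The 19-parameter, r=1 dynamics above can be simulated directly. The following minimal NumPy sketch integrates the Chua-Yang state equation with forward Euler under fixed (zero) boundary conditions; the function names, step size, and iteration count are illustrative choices, not part of any standard library.

```python
import numpy as np

def cnn_output(x):
    """Chua-Yang piecewise-linear output: f(x) = 0.5*(|x+1| - |x-1|)."""
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

def template_sum(img, T):
    """Correlate a 2D array with a 3x3 template using zero padding
    (fixed-boundary condition: missing neighbors contribute 0)."""
    M, N = img.shape
    padded = np.zeros((M + 2, N + 2))
    padded[1:-1, 1:-1] = img
    out = np.zeros((M, N))
    for k in range(3):
        for l in range(3):
            out += T[k, l] * padded[k:k + M, l:l + N]
    return out

def simulate_ct_cnn(u, A, B, z, dt=0.05, steps=400):
    """Forward-Euler integration of the CT-CNN state equation
    dx/dt = -x + sum(A*y) + sum(B*u) + z over the 3x3 neighborhood."""
    x = np.zeros(u.shape)
    drive = template_sum(u, B) + z   # the input term is constant in time
    for _ in range(steps):
        y = cnn_output(x)
        x = x + dt * (-x + template_sum(y, A) + drive)
    return cnn_output(x)
```

Shrinking dt improves fidelity to the continuous dynamics at the cost of more iterations; any 3×3 template pair and bias from the literature can be passed in as A, B, and z.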

History

Origins and Early Development

The cellular neural network (CNN) was invented in 1988 by Leon O. Chua, a professor of electrical engineering and computer sciences, and his graduate student Lin Yang at the University of California, Berkeley. Their work aimed to create a computing paradigm that combined the local connectivity and discrete-state dynamics of cellular automata with the continuous, adaptive processing capabilities of artificial neural networks, specifically to enable analog computing for applications like image processing and pattern recognition. This motivation stemmed from the limitations of existing digital systems in handling massively parallel, nonlinear operations efficiently in hardware.

The foundational theory was introduced in the paper "Cellular Neural Networks: Theory," published in the IEEE Transactions on Circuits and Systems in October 1988. In this work, Chua and Yang proposed CNNs as large-scale nonlinear analog circuits arranged in a rectangular grid, where each cell interacts only with its nearest neighbors, facilitating local computations without global interconnections.

Chua's prior inventions, including the memristor—a passive two-terminal circuit element theorized in 1971 as the fourth fundamental circuit component alongside the resistor, capacitor, and inductor—laid groundwork for exploring memory and nonlinear behaviors in analog systems. Additionally, Chua's development of the chaotic Chua's circuit in 1984, a simple autonomous circuit demonstrating deterministic chaos through piecewise-linear nonlinearities, provided key insights into the dynamics that would underpin CNN stability and complex behavior. Yang's expertise in nonlinear dynamics complemented these efforts, particularly in analyzing the equilibrium points and transient behaviors of coupled nonlinear systems.

A primary early challenge was designing CNNs for implementation in very-large-scale integration (VLSI) technology using analog components, which was essential for achieving the massive parallelism required for real-time processing without the bottlenecks of digital von Neumann architectures. This focus on analog VLSI addressed the need for sub-millisecond response times in applications demanding high-speed local interactions, such as edge detection in vision systems.

Prior to 1988, precursors like analog mesh networks—diffusion-based resistive grids for smoothing and interpolation in image processing—and early vision chips, including Carver Mead and Misha Mahowald's silicon retina introduced in 1988 but based on mid-1980s prototypes, had explored similar ideas of parallel analog computation for low-level visual tasks. These developments highlighted the potential of silicon-based neuromorphic hardware but lacked the programmable, nonlinear cell interactions that CNNs would later provide.

Key Milestones and Literature

In 1993, T. Roska and L. O. Chua introduced the first programmable, universal CNN architecture, known as the CNN Universal Machine (CNN-UM), which enabled dynamic programming of templates through local memories and logic units integrated on a single analog array chip. This development marked a significant advancement in hardware realization, allowing real-time execution of complex spatiotemporal computations beyond fixed-template designs.

During the 1990s and early 2000s, key literature expanded the theoretical and applicative scope of cellular neural networks, including the seminal book Cellular Neural Networks and Visual Computing by L. O. Chua and T. Roska, published in 2002, which synthesized foundations, template design, and visual processing paradigms while providing algorithmic libraries for practical implementation. Extensions to three-dimensional cellular neural networks emerged in this period, enabling volumetric processing for tasks like image restoration by incorporating depth layers in the cell interactions.

In the 2000s, research shifted toward bio-inspired cellular neural networks to model biological systems, such as motion control in walking robots and neural dynamics in sensory processing, leveraging the paradigm's parallelism to emulate natural spatiotemporal patterns. Concurrently, AnaFocus, a semiconductor company founded in 2001 by researchers from the University of Seville, commercialized CNN-based vision systems through mixed-signal chips like the Eye-RIS platform, facilitating standalone focal-plane processing for embedded applications.

Influential papers from this era include a 1994 IEEE workshop contribution on CNN applications in image processing, demonstrating a range of filtering and detection operations via cloned templates. The 2002 Chua-Roska book further consolidated theory, stability analysis, and emerging uses in nonlinear dynamics. Recent literature through 2025 has increasingly integrated cellular neural networks into neuromorphic systems, with reviews highlighting their role in energy-efficient, analog computing for brain-like architectures; for instance, a 2024 preprint explores p-adic variants for hierarchical reaction-diffusion models in image processing. The foundational 1988 papers by L. O. Chua and L. Yang on cellular neural network theory and applications have amassed many thousands of citations collectively, underscoring their enduring impact on the field.

Architecture

Cell Structure and Dynamics

In a cellular neural network, each cell is modeled as an analog circuit comprising a linear capacitor that stores the state as a voltage, paralleled by a piecewise-linear voltage-controlled current source providing the nonlinear output, and linear resistive elements that facilitate coupling from neighboring cells' states and inputs. This integrator-based structure allows the capacitor to accumulate currents derived from local interactions, with the output typically saturating between -1 and +1 as a saturated-linear function of the state voltage. The coupling mechanism employs resistive summation of weighted contributions from adjacent cells, enabling real-time signal processing without global interconnections.

The dynamics of a single cell exhibit transient behavior dominated by the RC time constant of the circuit, where the capacitor charges or discharges based on the net input current until reaching equilibrium. In analog VLSI realizations, this results in ultrafast convergence to a stable fixed point, often in less than 1 μs, facilitated by sub-micron fabrication yielding time constants on the order of 0.1 μs. Signal propagation across the array mimics diffusion processes, with disturbances spreading gradually through successive neighborhood interactions rather than instantaneously.

Neighborhood interactions are strictly local, limited to an r-radius sphere around each cell—commonly r=1, encompassing a 3×3 grid including the cell itself—where synaptic weights are encoded via the (2r+1)×(2r+1) feedback matrix A for state coupling and control matrix B for input coupling. These matrices determine the strength and polarity of influences, ensuring space-invariant behavior through cloning. For boundary cells at the array edges, the effective neighborhood is reduced, with missing neighbors typically treated as zero input or via periodic replication to maintain computational consistency.

Variants of cell coupling include uncoupled configurations, where the A matrix has only a nonzero center element and cells operate independently without feedback from neighbors, suitable for simple feedforward filtering, contrasted with fully coupled cells that leverage both A and B matrices for autonomous pattern formation and nonlinear dynamics. Uncoupled cells simplify analysis and hardware but limit the emergent behaviors observed in coupled arrays.
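As a concrete illustration of these transient dynamics, the sketch below integrates a single uncoupled cell, dx/dt = -x + a·f(x) + b·u + z, in normalized units where the RC time constant equals 1; the parameter values and function name are illustrative assumptions.

```python
import numpy as np

def single_cell_transient(a, b, u, z, x0=0.0, dt=0.01, steps=2000):
    """Forward-Euler transient of one uncoupled CNN cell with unit RC
    time constant: dx/dt = -x + a*f(x) + b*u + z."""
    f = lambda x: 0.5 * (abs(x + 1.0) - abs(x - 1.0))
    x, traj = x0, []
    for _ in range(steps):
        x += dt * (-x + a * f(x) + b * u + z)
        traj.append(x)
    return np.array(traj)

# With self-feedback a > 1 the cell is bistable; the sign of the net
# drive b*u + z selects which saturated output (+1 or -1) it settles to.
traj = single_cell_transient(a=2.0, b=1.0, u=0.5, z=0.0)
print(traj[-1])  # converges to x = 2.5, so the output f(x) saturates at +1
```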

Templates and Cloning

In cellular neural networks (CNNs), the feedback template A and the control template B define the local interaction rules among cells within a specified neighborhood. The feedback template A, a (2r+1) \times (2r+1) matrix where r is the neighborhood radius, specifies the weights applied to the outputs of neighboring cells, shaping the self-feedback and lateral interactions that determine the network's dynamics. For r=1, which defines a 3×3 neighborhood, A consists of 9 elements, though often fewer unique parameters are needed due to symmetry assumptions, such as rotational invariance. The control template B, also a (2r+1) \times (2r+1) matrix, weights the input signals from neighboring cells in a feedforward manner, enabling the network to respond to external stimuli; in some simple configurations it is set identical to A to simplify processing. These templates collectively determine how each cell computes its output based on its own state, nearby states, and inputs, ensuring localized, parallel computation across the array.

The cloning mechanism, central to the CNN architecture, mandates that every cell in the grid shares identical templates A and B, a property known as space-invariance or the cloning template approach. This uniformity guarantees consistent local rules for all cells, preserving translational invariance and enabling predictable global behavior from local interactions. Such design facilitates efficient VLSI fabrication, as the repetitive cell structure with shared parameters reduces manufacturing complexity and supports scalable arrays for real-time applications like image processing. Without cloning, variations in templates would disrupt the network's homogeneity, complicating analysis and implementation.

Template design often involves hand-crafted specifications for specific tasks, with examples illustrating their practical utility. For edge detection, a common template sets the center of A to 2 and the four cardinal neighbors to -0.5, yielding: A = \begin{pmatrix} 0 & -0.5 & 0 \\ -0.5 & 2 & -0.5 \\ 0 & -0.5 & 0 \end{pmatrix}, while B is typically a centered delta template to emphasize local inputs; this configuration highlights boundaries by amplifying differences between a cell and its surroundings. Simpler patterns, such as spot finders, use templates that detect isolated bright regions via a positive central weight and inhibitory neighbor weights, promoting saturation to binary outputs for object detection. These examples demonstrate how templates encode task-specific operations without altering the underlying structure.

Beyond manual design, template learning techniques emerged historically to optimize parameters for complex tasks, often performed offline. Genetic algorithms, introduced for CNNs in 1993 by Kozek et al., evolve template values through population-based search to minimize error on training patterns, proving effective for nonlinear design problems. Later approaches incorporated gradient-descent methods to refine templates by backpropagating errors through the network's dynamics, though challenges like the non-differentiability of the piecewise-linear output function necessitated hybrid strategies. These learning paradigms extend the versatility of cloned templates while maintaining architectural simplicity.
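A short sketch can make the template mechanics concrete. It uses the edge-highlighting values quoted above (center 2, cardinal neighbors -0.5, a centered-delta B) together with an illustrative bias z = -0.5 on a synthetic image; the exact output map depends on the bias and image, so this demonstrates template application rather than a library-certified edge detector.

```python
import numpy as np
from scipy.signal import correlate2d

A = np.array([[0.0, -0.5, 0.0],
              [-0.5, 2.0, -0.5],
              [0.0, -0.5, 0.0]])      # feedback template from the text
B = np.zeros((3, 3)); B[1, 1] = 1.0   # centered delta control template
z = -0.5                              # illustrative bias value

f = lambda x: 0.5 * (np.abs(x + 1) - np.abs(x - 1))

def run_template(u, A, B, z, dt=0.05, steps=300):
    """Euler-integrate dx/dt = -x + A*y + B*u + z with zero-padded borders."""
    x = np.zeros_like(u)
    Bu = correlate2d(u, B, mode='same', boundary='fill') + z  # constant drive
    for _ in range(steps):
        x += dt * (-x + correlate2d(f(x), A, mode='same', boundary='fill') + Bu)
    return f(x)

u = -np.ones((16, 16)); u[4:12, 4:12] = 1.0  # bright square on dark field
y = run_template(u, A, B, z)  # cells along the square's boundary respond
                              # differently from uniform interior/exterior
```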

Theoretical Foundations

Reaction-Diffusion Equivalence

Cellular neural networks (CNNs) provide a spatially discrete framework for emulating continuous reaction-diffusion partial differential equations (PDEs), which describe phenomena such as chemical reactions and biological pattern formation. The core CNN state equation, involving feedback templates, maps directly to the general reaction-diffusion form \frac{\partial x}{\partial t} = D \nabla^2 x + f(x) + g(u), where D represents the diffusion coefficient derived from the feedback template A, \nabla^2 is the Laplacian operator approximated by neighborhood summation, f(x) captures local nonlinear reaction kinetics, and g(u) accounts for input influences. This equivalence arises from the template-based diffusion mechanism in CNNs, where the A template encodes the spatial coupling that discretizes the diffusion term; for instance, a simple Laplacian template like A = \begin{bmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{bmatrix} approximates D \nabla^2 x for small grid spacing h, with D \propto 1/h^2.

CNNs exhibit Turing patterns through diffusion-driven instability arising from activator-inhibitor dynamics, where local activation amplifies perturbations while inhibitory coupling suppresses them over larger scales, leading to stable spatial heterogeneities. Examples include dual-layer CNNs with smooth cell nonlinearities, which generate spot, stripe, and labyrinthine patterns analogous to those in continuous systems. A sketch of the proof involves spatial discretization: applying a Taylor expansion to the continuous PDE around each grid point yields the CNN difference form as h \to 0, ensuring both numerical convergence and topological equivalence (matching qualitative behaviors like bifurcations and attractors) under suitable template designs.

In applications, CNNs simulate the Belousov-Zhabotinsky (BZ) reaction by modeling its underlying Brusselator kinetics, where templates replicate oscillatory wave propagation and excitable media behaviors observed in chemical experiments. Unlike continuous PDEs, CNNs operate in discrete space, which enables precise real-time simulation on analog or digital hardware without numerical integration errors, though it may introduce phenomena like propagation failure for coarse grids.
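To illustrate, the sketch below uses the Laplacian template quoted above to take explicit Euler steps of the reaction-diffusion form on a grid. The function name and parameter values are illustrative, and the step size must satisfy the usual explicit-scheme stability bound (dt ≤ h²/4 for pure diffusion with D = 1).

```python
import numpy as np
from scipy.signal import correlate2d

# Discrete Laplacian template from the text; with grid spacing h,
# correlating with A_lap and dividing by h^2 approximates ∇²x.
A_lap = np.array([[0, 1, 0],
                  [1, -4, 1],
                  [0, 1, 0]], dtype=float)

def reaction_diffusion_step(x, dt=0.1, h=1.0, reaction=lambda x: 0.0):
    """One explicit Euler step of dx/dt = ∇²x + f(x) on the CNN grid
    (zero-flux boundaries approximated by symmetric padding)."""
    lap = correlate2d(x, A_lap, mode='same', boundary='symm')
    return x + dt * (lap / h**2 + reaction(x))

x = np.zeros((32, 32)); x[16, 16] = 1.0   # point disturbance
for _ in range(100):
    x = reaction_diffusion_step(x)        # the disturbance spreads diffusively
```

Passing a nonlinear `reaction` function (for example, Brusselator-style kinetics over two coupled layers) turns the same loop into a reaction-diffusion simulator of the kind described above.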

Computational Universality

Cellular neural networks (CNNs) exhibit computational universality, meaning they can simulate any Turing machine given sufficient resources and are therefore capable of performing arbitrary computations. This property arises from their ability to emulate discrete logic and dynamical systems through carefully designed cloning templates that govern cell interactions. Unlike traditional digital computers, CNNs achieve universality in a massively parallel, analog framework, enabling both local rule-based evolution and global pattern formation. Seminal work demonstrated this by mapping the dynamics of Turing-complete cellular automata to CNN arrays, where the equilibrium states discussed in the mathematical formulation allow stable realization of binary patterns.

CNNs can realize any Boolean function, serving as universal logic gates through templates that enforce thresholds on cell outputs. For instance, linearly separable functions, including fundamental gates like AND, OR, and NOT, are implemented using feedback (A) and control (B) templates with weights restricted to {-1, 0, 1} and a bias (Z) set to enforce the desired truth table. This approach leverages the piecewise-linear output function of CNN cells, where inputs from neighboring cells sum to determine whether the output saturates to +1 or -1, mimicking threshold logic units. Since threshold logic is computationally complete, a sufficiently large CNN array can compose these gates to form any digital circuit. Optimal templates for such realizations minimize template complexity while ensuring exact mapping in one time step.

The simulation of Turing-complete cellular automata further underscores CNN universality, particularly through mapping elementary rules like Rule 110 to CNN dynamics. Rule 110, known for its capacity to generate complex glider structures that enable universal computation, has been realized using a simple 1D CNN template: feedback template A = [0, 1, 0], control template B = [-1, 0, 1], and bias Z = 0. This template propagates binary states synchronously across the array, reproducing the rule's evolution from initial conditions, including persistent patterns and glider signals. Such simulations confirm that CNNs can emulate any discrete automaton, inheriting their computational power without loss of fidelity in the ideal case.

Turing machine emulation extends this capability, where a 2D CNN array acts as an unbounded tape, with glider-like signals propagating read/write heads and state transitions. In mappings inspired by cellular automata universality proofs, local templates encode the Turing machine's transition rules, allowing gliders—stable, traveling binary patterns—to interact and modify the "tape" states collisionally. For example, analogous to Game of Life glider guns and eaters, CNN templates generate and annihilate these signals to simulate head movement and symbol rewriting, achieving arbitrary computation in polynomial time relative to the machine's steps. This demonstrates CNNs' equivalence to universal Turing machines in expressive power.

Beyond discrete logic, CNNs leverage reaction-diffusion-like dynamics for solving NP-hard problems, such as graph coloring, through pattern formation in oscillatory arrays. Polychronous oscillatory CNNs, where cells exhibit phase-locked oscillations, map graph vertices to cells and edges to inhibitory couplings; convergence toward synchronized clusters assigns colors, minimizing conflicts via energy minimization in the network. For 4-coloring instances, simulations on random graphs up to 100 vertices achieve near-optimal solutions in sub-exponential time, exploiting analog parallelism for parallel search. This approach highlights CNNs' utility in combinatorial optimization, where transient dynamics converge to valid colorings without exhaustive enumeration.

Despite theoretical universality, practical limitations arise from analog implementation, where noise degrades precision compared to digital simulations. In analog CNN chips, thermal noise and process variations introduce perturbations that can destabilize threshold decisions or glider propagation, potentially leading to erroneous equilibria or halted computations in long-running universal simulations. Digital or hybrid realizations mitigate this by quantizing states, preserving exactness at the cost of parallelism, though proofs of universality typically assume noise-free conditions for rigor.
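The threshold-logic construction is easy to sketch. Below, a single uncoupled DT-CNN cell computes AND over two bipolar inputs in one synchronous step; the weight and bias values are standard threshold-logic choices used here for illustration, not drawn from a specific published template library.

```python
import numpy as np

f = lambda x: np.clip(x, -1.0, 1.0)  # piecewise-linear CNN output

def dtcnn_gate(u_left, u_right, B=(1.0, 0.0, 1.0), z=-1.0):
    """One synchronous DT-CNN step realizing a two-input threshold gate.
    Inputs are bipolar (+1 = true, -1 = false). With B = (1, 0, 1) and
    z = -1 this computes AND; setting z = +1 instead yields OR."""
    x = B[0] * u_left + B[2] * u_right + z
    return f(x)

for a in (-1, 1):
    for b in (-1, 1):
        print(a, b, dtcnn_gate(a, b))  # output is -1 except when both are +1
```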

Comparison to Artificial Neural Networks

Cellular neural networks (CNNs) share several foundational principles with artificial neural networks (ANNs), including local connectivity among processing units, nonlinear activation functions, and inherent parallelism for distributed computation. In both architectures, individual elements—cells in CNNs and neurons in ANNs—interact primarily with nearby counterparts to process information, enabling efficient handling of spatially structured data such as images. This locality mimics biological neural systems and supports scalable, massively parallel operations, where each unit contributes to global pattern formation or recognition through collective dynamics.

Despite these overlaps, CNNs diverge significantly from ANNs in structure and operation, emphasizing fixed topologies and predefined templates over trainable parameters. CNNs employ a uniform grid where each cell's feedback (A-template) and control (B-template) weights are identical across the array via cloning, contrasting with ANNs' layer-wise, heterogeneous weights optimized through learning algorithms like backpropagation. Furthermore, CNNs operate in continuous time via differential equations, allowing real-time evolution of cell states, whereas ANNs typically propagate activations discretely across sequential layers. These choices in CNNs prioritize hardware realizability and speed over adaptability, making them less flexible for general-purpose tasks but more suited to deterministic real-time processing.

Hybrid models integrating CNNs and ANNs have emerged to leverage the strengths of both, particularly for applications requiring low-latency, energy-efficient processing. For instance, fusions combine CNNs' analog parallelism for initial feature extraction with ANNs' learning capabilities for higher-level classification, enabling compact systems for vision on resource-constrained devices. One such approach pairs extreme learning machines (a fast ANN variant) with CNNs to enhance image recognition speed and accuracy while reducing computational overhead. These hybrids address CNNs' lack of adaptability by incorporating ANN training, facilitating deployment in embedded and robotic systems.

CNNs' analog-oriented design yields superior energy efficiency for specific tasks compared to ANNs, consuming orders of magnitude less power in analog implementations due to continuous-time operation without clocking or digitization overheads. Analog CNN chips, such as those based on CMOS VLSI, achieve sub-milliwatt operation for image processing, ideal for low-power vision systems, while ANNs scale better for large-scale software deployment but demand higher energy for training and inference on GPUs. This efficiency stems from CNNs' avoidance of analog-to-digital conversions and exploitation of subthreshold transistor operation for synaptic emulation.

Historically, CNNs diverged from ANNs to target hardware-centric real-time applications, introduced by Chua and Yang in 1988 as a bridge between the discrete, rule-based rigidity of cellular automata and the global interconnectivity of traditional neural models. While ANNs evolved from software simulations focused on learning (e.g., via perceptrons and backpropagation in the 1980s), CNNs were motivated by the need for VLSI-compatible analog circuits for focal-plane image processing, avoiding the wiring complexity and training delays of ANNs. This shift positioned CNNs for dedicated vision hardware, contrasting with ANNs' broader adoption in software-driven machine learning paradigms.

Connection to Cellular Automata

Both cellular neural networks (CNNs) and cellular automata (CAs) operate on a lattice of cells arranged in a regular grid topology, where each cell's state evolves based on local interaction rules involving only its nearest neighbors within a defined radius, often leading to complex emergent behaviors from homogeneous local dynamics. This shared architecture enables CNNs to emulate CA-like computations while extending their scope to continuous-time and real-valued dynamics.

CNNs serve as a continuous generalization of discrete CAs, incorporating input signals and nonlinear feedback mechanisms that allow for analog processing beyond binary states. In the discrete-time variant (DT-CNN), introduced as a natural extension of continuous-time CNNs for digital implementations, the update rules can directly map to those of binary CAs by selecting appropriate cloning templates that enforce threshold-based state transitions, effectively replicating discrete evolution in a synchronous manner. For instance, every binary CA, regardless of dimensionality, emerges as a special case of a CNN with matching neighborhood size and thresholded outputs.

A prominent example of this connection is the simulation of Conway's Game of Life, a classic binary CA exhibiting self-organizing patterns, achieved through carefully designed CNN templates that enforce the survival and birth rules via local nonlinear interactions. Such templates enable CNN processors to realize the Game of Life with minimal hardware complexity, highlighting the paradigm's versatility in bridging rule-based systems and continuous dynamics. Theoretically, both frameworks demonstrate computational universality, capable of emulating any Turing machine given sufficient resources, yet CNNs distinguish themselves by supporting real-valued cell states that facilitate the processing of analog signals and continuous phenomena, unlike the strictly discrete, finite-state nature of traditional CAs.
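The mapping is easiest to see in code. The sketch below writes one synchronous Game of Life update as a local 3×3 template rule, in plain CA form, with a neighbor-count template and comparisons standing in for the thresholded CNN templates the text describes:

```python
import numpy as np
from scipy.signal import correlate2d

# 3x3 neighbor-count template (excludes the center cell)
T = np.array([[1, 1, 1],
              [1, 0, 1],
              [1, 1, 1]])

def life_step(grid):
    """One synchronous Game of Life update as a local template rule:
    a live cell survives on 2-3 neighbors; a dead cell is born on 3."""
    n = correlate2d(grid, T, mode='same', boundary='wrap')  # periodic edges
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)

glider = np.zeros((16, 16), dtype=int)
glider[1, 2] = glider[2, 3] = glider[3, 1] = glider[3, 2] = glider[3, 3] = 1
for _ in range(4):
    glider = life_step(glider)  # after 4 steps the glider has translated
```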

Implementations

Analog Hardware Realizations

The early analog hardware realizations of cellular neural networks (CNNs) were pioneered through semiconductor-based chips that enabled real-time image processing. The first algorithmically programmable analog CNN processor, the CNN Universal Chip (CNN-UC), was developed in the mid-1990s, featuring a 32×32 array of cells with 10-bit digital-to-analog converters (DACs) for programming cloning templates, capable of processing up to 1000 frames per second. This chip marked a significant milestone by integrating analog computation on a single VLSI device, allowing for distributed sensing and dynamic processing of images without the need for off-chip data transfer.

Subsequent advancements by AnaFocus and its predecessor AnaLogic focused on focal-plane processors that combined photosensors with CNN arrays for vision tasks. Building on this, the ACE16k chip released in 2005 featured a 128×128 cell array and achieved processing speeds up to 50,000 frames per second, enabling ultra-high-speed applications like motion detection and spatiotemporal wave propagation. These processors supported programmable templates via on-chip DACs, with template values applied through local feedback connections to realize various nonlinear dynamics.

At the circuit level, analog CNN realizations typically employed switched-capacitor techniques for discrete-time approximations of cell dynamics or continuous-time designs using transconductance amplifiers to model the feedback and input terms. Switched-capacitor circuits discretize the CNN equations by sampling and holding voltages with capacitors switched by clock signals, providing precise control over time constants while minimizing component count in CMOS processes. Transconductance amplifiers, often implemented as operational transconductance amplifiers (OTAs), convert input voltages to currents for linear synaptic weights and nonlinear functions, enabling compact local interconnections within each cell. Digitally programmable variants of these amplifiers allowed coefficients to be set via digital registers, enhancing flexibility without sacrificing analog speed.

These analog implementations offered key advantages, including sub-microsecond response times per cell iteration due to continuous-time operation and low power consumption in the milliwatt range, making them suitable for battery-powered or embedded vision systems. For instance, the ACE16k dissipated approximately 363 mW at a 3.3 V supply while handling high-frame-rate tasks, outperforming equivalent digital processors in energy efficiency for parallel analog computations. However, they were constrained by precision limits imposed by thermal noise and amplifier linearity, typically achieving 6-8 effective bits, and by sensitivity to fabrication variations such as transistor mismatch, which could alter cell uniformity and template accuracy across the array. These limitations necessitated calibration techniques during manufacturing to ensure consistent performance.

Digital and Reconfigurable Implementations

Digital implementations of cellular neural networks (CNNs) leverage field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs) to achieve flexibility and scalability, contrasting with the fixed architectures of analog realizations. These digital approaches discretize the CNN dynamics, often implementing discrete-time CNN (DT-CNN) models where cell states evolve iteratively based on neighborhood interactions, enabling programmable templates for diverse applications such as image processing. Early efforts focused on mapping CNN arrays onto FPGA fabrics, utilizing lookup tables (LUTs) and embedded memory blocks to simulate local interconnections and nonlinear activations.

A notable example is the 2006 FPGA implementation on Virtex-II devices, which supported a scalable 64×64 cell array for DT-CNN operations, achieving processing speeds up to 120 MHz for Gabor-type filtering tasks and demonstrating capability for moderate-sized arrays. This design employed serialized broadcast mechanisms to handle neighborhood feedback efficiently within FPGA resources, balancing area utilization and throughput. Subsequent advancements extended to larger arrays, with Virtex-series FPGAs hosting emulated-digital CNN universal machines (CNN-UMs) that approximate continuous-time behaviors through numerical methods.

Dedicated digital CNN processors emerged in the 2000s, exemplified by the Falcon architecture, an emulated-digital CNN-UM that approximates continuous-time CNN (CT-CNN) dynamics using forward-Euler integration for time-stepped simulations. Implemented on FPGAs or ASICs, Falcon supports variable precision up to 32 bits and array sizes up to 64×64, with templates loaded dynamically to adapt to tasks like filtering or feature extraction. This processor integrates global analogic programming units (GAPUs) for arithmetic operations, enhancing computational efficiency for solving partial differential equations via CNN templates.

Reconfigurability is a key advantage of FPGA-based digital CNNs, achieved through runtime loading of cloning templates into LUTs, allowing seamless switching between feedback and control parameters without hardware redesign. For instance, partial reconfiguration techniques enable on-the-fly updates to neighborhood weights, supporting adaptive applications in embedded vision systems. This flexibility facilitates prototyping and deployment across varying grid sizes and precisions, from 8-bit fixed-point for speed to 16-bit for accuracy.

Performance in digital implementations trades the analog paradigm's sub-microsecond speeds for enhanced precision and robustness to noise, typically operating in the kHz to MHz range depending on array scale and bit width. A 16-bit emulated-digital CNN on mid-range FPGAs might process a 32×32 array at 50-100 kHz per iteration, sufficient for non-real-time simulations but scalable with parallelism. In contrast, larger DT-CNN arrays on high-end UltraScale devices reach 200-300 MHz, enabling real-time processing of 64×64 inputs in video tasks. Quantitative benchmarks highlight this: a Virtex-7 implementation achieved 100 frames per second for real-time image processing, underscoring digital CNNs' role in bridging simulation accuracy and throughput.

A 2025 survey of neuromorphic architectures on FPGAs reviews over 129 designs since 1998 (more than 50 since 2000), emphasizing digital reconfigurability for neuromorphic acceleration, with Xilinx devices used in 86% of reported works. These efforts have evolved from basic DT-CNN mappers to systems integrating CNNs with other neural paradigms, affirming FPGAs' enduring utility for prototyping scalable, template-driven architectures.
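As a rough illustration of how an emulated-digital datapath computes one DT-CNN iteration, the sketch below uses Q8.8 fixed-point arithmetic with multiply-accumulate over the 3×3 neighborhood, followed by rescaling and saturation. The word lengths, scaling, saturation range, and function names are assumptions for illustration, not the actual Falcon datapath.

```python
import numpy as np

SCALE = 1 << 8  # Q8.8 fixed point: 8 fractional bits

def to_fixed(v):
    """Quantize real-valued templates/states to Q8.8 integers."""
    return np.round(np.asarray(v) * SCALE).astype(np.int32)

def fixed_dtcnn_step(x_fx, u_fx, A_fx, B_fx, z_fx):
    """One DT-CNN iteration as a fixed-point datapath might compute it:
    MAC over the 3x3 neighborhood, rescale, then saturate."""
    M, N = x_fx.shape
    y_fx = np.clip(x_fx, -SCALE, SCALE)          # piecewise-linear output
    acc = np.zeros((M, N), dtype=np.int64)       # wide accumulator
    yp = np.pad(y_fx, 1); up = np.pad(u_fx, 1)   # zero boundary cells
    for k in range(3):
        for l in range(3):
            acc += A_fx[k, l] * yp[k:k + M, l:l + N].astype(np.int64)
            acc += B_fx[k, l] * up[k:k + M, l:l + N].astype(np.int64)
    acc = (acc >> 8) + z_fx                      # drop extra fractional bits
    # assumed bounded state range of [-4, 4] to emulate limited word length
    return np.clip(acc, -4 * SCALE, 4 * SCALE).astype(np.int32)
```

Narrowing SCALE models lower-precision datapaths, making the precision/throughput trade-off discussed above directly observable in simulation.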

Emerging and Advanced Technologies

In the realm of emerging hardware paradigms, memristor-based cellular neural networks (CNNs) have gained prominence for their potential in energy-efficient analog computing. Introduced in 2016, the memristive multilayer CNN (Mm-CNN) model employs memristors as synaptic elements to realize nonlinear dynamics and weight storage within the same device, facilitating in-memory computation that minimizes data movement and power dissipation. Memristors, particularly those fabricated using resistive random-access memory (RRAM) arrays, exhibit tunable conductance states that mimic synaptic plasticity, enabling compact implementations with densities exceeding traditional CMOS-based designs. This approach has demonstrated superior performance in tasks requiring local interactions, such as feature extraction in images, where the analog nature of memristors allows for subthreshold operation at nanojoule energy levels per operation.

Quantum extensions of CNNs represent a further frontier, leveraging quantum mechanics for multidimensional processing. A three-dimensional quantum cellular neural network, proposed in 2017, models cells as quantum bits (qubits) interconnected via quantum gates, allowing superposition and entanglement to perform parallel evaluations across 3D lattices. This architecture extends classical CNN locality to quantum realms, where templates are encoded in unitary operators, enabling efficient handling of volumetric data like medical imaging. Simulations indicate that such networks can achieve exponential speedup in feature extraction compared to classical counterparts, particularly for noise-resistant pattern recognition, owing to quantum parallelism across the lattice. Subsequent explorations have refined these models for practical quantum hardware, though scalability remains challenged by decoherence.

Neuromorphic integrations combining field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs) with spiking neural networks (SNNs) have advanced event-driven realizations of CNNs, particularly from 2024 onward. These hybrid systems adapt CNN templates to spike-based communication, where events trigger asynchronous updates only in active neighborhoods, drastically reducing idle power in dynamic environments. A 2025 survey highlights reconfigurable FPGA implementations that emulate SNN-CNN hybrids, supporting on-chip learning rules like spike-timing-dependent plasticity to approximate continuous-time CNN evolution. ASIC components handle high-density arrays, achieving throughputs of millions of operations per second at microwatt power scales, ideal for edge devices. This fusion bridges the gap between CNNs' analog roots and SNNs' bio-inspired sparsity, with prototypes demonstrating 10-100x energy savings over digital architectures for real-time sensing.

Optical CNNs utilizing vertical-cavity surface-emitting laser (VCSEL) arrays have emerged in the 2020s as a pathway to ultrafast photonic computing, free from electronic bottlenecks. VCSEL arrays serve as photonic neurons, with emissions encoding cell states and optical interconnects realizing neighborhood templates via diffractive or free-space optics. A 2020 demonstration showcased VCSEL-based neuromorphic systems operating at GHz rates, where injection locking or gain saturation implements the nonlinear activation, enabling all-optical CNN iterations at sub-nanosecond latencies. These setups exploit the parallelism of light for massive array sizes, with energy efficiencies approaching femtojoules per synaptic event, surpassing electrical analogs in bandwidth-density products. Applications in high-speed inference benefit from VCSELs' dense integrability, allowing compact, reconfigurable processors.

Hybrid GPU accelerations have facilitated large-scale simulations of advanced CNN variants. A foundational 2021 model established p-adic CNNs, generalized over p-adic number fields to model hierarchical structures, as computationally universal.

Applications

Image and Signal Processing

Cellular neural networks (CNNs) have been extensively applied in image and signal processing due to their inherent parallelism and local connectivity, enabling efficient handling of spatially structured data such as visual inputs and temporal signals. Introduced in the late 1980s, CNNs facilitate operations like feature extraction and filtering through predefined cloning templates that govern cell interactions, allowing simultaneous processing across an array of cells. These capabilities make CNNs particularly suitable for tasks requiring real-time performance, where traditional digital processors may fall short in speed or power efficiency.

In image processing, CNNs excel at edge and hole detection by employing specific templates that highlight boundaries or fill internal voids in binary or grayscale images. For instance, edge-detection templates accentuate discontinuities in pixel intensities, producing thin lines that delineate object outlines without excessive blurring, as demonstrated in early applications where such templates were used to process synthetic and real images. Hole detection and filling are achieved through iterative dynamics that propagate labels across connected regions, effectively closing gaps within objects while preserving their shape; this is often combined with connected-component labeling templates, which assign unique identifiers to spatially contiguous groups, aiding in object counting and segmentation. Noise removal is another key function, where inhibitory templates suppress isolated spurious pixels or salt-and-pepper artifacts, smoothing images while retaining structural details—templates for this purpose typically feature negative self-coupling to dampen random fluctuations. These template-based methods, referencing standard designs like those for edge detection, underscore CNNs' role in foundational tasks.

CNNs enable real-time video processing on analog hardware, achieving speeds up to 50,000 frames per second (fps) on specialized chips for applications like motion detection. In such systems, consecutive frames are differenced via subtractor templates, followed by thresholding to isolate moving regions, allowing robust detection of dynamic events in video streams without digital conversion overhead. This high throughput, demonstrated on early analog VLSI implementations, supports applications in surveillance and robotics where latency must be minimized.

For signal processing, one-dimensional (1D) CNNs extend these principles to temporal or sequential data, performing tasks such as finite impulse response (FIR) filtering and wavelet-like transforms through linear or nonlinear templates. In 1D configurations, cells align along the signal dimension, enabling parallel convolution-like operations; for example, wavelet decomposition templates approximate multi-resolution analysis by cascading high- and low-pass filters, facilitating compression and feature extraction in audio or biomedical signals. These 1D variants maintain the parallel efficiency of CNNs, processing long signals in constant time relative to length.

Texture segmentation in CNNs leverages dynamics akin to Gabor filters, where oscillatory templates tuned to specific frequencies and orientations extract directional features from images. This approach segments regions based on local texture statistics, such as periodicity or granularity, by evolving cell states to emphasize boundaries between dissimilar patterns; implementations on reconfigurable hardware have shown effective discrimination in complex scenes like natural textures.

The adoption of CNNs in the 1990s marked a paradigm shift in visual computing, transitioning from sequential digital algorithms to massively parallel analog arrays that mimicked biological vision processes. This era saw the development of universal CNN chips, enabling a broad class of nonlinear image operations at unprecedented speeds and inspiring interdisciplinary research in spatiotemporal processing.
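The frame-differencing scheme described above is simple enough to sketch directly. In the code below, the gain and bias values are illustrative choices, not from a published template; the per-cell difference plays the role of a subtractor template and the saturation function provides the threshold.

```python
import numpy as np

f = lambda x: np.clip(x, -1.0, 1.0)  # piecewise-linear CNN output

def motion_mask(frame_prev, frame_curr, z=-0.3, gain=4.0):
    """Motion detection by frame differencing plus thresholding: only
    cells with a large temporal change saturate toward +1."""
    diff = np.abs(frame_curr - frame_prev)   # per-cell frame difference
    return f(gain * diff + z) > 0            # thresholded motion map

prev = np.zeros((8, 8)); curr = np.zeros((8, 8)); curr[3:5, 3:5] = 1.0
print(motion_mask(prev, curr).astype(int))   # flags only the changed block
```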

Biomedical and Biological Modeling

Cellular neural networks (CNNs) have been employed to model neural tissues by simulating the spatiotemporal dynamics of biological systems such as the retina and cerebral cortex, drawing on their local connectivity that mirrors neural architectures. In the 1990s, bio-inspired CNN models were developed to replicate retinal processing, where arrays of cells emulate photoreceptors, bipolar cells, and ganglion cells to perform tasks like contrast enhancement and motion detection. These models leverage CNN templates to approximate the parallel, analog computation in retinal layers, enabling simulations of visual preprocessing with high fidelity to biological responses. Similarly, cortex-like CNN architectures have been proposed to capture short-range excitatory and long-range inhibitory connections in neural tissue, facilitating the study of wave propagation and synchronization in cortical networks.

In medical imaging, CNNs analyze spatiotemporal patterns for applications like tumor detection and ECG analysis. For tumor identification, CNN-based segmentation techniques process MRI scans by applying diffusion templates to delineate abnormal regions, achieving accurate boundary extraction in noisy, low-contrast images through optimized coupling mechanisms. Enhanced CNN algorithms further improve detection by incorporating swarm optimization for template design, enabling real-time identification of intracranial tumors with reduced false positives. In ECG analysis, CNNs model cardiac wavefronts to detect arrhythmias, using reaction-diffusion principles to identify abnormal patterns in electrocardiograms, which supports distributed computational processing for implantable devices.

CNNs also simulate reaction-diffusion processes central to biological pattern formation, particularly in morphogenesis. By mapping Turing's reaction-diffusion equations onto CNN arrays, these networks generate stable spatial patterns like spots and stripes, akin to those observed in animal coat markings or embryonic development. Seminal implementations demonstrate how CNN dynamics produce Turing instabilities, providing a computational framework for studying self-organization in biological systems without explicit global coordination; this builds on the mapping of reaction-diffusion PDEs to CNNs discussed above, emphasizing local interactions for emergent complexity.

Recent advancements in the 2020s extend CNNs to model protein dynamics and epidemic spreading. In protein studies, p-adic CNN variants incorporate hierarchical structures to simulate folding pathways, capturing non-local interactions in biomolecular configurations. For epidemics, chaotic CNN models analyze spatial-temporal spread, as seen in simulations of epidemic propagation using self-organizing templates to predict fractal-like outbreak patterns. Additionally, CNN arrays underpin prosthetic vision chips, where subretinal implants process images in real time to restore phosphene-based sight, integrating analog computation for low-power, biocompatible retinal prostheses.
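A standard way to see the Turing-pattern mechanism described above is a two-species activator-inhibitor simulation on the CNN grid. The sketch below steps the well-known Gray-Scott reaction-diffusion system using the same Laplacian-template coupling a dual-layer CNN would implement; the parameter values are common spot-forming choices from the reaction-diffusion literature, and the code is illustrative rather than a published CNN template set.

```python
import numpy as np
from scipy.signal import correlate2d

lap = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)

def gray_scott_step(U, V, Du=0.16, Dv=0.08, F=0.035, k=0.065, dt=1.0):
    """One step of the Gray-Scott activator-inhibitor system, the kind
    of two-layer reaction-diffusion dynamics a dual-layer CNN realizes."""
    Lu = correlate2d(U, lap, mode='same', boundary='wrap')
    Lv = correlate2d(V, lap, mode='same', boundary='wrap')
    uvv = U * V * V
    U = U + dt * (Du * Lu - uvv + F * (1 - U))
    V = V + dt * (Dv * Lv + uvv - (F + k) * V)
    return U, V

U = np.ones((64, 64)); V = np.zeros((64, 64))
U[28:36, 28:36] = 0.5; V[28:36, 28:36] = 0.25   # seed perturbation
for _ in range(5000):
    U, V = gray_scott_step(U, V)                 # spot patterns emerge in V
```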

Engineering Systems and Control

Cellular neural networks (CNNs) have been applied in engineering systems and control to address dynamic challenges in robotics, communications, and process industries, leveraging their capabilities for real-time decision-making and stabilization. In robotic control, CNNs facilitate path planning by modeling environments as grids where cells evolve to generate obstacle-avoiding trajectories, enabling efficient navigation for mobile robots. For instance, a CNN-based method processes visual inputs to compute collision-free paths in cluttered spaces, updating plans dynamically as the robot moves. Sensor fusion for actuators integrates multi-modal data, such as from cameras and proximity sensors, into a unified control signal; state-controlled CNNs (SC-CNNs) have been used to fuse sensor data for precise actuation in distributed robotic structures, like space manipulators, ensuring coordinated motion under uncertainty.

In communication systems, CNNs support channel equalization and error correction through parallel decoding, particularly in the 1990s when their analog VLSI implementations enabled high-speed processing for noisy channels. A notable application involves using CNNs for maximum likelihood decoding of partial response signals, where the network's local interactions approximate Viterbi algorithms to mitigate intersymbol interference, achieving low bit error rates in bandwidth-limited links. This parallel architecture allows simultaneous evaluation of multiple decoding paths, outperforming sequential methods in scenarios like high-speed modems. For error correction, CNNs implement iterative decoding for convolutional codes by propagating correction signals across cells, enhancing reliability in fading channels without excessive computational overhead.

Process control benefits from CNNs in modeling transport phenomena within chemical reactors, where reaction-diffusion equations are discretized into cellular templates for simulating mass and heat transfer. These models predict concentration gradients and temperature profiles, aiding in the design of control strategies for catalytic reactors; for example, CNN simulations approximate neutron transport in nuclear reactors—analogous to chemical processes—enabling predictive control of reaction rates and safety margins. In the 2000s, multi-layer CNNs extended this to hierarchical control, stacking layers to handle multi-scale dynamics: lower layers process local interactions, while upper layers optimize global reactor stability, as demonstrated in simulations of distributed chemical processes.

CNNs also serve as controllers for stabilizing chaotic systems, exploiting their inherent ability to generate and suppress chaos. In examples from the late 1990s and early 2000s, CNN templates are tuned to control chaotic oscillators, such as three-cell networks exhibiting Chua's-circuit-like behavior, driving them to fixed points or periodic orbits via parameter adjustment. This approach has been realized in hardware, where analog CNN chips apply localized feedback to dampen bifurcations, achieving global asymptotic stability with minimal intervention and low power consumption. Such controllers demonstrate CNN universality in taming nonlinear instabilities across engineering domains.

Integration with Modern AI

Cellular neural networks (CNNs) have been integrated with spiking neural networks (SNNs) and other modern architectures to create hybrids that leverage the continuous-time dynamics of CNNs for enhanced temporal processing. A notable example is the Deep Cellular Recurrent Network (DCRN), introduced in 2021, which combines CNN-style local cells with recurrent layers to model spatiotemporal dependencies in time-series data, achieving competitive accuracy on sequence benchmarks while reducing computational overhead compared to traditional recurrent neural networks. This hybrid approach addresses limitations in conventional deep networks by incorporating local interactions and analog-like computation, enabling robust prediction in dynamic environments such as sensor data streams.

In recent advancements, CNN templates have been adapted for neuromorphic chips to accelerate AI inference, particularly in energy-constrained settings. By 2024-2025, implementations on memristor-based neuromorphic hardware have demonstrated efficient execution of CNN cloning templates, offering significant energy savings over digital architectures for real-time tasks. These integrations exploit the inherent parallelism of cellular arrays, mapping feedback and control connections directly onto chip arrays for low-latency inference in edge devices.

Applications of these integrations span generative AI and security domains. Continuous-time CNNs (CT-CNNs) have been fused with diffusion models to enhance image generation, where CNN dynamics stabilize the denoising process, improving sample quality and convergence speed, as shown in a 2024 study that outperforms baseline models on FID scores for complex scene synthesis. In IoT contexts, hybrid CNNs enable anomaly detection by processing sensor streams through local neighborhood computations, detecting deviations in network traffic with over 95% accuracy and low false positives in industrial settings.

Quantum extensions of CNNs further advance AI capabilities in quantum machine learning. Three-dimensional quantum CNNs, proposed in 2017, utilize quantum cellular automata principles to process volumetric data, enabling feature extraction in high-dimensional spaces for tasks like medical image analysis. Looking ahead, memristive CNNs promise energy-efficient AI paradigms, particularly for edge computing. These devices emulate synaptic weights through resistance states, bridging CNNs to scalable neuromorphic systems that reduce power consumption to microwatts per operation, facilitating deployment in battery-limited and autonomous systems. Ongoing research emphasizes their role in sustainable AI, with prototypes demonstrating viability for real-world bridging between classical and hardware-accelerated inference.

  42. [42]
    Edge detection of noisy images based on cellular neural networks
    This paper studies a technique employing both cellular neural networks (CNNs) and linear matrix inequality (LMI) for edge detection of noisy images.
  43. [43]
    [PDF] Edge Detection and Noise Removal in Cellular Neural Networks ...
    In this study, we propose Switching Two-Type Templates CNN. In our proposed method, Edge Detector template and Proposed template with reference to Small Object ...
  44. [44]
    Cellular Neural Networks and their Applications (CNNA 2000) - DTIC
    May 23, 2000 · This is an interdisciplinary conference. Topics include basic theory of cellular nonlinear spatiotemporal phenomena, physical implementations ( ...
  45. [45]
    Filtering and spectral processing of 1-D signals using cellular neural ...
    By using appropriate templates and shifting the input signal the CNN array is capable of performing FIR filtering, discrete Fourier transform, and wavelet ...Missing: 1D | Show results with:1D
  46. [46]
    Introduction (Chapter 1) - Cellular Neural Networks and Visual ...
    The cheap laser and fiber optics, which resulted in cheap bandwidth at the end of the 1980s, led to the Internet industry of the 1990s. The third wave, the ...
  47. [47]
    CNN-based retinal model uncovers a new form of edge ...
    By modeling retina activity using cellular neural network (CNN) one can begin to think about retinal interactions in space/time, and consider the activity ...Missing: simulation | Show results with:simulation
  48. [48]
  49. [49]
    A cortex-like architecture of a cellular neural network - IEEE Xplore
    A CORTEX-LIKE ARCHITECTURE OF A CELLULAR NEURAL NETWORK ... Short-range connections in the model determine the. "stiffness" of the neural tissue. ... Brain-State-in ...
  50. [50]
    Implementation of an improved cellular neural network algorithm for ...
    Implementation of an improved cellular neural network algorithm for brain tumor detection · 32 Citations · 6 References.
  51. [51]
    Hardware‐Mappable Cellular Neural Networks for Distributed ...
    May 12, 2022 · Herein, a closed-loop solution is proposed, where a cellular neural network is used to detect abnormal wavefronts and wavebrakes in cardiac ...
  52. [52]
  53. [53]
    Turing patterns via pinning control in the simplest memristive cellular ...
    ... biology ... Turing instability and give rise to pattern formation. ... Cellular neural network, Reaction-diffusion system, Mathematical modeling, Morphogenesis.
  54. [54]
    (PDF) p-adic Cellular Neural Networks - ResearchGate
    PDF | In this article we introduce the p -adic cellular neural networks which are mathematical generalizations of the classical cellular neural networks.Missing: acceleration | Show results with:acceleration
  55. [55]
    Spatial and Temporal Spread of the Coronavirus Pandemic using ...
    The case study is a chaotic cellular neural network (CNN), for which the main goal is generating fractional orders of the neurons whose Kaplan-Yorke dimension ...
  56. [56]
    The cellular neural network as a retinal camera for visual prosthesis
    The design of the chip is based on cellular neural network technology, a massively parallel analog processor of enormous power. The patterns are based upon ...
  57. [57]
    Design of an Integrated Subretinal Implant Using Cellular Neural ...
    Cellular Neural Network. Conference Paper. Design of an Integrated Subretinal Implant Using Cellular Neural Networks for Binary Image Generation in a 130 nm ...
  58. [58]
  59. [59]
    Optimal Robot Path Planning with Cellular Neural Network
    Aug 9, 2025 · Cellular neural networks have been employed for optimal robot path planning [31] and real-time thermal modeling in robotic trajectory ...
  60. [60]
    SC-CNNs for sensors data fusion and control in space distributed ...
    This system, here named Analog Cellular Networks (ACNs), takes its inspiration from the State-Controlled. Cellular Neural Networks (SC-CNN) paradigm [ 1,3,4].Missing: actuators | Show results with:actuators
  61. [61]
    Maximum likelihood decoding of the partial response signal with ...
    The partial response maximum likelihood (PRML) decoder is designed with the analog parallel processing circuits of the CNN.
  62. [62]
    Cellular neural network approach to a class of communication ...
    In this paper we discuss the design of a cellular neural network (CNN) to solve a class of optimization problems of importance for communication networks.
  63. [63]
    Application of cellular neural network (CNN) method to the nuclear ...
    Aug 6, 2025 · Stable Diffusion with Memristive Cellular Neural Networks. Conference ... Simulation of nuclear reactor core kinetics using multilayer 3-D ...
  64. [64]
    The learning problem of multi-layer neural networks - ScienceDirect
    Diamond in multi-layer cellular neural networks (submitted for... Y. Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning.
  65. [65]
    [PDF] Hierarchical Neural Networks for Image Interpretation
    Jun 13, 2003 · Cellular Neural Networks. While continuous neural fields facilitate ... in combination with a multi-layer perceptron. The proposed ...
  66. [66]
    Control of a real chaotic cellular neural network - IEEE Xplore
    In this paper we study the possibilities of suppressing chaotic behaviour of the three- cell Cellular Neural Network. We present the laboratory environment ...
  67. [67]
    Discrete-Continuous Control for Chaotic Cellular Neural Networks
    ... Cellular Neural Network (CNN) model with chaotic behaviour is studied. New approach for controlling the chaotic CNN is proposed. An algorithm for the ...
  68. [68]
    [PDF] Deep Cellular Recurrent Network for Efficient Analysis of Time ...
    Lim, "Learning robust features using deep learning for automatic seizure detection," pp. ... Yang, "Cellular neural networks: theory," IEEE. Transactions on ...
  69. [69]
    An improved memristive model driven cellular neural networks for ...
    Feb 19, 2025 · Furthermore, the memristor-based cellular neural network outperforms existing algorithms in image processing, as evidenced by improved peak ...
  70. [70]
    [PDF] Stable Diffusion with Continuous-time Neural Networks - arXiv
    Oct 16, 2024 · As a latent diffusion model, Stable Diffusion belongs to the category of deep generative artificial neural networks. ... and memristive cellular ...
  71. [71]
    Industrial Internet of Things Anti‐Intrusion Detection System by ...
    Jun 6, 2022 · The network successfully uses the ReLU activation function as the activation layer of the cellular neural network, which alleviates the ...
  72. [72]
    Fully memristive spiking-neuron learning framework and its ...
    Aug 25, 2020 · A fully memristive cellular neural network is designed for edge detection based on the spiking-neuron. As the diffusion memristor model in ...