
Neuroevolution of augmenting topologies

Neuroevolution of Augmenting Topologies (NEAT) is a genetic algorithm designed to evolve both the weights and topologies of artificial neural networks, beginning with minimal structures and incrementally adding complexity through mutations that introduce new nodes and connections, thereby efficiently exploring the search space while protecting structural innovations via speciation. Developed by Kenneth O. Stanley and Risto Miikkulainen in 2002, NEAT represents a significant advancement in neuroevolution by addressing longstanding issues in evolving network architectures, such as the competing conventions problem—where misalignment of corresponding genes hinders crossover—through the use of historical markings that assign unique innovation numbers to genes based on their evolutionary origin, enabling precise alignment during genetic operations.

This method employs a direct encoding scheme, representing networks as genomes of node and connection genes, and leverages speciation to divide the population into subpopulations of topologically similar individuals, calculated via a compatibility distance metric that considers excess genes, disjoint genes, and average weight differences, thus shielding novel topologies from premature extinction in competitive fitness evaluations. Unlike traditional fixed-topology neuroevolution approaches, which require predefined structures and often struggle with tasks requiring structural innovation, NEAT's complexification principle starts from a minimal network (e.g., a single link between input and output) and evolves recurrent or more elaborate forms only as needed, resulting in parsimonious solutions that generalize better; for instance, it solved the XOR problem in an average of 32 generations with 4,755 evaluations and the double pole-balancing task without velocity information in 33,184 evaluations, outperforming methods like Cellular Encoding by 25 times and Enforced Subpopulations by 5 times in efficiency.

NEAT's impact extends beyond its original benchmarks, influencing applications in robotics, reinforcement learning, game AI (e.g., evolving agent strategies in the video game NERO), and control tasks like pole balancing, where it demonstrates strong performance in high-dimensional, non-stationary environments compared to gradient-based methods. Over the subsequent two decades, NEAT has spawned numerous extensions, including HyperNEAT for indirect encodings via compositional pattern-producing networks to handle larger-scale patterns, DeepNEAT for evolving deep architectures, and hybrid variants integrating novelty search or deep learning, with over 60 documented successors that enhance scalability, diversity, and adaptability in domains such as financial trading and behavioral evolution.

Background

Neuroevolution Fundamentals

Neuroevolution is the application of evolutionary algorithms to the design and optimization of artificial neural networks (ANNs), where network characteristics such as weights, topologies, or both are encoded in artificial genomes and iteratively evolved based on performance evaluations derived from fitness functions. This method automates the search for effective neural architectures and parameters, providing a gradient-free alternative to traditional training techniques like backpropagation. By leveraging principles of selection, mutation, and crossover, neuroevolution facilitates the discovery of solutions for complex optimization problems, particularly those with non-differentiable objectives, such as reinforcement learning tasks where rewards are sparse or delayed. In contrast to conventional neural network training, which often assumes a predefined topology and relies on differentiable loss functions for weight updates, neuroevolution co-evolves structures and parameters without such constraints, enabling adaptation to diverse criteria beyond error minimization.

Neuroevolution approaches are classified into fixed-topology methods, which optimize weights for a predetermined network structure using direct encodings like real-valued strings representing synaptic strengths, and variable-topology methods, which also evolve the number of neurons and connections to allow structural innovation. Fixed-topology neuroevolution simplifies the search space by focusing solely on parameter tuning, making it suitable for initial explorations of evolutionary optimization in neural networks.

The origins of neuroevolution date to the late 1980s and early 1990s, when researchers began applying genetic algorithms to neural network design and training as an alternative to backpropagation. A foundational example is the work of Miller, Todd, and Hegde (1989), who used genetic algorithms to design network connectivity for tasks such as XOR, encoding architectures in binary genomes and selecting individuals based on learning performance. Similarly, Montana and Davis (1989) applied genetic algorithms to a fixed-topology network for target classification, showing faster convergence than backpropagation by evolving real-valued weight strings through population-based selection and mutation. These early efforts established neuroevolution's viability for network training problems, laying the groundwork for more advanced applications.
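
To make the fixed-topology case concrete, the following minimal sketch evolves only a real-valued weight vector for a predetermined network, using truncation selection and point mutation; the fitness function here is a hypothetical placeholder standing in for a real task evaluation:

    import random

    POP_SIZE, GENOME_LEN, MUT_STD = 50, 9, 0.5

    def fitness(weights):
        # Placeholder objective: a real task would decode `weights` into a
        # fixed network and measure task performance (higher is better).
        return -sum(w * w for w in weights)

    # Each genome is a real-valued string of synaptic strengths.
    population = [[random.gauss(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]

    for generation in range(100):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[:POP_SIZE // 4]           # truncation selection
        population = [list(p) for p in parents]    # keep parents (elitism)
        while len(population) < POP_SIZE:
            child = list(random.choice(parents))
            i = random.randrange(GENOME_LEN)
            child[i] += random.gauss(0, MUT_STD)   # point-mutate one weight
            population.append(child)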

Challenges in Evolving Topologies

One major challenge in evolving topologies lies in the competing conventions problem, where structurally equivalent networks can be represented by genetically incompatible encodings, complicating effective crossover during recombination. This issue arises because the same functional topology may be encoded in multiple ways, such as different orderings of neurons, leading to misalignment when combining parent genomes and potentially producing offspring with disrupted performance. For instance, in a network with three hidden neurons, there are 3! = 6 possible permutations, each representing the identical structure but yielding incompatible genetic alignments; a toy illustration appears after this section's discussion. Closely related is the permutations problem, which exacerbates the cost of structural changes across generations by disrupting the alignment of connection weights, as reordering nodes invalidates prior optimizations and hinders the preservation of beneficial traits. Early methods using direct encoding were particularly vulnerable, as the explicit mapping of nodes and connections amplified sensitivity to ordering variations. Historically, topology and weight evolving artificial neural networks (TWEANNs), including GNARL and Cellular Encoding, illustrated these struggles, often resulting in inefficient search and suboptimal network discoveries due to crossover incompatibilities. GNARL, for example, evolved recurrent networks but faced difficulties maintaining structural coherence during recombination, limiting its ability to scale to complex tasks. Similarly, Cellular Encoding, an indirect approach using grammar-based development, mitigated some direct-encoding issues but still encountered challenges in aligning evolved structures for crossover, as demonstrated in comparisons showing slower convergence on control problems.

Scalability presents another significant hurdle, as the search space for topologies explodes combinatorially with increasing network size; for a network with n nodes, the potential connections grow as O(n^2), rendering exhaustive exploration computationally infeasible even for modest complexities. This dimensionality curse often traps evolution in local optima, where simple topologies dominate despite the potential benefits of more intricate structures for tasks requiring nuanced representations. To navigate these vast and rugged landscapes, incremental complexification becomes essential, gradually increasing complexity from minimal starting points to avoid premature convergence to suboptimal solutions in the full search space. Without such staging, evolving topologies risks inefficiency, as initial populations of complex networks rarely progress beyond rudimentary behaviors.
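
The competing conventions problem can be shown directly: two genomes that encode the same three hidden units in different orders lose information under naive positional crossover, as in this toy sketch:

    # Two genomes encoding the same hidden units A, B, C in permuted order.
    parent1 = ["A", "B", "C"]
    parent2 = ["C", "B", "A"]

    cut = 2
    child = parent1[:cut] + parent2[cut:]  # naive one-point crossover
    print(child)  # ['A', 'B', 'A'] -- unit C is lost and A is duplicated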

History and Development

Original Publication

The Neuroevolution of Augmenting Topologies (NEAT) method was first introduced in the 2002 paper titled "Evolving Neural Networks through Augmenting Topologies" by Kenneth O. Stanley and Risto Miikkulainen, published in the journal Evolutionary Computation (Volume 10, Issue 2, pages 99–127). The work was developed at the Department of Computer Sciences, The University of Texas at Austin, as part of research in the Neural Network Research Group. The primary motivation for NEAT was to overcome limitations in traditional neuroevolution approaches that relied on fixed network topologies, which often struggled with tasks requiring structural innovation, such as the pole-balancing problem, where networks needed to adaptively develop complex topologies to balance a cart-pole system without predefined architectures. By enabling the simultaneous evolution of both network weights and topologies, starting from minimal structures, NEAT aimed to improve efficiency in discovering innovative solutions while facilitating genetic crossover across diverse architectures through mechanisms like historical markings.

Initial experiments demonstrated NEAT's effectiveness on benchmark tasks. On the XOR problem, NEAT consistently solved it in an average of 32 generations (approximately 4,755 evaluations) using networks with about 2.35 hidden nodes on average across 100 runs. For the double pole-balancing task with velocity information (DPV), NEAT required around 3,600 evaluations to achieve reliable performance, outperforming methods like SANE (12,600 evaluations) and evolutionary programming (307,200 evaluations), and matching Enforced Subpopulations (ESP; 3,800 evaluations). On the more challenging double pole-balancing task without velocity information (DPNV), NEAT succeeded in about 33,184 evaluations, 25 times faster than Cellular Encoding (840,000 evaluations) and 5 times faster than ESP (169,466 evaluations).

This publication, which received the 2017 International Society for Artificial Life (ISAL) Award for the Outstanding Paper of the Decade (2002–2012), marked the first demonstration of a neuroevolution system capable of incrementally evolving from simple, minimal topologies to complex structures without requiring a priori specification of network size or layout, thereby protecting structural innovations and enhancing overall evolvability in artificial neural networks.

Key Contributors and Evolution

The Neuroevolution of Augmenting Topologies (NEAT) algorithm was primarily developed by Kenneth O. Stanley and Risto Miikkulainen, who introduced it in their seminal 2002 paper as a method for evolving both the weights and topologies of artificial neural networks through genetic algorithms. Stanley, the lead developer, has since advanced research in open-ended evolution, emphasizing systems that generate novel behaviors without fixed objectives; he later headed Core AI research at Uber AI Labs, where he applied these concepts to exploratory frameworks such as POET. Miikkulainen, a prominent expert in neuroevolution, co-authored the original work and contributed significantly to NEAT's speciation mechanism, which protects structural innovations by grouping similar genomes into species during evolution.

Following its 2002 inception, NEAT evolved theoretically through integrations and extensions that addressed limitations in exploration and scalability. A key advancement came in 2008–2009 with the incorporation of novelty search, developed by Joel Lehman and Stanley, which replaced traditional fitness objectives with a drive toward behavioral novelty to escape local optima in deceptive search spaces; this hybrid approach enhanced NEAT's performance in tasks like maze navigation. NEAT also gave rise to indirect encodings, such as HyperNEAT, which uses compositional pattern-producing networks (CPPNs) to generate regularities in large-scale topologies, enabling more efficient evolution of complex structures.

Notable milestones include the 2005 introduction of real-time NEAT (rtNEAT), which enabled continuous evolution during gameplay in the video game NERO, allowing agents to adapt on the fly without generational pauses. In 2007, Stanley's work on CPPNs further extended NEAT's principles by abstracting developmental processes to produce geometric patterns, laying the groundwork for scalable neuroevolution in higher dimensions. By the 2010s, NEAT saw widespread adoption in robotics for evolving controllers in tasks like locomotion and object manipulation, demonstrating robustness in real-world simulators, and in games for generating adaptive opponents, as evidenced by its integration into commercial titles and research benchmarks. Its open-source growth accelerated during this period, with implementations like the original C++ package from the University of Texas and Python libraries such as NEAT-Python fostering community-driven enhancements and applications across diverse domains.

Core Principles

Incremental Complexity

The incremental complexity principle in Neuroevolution of Augmenting Topologies (NEAT) initiates the evolutionary process with the simplest possible structures to prioritize functional optimization before architectural elaboration. Specifically, NEAT begins evolution with genomes representing minimal networks that have no hidden nodes, featuring only direct connections from input nodes to output nodes, thereby focusing the initial search on weight adjustments in a minimal-dimensional space. NEAT then augments network topologies gradually through targeted structural mutations, such as adding new connections between existing nodes or inserting hidden nodes that split existing connections, retaining such changes only when they prove beneficial under selection. By constraining growth to useful increments, NEAT avoids the proliferation of unnecessary parameters that could otherwise hinder the search.

The benefits of this strategy include a substantial reduction in the overall search space, as the population explores lower-dimensional solutions first and expands only when necessary, preventing premature entanglement in overly complex architectures. Additionally, it fosters generalization by encouraging the development of minimally sufficient network structures that capture essential task dynamics without extraneous features, leading to more robust and efficient evolved controllers. A representative example is NEAT's application to the double pole-balancing task without velocity sensors, where evolution starts with a simple linear controller comprising direct input-output links and progressively introduces recurrent connections to handle the task's temporal dependencies, achieving a solution in an average of 33,184 fitness evaluations across 20 runs. In contrast to methods that search the full topology space from the outset—such as random generation of complete architectures—NEAT's guided, incremental mutations ensure that complexity emerges organically from functional pressures, outperforming fixed-topology approaches like Enforced Subpopulations (ESP), which required 169,466 evaluations on average for the same task. This principle relies on mutation operators that enable controlled addition of structural elements, as detailed in the algorithm mechanics section.

Speciation and Diversity Protection

In NEAT, genomes are assigned to species based on their compatibility distance, a metric that quantifies topological and parametric similarities between structures, with each species maintaining a representative genome from the previous generation to facilitate consistent grouping. This process begins by assigning the initial minimal genomes to species and proceeds by comparing each new genome to existing representatives; if the distance exceeds a dynamic threshold, a new species is formed, otherwise the genome joins the most compatible existing species. Diversity protection in NEAT is achieved through a combination of intra-species competition, where genomes within the same species vie for reproduction based on relative fitness, and inter-species fitness sharing, which distributes reproductive opportunities across species to prevent any single group from monopolizing the population. The compatibility distance, detailed in the algorithm mechanics section, underpins this by enabling the isolation of structurally distinct genomes into niches that evolve semi-independently.

To further maintain balance, species fitness is adjusted according to group size using a sharing function that reduces the adjusted fitness of individuals in larger species, thereby curbing dominance and promoting equitable allocation of offspring slots proportional to overall species performance. Additionally, stagnant species—those showing no improvement in maximum fitness over 15 consecutive generations—are systematically removed to reallocate resources to more promising lineages. This speciation mechanism plays a crucial role in fostering innovation by shielding novel topological structures from immediate elimination, even if they initially underperform, allowing them time to be refined through subsequent mutation and selection within protected niches. For instance, in the double pole-balancing task without velocity inputs, speciation preserved a lineage that evolved a recurrent loop and hidden node to estimate velocities, enabling its gradual optimization and contributing to superior performance over fixed-topology approaches.
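
A minimal sketch of this assignment loop and the explicit fitness-sharing rule, assuming a compatibility() distance function (defined with the metric in the algorithm mechanics section) and a fixed threshold:

    THRESHOLD = 3.0

    def speciate(genomes, representatives, compatibility):
        # Assign each genome to the first species whose representative is
        # within THRESHOLD; otherwise found a new species seeded by it.
        species = {sid: [] for sid in representatives}
        next_id = max(representatives, default=0) + 1
        for g in genomes:
            for sid, rep in representatives.items():
                if compatibility(g, rep) < THRESHOLD:
                    species[sid].append(g)
                    break
            else:
                representatives[next_id] = g
                species[next_id] = [g]
                next_id += 1
        return species

    def adjusted_fitness(raw_fitness, species_size):
        # Explicit fitness sharing: sh() is 1 exactly for genomes in the
        # same species, so the denominator reduces to the species size.
        return raw_fitness / species_size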

Historical Markings and Innovation Tracking

In Neuroevolution of Augmenting Topologies (NEAT), historical markings provide a mechanism for tracking the lineage of structural innovations in evolving genomes, enabling precise identification of homologous genes across individuals. Each structural gene in a genome—representing either a node or a connection—carries an innovation number, a unique integer assigned from a global counter when the gene first emerges through mutation. This number records the historical origin of the gene, serving as direct evidence of homology: if two genes in different genomes share the same innovation number, they derive from a common ancestor.

The process begins with minimal genomes consisting of only input and output nodes, lacking hidden nodes. When a mutation introduces a new structural element, such as adding a connection or splitting an existing connection with a new node, the global innovation counter increments to assign a novel number to the new gene(s); for instance, a new connection might receive innovation number 7, while a subsequent node split could assign numbers 8 and 9 to the resulting connection genes. If a mutation recreates a structure that has already appeared, the offspring genomes inherit the existing innovation numbers for those genes, preserving the historical trace without introducing duplicates in the global counter. This ensures that innovations are uniquely marked only upon their initial appearance, facilitating efficient lineage tracking across generations.

A key benefit of historical markings is the resolution of the competing conventions problem, where genomes with identical functionality might encode the same structure differently due to positional variations in gene ordering. By aligning genes based on shared innovation numbers rather than their sequential position, NEAT avoids misalignment during genetic operations such as crossover, where compatible matings are identified through historical similarity. For example, if two genomes descend from a common parent and both undergo a mutation adding a hidden node in the same generation, the resulting genes will share the same innovation numbers, confirming their homology despite potential differences in surrounding structure. This approach promotes the protection and propagation of beneficial innovations by maintaining a clear record of evolutionary novelty.
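
The bookkeeping can be sketched as a global counter plus a per-generation record, so that identical structural mutations arising independently in the same generation reuse one number; the names here are illustrative:

    innovation_counter = 0
    seen_this_generation = {}   # (in_node, out_node) -> innovation number

    def connection_innovation(in_node, out_node):
        # Assign a fresh number only for a truly new structure; a duplicate
        # of a mutation already seen this generation reuses its number.
        global innovation_counter
        key = (in_node, out_node)
        if key not in seen_this_generation:
            innovation_counter += 1
            seen_this_generation[key] = innovation_counter
        return seen_this_generation[key]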

Algorithm Mechanics

Genome Representation

In Neuroevolution of Augmenting Topologies (NEAT), the genome employs a direct encoding scheme in which the structure and weights of a neural network are explicitly represented through discrete genes, allowing precise tracking of evolutionary changes. Each genome consists of node genes and connection genes, which together define the network's topology implicitly via the connections rather than through an explicit layer structure. This approach yields a compact representation that can evolve both feedforward and recurrent networks without predefined structural constraints.

Node genes specify individual neurons and include an identifying ID (a unique integer for referencing and historical tracking), a type indicating whether the node is an input, hidden, output, or bias unit, and an activation function—typically a sigmoid f(x) = \frac{1}{1 + e^{-x}} in the original formulation—applied uniformly to compute the node's output. Connection genes, which form the core of the encoding, link pairs of nodes and contain fields for the in-node ID (source), the out-node ID (target), the weight (a floating-point value giving the connection strength), an innovation number (a unique global identifier assigned upon creation to track homology across genomes), and an enabled flag (a Boolean determining whether the connection is expressed in the phenotype). This structure allows disabled connections to persist in the genome, enabling potential reactivation through mutation without losing historical context.

An example initial genome for a simple network with two inputs and one output begins with minimal direct connections from the inputs to the output, each encoded as a connection gene with a random weight and innovation numbers starting from 1. As evolution proceeds, structural mutations introduce complexity: adding a hidden node splits an existing connection by inserting a new node gene and creating two new connection genes (one into the hidden node and one from it to the original target), all assigned fresh innovation numbers to maintain topological lineage. The resulting representation is evolvable and compact, as extraneous genes can be pruned while preserving the ability to align structurally similar networks during reproduction.
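
The encoding described above can be sketched with two small record types and a minimal two-input, one-output genome; the field names are illustrative, mirroring the description:

    from dataclasses import dataclass

    @dataclass
    class NodeGene:
        node_id: int
        node_type: str          # "input", "hidden", "output", or "bias"

    @dataclass
    class ConnectionGene:
        in_node: int
        out_node: int
        weight: float
        innovation: int
        enabled: bool = True

    # Minimal initial genome for two inputs and one output: direct links only.
    nodes = [NodeGene(1, "input"), NodeGene(2, "input"), NodeGene(3, "output")]
    connections = [ConnectionGene(1, 3, 0.7, innovation=1),
                   ConnectionGene(2, 3, -0.5, innovation=2)]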

Mutation Operators

In NEAT, mutation operators introduce variation to genomes by modifying connection weights and incrementally augmenting network topology while preserving existing functionality through historical markings. These operators are applied probabilistically during reproduction to balance exploration and exploitation.

Structural mutations enable the evolution of network complexity. The add-connection mutation selects two unconnected nodes and inserts a new connection gene between them with a randomly initialized weight, assigning a novel innovation number to track its origin. This operation introduces recurrent or feedforward links without disrupting established pathways. The add-node mutation, rarer than add-connection, splits an existing connection by inserting a new hidden node: the original connection is disabled but retained in the genome for potential future reactivation, while two new connections are created and assigned fresh innovation numbers, one from the original input to the new node with weight 1.0 and one from the new node to the original output carrying the original connection's weight (in implementations with per-node biases, the new node's bias typically starts at 0.0). This process allows gradual complexity growth, as the new node initially acts as a near pass-through before further evolution refines it.

Weight mutations refine behavior without altering topology. Each generation, existing connection weights are perturbed with an 80% probability: 90% of the time via uniform random adjustment within a small range around the current value, and 10% of the time by replacement with an entirely new random value from the initialization distribution. These perturbations, often Gaussian in modern implementations though uniform in the original formulation, enable fine-tuning of synaptic strengths.

Enabling and disabling mutations toggle the expression of connection genes in the phenotype without removing them from the genome. A connection gene's enabled flag can be flipped with low probability, temporarily silencing a link to explore alternative configurations while retaining the structural information for later re-enablement. Relatedly, during crossover a gene disabled in either parent has a 25% chance of being re-enabled in the offspring in some variants.

Mutation rates in NEAT are tuned to population size and task, with structural mutations occurring less frequently than weight changes to avoid premature disruption of functional topologies. For small populations, add-node has a 0.03 probability and add-link 0.05 per reproduction; the add-link rate increases to 0.3 for larger populations to accelerate innovation. Enabling/disabling occurs at around 0.01 probability, ensuring conservative toggling.
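
A sketch of the add-node mutation, reusing the gene records from the genome-representation sketch and assuming a next_innovation() counter callable:

    import random

    def add_node_mutation(nodes, connections, next_node_id, next_innovation):
        # Pick an enabled connection to split.
        conn = random.choice([c for c in connections if c.enabled])
        conn.enabled = False                    # keep gene for possible re-enable
        new_node = NodeGene(next_node_id, "hidden")
        nodes.append(new_node)
        # Incoming link gets weight 1.0; outgoing link carries the old weight,
        # so the new node initially behaves as a near pass-through.
        connections.append(ConnectionGene(conn.in_node, new_node.node_id,
                                          1.0, next_innovation()))
        connections.append(ConnectionGene(new_node.node_id, conn.out_node,
                                          conn.weight, next_innovation()))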

Crossover and Compatibility Distance

In the Neuroevolution of Augmenting Topologies (NEAT) algorithm, crossover combines two parent genomes by aligning genes according to their innovation numbers, which are unique historical markers assigned to track the origin of each gene across generations. This alignment enables precise matching of homologous genes despite varying topologies, ensuring that shared innovations are preserved while allowing the incorporation of novel features. Genes sharing the same innovation number are classified as matching, and their weights in the offspring are inherited by randomly selecting from either parent, promoting a blend of both parents' weight configurations for these aligned components. In contrast, disjoint genes—those whose innovation numbers fall within the range of the other parent's genome but do not match—and excess genes—those with innovation numbers outside that range—are inherited exclusively from the more fit parent, prioritizing the preservation of advantageous traits and innovations. This preferential inheritance from the fitter parent helps maintain adaptive substructures in the offspring. For instance, when performing crossover between two genomes with similar topologies, the shared innovation backbone is faithfully reproduced, while disjoint or excess elements from the superior parent introduce targeted enhancements without disrupting compatibility.

To facilitate speciation, NEAT employs a compatibility distance metric that quantifies structural and parametric differences between genomes, determining whether they can interbreed within the same species. The distance \delta is computed as follows:

\delta = \frac{c_1 E}{N} + \frac{c_2 D}{N} + c_3 \cdot \bar{W}

where E denotes the count of excess genes, D the count of disjoint genes, \bar{W} the average absolute difference in weights among matching genes, N the number of genes in the larger genome (set to 1 if the genomes have fewer than 20 genes), and c_1, c_2, c_3 are adjustable coefficients that balance the influence of topological disparities versus weight variations (typically set to 1.0, 1.0, and 0.4, respectively). Genomes exceeding a compatibility threshold \delta_t (e.g., 3.0) are deemed incompatible and assigned to separate species, thereby shielding innovative topologies from competition with established ones. This distance measure emphasizes topological mismatches by counting disjoint and excess genes relative to the aligned innovation ranges, providing a robust gauge of evolutionary divergence.
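
The distance can be computed directly from two genomes' connection genes keyed by innovation number, as in this sketch (assuming non-empty gene lists with the fields used in the earlier genome sketch):

    def compatibility(genome1, genome2, c1=1.0, c2=1.0, c3=0.4):
        g1 = {c.innovation: c for c in genome1}
        g2 = {c.innovation: c for c in genome2}
        matching = g1.keys() & g2.keys()
        cutoff = min(max(g1), max(g2))          # end of the shared range
        non_matching = g1.keys() ^ g2.keys()
        excess = sum(1 for i in non_matching if i > cutoff)
        disjoint = len(non_matching) - excess
        w_bar = (sum(abs(g1[i].weight - g2[i].weight) for i in matching)
                 / max(len(matching), 1))       # average weight difference
        n = max(len(g1), len(g2))
        n = 1 if n < 20 else n                  # small-genome normalization
        return (c1 * excess + c2 * disjoint) / n + c3 * w_bar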

Evaluation and Selection

In NEAT, fitness is defined in a task-specific manner to measure an individual's performance on the given problem. For instance, in the pole-balancing task, fitness is calculated as the number of time steps the poles remain balanced before falling, or a weighted combination such as F = 0.1 f_1 + 0.9 f_2, where f_1 and f_2 represent steps balanced with one and both poles upright, respectively. To encourage innovation across species, fitness is shared within each species using an adjusted fitness formula: f'_i = \frac{f_i}{\sum_{j=1}^{n} sh(\delta(i,j))}, where sh(\delta) = 1 if two genomes are compatible (within a threshold \delta_t, typically 3.0) and 0 otherwise; this sharing penalizes species with many similar genomes by dividing their raw fitness among members.

Selection in NEAT employs a combination of elitism and fitness-proportional mechanisms to preserve high performers while promoting diversity. Within each species containing more than five genomes, the champion (highest-fitness individual) is copied unchanged to the next generation via elitism, ensuring the best solutions survive directly. The number of offspring allocated to a species is then determined proportionally to the sum of its members' adjusted fitness values, with an implicit penalty for larger species due to fitness sharing; lower-performing genomes are eliminated to make room for new offspring. A sketch of this allocation appears after this section.

The generational cycle in NEAT proceeds as follows: all genomes in the population are first evaluated for raw fitness on the task; genomes are speciated based on compatibility distances; fitness is then adjusted via sharing within each species; parents are selected within species, and offspring are produced, 25% asexually via mutation only and 75% sexually via crossover followed by mutation; this process repeats, gradually increasing network complexity over generations. To handle stagnation, species that show no improvement in maximum fitness for 15 consecutive generations are prevented from reproducing, and if the overall population does not improve for 20 generations, reproduction is restricted to the top two species.

In applications such as games, fitness is often defined as a score reflecting task achievement, such as points earned for behaviors like approaching targets or avoiding hazards in the NERO game, where players adjust coefficients for components like enemy approach (reward) or getting hit (penalty). Through speciation and shared fitness, NEAT credits complex structures with potential for higher scores even if their initial performance is lower than that of simpler networks, allowing innovative topologies to persist and evolve.
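
Offspring allocation proportional to summed adjusted fitness can be sketched as follows; the adjusted_fitness attribute on genome objects is hypothetical, and real implementations also correct rounding so the totals match the population size:

    def allocate_offspring(species, pop_size):
        # Each species' share of the next generation is proportional to the
        # sum of its members' adjusted (shared) fitness values.
        totals = {sid: sum(g.adjusted_fitness for g in members)
                  for sid, members in species.items()}
        grand_total = sum(totals.values()) or 1.0
        return {sid: round(pop_size * t / grand_total)
                for sid, t in totals.items()}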

Performance and Evaluation

Benchmark Tasks and Results

NEAT was initially evaluated on the XOR problem, a classic benchmark for assessing the ability to evolve hidden units in neural networks to solve non-linearly separable tasks. In experiments, NEAT achieved a 100% success rate over 100 runs, solving the problem in an average of 32 generations (approximately 4,755 evaluations, with a standard deviation of 2,553), producing networks with an average of 2.35 hidden nodes and 7.48 non-disabled connections—close to the known minimum of one hidden node.

A primary benchmark for NEAT's capabilities is the family of pole-balancing tasks, where a controller must balance one or two poles on a cart without either falling for extended periods. The double-pole balancing task without velocities (a non-Markovian task requiring recurrent connections for memory) was solved by NEAT in an average of 221 generations (33,184 evaluations), outperforming fixed-topology methods like Enforced Subpopulations (ESP) by a factor of 5 (169,466 evaluations) and Cellular Encoding (CE) by a factor of 25 (840,000 evaluations), while achieving an average of 286 successes out of 625 initial states in generalization tests. These results highlight NEAT's efficiency in evolving compact recurrent networks, often with 0–2 hidden nodes and minimal connections, compared to the larger fixed structures of other methods. For the easier double-pole balancing task with velocity information, NEAT required an average of 24 generations (3,600 evaluations).

Subsequent evaluations in 2006 on simple control tasks, such as variants of the cart-pole balancing problem, confirmed NEAT's ability to evolve minimal networks quickly, solving the double-pole task with incomplete state information in under 7,000 evaluations under the standard fitness measure—demonstrating strong performance relative to traditional approaches in terms of generations to solution and network parsimony. Key metrics across these benchmarks include generations to convergence, final network size, and generalization, with NEAT consistently producing smaller, more efficient topologies than gradient-based or fixed-topology alternatives. Despite these strengths, NEAT has limitations in scaling to very high-dimensional input spaces without extensions, and its complexification process can converge more slowly in non-deceptive environments lacking significant local optima.

Comparisons with Other Methods

Neuroevolution of Augmenting Topologies (NEAT) demonstrates advantages over fixed-topology methods by dynamically evolving both network structure and weights, leading to more effective solutions on tasks that require complex topologies. In particular, on the double pole balancing without velocities benchmark—a continuous control problem—NEAT consistently solves the task by evolving hidden nodes and connections, outperforming fixed-topology approaches that struggle to discover optimal structures without manual design. For instance, NEAT requires about 5 times fewer evaluations than Enforced Subpopulations (ESP), a method that evolves weights across multiple fixed topologies in parallel subpopulations, while producing simpler, more generalizable networks.

Compared to other topology and weight evolving artificial neural networks (TWEANNs), such as GNARL and Cellular Encoding, NEAT excels in convergence speed and network complexity minimization due to its speciation mechanism and historical markings, which enable meaningful crossover and protect novel structures from premature elimination. On the double pole balancing task, NEAT requires about 25 times fewer evaluations than Cellular Encoding.

NEAT also offers benefits over traditional reinforcement learning methods like Q-learning, particularly on non-differentiable tasks and games where gradient-based optimization is inefficient or infeasible, as it treats the fitness function as a black box and evolves policies directly without gradient information. In comparative studies on self-driving agents in 2D environments, NEAT converges to high-performing controllers with greater sample efficiency than tabular Q-learning, requiring fewer interactions to handle sparse rewards and continuous action spaces. This gradient-free nature makes NEAT suitable for evolving behaviors in deceptive or multimodal landscapes, where gradient descent can become stuck in local optima.

The innovation protection provided by NEAT's speciation and historical markings contributes to its efficiency, as demonstrated in ablation analyses showing it requires fewer evaluations than non-speciated evolutionary methods to reach comparable performance on benchmark tasks, by allowing diverse structural innovations to coexist and compete over generations. In the context of developments through the 2010s and beyond, NEAT's principles have influenced broader neuroevolution research but have been surpassed in scalability by deep reinforcement learning (deep RL) techniques on large-scale, high-dimensional tasks such as Atari game playing, where deep RL leverages massive parallel computation and gradient-based learning for superior performance at scale. However, hybrid approaches combining NEAT-like evolution with deep RL—such as neuroevolution for architecture search followed by RL fine-tuning—have emerged as promising for balancing exploration and exploitation in complex environments, with a 2021 systematic review documenting over 60 successors enhancing its applicability.

Applications

Control and Robotics

NEAT has been successfully applied to dynamic control problems in robotics, particularly the cart-pole balancing task, where neural network controllers are evolved to keep a pole upright by moving a cart along a track. In the standard setup, the input includes the pole's angle and the cart position, with the output controlling the cart's velocity. For the more challenging velocity-less variant, where velocity signals are unavailable, NEAT employs recurrent neural networks to internally estimate velocity from sequential position observations, enabling control under partial observability. This approach leverages NEAT's incremental evolution of topologies, starting from simple feedforward networks and adding recurrent connections as needed. Experimental results show NEAT solving the task with compact topologies of 0–2 hidden nodes, outperforming fixed-topology methods like Enforced Subpopulations (ESP) by a factor of 5 (33,184 vs. 169,466 evaluations) and Cellular Encoding by a factor of 25 (33,184 vs. 840,000 evaluations), while using a specialized fitness function that rewards balancing steps and penalizes falls.

In mobile robotics, NEAT evolves effective navigation and locomotion controllers for wheeled platforms, demonstrating its utility in physical systems with continuous action spaces. Early applications included 2004 experiments evolving controllers for simulated Khepera-like wheeled robots in a competitive coevolution framework, where robots learned goal-seeking behaviors while avoiding opponents. These controllers integrated proximity and light sensors to produce adaptive locomotion patterns, such as obstacle avoidance and pursuit, in dynamic arenas. The evolved networks handled the robot's differential drive mechanics, achieving coordinated movement superior to random or static policies. For legged systems, similar principles have been extended to evolve gaits, though initial demonstrations focused on wheeled platforms like the Khepera to validate transfer to hardware.

Multi-robot coordination benefits from NEAT variants like odNEAT, which supports decentralized, online evolution for swarm behaviors without central oversight. odNEAT, building on earlier on-board evolution work from 2009, allows each robot to independently evolve its neural controller using local evaluations and periodic genome exchanges with neighbors. In 2012 simulations with groups of e-puck-like robots, odNEAT evolved aggregation strategies with emergent behavioral diversity, leading to effective group clustering in dynamic environments. This distributed approach scaled to swarms of 5–30 robots, achieving performance comparable to centralized methods like rtNEAT thanks to parallel adaptation.

NEAT's advantages in control and robotics stem from its capacity to evolve robust topologies that tolerate noisy sensors and produce adaptive behaviors in uncertain environments. By complexifying networks gradually, NEAT generates controllers resilient to sensor inaccuracies, such as infrared proximity noise in mobile robots, that generalize to real hardware, as demonstrated in various applications. Online variants of NEAT further enhance these capabilities for on-board robotic adaptation.

Games and Content Generation

NEAT has been applied to evolve controllers for board games, where it demonstrates the ability to develop competitive strategies without predefined expert knowledge. In one study, a variant of NEAT known as MM-NEAT was used to evolve game-playing agents by incorporating domain-specific features into the network representation, resulting in agents that outperformed baselines in win rates against standard opponents. This approach achieved near-expert performance levels, highlighting NEAT's capacity to discover effective decision-making topologies in discrete, turn-based environments.

In video games, NEAT variants have powered controllers for classic arcade titles like Ms. Pac-Man, enabling behaviors such as simultaneous pursuit of pellets and evasion of ghosts. Using NEAT to evolve modular neural networks, these controllers cope with partial observability, achieving higher scores than fixed-topology methods by balancing multiple objectives like survival and resource collection. Similarly, the real-time NEAT (rtNEAT) extension has been employed to evolve agent behaviors in dynamic video games, such as the NERO game, where players train teams of agents during gameplay to perform tasks like target seeking and obstacle avoidance, fostering emergent teamwork and adaptation without pausing the experience. Although initially developed for PC platforms, rtNEAT's principles have influenced adaptive AI in console environments, including demonstrations in Xbox-compatible simulations for evolving responsive opponents.

For content generation, the content-generating NEAT (cgNEAT) variant extends the algorithm to evolve assets tailored to player preferences, such as tile-sets for levels and sound effects. In Galactic Arms Race (GAR), cgNEAT uses player feedback to iteratively evolve weapon designs, including visual patterns and auditory cues, producing novel content that increases engagement by aligning with individual tastes—for instance, generating aggressive laser sounds for action-oriented players. This innovation allows games to procedurally create diverse assets, reducing manual design effort while ensuring variety in procedural worlds.

A representative example of NEAT's capacity for emergent behavior is its use for evolving driving strategies in racing simulations, where controllers learn to navigate tracks by optimizing speed and control through augmenting topologies. In the Xpilot-AI platform's racing mode, NEAT-evolved neural controllers discover novel strategies that surpass hand-crafted heuristics in lap times and adaptability to varying track conditions. These emergent behaviors illustrate NEAT's strength in uncovering innovative solutions beyond human-designed rules.

The impact of NEAT in games extends to indie development, where it enables adaptive opponents that evolve with player skill, enhancing replayability without extensive scripting. cgNEAT and rtNEAT variants can create dynamic foes that adjust tactics mid-game, providing personalized challenges that keep indie experiences fresh and scalable for small teams. This has inspired broader adoption in procedural games, allowing opponent behaviors that feel intelligent and responsive to user playstyles.

Recent Industrial and Scheduling Uses

In recent years, NEAT has been applied to dynamic scheduling in manufacturing, particularly the flexible job shop scheduling problem (FJSP), where it evolves dispatch rules to handle uncertainties like machine breakdowns and job arrivals. A 2024 study introduced a NEAT-based approach that integrates rules for job selection and machine allocation, optimizing objectives such as maximum completion time and average tardiness through multiobjective fitness evaluation using weighted objectives. This method demonstrated superior performance in dynamic scenarios, reducing average completion time to 142.14 seconds in a 50-job, 20-machine instance, compared to 644.36 seconds for traditional heuristics like MOCR/SPT.

To enable efficient large-scale evolution for industrial optimization, a 2024 tensorization of NEAT, known as TensorNEAT, was developed using the JAX framework to transform variable network topologies into uniform tensors for parallel computation. This GPU-accelerated variant achieves up to 500x speedups over baseline NEAT implementations, facilitating the evolution of large populations in complex, high-dimensional problems without sacrificing topological diversity. TensorNEAT supports benchmark environments like Gym and Brax, making it suitable for hardware-accelerated simulations in robotics and control.

NEAT has also been employed in supply chain optimization, specifically for multi-echelon inventory systems, where it evolves neural networks to make inventory decisions across supplier, manufacturer, distributor, retailer, and customer levels. In a 2025 application to linear multi-echelon inventory systems, NEAT minimized holding and penalty costs over 35 periods by optimizing a 10-dimensional state space, achieving a 25.02% cost improvement compared to proximal policy optimization (PPO), with peak performance in 1,000 generations versus PPO's 2,000 iterations. This approach leverages NEAT's ability to adapt policies without fixed architectures, improving responsiveness to demand fluctuations.

In transportation, variants of NEAT have been used to evolve controllers for energy-efficient operation, such as in electric bus services. A 2025 study proposed Adaptive-Advanced NEAT (AANEAT) for pure electric buses, optimizing control strategies through evolved neural topologies that achieve high accuracy in dynamic tasks. This method demonstrates NEAT's efficacy in handling real-time constraints, with reported success rates exceeding 90% in predictive scenarios.

A key advantage of NEAT in these industrial and scheduling applications is its capacity to manage non-linear interactions and uncertainties without requiring predefined structures, allowing topologies to complexify incrementally during evolution. For instance, in job sequencing for dynamic FJSP, NEAT outperforms static heuristics by adapting to disruptions with lower variance in objective values, as evidenced by reduced mean completion times in simulated environments.

Extensions and Variants

Real-Time and Pruning Variants

The real-time neuroevolution of augmenting topologies (rtNEAT) variant extends the NEAT algorithm to enable ongoing evolution of neural networks during task execution, allowing agents to adapt dynamically without pausing for full generations. Introduced in 2005, rtNEAT operates by continuously evaluating agent performance in the environment and replacing low-fitness individuals with new offspring at regular intervals, such as every 20 simulation ticks in a population of agents. This approach facilitates mid-run updates to genomes, promoting adaptive behaviors in time-sensitive scenarios where traditional batch evolution would be impractical. A key mechanism in rtNEAT is the use of fitness proxies for rapid evaluations, where human-defined objectives—such as approaching an enemy or hitting a target—are assigned via adjustable sliders and normalized as Z-scores to guide selection without requiring complete task simulations. Speciation is preserved in real time by maintaining a small number of species (e.g., targeting four) with a dynamic compatibility threshold starting at 4.0, ensuring topological diversity amid frequent updates. These features enable rtNEAT to evolve increasingly complex controllers on the fly, contrasting with offline methods by integrating evolution directly into gameplay or robot operation.

Phased pruning complements rtNEAT and other NEAT variants by periodically removing ineffective or redundant connections from topologies during evolution, thereby streamlining architectures and reducing computational overhead without sacrificing performance. Developed as an extension by Colin Green and implemented in the SharpNEAT framework, this technique applies pruning after specific training phases, such as every few hundred generations, to eliminate functionally superfluous structures and refocus search on simpler, more efficient solutions. Pruning is particularly valuable in real-time settings, as it prevents network bloat—where unnecessary nodes and links accumulate—allowing faster evaluation and adaptation in resource-constrained environments. Empirical observations indicate that phased pruning yields less complex networks while maintaining or improving performance.

Applications of rtNEAT and pruning variants span online robot learning and game AI, where rapid adaptation is essential. In the NERO video game, rtNEAT trains virtual soldiers for combat tasks; for instance, agents achieve 90% success in approaching enemies within an average of 99.7 seconds (standard deviation 44.5 seconds) through player-designed training exercises, demonstrating quick behavioral adaptation in dynamic arenas. Pruning enhances these systems by producing leaner controllers suitable for deployment, such as in robotic control or adaptive game opponents, where evolved networks must respond instantaneously to changing conditions.
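
A simplified sketch of the steady-state replacement cycle follows; the actual rtNEAT algorithm also re-speciates continuously and protects young genomes from premature removal, and reproduce() here is a hypothetical stand-in for parent selection and offspring creation:

    def rtneat_tick(population, tick, n_ticks, reproduce):
        # Every n_ticks, replace one low-fitness agent with a new offspring
        # instead of waiting for a full generational boundary.
        if tick % n_ticks != 0:
            return
        population.sort(key=lambda g: g.adjusted_fitness)
        population.pop(0)                          # remove a poor performer
        population.append(reproduce(population))   # offspring from fit parents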

Compositional and Geometric Variants

HyperNEAT, or Hypercube-based NeuroEvolution of Augmenting Topologies, extends the NEAT algorithm by employing an indirect encoding to generate large-scale neural networks that exploit geometric regularities in the task domain. Introduced in 2007, HyperNEAT uses Compositional Pattern Producing Networks (CPPNs) as the core generative component; these CPPNs are evolved using the standard NEAT process but function differently by taking the spatial coordinates of neurons as inputs and outputting connection weights or topology decisions based on those coordinates. This geometric mapping allows the encoding to produce regular connectivity patterns, such as symmetries or repetitions, across expansive substrates without explicitly specifying each connection, enabling the evolution of networks with thousands of nodes and millions of connections from compact CPPN genomes.

The benefits of HyperNEAT's approach are particularly evident in domains requiring scalability and generalization, such as image processing and robot control, where direct encodings like standard NEAT become computationally infeasible for large topologies. For instance, in visual tasks, HyperNEAT has demonstrated the ability to evolve networks that scale from low resolution (e.g., 11×11 substrates with roughly 14,000 connections) to high resolution (e.g., 55×55 substrates with over 9 million connections) while maintaining performance, as the CPPN captures reusable patterns that adapt to varying input sizes without retraining. A notable application is in evolving robot gaits and controllers, where HyperNEAT generates symmetric limb patterns and coordinated gaits, leveraging the inherent geometry of the robot's body to produce efficient, modular structures that generalize across different body configurations.

Building on HyperNEAT, ES-HyperNEAT (Evolvable-Substrate HyperNEAT), introduced in 2011, further enhances this framework by dynamically evolving the placement and density of hidden nodes within the substrate through adaptive sampling techniques. Unlike fixed-substrate HyperNEAT, ES-HyperNEAT uses a quadtree-based algorithm to sample CPPN outputs along hypercube cross-sections, identifying high-variance regions to position neurons where connectivity patterns carry the most information, thus reducing unnecessary connections and improving efficiency in evolving modular architectures. This leads to better performance in complex tasks, such as maze navigation, where ES-HyperNEAT solves problems in fewer generations than fixed-substrate variants by focusing evolution on structurally relevant areas.
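
The substrate-query idea can be sketched as follows: an evolved CPPN (assumed here to be a plain callable) is queried with the coordinates of every candidate source/target neuron pair, and only sufficiently strong outputs are expressed as connections; the threshold value is illustrative:

    def build_substrate(cppn, coords, threshold=0.2):
        # coords is a list of (x, y) neuron positions on the substrate.
        weights = {}
        for (x1, y1) in coords:
            for (x2, y2) in coords:
                w = cppn(x1, y1, x2, y2)      # CPPN maps geometry to weight
                if abs(w) > threshold:        # express only strong links
                    weights[(x1, y1, x2, y2)] = w
        return weights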

Modern Specialized Adaptations

In recent years, variants of NEAT have been developed to address domain-specific challenges, particularly in content generation, multi-agent coordination, and computational efficiency through tensorization. These adaptations build on NEAT's core principles but incorporate modifications for specialized applications, often post-2015, to enhance performance in targeted scenarios.

Content-Generating NEAT (cgNEAT), an extension from the late 2000s with ongoing applications in the 2010s, evolves neural networks to produce adaptive game content tailored to user preferences, such as weapon effects or levels in video games like Galactic Arms Race. This approach allows the automatic creation of novel, engaging elements by evolving both network structures and content parameters simultaneously, demonstrating NEAT's utility in creative domains beyond pure control tasks.

Online Distributed NEAT (odNEAT), introduced around 2012 but applied in recent multi-agent research, enables decentralized evolution of neural controllers directly on robots without central coordination, making it suitable for swarm robotics in dynamic environments. In odNEAT, each agent evolves its own topology and weights onboard during task execution, promoting fault tolerance and emergent collective behaviors in scenarios like robot swarms navigating obstacles or aggregating. Recent evaluations highlight its effectiveness in multirobot systems, where it achieves robust adaptation despite hardware failures or environmental changes.

Tensorized NEAT, proposed in 2024, reformulates NEAT genomes and operations as tensors to leverage GPU parallelization, transforming variable topologies into uniform shapes for efficient batched execution. This adaptation accelerates evolution by up to 500 times compared to traditional CPU-based implementations, particularly in high-dimensional tasks like robotics control simulations on environments such as Brax, where larger populations (up to 10,000) yield faster convergence to effective policies. For instance, in tasks like HalfCheetah locomotion, Tensorized NEAT reduces runtime dramatically while maintaining solution quality, enabling scalable neuroevolution for complex, real-time applications.

Hybrids integrating NEAT with deep learning components, such as self-attention mechanisms inspired by transformers, have emerged post-2015 to handle high-dimensional inputs in neuroevolution. Hybrid Self-Attention NEAT (2021) incorporates attention layers into evolving topologies, improving performance on sequence-based tasks by dynamically weighting connections, thus bridging NEAT's structural evolution with transformer-like efficiency. These hybrids extend NEAT's applicability to modern deep architectures, while retaining its focus on topology augmentation for interpretable, adaptive networks.

Implementations

Original and Early Tools

The original implementation of Neuroevolution of Augmenting Topologies (NEAT), developed by Kenneth O. Stanley in 2002, was written in C++ and released as open-source software by the University of Texas at Austin's Neural Network Research Group. This foundational code incorporates key NEAT mechanisms, including speciation to protect structural innovations during evolution and historical innovation tracking to align genes across genomes for effective crossover operations. The package supports command-line execution for benchmark tasks such as the XOR problem and pole-balancing experiments (single-pole, Markovian double-pole, and non-Markovian double-pole balancing), while its modular design allows extension to custom neuroevolution tasks.

Early adaptations of NEAT included a MATLAB port released in 2003 by Christian Mayr, building directly on Stanley's C++ codebase to provide MATLAB source code for evolving neural networks via the augmenting-topologies approach. This implementation retained core features like speciation and innovation tracking, included an example experiment for the XOR task, and was designed for compatibility with MATLAB's environment to facilitate prototyping and analysis in academic settings. Both the original C++ and MATLAB versions were extensible to user-defined problems but lacked built-in parallelization, making them computationally intensive for the large population sizes typical of complex evolutionary runs. These initial tools remain archived and downloadable from the University of Texas at Austin's Neural Network Research Group website, preserving the foundational software for NEAT research and replication.

Contemporary Libraries and Frameworks

One of the most widely used contemporary libraries for implementing NEAT is NEAT-Python, a pure-Python framework first released in 2016 and actively maintained through 2025. It provides a complete implementation of the core NEAT algorithm, including genome representation, speciation, and reproduction operations, with no dependencies beyond the Python standard library. NEAT-Python supports extensions like HyperNEAT through compatible integrations, such as converting genomes to Compositional Pattern-Producing Networks (CPPNs) for indirect encoding. Additionally, it facilitates multiprocessing for parallel evaluation of genomes, enabling efficient scaling across CPU cores during fitness assessments in large populations.

In Java and other languages, the Encog framework remains influential, with its last stable release in September 2017, offering robust support for NEAT and HyperNEAT in both Java and C# environments. Encog's modular design influenced subsequent tools by providing interchangeable models, including genetic algorithms for network optimization. For web-based applications, implementations like NEAT-JavaScript provide a lightweight, browser-compatible library for evolving neural networks, suitable for interactive demos and client-side simulations.

Modern integrations enhance NEAT's applicability in reinforcement learning (RL) by combining it with environments like OpenAI Gym. Libraries such as neat-openai-gym and neat-gym allow direct training of NEAT agents on Gym tasks such as CartPole, bridging neuroevolution with standard RL benchmarks. For tensorized variants, hybrid frameworks like PyTorch-NEAT and TensorFlow-NEAT convert NEAT genomes into differentiable PyTorch or TensorFlow models, enabling gradient-based fine-tuning alongside evolutionary search. Recent advancements include GPU acceleration, with the 2024 release of TensorNEAT, a JAX-based library that tensorizes NEAT operations for high-throughput evolution on modern hardware; this addresses scalability limitations in vanilla implementations, achieving up to 100x speedups on GPU clusters for complex substrates. Educational resources, including repositories with example code and courses on NEAT applications, further democratize access to these tools.

Practical use of libraries like NEAT-Python involves tuning speciation parameters, such as the compatibility threshold (typically 3.0–5.0), which determines similarity for species assignment and affects diversity—lower values promote more species and exploration, while higher ones encourage convergence. Visualization is supported via helper utilities that export genomes to Graphviz format for rendering network topologies as directed graphs, aiding debugging and analysis of evolved structures.
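
A minimal usage sketch following NEAT-Python's documented XOR pattern; the "config-feedforward" file, which holds the population, speciation, and mutation parameters, is assumed to be present in the working directory:

    import neat

    def eval_genomes(genomes, config):
        xor_inputs = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
        xor_outputs = [0.0, 1.0, 1.0, 0.0]
        for genome_id, genome in genomes:
            net = neat.nn.FeedForwardNetwork.create(genome, config)
            genome.fitness = 4.0   # start from max, subtract squared error
            for xi, xo in zip(xor_inputs, xor_outputs):
                output = net.activate(xi)
                genome.fitness -= (output[0] - xo) ** 2

    config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                         neat.DefaultSpeciesSet, neat.DefaultStagnation,
                         "config-feedforward")
    population = neat.Population(config)
    winner = population.run(eval_genomes, 300)  # evolve up to 300 generations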

  25. [25]
    [PDF] Using Neuroevolution on Checkers Players
    board state using MM-NEAT. Our project showed success in using expert knowledge features to evolve a checkers player. While demonstrating success, our ...
  26. [26]
  27. [27]
    [PDF] Evolving content in the Galactic Arms Race video game
    The content-generating NeuroEvolution of Augmenting Topologies (cgNEAT) algorithm is introduced, which automatically evolves game content based on player ...<|separator|>
  28. [28]
    Automatic content generation in the Galactic Arms Race video game
    ... content-generating NeuroEvolution of Augmenting Topologies (cgNEAT) is introduced that automatically generates graphical and game content while the game is ...
  29. [29]
    Neuro-Evolution of Augmenting Topologies for Dynamic Scheduling ...
    Sep 24, 2024 · This study introduces a neuro-evolution of augmenting topologies (NEAT) algorithm aimed at optimizing the scheduling efficiency in flexible job shops.
  30. [30]
    Tensorized NeuroEvolution of Augmenting Topologies for GPU ...
    Apr 2, 2024 · This paper introduces a tensorization method for the NEAT algorithm, enabling the transformation of its diverse network topologies and associated operations.
  31. [31]
    Employing Neuroevolution of Augmenting Topologies (NEAT) in ...
    Mar 31, 2025 · This study seeks to integrate Neuroevolution of Augment Topologies (NEAT) – a hybrid of model-based reinforcement learning and evolutionary algorithms – into ...Missing: supply optimization 2020-2025
  32. [32]
    Energy management strategy design for pure electric buses based ...
    An enhanced neuroevolutionary framework of the NEAT algorithm, termed Adaptive-Advanced Neuro-Evolution of Augmenting Topology (AANEAT), is proposed. This ...
  33. [33]
    Phased Searching with NEAT - SharpNEAT
    2) The pruning phase removes all or most of the functionally redundant structure in a population, moving the search back to a baseline lower-dimensional search ...Missing: neuroevolution | Show results with:neuroevolution
  34. [34]
    [PDF] Empirical study on the performance of Neuro Evolution of ...
    ABSTRACT. In this paper we provide empirical results on training a neural network with a genetic algorithm. We test various features.<|separator|>
  35. [35]
    (PDF) A Hypercube-Based Encoding for Evolving Large-Scale ...
    Aug 5, 2025 · This article presents a method called hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) that aims to narrow this gap.
  36. [36]
    [PDF] Generating Large-Scale Neural Networks Through Discovering ...
    This paper introduces such a method, called Hypercube-based. Neuroevolution of Augmenting Topologies (HyperNEAT), which evolves a novel generative encoding ...
  37. [37]
    [PDF] Evolving Coordinated Quadruped Gaits with the HyperNEAT ...
    Previous work has shown that generative encodings produce more modular, complex phenotypes with higher fitnesses and more beneficial mutations on average than ...
  38. [38]
    [PDF] Enhancing ES-HyperNEAT to Evolve More Complex Regular Neural ...
    To highlight the challenge of deciding the location and number of available hidden nodes, ES-HyperNEAT is com- pared to four fixed-substrate variants.
  39. [39]
    An Enhanced Hypercube-Based Encoding for Evolving the ...
    ES-HyperNEAT can evolve modular ANNs more easily than the original fixed- substrate HyperNEAT, in part because it facilitates evolving networks with limited ...<|control11|><|separator|>
  40. [40]
  41. [41]
    Galactic Arms Race: an experiment in evolving video game content
    The unique feature of GAR is that the game continually introduces new such weapons evolved by the cgNEAT algorithm. The philosophy behind GAR is that one of the ...
  42. [42]
    odNEAT: An Algorithm for Decentralised Online Evolution of Robotic ...
    Sep 1, 2015 · In this article, we present a novel algorithm for online evolution of ANN-based controllers in multirobot systems called Online Distributed ...
  43. [43]
    odNEAT: An Algorithm for Distributed Online, Onboard Evolution of ...
    odNEAT is a completely distributed evolutionary algorithm for online learning in groups of embodied agents such as robots. While previous approaches to online ...
  44. [44]
    Recent trends in robot learning and evolution for swarm robotics
    Apr 23, 2023 · The use of robot swarms has been proposed for coordinating groups of robots in missions in dynamic or unknown environments, such as for space ...
  45. [45]
    NEAT Matlab - NNRG Software
    The Matlab NEAT package contains Matlab source code for the NeuroEvolution of Augmenting Topologies method (see the original NEAT C++ package).Missing: early Octave
  46. [46]
    uber-research/PyTorch-NEAT - GitHub
    PyTorch NEAT builds upon NEAT-Python by providing some functions which can turn a NEAT-Python genome into either a recurrent PyTorch network or a PyTorch CPPN.
  47. [47]
    Configuration file description — NEAT-Python 0.92 documentation
    The NEAT section specifies parameters particular to the generic NEAT algorithm or the experiment itself. This section is always required, and is handled by the ...Missing: speciation | Show results with:speciation
  48. [48]
    Encog Machine Learning Framework - Heaton Research
    Encog is a pure-Java/C# machine learning framework that I created back in 2008 to support genetic programming, NEAT/HyperNEAT, and other neural network ...
  49. [49]
    [PDF] Encog: Library of Interchangeable Machine Learning Models for ...
    This paper introduces the Encog library for Java and C#, a scalable, adaptable, multi- platform machine learning framework that was first released in 2008.
  50. [50]
    NEAT-JavaScript Documentation
    A JavaScript implementation of the NeuroEvolution of Augmenting Topologies (NEAT) algorithm for evolving neural networks.
  51. [51]
    NEAT for Reinforcement Learning on the OpenAI Gym - GitHub
    This project applies Neuroevolution of Augmented Topologies (NEAT) on a number of OpenAI Gym Reinforcement Learning scenarios.
  52. [52]
    simondlevy/neat-gym: Neuro-evolution for OpenAI Gym environments
    This repository contains code allowing you to train, test, and visualize OpenAI Gym environments (games) using the NEAT algorithm and its variants.Quickstart · Hyperneat And Es-Hyperneat · Novelty Search
  53. [53]
    TensorFlow Eager implementation of NEAT and Adaptive HyperNEAT
    TensorFlow NEAT builds upon PyTorch-NEAT and NEAT-Python by providing some functions which can turn a NEAT-Python genome into either a recurrent TensorFlow ...Missing: hybrids | Show results with:hybrids
  54. [54]
    TensorNEAT: A GPU-accelerated Library for NeuroEvolution of ...
    Apr 16, 2025 · This paper introduces TensorNEAT, a GPU-accelerated library that applies tensorization to the NEAT algorithm.
  55. [55]
    yaricom/neat-libraries-compared - GitHub
    The repository with source code which can be used to compare performance of different implementations of the NEAT algorithm and its derivatives.
  56. [56]
    Compatibility_Threshold vs Target_Number_Of_Species · Issue #246
    Jul 30, 2022 · It would be a useful feature to have since one wouldn't have to manually tweak the compatibility threshold to achieve a sensible number of species.