
Particle swarm optimization

Particle swarm optimization (PSO) is a population-based algorithm designed for solving continuous nonlinear optimization problems, inspired by the social behavior observed in flocks of birds or schools of fish. Developed by James Kennedy and Russell Eberhart in 1995, PSO simulates the way individuals in a swarm adjust their positions based on their own experiences and those of their neighbors to collectively search for food or optimal locations. In PSO, a set of potential solutions, called particles, is initialized randomly within the search space, each with an associated position and velocity. The algorithm evaluates the fitness of each particle using the objective function and tracks two key values for each: the personal best (pbest), which is the best position the particle has achieved so far, and the global best (gbest), the best position found by any particle in the swarm. Particles update their velocities and positions iteratively according to the velocity update equation:
v_{i,k+1} = w v_{i,k} + c_1 r_1 (pbest_i - s_{i,k}) + c_2 r_2 (gbest - s_{i,k})
where v_{i,k} is the velocity of particle i at iteration k, w is an inertia weight, c_1 and c_2 are cognitive and social acceleration constants (typically around 2.0), and r_1, r_2 are random numbers between 0 and 1. This update balances exploration (via inertia and randomization) and exploitation (via attraction to best positions), enabling the swarm to converge toward optimal solutions without requiring gradient information.
PSO's simplicity, with few parameters to tune (e.g., an inertia weight decreasing from 0.9 to 0.4 over iterations, and constriction factors for better convergence), makes it computationally efficient and easy to implement compared to other evolutionary algorithms like genetic algorithms. It has been widely applied in fields such as engineering design, neural network training, function optimization, and image processing, often demonstrating robust performance on multimodal problems. Since its inception, numerous variants have extended PSO to discrete, constrained, and multi-objective problems, enhancing its adaptability while preserving the core paradigm.

Introduction

History and Development

Particle swarm optimization (PSO) was invented in 1995 by social psychologist James Kennedy and electrical engineer Russell C. Eberhart as a computational method inspired by simulations of collective behavior in bird flocking and fish schooling, originally explored in computer graphics for modeling emergent group motion. The algorithm emerged from efforts to simulate social interactions but quickly transitioned into an optimization tool when Kennedy and Eberhart recognized its potential for searching complex solution spaces by mimicking social information sharing. Their seminal work was first published in the Proceedings of the Sixth International Symposium on Micro Machine and Human Science, an IEEE conference, where PSO was presented as a population-based optimizer outperforming traditional gradient-based methods on certain functions. Key milestones in PSO's early development included the 2001 book Swarm Intelligence by Kennedy, Eberhart, and Yuhui Shi, which formalized the algorithm's foundations, provided theoretical insights, and expanded its applications beyond initial simulations to practical problem-solving in engineering and computer science. In 1998, Shi and Eberhart introduced the inertia weight in their paper "A Modified Particle Swarm Optimizer," enhancing the algorithm's balance between exploration and exploitation to improve convergence on optimization problems. This was followed in 2002 by Maurice Clerc and James Kennedy's analysis of the algorithm's convergence properties and parameter selection in their paper "The particle swarm—explosion, stability, and convergence in a multidimensional complex space," building on the constriction factor introduced by Clerc in 1999 to ensure stability and prevent particle divergence. Through the 2000s, PSO evolved into a staple of computational intelligence, with widespread incorporation into optimization software libraries such as MATLAB's Global Optimization Toolbox and integration into hybrid frameworks for real-world applications in power systems and neural network training.
Its recognition grew through publications in prestigious venues like IEEE Transactions on Evolutionary Computation, where early papers analyzed PSO's theoretical properties and variants, solidifying its role alongside genetic algorithms and other metaheuristics. By the late 2000s, PSO had been applied in over a thousand studies annually, demonstrating scalability in high-dimensional problems. Recent developments up to 2025 have focused on PSO's integration with machine learning, particularly for hyperparameter tuning in deep neural networks, where it automates the selection of learning rates and layer sizes to outperform grid search in efficiency on large datasets like those for prediction models. Open-source implementations, such as the PySwarms library released in 2017, have democratized access, enabling researchers to extend PSO for single-objective and multi-objective optimization in Python environments with built-in support for custom topologies and constraints. By 2025, hybrid PSO variants combined with large language models have further advanced convergence speeds in tuning complex architectures, reducing evaluation costs in resource-intensive training scenarios.

Core Principles and Inspiration

Particle swarm optimization (PSO) draws its primary inspiration from the collective behaviors observed in natural swarms, such as bird flocking, fish schooling, and insect swarming, where individuals interact locally to achieve group-level goals without centralized coordination. These behaviors enable efficient navigation and resource location in complex environments, mimicking how decentralized systems can solve optimization problems through emergent intelligence. A foundational influence is Reynolds' boids model, which simulates flocking through three core rules: separation (avoiding crowding), alignment (matching the velocity of neighbors), and cohesion (moving toward the average position of the flock). In PSO, these principles translate to particles representing candidate solutions in a multidimensional search space; each particle adjusts its trajectory based on its own best-known position (cognitive component, akin to personal experience) and the swarm's best position (social component, akin to group cohesion). This social sharing guides the collective toward promising regions, balancing exploration—through inherent randomness in movements—with exploitation of high-quality solutions. Unlike genetic algorithms, which evolve discrete populations via explicit operators like mutation and crossover to simulate natural selection, PSO operates in continuous spaces with velocity-based updates that propagate information fluidly across the swarm. This approach fosters decentralized intelligence, where global behavior arises from simple local interactions rather than hierarchical control, enabling robust convergence in non-linear, multimodal optimization landscapes.

Fundamental Algorithm

Initialization and Particle Representation

In particle swarm optimization (PSO), the algorithm begins with the composition of a swarm consisting of a population of N particles, where each particle serves as a candidate solution in a D-dimensional continuous search space and is represented as a position vector \mathbf{x}_i \in \mathbb{R}^D for i = 1, \dots, N. This structure draws from the swarm analogy, enabling collective exploration of the search space through individual and group interactions. The swarm size N typically ranges from 20 to 50 particles in standard implementations, as this provides a balance between maintaining population diversity for effective exploration and limiting computational overhead; larger sizes, such as 70 to 500, may be employed for more complex problems to enhance global search capabilities, though they increase evaluation costs. The initial positions of the particles are generated randomly from a uniform distribution within the predefined bounds of the optimization problem, expressed as \mathbf{x}_i(0) \sim \mathcal{U}(\mathbf{l}, \mathbf{u}), where \mathbf{l} and \mathbf{u} denote the lower and upper bounds of the feasible search space, respectively. This uniform randomization ensures an even spread across the domain, promoting initial diversity and preventing premature convergence to suboptimal regions. Such initialization is crucial for covering the search space broadly at the outset, as particles will subsequently adjust their positions based on personal and collective experiences. Initial velocities \mathbf{v}_i(0) for each particle are commonly set to small random values within a constrained range, such as [-v_{\max}, v_{\max}] where v_{\max} is often a fraction of the search range in each dimension (e.g., 10-20% of the bound width), or simply to zero to restrict early boundary violations and encourage gradual movement. Large random initial velocities spanning the full domain can lead to particles exiting the feasible region prematurely, reducing search efficiency, whereas near-zero velocities allow for more controlled initial steps.
Following position and velocity setup, the objective function f(\mathbf{x}) is evaluated for each initial position to determine fitness values. The personal best position \mathbf{p}_i for particle i is then initialized to its starting position \mathbf{x}_i(0), reflecting its initial "experience," while the global best position \mathbf{g} is selected as the \mathbf{p}_i with the best fitness across the swarm. This step establishes the reference points for subsequent social and cognitive influences, with the initial global best serving as an early attractor for the swarm. These initializations set the foundation for the iterative updates, where diversity from random starts interacts with parameters like inertia weight to guide convergence, though detailed tuning of such parameters occurs separately.
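The initialization steps above can be sketched in Python with NumPy; the helper names and the 10% velocity fraction are illustrative choices, not part of the original formulation:

```python
import numpy as np

def initialize_swarm(n_particles, bounds, v_frac=0.1, rng=None):
    """Random swarm initialization.

    bounds: array of shape (D, 2) with lower/upper bounds per dimension.
    v_frac: initial velocities drawn from +/- v_frac * (upper - lower).
    """
    rng = np.random.default_rng(rng)
    lower, upper = bounds[:, 0], bounds[:, 1]
    width = upper - lower
    # Positions uniform over the feasible box: x_i(0) ~ U(l, u)
    x = rng.uniform(lower, upper, size=(n_particles, len(lower)))
    # Small random initial velocities to limit early boundary violations
    v = rng.uniform(-v_frac * width, v_frac * width, size=x.shape)
    return x, v

def evaluate_bests(f, x):
    """Initial pbest is the starting position; gbest is the fittest pbest."""
    fitness = np.apply_along_axis(f, 1, x)
    pbest, pbest_f = x.copy(), fitness.copy()
    g = np.argmin(pbest_f)  # minimization convention
    return pbest, pbest_f, pbest[g].copy(), pbest_f[g]
```

For example, initializing 20 particles on the 3-D sphere function yields positions inside the box and a gbest equal to the lowest initial fitness.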

Velocity and Position Update Rules

The core iterative mechanism in particle swarm optimization (PSO) advances each particle's search through updates to its velocity and position, incorporating social influences to promote exploration and convergence toward optimal solutions. The velocity update for particle i at iteration t+1 combines the particle's momentum with cognitive and social components, formulated as: \mathbf{v}_i(t+1) = w \mathbf{v}_i(t) + c_1 r_1 (\mathbf{pbest}_i - \mathbf{x}_i(t)) + c_2 r_2 (\mathbf{gbest} - \mathbf{x}_i(t)) Here, w denotes the inertia weight that controls the influence of prior velocity (momentum), typically decreasing over iterations to shift from exploration to exploitation; c_1 and c_2 are positive acceleration coefficients balancing individual experience (cognitive term) and social learning (social term), often set to values around 2 for balanced behavior; r_1 and r_2 are independent random values drawn uniformly from [0, 1] to introduce stochasticity, preventing premature convergence; \mathbf{pbest}_i is the best-known position of particle i based on its history; \mathbf{gbest} is the best position across the entire swarm; and \mathbf{x}_i(t) is the current position of particle i. This formulation, refined from the original, was introduced by Shi and Eberhart to improve performance on complex optimization landscapes. The position update follows directly from the new velocity, ensuring particles move in the search space according to their adjusted trajectories: \mathbf{x}_i(t+1) = \mathbf{x}_i(t) + \mathbf{v}_i(t+1) This simple addition maintains continuity in particle movement while allowing directed progress toward personal and global optima, as established in the foundational PSO model by Kennedy and Eberhart. Following each position update, the fitness function f(\mathbf{x}_i(t+1)) is evaluated for particle i.
If this fitness improves upon the particle's historical best (i.e., f(\mathbf{x}_i(t+1)) < f(\mathbf{pbest}_i) for minimization), then \mathbf{pbest}_i is updated to \mathbf{x}_i(t+1); subsequently, if the updated \mathbf{pbest}_i yields a better fitness than the current \mathbf{gbest}, the swarm's global best is revised to this position. These updates reinforce the roles of \mathbf{pbest}_i and \mathbf{gbest} in guiding future velocities, with the stochastic elements r_1 and r_2 ensuring varied trajectories across particles to sustain population diversity. To manage search space boundaries and prevent particles from drifting indefinitely, velocity clamping is commonly applied by limiting |\mathbf{v}_i(t+1)| \leq V_{\max} component-wise, where V_{\max} is a user-defined maximum velocity (often set to 10-20% of the search space diagonal) to curb excessive steps while preserving exploration. For positions exceeding bounds, a reflecting strategy repositions the particle by mirroring it across the boundary (e.g., if x_{i,j} > upper_j, set x_{i,j} = 2 \cdot upper_j - x_{i,j}), or alternatively, clamping sets it to the nearest feasible value; these methods maintain feasibility without altering the underlying dynamics. The following pseudocode outlines a single iteration of the update process for the swarm, assuming a population of N particles and prior initialization of positions, velocities, \mathbf{pbest}, and \mathbf{gbest}:
For t = 1 to maximum iterations:
    For i = 1 to N:
        // Update velocity
        For each dimension d:
            // r1, r2 drawn fresh from U(0, 1) for each particle and dimension
            v_{i,d}(t+1) = w * v_{i,d}(t) + c1 * r1 * (pbest_{i,d} - x_{i,d}(t)) + c2 * r2 * (gbest_d - x_{i,d}(t))
            // Clamp velocity
            If v_{i,d}(t+1) > V_max: v_{i,d}(t+1) = V_max
            If v_{i,d}(t+1) < -V_max: v_{i,d}(t+1) = -V_max

        // Update position
        For each dimension d:
            x_{i,d}(t+1) = x_{i,d}(t) + v_{i,d}(t+1)
            // Handle boundaries (e.g., clamp)
            If x_{i,d}(t+1) > upper_d: x_{i,d}(t+1) = upper_d
            If x_{i,d}(t+1) < lower_d: x_{i,d}(t+1) = lower_d

        // Evaluate fitness
        new_fitness = f(x_i(t+1))

        // Update personal best
        If new_fitness < f(pbest_i):
            pbest_i = x_i(t+1)

        // Update global best
        If f(pbest_i) < f(gbest):
            gbest = pbest_i
This structure highlights the stochastic velocity adjustments and sequential fitness checks that drive PSO's emergent optimization behavior.
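The pseudocode translates almost line-for-line into Python. The sketch below is a minimal global-best PSO for minimization; the function name and fixed parameter values are illustrative, and fresh r1, r2 are drawn per particle and per dimension each iteration:

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, iters=200,
                 w=0.729, c1=1.49445, c2=1.49445, rng=None):
    """Minimal gbest PSO with velocity clamping and position clipping."""
    rng = np.random.default_rng(rng)
    lower, upper = bounds[:, 0], bounds[:, 1]
    v_max = 0.2 * (upper - lower)               # clamp to 20% of range
    x = rng.uniform(lower, upper, size=(n_particles, len(lower)))
    v = rng.uniform(-v_max, v_max, size=x.shape)
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    g = np.argmin(pbest_f)
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]

    for _ in range(iters):
        r1 = rng.random(x.shape)                # fresh randomness per dim
        r2 = rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        v = np.clip(v, -v_max, v_max)           # velocity clamping
        x = np.clip(x + v, lower, upper)        # boundary handling (clamp)
        fit = np.apply_along_axis(f, 1, x)
        improved = fit < pbest_f                # personal-best updates
        pbest[improved], pbest_f[improved] = x[improved], fit[improved]
        g = np.argmin(pbest_f)                  # global-best update
        if pbest_f[g] < gbest_f:
            gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    return gbest, gbest_f
```

On a simple unimodal benchmark such as the 5-D sphere function, this sketch converges close to the origin within a few hundred iterations.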

Parameter Selection and Tuning

Inertia Weight

The inertia weight parameter, denoted as w, was introduced by Shi and Eberhart in 1998 as a mechanism to regulate the magnitude of particle velocities in particle swarm optimization (PSO), serving to dampen excessive momentum and thereby balance the algorithm's exploratory and exploitative behaviors without relying on explicit velocity clamping. This addition addressed limitations in the original PSO formulation—which relied on velocity clamping—by incorporating an inertia term to better control particle momentum and prevent uncontrolled divergence. A prominent strategy for implementing the inertia weight, detailed in Shi's empirical study, involves a linearly decreasing schedule that starts with a higher value to favor broad exploration and gradually reduces to emphasize exploitation. The formula is given by: w(t) = w_{\max} - (w_{\max} - w_{\min}) \cdot \frac{t}{T_{\max}} where t is the current iteration, T_{\max} is the maximum number of iterations, and typical bounds are w_{\max} = 0.9 for initial global exploration and w_{\min} = 0.4 for later local refinement. High values of w (close to 0.9) promote exploration by preserving particle momentum and enabling wider traversal of the search space, while low values (near 0.4) enhance exploitation by reducing momentum and focusing particles around promising regions. Empirical tuning of the inertia weight has shown sensitivity to problem characteristics, with optimal ranges varying across benchmark functions. For instance, on the unimodal sphere function, a linear decrease from 0.9 to 0.4 yields near-optimal performance with average errors on the order of 10^{-81}, demonstrating effective convergence. Similarly, for the multimodal Rastrigin function, the same range achieves competitive accuracy with an average error of approximately 39.71, though chaotic variants can further improve to around 3.22 by maintaining diversity. These results underscore the need for function-specific adjustments, as deviations from the 0.4–0.9 range often lead to premature stagnation or oscillation.
Alternatives to the linear decreasing strategy include constant inertia weights, such as w = 0.729 when paired with a constriction factor of approximately 0.729 (derived from \phi = 4.1), which ensures stable convergence across diverse problems without temporal variation. In contrast, adaptive schemes adjust w dynamically based on diversity metrics, increasing it when particle positions cluster to reinvigorate exploration and decreasing it otherwise to promote refinement, as explored in diversity-guided adaptations.
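Both schedules take only a few lines of code. The sketch below (helper names are illustrative) implements the linear decrease and Clerc's constriction coefficient \chi = 2 / |2 - \phi - \sqrt{\phi^2 - 4\phi}|, which for \phi = 4.1 gives roughly 0.729:

```python
import math

def linear_inertia(t, t_max, w_max=0.9, w_min=0.4):
    """Linearly decreasing schedule: w(t) = w_max - (w_max - w_min) * t / T_max."""
    return w_max - (w_max - w_min) * t / t_max

def constriction(phi=4.1):
    """Clerc's constriction factor for phi = c1 + c2 > 4."""
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))
```

In a constricted PSO, the returned factor multiplies the entire velocity update, replacing explicit inertia scheduling.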

Acceleration Coefficients and Other Parameters

In particle swarm optimization (PSO), the cognitive acceleration coefficient c_1 scales the influence of a particle's personal best position on its velocity update, thereby promoting reliance on individual exploratory experience. Typical values for c_1 range from 2.0 to 2.5, as these settings balance local search without excessive individualism. Higher c_1 values enhance diversity by encouraging particles to deviate from group consensus, which is particularly useful in avoiding local optima. The social acceleration coefficient c_2 governs the attraction toward the swarm's global best position, fostering collective convergence based on shared knowledge. Like c_1, c_2 is commonly set between 2.0 and 2.5 to ensure exploitation of promising regions. Elevated c_2 strengthens group conformity, accelerating progress in unimodal landscapes but risking premature stagnation in multimodal ones. Balancing c_1 and c_2 is essential for effective search dynamics; equal values, such as c_1 = c_2 = 2.05, are standard to equally weigh personal and social components, while unequal settings—like a higher c_1—bias the search toward exploration for diverse problem landscapes. This balance complements the inertia weight by focusing on directional pulls rather than momentum persistence. Beyond acceleration coefficients, swarm size determines population diversity and computational cost, typically ranging from 10 to 100 particles depending on problem dimensionality and complexity, with 20–50 often optimal for standard benchmarks. Smaller swarms suffice for unimodal functions due to faster convergence, whereas larger ones aid exploration by increasing solution variety. The maximum velocity v_{\max} bounds particle steps to prevent erratic movements and search space explosion, commonly set as a fraction (e.g., 10–20%) of the problem's range per dimension. This parameter stabilizes trajectories without overly restricting exploration.
Parameter selection, including c_1, c_2, swarm size, and v_{\max}, relies on empirical tuning via benchmark functions; for instance, lower coefficients favor rapid convergence on unimodal problems like the sphere function, while higher values and larger swarms improve performance on multimodal ones such as Rastrigin. Sensitivity analyses confirm that the acceleration coefficients particularly affect convergence speed and diversity in multimodal settings.
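As a concrete illustration, the heuristics above can be packaged into a helper that derives defaults from the problem bounds; the specific formulas here, such as scaling swarm size with dimensionality, are illustrative choices rather than prescribed rules:

```python
import numpy as np

def typical_pso_params(bounds, v_frac=0.15):
    """Heuristic defaults drawn from the commonly cited ranges."""
    lower, upper = bounds[:, 0], bounds[:, 1]
    d = len(lower)
    return {
        "n_particles": int(min(100, max(20, 10 + 2 * d))),  # 10-100 particles
        "c1": 2.05,                          # cognitive coefficient
        "c2": 2.05,                          # social coefficient
        "v_max": v_frac * (upper - lower),   # 10-20% of range per dimension
    }
```

For a 3-D problem on [-5, 5] per dimension, this returns 20 particles, balanced coefficients, and a per-dimension v_max of 1.5.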

Neighborhood Structures

Global Best Topology

In the global best topology of particle swarm optimization (PSO), the swarm forms a fully connected network, often modeled as a complete graph, where every particle has access to information from all other particles. This structure enables centralized communication, allowing the entire population to share a single global best position, denoted as g_{\text{best}}, which represents the optimal solution discovered by any particle thus far. Introduced as the default configuration in the original PSO formulation by Kennedy and Eberhart in 1995, this topology simplifies the social interaction component of the algorithm by aggregating the best personal experiences across the whole swarm. The implementation of the global best topology involves updating g_{\text{best}} iteratively as the position that minimizes the objective function among all particles' personal bests: g_{\text{best}} = \arg\min_{i=1,\dots,N} f(p_{\text{best},i}), where N is the swarm size, f is the objective function, and p_{\text{best},i} is the personal best of the i-th particle. Each particle then adjusts its velocity and position toward both its personal best and this shared g_{\text{best}}, promoting rapid alignment of the swarm's search efforts. This topology excels in propagating information quickly throughout the swarm, which facilitates convergence on unimodal optimization problems where a single global optimum dominates the search space. For instance, in such landscapes, the unified attraction to g_{\text{best}} efficiently guides particles toward the solution without unnecessary exploration. However, the global best topology's emphasis on collective consensus can lead to a rapid loss of population diversity, making it susceptible to premature convergence in problems with multiple local optima. In these scenarios, the swarm may cluster around a suboptimal region early on, as the dominant pull of a single g_{\text{best}} stifles broader exploration.
Unlike local topologies that maintain decentralized neighborhoods for enhanced robustness, the global variant prioritizes speed over sustained diversity.

Local and Ring Topologies

Local topologies in particle swarm optimization (PSO) represent decentralized neighborhood structures that limit information sharing among particles to a subset of nearby individuals, promoting exploration and diversity within the swarm. Unlike the global best topology, where every particle has access to the overall best found by the population, local topologies define neighbors based on predefined connections, such as linear or grid arrangements, to mitigate premature convergence in complex search spaces. This approach draws from social network models, where influence propagates gradually through limited interactions, enhancing the algorithm's ability to navigate multimodal landscapes. The ring topology is a fundamental local structure in which particles are organized in a cyclic arrangement, with each particle connected exclusively to its two immediate neighbors—one to the left and one to the right—forming a closed loop. In this setup, the local best position (lbest_i) for particle i is determined as the best-performing position among itself and its two neighbors, guiding the particle's movement toward promising regional solutions rather than a single global attractor. This topology, often implemented with a neighborhood size k=2, fosters a unidirectional or bidirectional flow of information around the ring, which helps maintain population diversity by slowing the spread of superior solutions and preventing the swarm from collapsing into suboptimal areas too early. The ring structure was explored as an effective means to balance exploration and exploitation in early modifications to the PSO algorithm. The von Neumann topology extends the local paradigm by arranging particles in a two-dimensional grid, where each particle interacts with a fixed number of adjacent neighbors, typically four (up, down, left, and right), mimicking the von Neumann neighborhood in cellular automata.
This grid-like connectivity provides moderate information sharing, with lbest_i selected from the particle and its adjacent neighbors, allowing for more robust local guidance than the linear ring while still restricting global propagation. Neighborhood sizes in von Neumann setups commonly range from k=4 to k=8, depending on whether diagonal connections are included, influencing the trade-off between convergence speed and stagnation avoidance; smaller k values emphasize exploration, while larger ones accelerate convergence. Empirical studies have shown this topology to perform well in maintaining swarm adaptability across problem dimensions. In local topologies, the velocity update rule is adapted by replacing the global best (gbest) with the particle-specific lbest_i in the social component of the equation: \mathbf{v}_i(t+1) = w \mathbf{v}_i(t) + c_1 r_1 (\mathbf{pbest}_i - \mathbf{x}_i(t)) + c_2 r_2 (\mathbf{lbest}_i - \mathbf{x}_i(t)) where w is the inertia weight, c_1 and c_2 are acceleration coefficients, and r_1, r_2 are random values in [0,1]. This modification ensures that each particle is influenced primarily by local successes, reducing the risk of collective entrapment. Typical neighbor sizes k range from 2 to 5, with smaller values like k=2 in ring topologies favoring prolonged exploration and larger ones in von Neumann structures promoting faster local convergence without full swarm synchronization. Empirical comparisons demonstrate that local and ring topologies exhibit slower convergence rates compared to global structures but yield superior performance on multimodal benchmark functions, such as the Griewank function, where they achieve higher success rates in locating multiple optima by preserving diversity. For instance, in high-dimensional tests, ring topologies with k=2 have been observed to avoid stagnation more effectively, resulting in better average fitness scores over repeated runs, though at the cost of increased computational iterations.
Von Neumann topologies similarly enhance multimodal handling, with empirical studies showing improved success rates on benchmark functions relative to fully connected alternatives. These findings underscore the role of limited connectivity in enhancing PSO's robustness for real-world optimization challenges involving rugged fitness surfaces.
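The ring neighborhood's lbest computation can be sketched as follows (the helper name is illustrative; k neighbors per side, so k=1 gives the classic two-neighbor ring, and widening the neighborhood to span the whole swarm recovers the gbest topology):

```python
import numpy as np

def ring_lbest(pbest, pbest_f, k=1):
    """Local best for each particle over itself and k neighbors per side.

    Particles are placed on a cycle by index; returns the lbest position
    for every particle under a minimization convention.
    """
    n = len(pbest_f)
    lbest = np.empty_like(pbest)
    for i in range(n):
        idx = [(i + j) % n for j in range(-k, k + 1)]   # cyclic neighborhood
        best = min(idx, key=lambda j: pbest_f[j])
        lbest[i] = pbest[best]
    return lbest
```

The resulting lbest array substitutes for gbest in the social term of the velocity update shown above.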

Theoretical Foundations

Convergence Properties

Particle swarm optimization (PSO) exhibits convergence properties that have been rigorously analyzed through stochastic and deterministic frameworks, revealing conditions under which the algorithm stabilizes and approaches optimal solutions. A key result is Van den Bergh's theorem, which demonstrates convergence of PSO to a local optimum when employing a constricted velocity update mechanism, specifically requiring an inertia weight w < 1 and acceleration coefficients satisfying c_1 + c_2 < 4. This theorem establishes that, under these parameter constraints, the probability of the swarm's best position improving diminishes over time, ensuring eventual stagnation at a stable point with probability one. Stability analysis of PSO relies on eigenvalue examination of the velocity and position update equations, treating the swarm as a linear dynamical system. For damped oscillatory behavior leading to convergence, the parameters must ensure all eigenvalues have magnitudes strictly less than 1, preventing divergence or perpetual oscillation; this is achieved in the constricted model when the sum of acceleration coefficients exceeds 4, allowing a suitable constriction factor to dampen velocities appropriately. Eigenvalue magnitudes greater than or equal to 1 result in explosive growth or undamped cycles, underscoring the need for careful parameter selection within the stability region. Regarding trajectory convergence, particles in PSO move toward the global best position (gbest) via weighted combinations of their personal best and the swarm's best, forming a directional pull that aligns individual trajectories with the collective optimum. Probabilistic guarantees exist for the swarm reaching this alignment in finite iterations under the stochastic conditions, as the random cognitive and social influences drive the system to a bounded equilibrium state.
The inertia weight w significantly influences this process: a decreasing w over iterations promotes enhanced final accuracy by progressively reducing momentum and enabling precise local search, whereas a fixed w can sustain oscillations around the optimum due to insufficient damping. Despite these results, PSO lacks a general proof of global convergence, relying instead on empirical performance for escaping local optima, and it remains susceptible to trapping in suboptimal solutions within non-convex landscapes where multiple attractors exist. Global best topologies tend to accelerate convergence rates compared to local variants by propagating improvements swarm-wide more rapidly.

Adaptive and Self-Tuning Mechanisms

Adaptive and self-tuning mechanisms in particle swarm optimization (PSO) enable dynamic adjustments to key parameters and neighborhood structures during the optimization process, enhancing the algorithm's ability to balance exploration and exploitation while mitigating issues like premature convergence. These mechanisms monitor swarm performance metrics, such as diversity or stagnation, and modify elements like the inertia weight or neighborhood topologies at runtime to adapt to the problem landscape. Unlike static parameter settings, adaptive approaches respond to runtime conditions, improving robustness across diverse optimization scenarios. One prominent adaptive strategy involves adjusting the inertia weight based on population diversity to maintain search breadth. For instance, the weight w can be increased when clustering is high to promote global search and decreased as clustering diminishes to focus on exploitation near promising regions. Proposed in early adaptive PSO variants, this diversity-guided adaptation has been shown to outperform fixed strategies on benchmarks by preserving swarm variability. Fuzzy logic systems provide another self-tuning approach, particularly for acceleration coefficients c_1 and c_2, which control cognitive and social influences. In fuzzy adaptive PSO, inputs such as convergence progress and recent success rates (e.g., improvement in global best fitness) feed into a fuzzy inference system that dynamically tunes c_1 and c_2 to favor individual or collective guidance as needed. Developed by Shi and Eberhart, this method uses linguistic rules to handle uncertainty in parameter selection, resulting in more stable convergence compared to constant values. Evaluations on standard test functions demonstrate that fuzzy-tuned coefficients reduce sensitivity to initial settings and enhance solution quality. Topology adaptation addresses stagnation by dynamically altering neighborhood connections, such as expanding the neighbor radius when no personal best updates occur over a threshold of K iterations.
In small-world PSO variants, a stagnation coefficient—calculated as the average number of stagnant particles—triggers rewiring to introduce long-range links, boosting information flow and escaping local optima. This runtime adjustment mimics evolving social networks, improving global search in complex landscapes. Self-tuning perturbations, like chaos-based velocity modifications, further enhance diversity; for example, chaotic maps perturb velocities of stagnant particles to inject randomness without full reinitialization. Eberhart's contributions to parameter dynamics laid groundwork, but chaos integrations, such as those using logistic maps for perturbation, have shown efficacy in maintaining exploration. Additionally, velocity reinitialization selectively resets velocities of low-performing particles to random values within bounds, restoring diversity when swarm homogeneity is detected. These mechanisms, including Eberhart-inspired perturbations around 2001, promote sustained search vigor. Empirical assessments indicate that adaptive and self-tuning PSO variants achieve superior performance compared to standard PSO, often ranking among top evolutionary algorithms. However, these enhancements introduce computational overhead from monitoring and adjustment logic, trading simplicity for improved reliability in non-stationary or deceptive problems.
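A minimal diversity-guided inertia adjustment can be sketched as follows; the diversity measure (mean distance to the swarm centroid), the step size, and the thresholds are illustrative tuning choices, not a specific published scheme:

```python
import numpy as np

def swarm_diversity(x):
    """Mean distance of particles from the swarm centroid."""
    centroid = x.mean(axis=0)
    return float(np.linalg.norm(x - centroid, axis=1).mean())

def adapt_inertia(w, diversity, d_low, d_high, step=0.05,
                  w_min=0.4, w_max=0.9):
    """Raise w when the swarm has clustered (low diversity) to reinvigorate
    exploration; lower it when spread is ample to encourage refinement."""
    if diversity < d_low:
        w = min(w_max, w + step)
    elif diversity > d_high:
        w = max(w_min, w - step)
    return w
```

Called once per iteration, this keeps w inside [w_min, w_max] while steering it against the swarm's current degree of clustering.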

Variants and Extensions

Hybridization with Other Techniques

Particle swarm optimization (PSO) has been hybridized with genetic algorithms (GAs) to combine PSO's efficient continuous search capabilities with GA's crossover and mutation operators, which are particularly effective for handling discrete aspects of optimization problems. In one early framework, PSO guides the population towards promising regions while GA operations introduce diversity through recombination, enhancing overall exploration in mixed continuous-discrete spaces. This approach was demonstrated in the optimization of a profiled corrugated horn antenna, where the hybrid outperformed standalone PSO and GA in terms of solution quality and convergence speed on test instances. Hybrids of PSO and neural networks typically employ PSO to optimize network weights and biases, addressing the limitations of gradient-based methods like getting trapped in local minima. By treating network parameters as particle positions and using fitness based on error minimization, PSO provides a derivative-free alternative that promotes global search across the high-dimensional parameter space. A notable application in 2006 involved an optimized PSO variant (OPSO) for training neural networks to predict blood-brain barrier permeation, achieving superior performance compared to standard PSO on pharmaceutical datasets. These hybrids have shown improved training efficiency and accuracy in prediction tasks, avoiding local minima through PSO's stochastic updates. Local search hybrids, such as memetic PSO, integrate hill-climbing or neighborhood search operators into PSO iterations to boost exploitation around promising solutions identified by the swarm. In memetic frameworks, particles undergo periodic local refinements after velocity updates, balancing PSO's broad exploration with intensified local optimization to escape premature convergence. An early memetic PSO scheme, building on earlier adaptations, applied local search to refine particle positions, demonstrating enhanced convergence on combinatorial problems like scheduling. This integration has proven effective in improving solution quality without significantly increasing computational overhead.
Hybrids with differential evolution (DE) incorporate DE's mutation and crossover strategies to perturb PSO velocities, injecting directed diversity to maintain population spread and enhance global search in multimodal landscapes. In the DEPSO algorithm, DE operators are selectively applied to update particle velocities, combining PSO's social learning with DE's perturbations for better handling of stagnation. Introduced in 2003, this hybrid showed superior results on standard function optimization benchmarks, achieving lower error rates than pure PSO or DE on problems such as the Rastrigin and Griewank functions. Overall, these hybridization strategies yield benefits such as enhanced global exploration and reduced susceptibility to local optima; representative examples include improved tour lengths on traveling salesman problem (TSP) instances via PSO-GA combinations, where hybrids showed significant improvements over standalone methods on TSPLIB benchmarks. In function optimization, hybrids like DEPSO have demonstrated faster convergence and higher success rates in locating global minima than the individual algorithms. These integrations leverage complementary strengths, making PSO more robust for complex, real-world optimization challenges.
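A minimal sketch of a DEPSO-style hybrid follows, alternating a standard PSO step with a DE/rand/1/bin perturbation of the personal bests. The alternation schedule, the sphere test function, and the parameter values are illustrative assumptions, not the published DEPSO algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    return float(np.sum(x ** 2))

def de_trial(pbest, i, F=0.5, CR=0.9):
    """DE/rand/1/bin trial vector built from three other personal bests."""
    n, dim = pbest.shape
    a, b, c = rng.choice([k for k in range(n) if k != i], 3, replace=False)
    mutant = pbest[a] + F * (pbest[b] - pbest[c])
    cross = rng.random(dim) < CR
    cross[rng.integers(dim)] = True   # guarantee at least one mutant gene
    return np.where(cross, mutant, pbest[i])

# Hybrid loop: PSO update on even iterations, DE perturbation on odd ones.
n, dim = 20, 5
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
pbest_f = np.array([sphere(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()
w, c1, c2 = 0.7, 1.5, 1.5

for t in range(200):
    if t % 2 == 0:  # standard PSO step
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        cand = pos
    else:           # DE step perturbs personal bests to fight stagnation
        cand = np.array([de_trial(pbest, i) for i in range(n)])
    f = np.array([sphere(x) for x in cand])
    better = f < pbest_f
    pbest[better], pbest_f[better] = cand[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print(sphere(gbest))
```

The DE step only replaces a personal best when the trial vector improves it, so the perturbation adds diversity without ever discarding progress.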

Strategies to Avoid Premature Convergence

Standard particle swarm optimization (PSO) is prone to premature convergence, where the swarm stagnates around local optima due to loss of diversity, as analyzed in studies of the algorithm's convergence properties. One key strategy for diversity preservation in PSO involves monitoring the swarm's variance and applying random perturbations to particle positions when it falls below a predefined threshold. This approach ensures that particles do not cluster too closely, maintaining exploration capability throughout the search process. For instance, if the variance in positions or velocities drops, a random noise term is added to the velocity update to reinject diversity, preventing the swarm from collapsing into suboptimal regions. This technique has been shown to enhance global search in multimodal landscapes by dynamically balancing exploration and exploitation.

Repulsion mechanisms address premature convergence by temporarily directing particles away from the global best position (gbest) to avoid overcrowding around potentially suboptimal local optima. In the niching particle swarm optimizer proposed by Brits, Engelbrecht, and van den Bergh in 2002, subpopulations are formed around personal bests, and particles within a close radius experience a repulsive force from the gbest, effectively creating temporary avoidance zones. This niching approach promotes the discovery of multiple optima by isolating promising regions and has demonstrated improved performance on deceptive multimodal functions. A related variant, the guaranteed convergence PSO (GCPSO) by van den Bergh and Engelbrecht (2003), incorporates a dedicated "informer" particle that excludes gbest influence in its update, ensuring at least one particle explores independently to guarantee eventual convergence to a local optimum while mitigating stagnation.

Subpopulation division strategies split the swarm into multiple independent groups that evolve separately and merge periodically to share information, thereby preserving overall diversity. The multi-swarm PSO introduced by Blackwell and Branke employs several small, independent swarms, each with its own local best, that periodically exchange particles or best positions to reinvigorate the search.
This periodic merging prevents any single subpopulation from dominating and converging prematurely, allowing the algorithm to track multiple attractors in the search space. Such multi-swarm structures have proven effective in dynamic and multimodal environments by sustaining exploration across disjoint search areas.

Variations in velocity clamping, such as an adaptive maximum velocity (v_max) based on the swarm's success rate, further help avoid premature convergence by adjusting exploration bounds dynamically. In this method, v_max is increased if the success rate, defined as the proportion of particles improving their personal bests in recent iterations, falls below a threshold, allowing larger steps to escape local traps; conversely, it is decreased during successful phases to refine the search. This adaptive clamping maintains velocity diversity without fixed limits that might overly restrict movement. Seminal work on parameter selection, including velocity limits, highlights how such success-rate-based adjustments improve reliability on complex problems.

These strategies collectively enhance PSO's robustness on deceptive multimodal functions, such as the Schaffer benchmark, where standard PSO often fails to locate the global optimum. For example, niching and multi-swarm variants have achieved higher success rates in identifying the global minimum than basic PSO, as evidenced by reduced mean errors and increased diversity metrics in empirical tests. Overall, these pure PSO modifications prioritize diversity maintenance to escape local optima without relying on external hybridization.
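The variance-monitoring strategy described above can be sketched directly: measure the swarm's positional variance each iteration and, when it collapses below a threshold, reinject diversity by adding Gaussian noise to the velocities. The threshold and noise scale below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def reinject_if_collapsed(pos, vel, threshold=1e-3, noise_scale=0.5):
    """If the mean per-dimension variance of particle positions falls below
    `threshold`, add Gaussian noise to every velocity to restore diversity."""
    diversity = float(pos.var(axis=0).mean())
    if diversity < threshold:
        vel = vel + rng.normal(0.0, noise_scale, vel.shape)
    return vel, diversity

# A collapsed swarm: 10 particles in 3 dimensions, all at nearly the same point.
pos = np.full((10, 3), 1.0) + rng.normal(0.0, 1e-4, (10, 3))
vel = np.zeros((10, 3))
vel, div = reinject_if_collapsed(pos, vel)
print(div, float(np.abs(vel).max()))  # tiny variance, nonzero velocities after reinjection
```

In a full optimizer this check would run once per iteration, between the velocity update and the position update.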

Simplified and Accelerated Forms

The Bare Bones Particle Swarm Optimization (BBPSO), introduced by James Kennedy in 2003, simplifies standard PSO by removing the velocity vector entirely, thereby eliminating the need for inertia weight and acceleration coefficients in the update rules. Instead, each particle's new position is generated by sampling from a Gaussian distribution with mean equal to the midpoint of its personal best position \mathbf{p}_i and the global best position \mathbf{g}, and standard deviation \sigma = \frac{|\mathbf{p}_i - \mathbf{g}|}{\sqrt{2}}. This stochastic update mimics the exploratory behavior of the original velocity-based mechanism while reducing computational overhead and hyperparameters, making it suitable for theoretical analysis of swarm dynamics. In BBPSO, the position update can be expressed as:
\mathbf{x}_i(t+1) \sim \mathcal{N}\left( \frac{\mathbf{p}_i + \mathbf{g}}{2}, \frac{\left| \mathbf{p}_i - \mathbf{g} \right|^2}{2} \right)
where the variance is \sigma^2 = \left( \frac{|\mathbf{p}_i - \mathbf{g}|}{\sqrt{2}} \right)^2. This approach trades the historical velocity memory of standard PSO, which helps maintain momentum, for pure probabilistic sampling, potentially yielding faster convergence in unimodal landscapes but an increased risk of stagnation in multimodal ones. The Accelerated Particle Swarm Optimization (APSO), developed by Xin-She Yang in 2008, further streamlines PSO by fixing the inertia weight at w = 1 and setting both cognitive and social acceleration coefficients to c_1 = c_2 = \frac{\phi}{2}, where \phi = \frac{1 + \sqrt{5}}{2} \approx 1.618 is the golden ratio. This configuration enhances convergence speed on unimodal problems by emphasizing exploitation toward the global best while retaining a simplified update similar to the original PSO rules. No separate velocity clamping (v_{\max}) is required, as the fixed parameters promote rapid movement without additional tuning.
Both BBPSO and APSO reduce the parameter space compared to canonical PSO, which typically involves w, c_1, c_2, and v_{\max}, allowing easier implementation in resource-constrained environments. However, these simplifications can compromise diversity: BBPSO lacks velocity memory from prior iterations, and APSO's fixed coefficients may accelerate convergence toward local optima in complex search spaces. In practice, BBPSO and APSO have been applied in optimization tasks where reduced computational demands outweigh the loss of fine-grained control, such as parameter estimation in engineered systems.
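A BBPSO iteration reduces to the Gaussian sampling step given above; the following sketch applies it to a simple sphere function. The swarm size, iteration count, and test function are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)

def sphere(x):
    return float(np.sum(x ** 2))

def bbpso_step(pbest_i, gbest):
    """Bare-bones update: sample the new position from a Gaussian whose mean
    is the midpoint of pbest and gbest and whose per-dimension standard
    deviation is |pbest - gbest| / sqrt(2)."""
    mean = (pbest_i + gbest) / 2.0
    std = np.abs(pbest_i - gbest) / np.sqrt(2.0)
    return rng.normal(mean, std)

n, dim = 25, 4
pos = rng.uniform(-5, 5, (n, dim))
pbest = pos.copy()
pbest_f = np.array([sphere(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(150):
    # No velocities anywhere: the whole update is one Gaussian draw per particle.
    pos = np.array([bbpso_step(pbest[i], gbest) for i in range(n)])
    f = np.array([sphere(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print(sphere(gbest))
```

Note that the particle holding the global best has zero sampling spread (its pbest equals gbest), which illustrates the stagnation risk discussed above.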

Multi-Objective and Constrained Optimization

Particle swarm optimization (PSO) has been extended to multi-objective problems, where multiple conflicting objectives must be optimized simultaneously, yielding a set of trade-off solutions known as the Pareto front. A seminal approach, Multi-Objective Particle Swarm Optimization (MOPSO), was proposed by Coello Coello, Pulido, and Lechuga in 2002; it incorporates Pareto dominance into the PSO framework to identify non-dominated solutions. In MOPSO, an external archive stores the non-dominated solutions encountered during the search, serving as a repository of potential Pareto-optimal candidates that guide the swarm's particles. To maintain diversity in the archive and prevent overcrowding, a crowding mechanism, inspired by the non-dominated sorting genetic algorithm II (NSGA-II), is employed for selecting and pruning solutions, ensuring a well-distributed representation of the Pareto front.

Leader selection in MOPSO plays a crucial role in balancing convergence and diversity. The global best position (gbest) for each particle is selected from the external archive using a roulette wheel selection scheme, where solutions in sparsely populated regions of the objective space receive higher selection probabilities to promote exploration. Additionally, the sigma method, introduced by Mostaghim and Teich in 2003, is integrated to maintain spread by assigning a scalar value to each archived solution based on its position in objective space relative to others, favoring leaders that enhance overall coverage and prevent premature convergence to a subset of the front. This method calculates the sigma value as a measure of a solution's contribution to the front's coverage, guiding particles toward underrepresented areas. For constrained multi-objective optimization, PSO adaptations incorporate constraint-handling techniques to navigate feasible regions while pursuing Pareto optimality.
Feasibility rules, adapted from Deb's constrained optimization principles, prioritize feasible solutions over infeasible ones during dominance comparisons and leader selection, ensuring that only viable particles influence the swarm's direction unless no feasible solutions exist, in which case the least-violating infeasible ones are considered. Dynamic penalty functions are also employed, where the fitness evaluation adjusts penalties based on iteration progress and the degree of constraint violation, gradually increasing pressure toward feasibility as the search advances. An example is the ε-constrained method, proposed by Takahama and Sakai in 2006, which relaxes constraint violations by an ε threshold that decreases over time, allowing the algorithm to treat mildly infeasible solutions as feasible early in the search while progressively enforcing feasibility.

Performance of these MOPSO variants is evaluated using metrics such as hypervolume, which measures the volume dominated by the approximated front relative to a reference point, and spacing, which quantifies the uniformity and extent of the solution distribution along the front. Empirical studies on benchmark test suites like ZDT show that MOPSO often outperforms NSGA-II in hypervolume and spacing, achieving better convergence to known Pareto fronts with fewer function evaluations due to PSO's swarm-based guidance. A notable variant incorporates time-variant inertia and acceleration coefficients to balance exploration and exploitation in multi-objective settings, as proposed by Tripathi, Bandyopadhyay, and Pal in 2007, where the inertia weight decreases nonlinearly over iterations to initially favor diversity across objectives and later promote convergence to the Pareto front. This adaptation enhances MOPSO's ability to handle problems with varying objective landscapes without requiring manual parameter tuning.
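The archive logic shared by MOPSO variants, a Pareto-dominance test plus insertion and pruning of non-dominated solutions, can be sketched as follows for two minimized objectives. A full MOPSO would add crowding-based pruning and leader selection on top of this.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b` (minimization):
    no worse in every objective and strictly better in at least one."""
    return bool(np.all(a <= b) and np.any(a < b))

def update_archive(archive, candidate):
    """Insert `candidate` unless some member dominates it; evict any members
    the candidate dominates, so the archive stays mutually non-dominated."""
    if any(dominates(m, candidate) for m in archive):
        return archive
    kept = [m for m in archive if not dominates(candidate, m)]
    kept.append(candidate)
    return kept

archive = []
for point in [np.array([1.0, 4.0]),
              np.array([2.0, 3.0]),   # mutually non-dominated with the first
              np.array([1.0, 3.0]),   # dominates both earlier entries
              np.array([5.0, 5.0])]:  # dominated, so rejected
    archive = update_archive(archive, point)

print([list(p) for p in archive])  # only [1.0, 3.0] survives
```

In MOPSO this archive is consulted every iteration: each particle's gbest is drawn from it, biased toward sparsely populated regions of objective space.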

Discrete and Combinatorial Adaptations

Particle swarm optimization (PSO), originally designed for continuous search spaces, has been adapted for discrete and combinatorial problems by redefining position and velocity concepts to handle binary strings, permutations, and other non-numeric representations. These adaptations map continuous updates to probabilistic or operator-based changes in discrete domains, enabling PSO to tackle NP-hard problems such as the traveling salesman problem, the knapsack problem, and scheduling.

A prominent discrete adaptation is the binary PSO, introduced by Kennedy and Eberhart in 1997, which represents particle positions as binary vectors in which each dimension holds a bit (0 or 1). In this framework, the continuous velocity vector is interpreted as the probability of setting each bit, using a sigmoid function \sigma(v_{ij}) = \frac{1}{1 + e^{-v_{ij}}} to map velocities to the interval [0, 1]. The new position is determined stochastically: for each bit, a random number r is generated, and the bit is set to 1 if r < \sigma(v_{ij}), otherwise 0. To incorporate cognitive and social influences, some variants update velocities with XOR operations between the current position and the personal best (pbest) or global best (gbest), emphasizing differences that guide bit changes toward promising solutions. This approach allows binary PSO to search binary decision spaces effectively, such as in 0/1 knapsack problems where bits indicate item inclusion.

For permutation-based problems, such as the traveling salesman problem (TSP), discrete PSO variants redefine velocity as sequences of swaps or exchange probabilities rather than additive updates. In these models, a particle's position represents a permutation of elements (e.g., a city visit order), and its velocity is encoded as a prioritized list of swap operations between positions in the permutation. The update process applies these swaps to the current permutation to generate the new one, with the number and selection of swaps influenced by cognitive and social components scaled to discrete actions.
For instance, velocity magnitudes determine the extent of rearrangement, often normalized to a fixed number of swaps per iteration to maintain feasibility. This swap-based mechanism preserves the permutation structure while imitating velocity-driven movement through probabilistic exchanges. A theoretical analysis of such discrete PSO for the TSP demonstrates convergence properties similar to continuous PSO, with particles aggregating around high-quality tours under appropriate parameter settings.

Combinatorial adaptations extend these ideas to problems requiring ordered selections, such as job scheduling, by incorporating rank-based selection or operator-driven updates. In a 2005 discrete PSO for permutation flowshop scheduling, particles represent job sequences, and velocity is modeled as swap sequences that reorder the permutation to minimize makespan. The social and cognitive updates generate swap probabilities based on differences from the pbest and gbest sequences, with exchanges applied to evolve each particle toward better schedules. Additional operators, such as crossover-like mechanisms for the social component, blend segments from gbest into the current position, while velocity is normalized to discrete actions like single or multiple swaps. These methods handle constraints inherent in combinatorial spaces, such as the requirement that permutations contain no repeated elements.

Such discrete and combinatorial PSO variants have been benchmarked on classic NP-hard problems, demonstrating competitiveness with ant colony optimization (ACO) and genetic algorithms (GAs). For the 0/1 knapsack problem, binary PSO achieves near-optimal solutions on instances with up to 500 items, often outperforming GAs in convergence speed due to its probabilistic bit-setting. On TSP benchmarks from TSPLIB, swap-based discrete PSO finds tours within 5-10% of optimal for instances of up to 200 cities, rivaling ACO in solution quality while requiring fewer parameters to tune.
These adaptations thus provide robust alternatives for discrete domains, balancing exploration and exploitation without the need for problem-specific heuristics.
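The binary PSO update described above can be sketched on a small 0/1 knapsack instance; the sigmoid maps each velocity component to the probability that the corresponding bit is 1. The instance data, the zero-fitness penalty for infeasible selections, and the parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# Toy 0/1 knapsack: maximize value subject to a weight budget.
values  = np.array([10, 7, 4, 9, 6])
weights = np.array([ 5, 4, 2, 6, 3])
budget = 10

def fitness(bits):
    total_w = int(weights @ bits)
    return float(values @ bits) if total_w <= budget else 0.0  # infeasible gets zero

n, dim = 20, 5
vel = rng.uniform(-1, 1, (n, dim))
pos = (rng.random((n, dim)) < 0.5).astype(int)
pbest = pos.copy()
pbest_f = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmax()].copy()
w_in, c1, c2 = 0.7, 1.5, 1.5

for _ in range(100):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w_in * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    vel = np.clip(vel, -4, 4)  # keep sigmoid probabilities away from 0 and 1
    # Stochastic position update: bit j becomes 1 with probability sigmoid(v_j).
    pos = (rng.random((n, dim)) < sigmoid(vel)).astype(int)
    f = np.array([fitness(p) for p in pos])
    better = f > pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmax()].copy()

print(list(gbest), fitness(gbest))
```

Clamping velocities keeps every bit flippable, a common practical guard against the swarm freezing into a single bit pattern.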

Applications and Limitations

Real-World Applications

Particle swarm optimization (PSO) has found extensive application in electromagnetics, particularly for optimizing antenna designs. A seminal work demonstrated its effectiveness in synthesizing nonuniform linear antenna arrays by adjusting element positions and excitations to achieve desired radiation patterns while minimizing sidelobe levels, outperforming traditional methods in convergence speed and solution quality. In structural engineering, PSO has been employed for optimization of truss and frame designs, where it minimizes weight subject to stress and displacement constraints; applications to tower structures have shown PSO variants achieving material savings compared to baseline designs.

In machine learning, PSO supports feature selection and clustering by searching high-dimensional spaces for optimal feature subsets that enhance model performance. During the 2010s, kernel-based PSO variants were integrated with support vector machines (SVMs) to tune hyperparameters such as the kernel parameter and regularization constant, improving classification accuracy on biomedical datasets. Similarly, PSO-optimized density-based clustering has addressed limitations of traditional algorithms like DBSCAN, enabling better handling of noise and varying cluster densities in real-world data.

The energy sector leverages PSO for power system dispatch and forecasting. In economic dispatch problems, PSO minimizes generation costs while satisfying demand and emission constraints, with variants showing superior convergence for large-scale grids involving renewables. For wind energy, multi-objective PSO (MOPSO) has been used to optimize farm layouts post-2020, balancing annual energy production against wake effects; a 2023 study on layouts with active yaw control reported 8-12% increases in output over uniform placements. PSO-SVM models have also improved short-term wind power forecasting accuracy on operational datasets from multiple farms.
Recent advancements since 2020 highlight PSO's role in robot path planning, where improved variants generate collision-free trajectories in dynamic environments; for example, bi-population PSO with perturbation strategies has reduced path lengths in simulated navigation compared to standard A* algorithms. During the COVID-19 pandemic, PSO adaptations optimized logistics, such as vaccine distribution and emergency supply routing; surveys noted PSO's use in multi-objective models for allocation and assignment problems, achieving better equity across regions than conventional approaches. Overall, comparative studies often show PSO's advantages in computational efficiency over genetic algorithms (GAs) on real datasets across these domains, attributed to its fewer parameters and faster convergence, as evidenced in benchmarks on routing and timetabling optimization. This scalability to high-dimensional problems underscores its practical impact.

Challenges and Limitations

One of the primary challenges in particle swarm optimization (PSO) is premature convergence, where the swarm rapidly loses diversity and stagnates at a local optimum rather than exploring the solution space, particularly in high-dimensional or multimodal problems. This arises from the particles' tendency to cluster around promising but suboptimal points, leading to a significant reduction in search diversity as the algorithm progresses. Studies indicate that this issue is prevalent in standard PSO implementations, often occurring in complex landscapes where the initial swarm positioning and velocity updates fail to maintain exploration.

PSO's performance is highly sensitive to parameter tuning, including the inertia weight w and the cognitive and social coefficients c_1 and c_2; suboptimal choices degrade convergence speed and solution quality. Research shows that inappropriate values can cause erratic particle movement or excessive exploitation, and no universal parameter set performs consistently across diverse problem types. Fixed parameters often yield inconsistent results, necessitating problem-specific adjustments to balance exploration and exploitation effectively.

Scalability poses another limitation: PSO's computational complexity is O(N \cdot D) per iteration, where N is the number of particles and D is the problem dimensionality, making it expensive for large-scale optimizations. In practice, swarm sizes are typically kept modest (commonly 20-50 particles) to manage computational costs, and performance deteriorates in dimensions exceeding 50 due to the curse of dimensionality, where the search-space volume explodes and the risk of missing global optima grows. Due to its heuristic nature, PSO lacks formal convergence guarantees and can fail in noisy or dynamic environments without modifications, as stochastic updates do not ensure optimality or robustness to perturbations.
Comparative analyses from the 2020s indicate that PSO underperforms Bayesian optimization on expensive black-box functions, where the latter's surrogate modeling achieves high-quality solutions with far fewer evaluations. Additionally, in applications such as fairness tuning, PSO's heuristic decisions may perpetuate biases if not carefully adapted, raising ethical concerns about equitable outcomes in sensitive domains.

References

  1. [1]
  2. [2]
    None
    ### Summary of Particle Swarm Optimization (PSO) Fundamentals
  3. [3]
    [PDF] A new optimizer using particle swarm theory - Semantic Scholar
    A new optimizer using particle swarm theory · R. Eberhart, J. Kennedy · Published in MHS'95. Proceedings of the… 4 October 1995 · Computer Science, Engineering ...
  4. [4]
    The particle swarm optimization algorithm: convergence analysis ...
    The particle swarm optimization algorithm is analyzed using standard results from the dynamic system theory. Graphical parameter selection guidelines are ...
  5. [5]
    A quarter century of particle swarm optimization
    Apr 4, 2018 · In this paper, the historical development, the state-of-the-art, and the applications of the PSO algorithms are reviewed. In addition, the ...
  6. [6]
    Particle Swarm Optimization-Based Hyperparameters Tuning of ...
    The main objective of this research is to develop and evaluate optimized ML models for predicting COVID-19 risk by identifying the optimal hyperparameters that ...
  7. [7]
    Welcome to PySwarms's documentation! — PySwarms 1.3.0 ...
    PySwarms is an extensible research toolkit for particle swarm optimization (PSO) in Python. It is intended for swarm intelligence researchers, practitioners, ...Understanding the PySwarms... · Basic Optimization · Pyswarms.single package
  8. [8]
    Particle Swarm Optimization: A Survey of Historical and Recent ...
    This paper serves to provide a thorough survey of the PSO algorithm with special emphasis on the development, deployment, and improvements of its most basic<|separator|>
  9. [9]
    (PDF) Particle Swarm Optimization - ResearchGate
    The algorithm and its concept of "Particle Swarm Optimization"(PSO) were introduced by James Kennedy and Russel Ebhart in 1995 [4]. However, its origins go ...
  10. [10]
    Particle Swarm Optimization Algorithm and Its Applications
    Apr 19, 2022 · These agents (swarm individuals or insects) are relatively gullible with simple own capabilities. ... animals like fish schooling and bird ...
  11. [11]
    Population size in Particle Swarm Optimization - ScienceDirect.com
    While 20-50 particles is common, studies show that 70-500 particles often yields better performance in PSO, and larger sizes may be needed for difficult ...
  12. [12]
    [PDF] Particle Swarm Optimization: Velocity Initialization - Zenodo
    Particle positions are initialized to random positions within the domain of the optimization problem, and such that the search space is uniformly covered. ...
  13. [13]
    A modified particle swarm optimizer | IEEE Conference Publication
    Eberhart, 1995; J. Kennedy, 1997). As in other algorithms, a population of individuals exists. This algorithm is called particle swarm optimization (PSO) since ...
  14. [14]
    [PDF] Boundary Handling Approaches in Particle Swarm Optimization
    Jul 28, 2012 · This paper attempts to review popular bound handling methods, in context to PSO, and proposes new methods which are found to be robust and ...
  15. [15]
    (PDF) A Modified Particle Swarm Optimizer - ResearchGate
    Shi and Eberhart [32] introduced an inertia weight parameter ω to the basic PSO algorithm for controlling particle movement momentum, as shown in Eq. (3) ...
  16. [16]
    [PDF] Inertia Weight Strategies in Particle Swarm Optimization
    In 1998, first time Shi and Eberhart [2] presented the concept of Inertia. Weight by introducing Constant Inertia Weight. They stated that a large Inertia ...
  17. [17]
  18. [18]
    [PDF] Empirical Study of Particle Swarm Optimization - Yuhui Shi
    By introducing a linearly decreasing inertia weight into the original version of PSO, the performance of PSO has been greatly improved through experimental ...
  19. [19]
    Limiting the Velocity in the Particle Swarm Optimization Algorithm
    1 Introduction. The Particle Swarm Optimization (PSO) algorithm was originally proposed by Kennedy and Eberhart in the mid-1990s,. · 2 Velocity Regulation · 3 ...
  20. [20]
    (PDF) Particle Swarm Optimization: An Overview - ResearchGate
    Aug 10, 2025 · ArticlePDF Available. Particle Swarm Optimization: An Overview. October 2007; Swarm Intelligence 1(1). DOI:10.1007/s11721-007-0002-0. Source ...
  21. [21]
    [PDF] Particle swarm optimization | Semantic Scholar
    Particle swarm optimization · James Kennedy, Russell Eberhart · Published in International Conference on… 6 August 2002 · Computer Science, Mathematics.Missing: original | Show results with:original
  22. [22]
  23. [23]
    Particle Swarm Optimization Using a Ring Topology - ResearchGate
    Aug 9, 2025 · This paper describes a simple yet effective niching algorithm, a particle swarm optimization (PSO) algorithm using a ring neighborhood topology, ...
  24. [24]
    [PDF] Tournament Topology Particle Swarm Optimization
    In the “ring” topology, which favors exploration, each particle communicates with its two adjacent particles in index order [14]. In the “von Neumann” topology, ...Missing: seminal | Show results with:seminal
  25. [25]
    An Investigation of Particle Swarm Optimization Topologies ... - MDPI
    Jun 1, 2021 · In this article, we conduct a performance investigation of eight PSO topologies in SDD. The success rate and mean iterations that are obtained from the ...Missing: seminal | Show results with:seminal
  26. [26]
    Topology selection for particle swarm optimization - ScienceDirect.com
    Oct 1, 2016 · In the lbest topology, each particle is only connected with its nearest K neighbors. In most publications and in this paper, K = 2 is used ...
  27. [27]
    (PDF) A Convergence Proof for the Particle Swarm Optimiser
    Aug 5, 2025 · This paper provides a proof to show that the original PSO does not have guaranteed convergence to a local optimum.
  28. [28]
    Particle Swarm: Explosion, Stability, Convergence in Complex Space
    The particle swarm - explosion, stability, and convergence in a multidimensional complex space. Abstract: The particle swarm is an algorithm for finding optimal ...
  29. [29]
    Population Diversity Based Inertia Weight Adaptation in Particle ...
    In this paper, we propose two new inertia weight adaptation strategies in Particle Swarm Optimization (PSO). The two inertia weight adaptation strategies ...
  30. [30]
  31. [31]
    [PDF] Small-World Particle Swarm Optimization with Topology Adaptation
    The proposed topology adaptation mechanism of ASWPSO is described as follows. Define a stagnation coefficient Sc as. 1. N i i s N.
  32. [32]
    Chaos embedded particle swarm optimization algorithms
    May 30, 2009 · This paper proposes new particle swarm optimization (PSO) methods that use chaotic maps for parameter adaptation.
  33. [33]
    Balancing Exploitation and Exploration in Particle Swarm Optimization
    Aug 9, 2025 · In this paper, we propose a new method to extend PSO, velocity-based reinitialization (VBR). VBR is both simple to implement and effective at enhancing many ...
  34. [34]
    Empirical analysis and improvement of the PSO-sono optimization ...
    According to the results reported in [20], PSO-sono generally outperforms other popular PSO-variants on many benchmark test sets (namely, IEEE CEC 2013, 2014 ...
  35. [35]
  36. [36]
    Optimized Particle Swarm Optimization (OPSO) and its application to ...
    Mar 10, 2006 · In this experiment, we decided not to use a restriction constant for the maximum velocity V max . ... We also employed the V max constant for our ...
  37. [37]
    Memetic particle swarm optimization | Annals of Operations Research
    Aug 9, 2007 · Particle swarm optimization for integer programming. In Proceedings of the IEEE 2002 congress on evolutionary computation (pp. 1576–1581).
  38. [38]
  39. [39]
    [PDF] A Convergence Proof for the Particle Swarm Optimiser
    Abstract. The Particle Swarm Optimiser (PSO) is a population based stochastic optimisation algo- rithm, empirically shown to be efficient and robust.
  40. [40]
    Particle Swarm Optimization Algorithm with Random Perturbation ...
    Particle Swarm Optimization Algorithm with Random Perturbation around Convergence Center. Trans Tech Publications Ltd. Advanced Materials Research. March 2012 ...
  41. [41]
    [PDF] A-niching-particle-swarm-optimizer.pdf - ResearchGate
    A NICHING PARTICLE SWARM OPTIMIZER. R.Brits, A.P. Engelbrecht, F. van den Bergh. Department of Computer Science, University of Pretoria, Pretoria, South ...
  42. [42]
    Multi-swarm Optimization in Dynamic Environments - SpringerLink
    In this paper, we present new variants of Particle Swarm Optimization (PSO) specifically designed to work well in dynamic environments.
  43. [43]
    (PDF) A niching particle swarm optimizer - ResearchGate
    Brits et al. [13] proposed a niching particle swarm optimizer (NichePSO). In NichePSO, the main swarm generates multiple subpopulations to find optimal ...
  44. [44]
    Bare bones particle swarms | IEEE Conference Publication
    This paper strips away some traditional features of the particle swarm in the search for the properties that make it work.
  45. [45]
    Bare bones particle swarms - ResearchGate
    Barebones is another version of the PSO algorithm that was proposed by Kennedy in 2003 [26] . In this variant, the velocity and position of particles are ...
  46. [46]
    Accelerated Particle Swarm Optimization Algorithms Coupled with ...
    Apr 2, 2023 · A Feature Paper should be a substantial original Article ... Yang developed the accelerated particle swarm optimization (APSO) algorithm [28].
  47. [47]
    Heterogeneous Cooperative Bare-Bones Particle Swarm ... - MDPI
    This paper proposes a novel Bare-Bones Particle Swarm Optimization (BBPSO) algorithm for solving high-dimensional problems.
  48. [48]
    On the performance of accelerated particle swarm optimization for ...
    In this paper, a newly emerged Accelerated particle swarm optimization (APSO) technique was applied and compared with standard particle swarm optimization (PSO)
  49. [49]
  50. [50]
  51. [51]
  52. [52]
    Multi-Objective Particle Swarm Optimization with time variant inertia ...
    Nov 15, 2007 · In this article we describe a novel Particle Swarm Optimization (PSO) approach to multi-objective optimization (MOO), called Time Variant Multi-Objective ...
  53. [53]
    A discrete binary version of the particle swarm algorithm - IEEE Xplore
    Abstract: The particle swarm algorithm adjusts the trajectories of a population of "particles" through a problem space on the basis of information about ...
  54. [54]
    Discrete Particle Swarm Optimization for TSP - SpringerLink
    Particle swarm optimization (PSO) is a nature-inspired technique originally designed for solving continuous optimization problems.
  55. [55]
    Discrete Particle Swarm Optimization (DPSO) Algorithm for ...
    In this paper a discrete particle swarm optimization (DPSO) algorithm is proposed to solve permutation flowshop scheduling problems with the objective of ...
  56. [56]
    Biomedical classification application and parameters optimization of ...
    Oct 25, 2016 · Compared with PSO-RBF kernel and PSO-mixed kernel, the improved PSO-mixed kernel SVM can effectively improve the classification accuracy through ...Proposed Algorithm · Kernel Function Selection · The Particle Swarm...<|separator|>
  57. [57]
    Particle swarm Optimized Density-based Clustering and Classification
    In this paper a novel Particle swarm Optimized Density-based Clustering and Classification (PODCC) is proposed, designed to offset the drawbacks of DBSCAN.
  58. [58]
  59. [59]
    Particle swarm optimization of a wind farm layout with active control ...
    In the present method, a farm layout is globally optimized with simultaneous consideration of yaw angles for various wind speeds and directions.Missing: MOPSO | Show results with:MOPSO
  60. [60]
    [PDF] New PSO-SVM Short-Term Wind Power Forecasting Algorithm ...
    Sep 20, 2022 · Power Prediction Tool (WPPT) wind power forecasting system, which could forecast multiple wind farms and regional wind farms. e DTU School ...
  61. [61]
    Mobile robot path planning based on bi-population particle swarm ...
    This paper proposes a bi-population PSO algorithm with a random perturbation strategy (BPPSO), which divides particles into two subpopulations.
  62. [62]
    Differential evolution and particle swarm optimization against COVID ...
    Aug 19, 2021 · In this paper a survey of DE and PSO applications for problems related to the COVID-19 pandemic that were rapidly published in 2020 is presented.
  63. [63]
    Performance Comparison between Particle Swarm Optimization and ...
    This study focuses on a postman delivery routing problem of the Chiang Rai post office, located in the Chiang Rai province of Thailand.
  64. [64]
    Performance comparison of genetic algorithms and particle swarm ...
    Aug 7, 2025 · Performance comparison of genetic algorithms and particle swarm optimization for model integer programming bus timetabling problem.
  65. [65]
    [PDF] PARTICLE SWARM OPTIMIZATION BASED ALGORITHMS TO ...
    Apr 24, 2014 · Particle Swarm Optimization (PSO) is a biologically inspired computational search and optimization method.
  66. [66]
    Parameter selection in particle swarm optimization - SpringerLink
    Dec 10, 2005 · This paper first analyzes the impact that inertia weight and maximum velocity have on the performance of the particle swarm optimizer, and then provides ...
  67. [67]
    Measuring the curse of dimensionality and its effects on particle ...
    Aug 7, 2025 · The divergence of these growth rates has important effects on the parameters used in particle swarm optimization and differential evolution as ...
  68. [68]
    [PDF] arXiv:2201.06809v2 [physics.data-an] 13 Oct 2023
    Oct 13, 2023 · In this paper, we compare the performance of two algorithms, particle swarm optimisation (PSO) and Bayesian optimisation (BO), for the ...
  69. [69]
    Fairness optimisation with multi-objective swarms for explainable ...
    Apr 3, 2024 · FOMOS has demonstrated an innovative application of swarm intelligence to solve challenges in fairness-aware methods in streaming data.