
Variable neighborhood search

Variable neighborhood search (VNS) is a metaheuristic framework for solving combinatorial and global optimization problems, particularly NP-hard ones where exact methods are computationally infeasible, by systematically varying neighborhood structures within a local search to escape local optima and explore the solution space more effectively. Introduced by Nenad Mladenović and Pierre Hansen in 1997, VNS builds on the observations that a problem's local minima differ depending on the chosen neighborhood structure, that the global minimum is a local minimum with respect to every possible neighborhood, and that local minima across neighborhoods tend to be near one another, providing pathways to better solutions. The method integrates a "shaking" phase, where random perturbations are applied using progressively larger neighborhoods to diversify the search, with a local search phase to intensify exploration around promising areas. At its core, the basic VNS involves selecting a set of neighborhood structures N_k, initializing a solution x, and iterating until a stopping criterion is met: starting with the smallest neighborhood (k=1), generate a random neighbor x' in N_k(x), apply local search to obtain a local optimum x'', and if x'' improves upon x, set x to x'' and reset k to 1; otherwise, increment k to explore larger neighborhoods. Variants include variable neighborhood descent (VND), a deterministic scheme using steepest- or first-improvement descent across neighborhoods without shaking, and reduced VNS (RVNS), which skips the full local search after shaking for faster exploration. VNS has been applied successfully to diverse problems, such as the traveling salesman problem, scheduling of workover rigs in oil production (achieving up to 109 m³ daily reductions in unproduced oil), and weighted maximum satisfiability, often outperforming other metaheuristics like tabu search or genetic algorithms in terms of solution quality and computational efficiency. Its simplicity allows easy integration with existing local search methods, and extensions like general VNS (GVNS) incorporate advanced shaking and local search strategies for even better performance on complex instances.

Introduction and Background

Overview

Variable neighborhood search (VNS) is a metaheuristic framework designed for constructing effective heuristics to solve combinatorial and global optimization problems by systematically varying neighborhood structures during the search process. It builds upon local search methods, which are prerequisite components that iteratively improve solutions within a defined neighborhood until a local optimum is reached. The core idea of VNS is to employ multiple neighborhood structures, each defining a set of neighboring solutions based on a distance or move operator in the solution space, to systematically explore the solution landscape and escape local optima. By changing neighborhoods, VNS allows the search to move to more distant solutions when improvements are not found in closer ones, thereby diversifying the exploration while focusing on promising areas. VNS offers key benefits, including simplicity of implementation, as it requires only a basic local search subroutine and a set of neighborhood structures, along with few parameters to tune. This framework is particularly effective for finding high-quality solutions quickly in complex optimization problems, often outperforming traditional local search by enhancing solution quality without excessive computational overhead. A brief outline of the general VNS scheme involves two main phases repeated until a stopping condition is met:
Initialize incumbent solution x and neighborhood index k = 1
While stopping condition not met:
    Shaking: Generate a random solution x' in neighborhood N_k(x)
    Local search: Apply local search to x' to obtain local optimum x''
    If x'' improves upon x:
        Set x = x'' and k = 1
    Else:
        Set k = k + 1 (or select next k)
This highlights the shaking phase, which perturbs the incumbent to a new solution in the current neighborhood, and the local search phase, which refines it toward a local optimum.
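
The outline above translates almost directly into code. The following Python sketch is a minimal illustration, assuming user-supplied f (objective to minimize), shake(x, k) (random point in N_k(x)), and local_search(x) callables; all names are illustrative rather than part of any standard library:

def basic_vns(f, shake, local_search, x0, k_max, max_iters=1000):
    """Minimal basic VNS sketch for minimization."""
    x = x0
    for _ in range(max_iters):        # stopping condition: iteration budget
        k = 1
        while k <= k_max:
            y = shake(x, k)           # shaking: random solution in N_k(x)
            x2 = local_search(y)      # intensify around the perturbed point
            if f(x2) < f(x):          # move or not
                x, k = x2, 1          # improvement: recenter and reset k
            else:
                k += 1                # no improvement: enlarge neighborhood
    return x

Here k_max bounds the largest neighborhood, and the iteration budget stands in for any stopping criterion.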

History

Variable neighborhood search (VNS) was proposed by Nenad Mladenović and Pierre Hansen in 1997 as a framework for solving combinatorial and global optimization problems, addressing limitations in traditional local search methods by systematically varying neighborhood structures to escape local optima. The initial formulation emphasized the simplicity and effectiveness of neighborhood changes within local search to improve solution quality without requiring complex diversification mechanisms. The foundational publication appeared in 1997 in Computers & Operations Research, where Mladenović and Hansen introduced the basic VNS and demonstrated its application to problems such as the traveling salesman and p-median problems. Key milestones followed rapidly: in 1999, Hansen and Mladenović provided an introduction to VNS in a chapter on advances in metaheuristics. By 2003, they expanded the framework with more sophisticated variants integrating advanced descent and perturbation strategies, as detailed in their chapter in the Handbook of Metaheuristics. In the early 2000s, VNS was adapted for continuous optimization, with early extensions appearing in works like the 2003 handbook chapter by Hansen and Mladenović, enabling its use in unconstrained and nonlinear problems through modified neighborhood definitions. The method's popularity surged in the subsequent decades, leading to over 3,000 indexed publications as of 2025, reflecting its integration with other metaheuristics such as genetic algorithms and tabu search. Post-2020 developments have increasingly focused on hybrid VNS approaches for machine learning tasks, including hyperparameter tuning via reinforcement learning hybrids, and applications in big data scenarios like large-scale clustering. Influential contributions from Mladenović, Hansen, and collaborators, including Jack Brimberg and Dragan Urošević, have been codified in authoritative resources such as the Handbook of Metaheuristics editions (2003, 2010, 2018), solidifying VNS as a cornerstone of heuristic optimization.

Problem Formulation

Optimization Contexts

Variable neighborhood search (VNS) is applicable to a wide range of combinatorial optimization problems, including the traveling salesman problem (TSP) and the p-median problem, where solutions involve discrete structures such as permutations or assignments. It has also been extended to problems in continuous spaces, addressing unconstrained and constrained nonlinear programs through adapted neighborhood structures. These contexts share the goal of navigating complex search spaces to identify high-quality solutions efficiently. In these optimization problems, the task is to minimize (or equivalently maximize) an objective function f(\mathbf{x}) over a feasible set X, where \mathbf{x} represents a candidate solution. The general formulation is given by \min_{\mathbf{x} \in X} f(\mathbf{x}), with X defining the space of admissible solutions that satisfy any problem-specific constraints. In combinatorial settings, X is typically discrete and finite, comprising enumerable configurations like tours in TSP. In contrast, continuous optimization involves X \subseteq \mathbb{R}^n, often bounded by inequalities g_i(\mathbf{x}) \leq 0 (for i = 1, \dots, m), equalities h_j(\mathbf{x}) = 0 (for j = 1, \dots, p), and box constraints \mathbf{l} \leq \mathbf{x} \leq \mathbf{u}. Feasible solutions must adhere to these constraints to remain valid, ensuring practicality in real-world applications. A solution \mathbf{x} is considered a local optimum with respect to a neighborhood N_k(\mathbf{x}) if f(\mathbf{x}) \leq f(\mathbf{y}) for all \mathbf{y} \in N_k(\mathbf{x}), meaning no immediate improvement is possible within that local structure. VNS tackles key challenges in these domains, such as multimodal landscapes in which many local optima exist and descent algorithms risk entrapment, particularly in NP-hard combinatorial problems like the TSP that lack efficient exact solvers for large instances. By systematically varying neighborhoods, VNS escapes such traps to pursue global improvements, making it suitable for both discrete and continuous search spaces.

Neighborhood Structures

In variable neighborhood search (VNS), a neighborhood structure N_k(x) is defined as the set of solutions in the k-th neighborhood of a current solution x, obtained by applying a specific operator or move of type k to x. For instance, in the traveling salesman problem (TSP), common operators include k-insert, which relocates k nodes to different positions in the tour. Another prominent example is the k-opt operator for permutation-based problems like TSP, where k edges in the tour are removed and reconnected in a different configuration to generate neighboring solutions. VNS employs a systematic variation of these neighborhoods by progressively increasing the index k from 1 to a maximum k_{\max}, thereby expanding the exploration scope from smaller, more local changes to larger, more disruptive modifications. This variation allows the algorithm to escape local optima by considering solutions farther from the current one when smaller neighborhoods fail to yield improvements. A key design principle is the use of nested neighborhoods, where N_i(x) \subseteq N_{i+1}(x) for i = 1, \dots, k_{\max}-1, ensuring that closer neighborhoods are exhaustively searched before venturing into broader ones, which enhances efficiency in covering the solution space. Neighborhoods can be generated through shaking, which involves selecting a solution x' at random from N_k(x) to introduce diversification, or via deterministic methods that systematically enumerate neighbors for more controlled exploration. The design of these structures must balance computational complexity, as larger k values exponentially increase the neighborhood size—often from O(n) for k=1 to O(n^k) in permutation problems—leading to higher evaluation times that trade off against the benefits of wider search scope. Overall, the union of all N_k(x) should ideally span the entire feasible solution set to prevent the search from being confined to suboptimal regions.
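
To make the operator notion concrete, the sketch below enumerates the 2-opt neighborhood of a tour, the simplest member of the k-opt family described above: reversing a contiguous segment removes two edges of a closed tour and reconnects it the other way. The function name and list representation are illustrative:

def two_opt_neighbors(tour):
    """Yield every 2-opt neighbor of a tour (a list of city indices)
    by reversing each contiguous segment tour[i:j]."""
    n = len(tour)
    for i in range(n - 1):
        for j in range(i + 2, n + 1):
            yield tour[:i] + tour[i:j][::-1] + tour[j:]

Larger k-opt moves, or compositions of such moves, can then play the role of N_k for k > 1.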

Local Search Foundations

Descent Heuristics

Local search is a fundamental technique in optimization that starts with an initial feasible solution and iteratively improves it by exploring neighboring solutions until reaching a local optimum. The goal is to systematically reduce the objective function value through successive improvements within a defined neighborhood structure, providing a baseline for escaping poor initial solutions. A key variant of local search is steepest descent, also referred to as best-improvement local search, which exhaustively evaluates all candidate solutions in the neighborhood N(x) of the current solution x and selects the one yielding the maximum improvement in the objective function f. This approach ensures the strongest possible local move at each iteration but requires scanning the entire neighborhood, making it computationally intensive for large |N(x)|. The pseudocode for steepest descent is as follows:
1: x ← initial solution
2: while true do
3:     x' ← x
4:     x ← arg min_{y ∈ N(x)} f(y)
5:     if f(x) ≥ f(x') then x ← x'; break
6: end while
Here, the loop continues until no further improvement is possible. The trade-off in steepest descent lies in its completeness versus efficiency: it guarantees finding the best local improvement if the neighborhood is fully explored, but the time complexity grows linearly with neighborhood size, rendering it ideal for problems with modest |N(x)| where thoroughness outweighs speed concerns. Termination occurs when no neighbor y \in N(x) satisfies the improvement condition \Delta f = f(y) - f(x) < 0 for a minimization problem, confirming that x is a local optimum relative to N(x). \Delta f = f(y) - f(x) < 0, \quad y \in N(x) Such descent heuristics underpin the local improvement mechanisms in metaheuristics like variable neighborhood search.
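
A direct transcription of this pseudocode into Python might look as follows; this is a sketch assuming a caller-provided objective f and a neighbors(x) iterable that enumerates N(x), both hypothetical:

def steepest_descent(f, neighbors, x):
    """Best-improvement local search: move to the best neighbor
    until no neighbor improves on the current solution."""
    while True:
        best = min(neighbors(x), key=f)   # scan the entire neighborhood
        if f(best) >= f(x):               # no improvement: x is a local optimum
            return x
        x = best                          # take the steepest descent step

For a TSP instance, a generator such as the two_opt_neighbors sketch above could serve as neighbors.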

Improvement Strategies

In local search methods, the first-improvement heuristic enhances efficiency by systematically scanning the neighborhood of the current solution and immediately accepting the first neighboring solution that yields an objective function improvement, defined as Δf < 0, without evaluating the entire neighborhood. This approach contrasts with the steepest descent strategy, which requires a complete neighborhood exploration to select the globally best neighbor, making first-improvement particularly suitable for large-scale problems where computational resources are limited. Empirical studies demonstrate that first-improvement often achieves comparable solution quality to steepest descent while providing substantial speedups, typically several times faster due to reduced evaluations per iteration. For instance, in combinatorial optimization contexts, the partial scanning keeps the average number of evaluations per iteration well below the neighborhood size |N|, enabling quicker convergence to local optima. Implementation of first-improvement benefits from ordered neighbor generation, where moves are prioritized based on heuristics such as promising directions informed by problem-specific knowledge, like distance metrics in routing problems, to further accelerate discovery of improvements. This ordering ensures that potentially superior moves are encountered early, balancing speed and effectiveness. Hybrid approaches integrate first-improvement with tabu lists to promote diversification, where recently performed moves are temporarily forbidden to prevent cycling and explore broader solution spaces. In such hybrids, the tabu mechanism filters candidates during the scan, allowing non-improving but diversifying moves under controlled conditions, as seen in tabu search extensions for vehicle routing. Despite these advantages, first-improvement can follow suboptimal trajectories, since the order of exploration influences the search path and may trap the search in inferior local optima compared to more exhaustive methods.
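
The contrast with steepest descent is visible in code: the scan below stops at the first improving neighbor instead of evaluating all of N(x). As before, the objective f and the (preferably ordered) neighbors generator are hypothetical, caller-provided inputs:

def first_improvement(f, neighbors, x):
    """First-improvement local search: accept the first neighbor with
    Δf < 0 and restart the scan from it, until none improves."""
    improved = True
    while improved:
        improved = False
        fx = f(x)
        for y in neighbors(x):   # ordered generation speeds up discovery
            if f(y) < fx:        # Δf = f(y) - f(x) < 0
                x, improved = y, True
                break            # accept immediately; skip the rest of the scan
    return x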

Core VNS Framework

Basic Principles

Variable neighborhood search (VNS) is grounded in the principle of systematically varying neighborhood structures to explore the solution space more effectively than traditional local search methods. A core idea is the use of multiple neighborhoods, denoted as N_k(x) for k = 1, \dots, k_{\max}, where x is the current solution and each N_k represents a progressively larger or different set of neighboring solutions. This approach exploits the observation that different local optima may be found in different neighborhoods, allowing the algorithm to escape suboptimal solutions by changing the neighborhood structure rather than the solution itself. Local search serves as the intensification component, refining solutions within a fixed neighborhood to identify local optima. The shaking procedure is a key mechanism in VNS for diversification, where the current solution x is perturbed to generate a new solution y randomly selected from N_k(x). This random perturbation intentionally moves the search away from the current local optimum, preventing the algorithm from getting trapped in cycles or revisiting previously explored areas. Unlike deterministic moves, shaking introduces controlled randomness without requiring parameter tuning for the perturbation process, ensuring broad exploration while maintaining computational efficiency. The neighborhood change rule governs how these neighborhoods are explored systematically. The process begins with the smallest neighborhood index k = 1; if local search on the shaken solution yields no improvement over the incumbent, k is incremented to explore a larger neighborhood in the next iteration. This continues until an improving solution is found or k reaches k_{\max}, at which point k is reset to 1 to restart the cycle. This rule embodies the principle of systematic diversification, wherein larger neighborhoods are invoked only when necessary to find better solutions, avoiding unnecessary computational overhead in expansive searches. The maximum neighborhood size k_{\max} is typically selected based on the problem's scale, such as the number of variables or elements involved, to balance exploration depth with practical limits. Overall, the rationale of VNS lies in its elegant balance between intensification through local search and diversification via shaking and neighborhood changes, enabling effective navigation of complex search landscapes without relying on complex parameter tuning or population-based mechanisms. This foundational framework has proven robust across diverse optimization problems by leveraging the geometry of neighborhoods to uncover global optima more reliably than static neighborhood methods.
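
The move-or-not decision described above can be isolated as a small helper, in the style of the NeighborhoodChange routine found in VNS tutorials; a minimal sketch, assuming a hypothetical objective f:

def neighborhood_change(f, x, x2, k):
    """Accept x2 and reset k = 1 on improvement; otherwise keep x
    and advance to the next larger neighborhood."""
    if f(x2) < f(x):
        return x2, 1
    return x, k + 1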

Algorithm Description

The Basic Variable Neighborhood Search (VNS) algorithm is a metaheuristic framework that systematically varies neighborhood structures to explore the solution space of optimization problems, combining randomization through shaking with deterministic improvement via local search. It operates by starting from an initial solution and iteratively perturbing it to escape local optima, applying local search to refine candidates, and adjusting the neighborhood size based on whether improvements are found. This process enables effective diversification and intensification without requiring problem-specific tuning beyond neighborhood definitions. The algorithm proceeds in four main steps within an outer loop controlled by a stopping condition, such as a maximum number of iterations or computational time limit. First, shaking generates a perturbed solution y from the current solution x using a randomly selected neighbor in the k-th neighborhood structure N_k(x), where k begins at 1. Second, local search is applied to y to obtain a refined solution x', typically using a method like first-improvement or steepest descent within a fixed neighborhood. Third, if the objective function value f(x') is better than f(x), the solution is updated to x', and k is reset to 1 to restart exploration in smaller neighborhoods; otherwise, k is incremented to explore larger neighborhoods. Fourth, the process repeats until the stopping condition is met or k reaches its maximum value k_max, at which point the outer loop may continue from the updated x. Key parameters include the sequence of neighborhood structures {N_k | k = 1, ..., k_max}, where smaller k corresponds to more local changes (e.g., swapping adjacent elements in a permutation) and larger k to broader perturbations (e.g., swapping distant elements); the initial solution x; the local search procedure; and the stopping criterion. The choice of local search, such as first-improvement (accepting the first better neighbor found), balances computational efficiency with solution quality. The following pseudocode outlines the Basic VNS:
Initialization:
Select a set of neighborhood structures N_k for k = 1, ..., k_max
Generate an initial solution x
Set stopping condition (e.g., maximum iterations t_max)

Repeat until stopping condition is met:
    k ← 1
    while k ≤ k_max do
        Shaking: Generate a random solution y ∈ N_k(x)
        Local Search: Apply local search to y to obtain x'
        Move or Not: If f(x') < f(x) then
            x ← x'
            k ← 1
        else
            k ← k + 1
    end while
This structure features an outer loop for overall iterations, an inner while loop for neighborhood progression, and nested calls to shaking and local search. The flow can be visualized as a nested loop: the outer loop iterates over global trials, with the inner loop progressively enlarging neighborhoods until improvement triggers a reset, effectively creating a dynamic exploration path that contracts upon success and expands on stagnation. To illustrate, consider a hypothetical application to the traveling salesman problem (TSP) on a small graph with 5 cities, where neighborhoods N_k involve k random edge swaps in the tour permutation, f measures total distance, and local search uses first-improvement moves. Start with initial tour x = (1-2-3-4-5-1), f(x) = 50. In iteration 1, k=1: shake to y = (1-3-2-4-5-1), f(y)=55; local search yields x' = (1-2-4-3-5-1), f(x')=45 < 50, so update x = x', reset k=1. Next inner loop, k=1: shake to y = (1-2-5-4-3-1), f(y)=52; local search gives x' = (1-2-3-5-4-1), f(x')=42 < 45, update x = x', reset k=1. If no improvement occurs after k=3 (e.g., f(x') ≥ f(x) for k=1,2,3), the inner loop ends, and the outer loop proceeds with the current x, incrementing the global iteration count. This trace shows how k resets on improvements, keeping exploration focused, while increments broaden search when stuck.
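
A trace of this kind can be reproduced with a concrete instantiation. The sketch below wires the Basic VNS pseudocode to a small symmetric TSP, shaking with k random swaps and refining with a first-improvement swap search; the distance matrix and all names are illustrative:

import random

def tour_length(tour, d):
    """Length of the closed tour under distance matrix d."""
    return sum(d[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def shake(tour, k):
    """Random solution in N_k: apply k random pairwise swaps."""
    y = list(tour)
    for _ in range(k):
        i, j = random.sample(range(len(y)), 2)
        y[i], y[j] = y[j], y[i]
    return y

def local_search(tour, d):
    """First-improvement over single swaps until locally optimal."""
    improved = True
    while improved:
        improved = False
        base = tour_length(tour, d)
        for i in range(len(tour) - 1):
            for j in range(i + 1, len(tour)):
                y = list(tour)
                y[i], y[j] = y[j], y[i]
                if tour_length(y, d) < base:
                    tour, improved = y, True
                    break
            if improved:
                break
    return tour

def vns_tsp(d, k_max=3, iters=100):
    x = list(range(len(d)))                  # initial tour 0-1-2-...-0
    for _ in range(iters):
        k = 1
        while k <= k_max:
            x2 = local_search(shake(x, k), d)
            if tour_length(x2, d) < tour_length(x, d):
                x, k = x2, 1                 # improvement: reset k
            else:
                k += 1                       # stagnation: enlarge neighborhood
    return x

For a 5-city instance like the one traced above, d would be a 5x5 symmetric matrix of pairwise distances.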

Variants and Extensions

Reduced VNS

Reduced Variable Neighborhood Search (RVNS) is a variant of the basic VNS that simplifies the approach by omitting the local search phase to enable faster exploration, particularly suited for large-scale problems where computational resources are limited. This stochastic method relies on generating random points within progressively larger neighborhoods around the current solution, accepting improvements to diversify the search without systematic descent. In RVNS, the algorithm begins with an initial solution x and a set of nested neighborhood structures N_k(x), where k = 1, \dots, k_{\max}. The process iteratively shakes the solution by selecting a random point y \in N_k(x). If f(y) < f(x), the solution updates to y and k resets to 1 for intensified search; otherwise, k increases to explore farther neighborhoods. This continues until a stopping criterion, such as maximum iterations or time, is reached. The key distinction from basic VNS lies in forgoing local search after shaking, which eliminates the deterministic improvement loop and emphasizes probabilistic diversification through random point generation and neighborhood expansion only upon non-improvement. By avoiding the computational overhead of local optimization, RVNS enables faster exploration while maintaining comparable solution quality in practice for problems where local optima are densely clustered. The following pseudocode outlines the RVNS procedure:
1: x ← initial_solution()
2: k ← 1
3: while stopping_condition_not_met() do
4:     y ← random_point_in(N_k(x))
5:     if f(y) < f(x) then
6:         x ← y
7:         k ← 1
8:     else
9:         k ← k + 1
10:    if k > k_max then
11:        k ← 1
12:    end if
13: end while
14: return x
RVNS is particularly advantageous in scenarios with tight computational budgets, such as time-critical optimization or very large instances (e.g., thousands of variables), where its reduced overhead allows multiple runs within the same time frame, potentially yielding better overall results through ensemble-like exploration despite less guided intensification.
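
A compact Python rendering of the loop above, under the same assumptions as the earlier sketches (hypothetical f and shake callables):

def rvns(f, shake, x0, k_max, max_iters=10000):
    """Reduced VNS sketch: shaking plus move-or-not, no local search."""
    x, k = x0, 1
    for _ in range(max_iters):                # stopping condition: budget
        y = shake(x, k)                       # random point in N_k(x)
        if f(y) < f(x):
            x, k = y, 1                       # improvement: recenter and reset
        else:
            k = k + 1 if k < k_max else 1     # cycle back after k_max
    return x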

General VNS

General Variable Neighborhood Search (GVNS) represents a significant extension of the basic Variable Neighborhood Search (VNS) framework, introduced by Hansen and Mladenović to enhance the local search phase by incorporating variable local search methods tailored to different neighborhood levels. Unlike the standard VNS, which employs a fixed local search procedure across all neighborhoods, GVNS allows the use of distinct local search heuristics L_k for each neighborhood index k, enabling more adaptive exploration of the solution space. This approach was formalized to better handle the varying complexity and size of neighborhoods, where smaller neighborhoods (low k) might benefit from more intensive searches, such as steepest descent, while larger ones (high k) could use faster methods like first-improvement to maintain computational efficiency. In the GVNS algorithm, the core retains the shaking and neighborhood change mechanisms from basic VNS but modifies the local search step to apply L_k specifically after generating a perturbed solution y in neighborhood N_k(x). The process begins with an initial solution x, followed by shaking to produce y \in N_k(x), then applying the neighborhood-specific local search L_k(y) to obtain an improved candidate x'. If x' yields a better objective value than x, the algorithm moves to x' and resets k to 1; otherwise, k is incremented, and the process repeats until a stopping criterion, such as a maximum number of iterations or a CPU time limit, is met. This adaptation is reflected in the pseudocode as follows:
Initialize x
k ← 1
While stopping condition not met:
    y ← Shake(x, N_k)  // Generate y in N_k(x)
    x' ← L_k(y)        // Apply variable local search L_k to y
    If f(x') < f(x):   // If improvement
        x ← x'
        k ← 1
    Else:
        k ← k + 1
    If k > k_max: k ← 1
Such modifications allow GVNS to dynamically adjust the search intensity based on the current neighborhood scale, often leading to superior solution quality compared to uniform local search strategies. The key parameters in GVNS include the sequence of local search operators L_1, L_2, \dots, L_{k_{\max}}, where each L_k is selected to match the characteristics of the corresponding neighborhood N_k, along with the maximum neighborhood index k_{\max} that defines the exploration depth. By adapting the local search method to neighborhood size, for instance using exhaustive steepest descent for small, dense neighborhoods and heuristic-based first-improvement for expansive ones, GVNS achieves a better balance between intensification and diversification, resulting in improved convergence to high-quality solutions across diverse optimization problems. Empirical evaluations demonstrate enhancements in solution quality over baseline local searches across various problems.
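
The per-neighborhood searches translate naturally into a list of callables indexed by k. The sketch below assumes hypothetical f and shake functions as in the earlier examples, plus a caller-supplied list local_searches of length k_max:

def gvns(f, shake, local_searches, x0, max_iters=1000):
    """GVNS sketch: local_searches[k-1] is the procedure L_k paired with
    neighborhood N_k (e.g., steepest descent for small k,
    first-improvement for large k)."""
    x = x0
    k_max = len(local_searches)
    for _ in range(max_iters):
        k = 1
        while k <= k_max:
            y = shake(x, k)                  # perturb within N_k(x)
            x2 = local_searches[k - 1](y)    # neighborhood-specific L_k
            if f(x2) < f(x):
                x, k = x2, 1
            else:
                k += 1
    return x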

Advanced Variants

Hybrid approaches integrate Variable Neighborhood Search (VNS) with other metaheuristics to enhance diversification and intensification capabilities. In hybrid VNS with genetic algorithms (VNS-GA), population-based search mechanisms from genetic algorithms are combined with VNS's neighborhood shaking and local search phases, allowing for diverse solution generation followed by refined intensification. For instance, this hybridization has been applied to bi-level optimization problems like de-icing position allocation, where VNS-GA outperforms standalone methods by leveraging crossover and mutation operators alongside variable neighborhoods to handle complex constraints. Similarly, VNS-GA variants address multi-objective scheduling in workshop logistics, using Pareto-based selection to balance competing objectives, demonstrating superior performance on large-scale instances compared to traditional genetic algorithms alone. Hybrid VNS with tabu search (VNS-TS) incorporates memory structures from tabu search to avoid cycling in local optima during the shaking phase, enhancing the diversity of neighborhood explorations. A notable VNS-TS formulation uses tabu lists to guide variable neighborhood descent in vehicle routing problems with time windows, where the integration prevents revisitation of recent solutions while systematically varying neighborhood sizes, leading to improved solution quality on benchmark instances. This approach extends the general VNS framework by adding aspiration criteria that allow tabu moves if they yield better overall progress, as seen in multi-depot green vehicle routing, where it achieves up to 15% better routes than non-hybrid variants. Parallel VNS exploits parallel computing to accelerate neighborhood evaluations, particularly for large-scale problems where sequential searches are computationally prohibitive. In distributed implementations, multiple VNS instances run concurrently on separate processors, exchanging elite solutions periodically to synchronize global progress, which has been shown to reduce computation time by factors of 10-20 on tasks like inventory management. Recent 2020s developments focus on GPU-accelerated VNS, where hybrid CPU-GPU schemes parallelize the local search within neighborhoods using OpenACC directives; for example, in multi-echelon inventory optimization, this yields speedups of up to 50x over CPU-only versions while maintaining solution accuracy, by offloading neighborhood generation and evaluation to GPU threads. Topological Variable Neighborhood Search (TVNS), introduced in 2024, redefines neighborhoods using graph topology to better suit big data environments with inherent relational structures. Unlike classical VNS, TVNS constructs neighborhoods based on topological distances in data graphs, enabling more intuitive perturbations that respect underlying connections, such as in social network analysis or molecular graphs. Theoretical analysis proves TVNS's convergence under mild conditions, and empirical tests on large datasets show it outperforms standard VNS by 20-30% in solution quality for graph-based clustering, due to reduced irrelevant explorations. Continuous Variable Neighborhood Search (CVNS) adapts VNS for real-valued optimization by employing continuous operators, such as perturbations drawn from Gaussian distributions, to generate neighboring solutions within constraints.
In Gaussian VNS, shaking adds noise drawn from Gaussian distributions with progressively larger standard deviations to the solution vector, allowing systematic exploration of continuous spaces; the method performs well on global optimization benchmarks, reaching near-optimal solutions on multimodal functions faster than standard VNS by dynamically adjusting the perturbation radii. The approach ensures diversification in high-dimensional settings, with local search using gradient-based moves within neighborhoods. Skewed VNS introduces bias toward promising regions to escape large valleys in the search landscape, extending general VNS by accepting slightly worse solutions during shaking if they lie in potentially better basins. In skewed general VNS, the acceptance criterion uses a skewing parameter, weighted by the distance between solutions, to control the bias, favoring moves that improve a skewed objective function; this has proven effective for the capacitated dispersion problem, improving efficiency and yielding 10-15% better results than unbiased variants on real-world facility location instances. This variant mitigates premature convergence in rugged landscapes, particularly when combined with variable neighborhood descent for intensification.
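
A minimal sketch of Gaussian shaking for a box-constrained continuous problem, assuming NumPy and an illustrative per-neighborhood schedule sigmas of standard deviations:

import numpy as np

def gaussian_shake(x, k, sigmas, lower, upper):
    """Gaussian shaking: perturb each coordinate of x with
    N(0, sigmas[k-1]^2) noise, then clip back into [lower, upper]."""
    y = x + np.random.normal(0.0, sigmas[k - 1], size=x.shape)
    return np.clip(y, lower, upper)

A schedule such as sigmas = (0.1, 0.5, 2.0) realizes progressively broader neighborhoods as k grows.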

Applications and Performance

Combinatorial Problems

Variable neighborhood search (VNS) has been extensively applied to discrete combinatorial optimization problems, where it systematically varies neighborhood structures to escape local optima and improve solution quality beyond standard local search methods. In these settings, VNS leverages problem-specific operators to explore the solution space efficiently, often yielding high-quality approximations for NP-hard problems that are intractable by exact methods. Key advantages include its simplicity, adaptability to different neighborhood definitions, and ability to handle large-scale instances without extensive parameter tuning. For the traveling salesman problem (TSP), VNS employs neighborhoods based on k-opt exchanges, such as 2-opt and 3-opt moves that remove and reinsert edges to shorten tour lengths. A general VNS heuristic systematically shakes and improves solutions using these operators, achieving competitive results on TSPLIB benchmarks and outperforming fixed-neighborhood local searches on symmetric TSP instances. In graph coloring, VNS uses frequency-based neighborhoods alongside vertex and class relocation operators, such as chain moves that shift conflicting vertices or empty-refill procedures that redistribute color classes to minimize violations. Applied to DIMACS benchmark instances, this approach achieves near-optimal colorings; for example, it colors the DSJC500.5 graph (500 vertices) with 49 colors (best known upper bound is 48) and the flat300_28 instance (300 vertices) with 31 colors (chromatic number is 28), demonstrating competitive results against tabu search variants like Tabucol. These outcomes highlight VNS's effectiveness in reducing the number of colors needed for dense random graphs. For the vehicle routing problem (VRP), VNS incorporates insert and remove operators to adjust routes, often in hybrid setups with route elimination phases to minimize fleet usage. A reactive VNS variant applied to Solomon and Gehring-Homberger benchmarks (up to 400 customers) reduces fleet sizes by 10-20% compared to initial heuristics, while in real cases like those from Russell's instances, it achieves new best-known solutions by optimizing time windows and distances, lowering operational costs through fewer vehicles. This makes VNS particularly suitable for practical distribution networks. A notable case study involves the p-median problem, where parallel VNS uses interchange neighborhoods and shaking via random swaps to select facility locations minimizing total distance. This method solves large instances (e.g., up to 5000 demand points) in minutes, significantly faster than exact branch-and-bound approaches, while maintaining solution gaps under 1% of optima on ORLIB benchmarks; for instance, it handles p=100 medians across expansive networks more efficiently than sequential solvers. Empirically, VNS yields average improvements of 5-15% in solution quality over local search alone across combinatorial problems like TSP and VRP, as the variable neighborhoods enable better exploration of the solution space, reducing stagnation in local optima. These gains are consistent in high-impact studies, underscoring VNS's role in enhancing performance for real-world applications. Recent applications as of 2025 include sustainable logistics optimization and berth allocation in container ports.

Continuous and Global Optimization

Continuous Variable Neighborhood Search (CVNS) extends the VNS metaheuristic to continuous and real-valued search spaces by defining neighborhoods through continuous transformation operators, such as perturbations along variable numbers of dimensions or using different metric norms. These neighborhoods are typically constructed as balls or shells centered at the current solution x, with radius \rho_k increasing across neighborhood indices k. The shaking phase generates a new candidate solution y by sampling uniformly within the k-th neighborhood, enabling diversification from local optima. For instance, under the L_\infty norm, the shaking can be expressed as y_i = x_i + 2\rho_k (u_i - 0.5), \quad u_i \sim \text{Uniform}[0,1], for each dimension i, where \rho_k controls the perturbation scale. In global optimization contexts, CVNS effectively addresses multimodal objective functions, such as the Rastrigin function defined as f(x) = 10n + \sum_{i=1}^n (x_i^2 - 10\cos(2\pi x_i)) for x \in [-5.12, 5.12]^n, which features roughly 11^n local minima. By progressively enlarging the perturbation radius \rho_k during shaking, CVNS escapes attraction basins of suboptimal local minima and explores broader regions of the search space. Empirical evaluations on such functions demonstrate that L_1 and L_2 neighborhood shapes outperform L_\infty in higher dimensions, achieving better convergence to the global minimum due to more isotropic exploration. CVNS finds applications in engineering design, exemplified by truss structure optimization, where continuous variables like member cross-sections and nodal coordinates are adjusted to minimize weight subject to stress and displacement constraints. In machine learning, it facilitates hyperparameter tuning for models like support vector machines, optimizing continuous parameters such as the kernel width and regularization coefficient to enhance classification accuracy on benchmark datasets. A prominent case study involves applying CVNS to approximate minimum-energy molecular conformations, formulated as minimizing a simplified molecular potential energy function with continuous torsion angles as decision variables. This approach systematically varies neighborhood structures to navigate the rugged energy landscape, yielding solutions for molecules with up to 200 such angles that compare favorably to competing global optimization methods across benchmark suites. Key challenges in CVNS include scaling to high-dimensional problems, where the volume of neighborhoods grows exponentially, leading to inefficient sampling and increased computational demands. Neighborhood shapes like L_1 mitigate this somewhat by promoting directed perturbations, but performance degrades beyond moderate dimensions without additional mechanisms.
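
The L_\infty shaking formula above can be exercised on the Rastrigin function directly. The following NumPy sketch uses an illustrative radius schedule rhos and plain gradient descent as a stand-in for the local search (the gradient of each term x_i^2 - 10\cos(2\pi x_i) is 2x_i + 20\pi\sin(2\pi x_i)):

import numpy as np

def rastrigin(x):
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def shake_linf(x, rho):
    """Uniform sample in the L-infinity ball: y_i = x_i + 2*rho*(u_i - 0.5)."""
    u = np.random.rand(x.size)
    return np.clip(x + 2 * rho * (u - 0.5), -5.12, 5.12)

def local_search(x, step=0.01, iters=200):
    """Gradient descent on Rastrigin as a simple stand-in local search."""
    for _ in range(iters):
        grad = 2 * x + 20 * np.pi * np.sin(2 * np.pi * x)
        x = np.clip(x - step * grad, -5.12, 5.12)
    return x

def cvns_rastrigin(n=5, rhos=(0.5, 1.0, 2.0, 4.0), iters=200):
    x = local_search(np.random.uniform(-5.12, 5.12, n))
    for _ in range(iters):
        k = 0
        while k < len(rhos):
            y = local_search(shake_linf(x, rhos[k]))
            if rastrigin(y) < rastrigin(x):
                x, k = y, 0          # improvement: reset to the smallest radius
            else:
                k += 1
    return x, rastrigin(x)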

Comparative Analysis

Variable neighborhood search (VNS) distinguishes itself from tabu search (TS) through its inherent simplicity and reduced reliance on complex memory structures. While TS employs tabu lists to avoid cycling and manage search history, often requiring careful tuning of parameters such as tabu tenure and aspiration criteria, VNS operates with minimal parameters (typically just the number of neighborhoods) and avoids explicit memory mechanisms, making it less memory-intensive and easier to implement. On the traveling salesman problem (TSP), VNS has demonstrated faster convergence in practice due to its systematic neighborhood exploration, achieving competitive solutions with lower computational overhead compared to TS implementations on benchmark instances. In contrast to genetic algorithms (GA), which excel at maintaining population diversity for global exploration but often struggle with fine-grained local refinement, VNS emphasizes structured local search across multiple neighborhoods, enabling efficient intensification once promising regions are identified. Hybrids combining VNS for local improvement with GA for diversification have shown superior performance, outperforming standalone GA or VNS in approximately 75% of benchmark instances across scheduling and routing problems, as evidenced by statistical comparisons on standard test sets. Compared to simulated annealing (SA), VNS offers a more deterministic and systematic approach to escaping local optima by varying neighborhoods, obviating the probabilistic acceptance criteria and temperature schedule tuning that can be sensitive to initial conditions in SA. Empirical evaluations on scheduling problems indicate VNS achieves faster computation times on small to medium-scale instances while providing comparable or better convergence on benchmark functions, though SA may yield marginally superior solution quality on highly dimensional cases without hybridization. Key strengths of VNS include its parameter-light design, which facilitates adaptability across diverse problem domains, and its ability to balance exploration and exploitation without extensive user intervention. However, VNS can be sensitive to the choice and ordering of neighborhood structures, potentially underperforming in very high-dimensional spaces unless augmented with extensions. Reviews of metaheuristic performance up to 2023 position VNS among the top five frameworks on standard benchmarks like those in the OR-Library, including TSP and vehicle routing instances, due to its consistent near-optimality and efficiency.
