Optimization problem

An optimization problem, also referred to as a mathematical programming problem, is the task of selecting the best solution from a finite or infinite set of feasible alternatives, typically by minimizing or maximizing a specified objective function subject to given constraints. This process involves decision variables that represent the choices to be made, an objective function that quantifies the performance or cost of those choices, and constraints that define the allowable region for the variables, ensuring the solution remains practical and feasible. Optimization problems arise across diverse fields, from engineering and economics to machine learning, where they enable efficient resource allocation, design improvement, and decision-making under limitations.

The core components of an optimization problem include the objective function, which is a mathematical expression (often linear or nonlinear) to be optimized; the decision variables, which are the unknowns adjusted to achieve optimality; and the constraints, which can be equality or inequality conditions bounding the variables. Problems are classified as constrained if restrictions apply or unconstrained if variables can take any value in their domain, with the latter often focusing on continuous real-valued functions. Maximization tasks can be reformulated as minimization by negating the objective function, a standard technique that unifies the framework.

Optimization problems encompass several major types, including linear programming, where both the objective and constraints are linear, solvable via methods like the simplex algorithm developed by George Dantzig in 1947; nonlinear programming, addressing curved relationships; integer programming, restricting variables to whole numbers for discrete scenarios like scheduling; and convex optimization, which guarantees global optima for convex functions and includes applications in least-squares fitting. Discrete or combinatorial optimization deals with finite, often computationally challenging sets of options. These categories highlight the field's breadth, with convex problems being particularly tractable and widely used in engineering design and data analysis.

Historically, systematic optimization emerged in the mid-20th century with linear programming's resolution in the late 1940s, but its roots trace to earlier calculus-based methods for unconstrained problems. Today, optimization drives advancements in fields like network design, machine learning, and logistics, where algorithms minimize errors in models or maximize efficiency in systems. Despite its power, many problems, especially nonconvex or large-scale ones, remain computationally intensive, spurring ongoing research into approximation techniques and specialized solvers.
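The negation trick mentioned above is easy to see concretely. Below is a minimal sketch, assuming SciPy's minimize_scalar as the solver; the concave objective x(10 - x) is an invented illustrative example, not one drawn from the sources.

```python
# Maximization as negated minimization: maximize x*(10 - x) on [0, 10]
# by minimizing its negation. The objective is an illustrative choice.
from scipy.optimize import minimize_scalar

f = lambda x: -(x * (10.0 - x))  # negated objective

res = minimize_scalar(f, bounds=(0.0, 10.0), method="bounded")
print(res.x, -res.fun)  # maximizer ~5.0 and maximum value ~25.0
```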

Definition and Formulation

Basic Definition

In mathematics and engineering, an optimization problem is fundamentally a decision-making process that involves selecting inputs, known as decision variables, from a predefined set to extremize (either minimize or maximize) an objective function that quantifies the performance or desirability of those inputs. The objective function can be scalar, providing a single measure of outcome, or vector-valued in cases involving multiple conflicting goals. This formal approach contrasts with everyday usage of "optimization," where the term often describes informal efforts to improve efficiency in personal routines, business operations, or resource allocation without the structured analysis of mathematical models.

Central to any optimization problem are the concepts of feasible solutions and optimal solutions. A feasible solution consists of values for the decision variables that satisfy all specified conditions, such as resource limits or operational requirements, forming the allowable set from which selections are made. An optimal solution, in turn, is the feasible solution that yields the best possible value of the objective function, representing the "extremum" sought in the problem. The search space outlines the overall domain for these variables, while constraints delineate the subset of feasible options within it.

The roots of optimization problems as a formal field trace back to the 1940s in operations research, emerging from efforts to solve logistical challenges during and after World War II, with George Dantzig's 1947 invention of the simplex method for linear programming marking a pivotal advancement in systematizing such problems. This conceptual framework establishes the abstract structure essential for exploring specific instances, such as those in various domains or under particular constraints, without delving into algorithmic resolutions.

Standard Formulation

An optimization problem is generally formulated as the task of finding a point x in a set X that minimizes (or maximizes) an objective function f: X \to \mathbb{R}. This standard structure encompasses both minimization and maximization problems, where maximization of f is equivalent to minimization of -f. The objective function f maps elements from the search space X (often \mathbb{R}^n) to real numbers, representing the quantity to be optimized. In the multi-objective case, the objective is a vector-valued function F: X \to \mathbb{R}^k, where trade-offs between objectives are analyzed via the Pareto front, the set of nondominated points where no feasible point improves all components of F without worsening at least one.

Constraints define the feasible set within X and are classified as equality constraints h_j(x) = 0 for j = 1, \dots, p, inequality constraints g_i(x) \leq 0 for i = 1, \dots, m, or simple bounds l \leq x \leq u. The full standard formulation for a constrained minimization problem is thus

\begin{align*}
\min_{x \in X} \quad & f(x) \\
\text{subject to} \quad & g_i(x) \leq 0, \quad i = 1, \dots, m, \\
& h_j(x) = 0, \quad j = 1, \dots, p.
\end{align*}

Unconstrained problems arise as a special case when m = p = 0, reducing to \min_{x \in X} f(x). Solutions are denoted using \arg\min or \arg\max for the optimizing point(s), which may not exist if the infimum (or supremum) is not attained; in such cases, the optimal value is the infimum \inf_{x \in X} f(x) (or supremum \sup_{x \in X} f(x)).
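As a concrete illustration of this standard form, the sketch below encodes a small constrained minimization with one inequality and one equality constraint, assuming SciPy's minimize (which dispatches to SLSQP when constraints are supplied); the particular f, g, and h are invented for the example. Note that SciPy's convention for inequality constraints is fun(x) >= 0, so g_i(x) \leq 0 is passed as -g_i.

```python
# Minimize f(x) = (x0 - 1)^2 + (x1 - 2)^2
# subject to g(x) = x0 + x1 - 2 <= 0 and h(x) = x0 - x1 = 0.
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2  # objective f(x)

constraints = [
    {"type": "ineq", "fun": lambda x: -(x[0] + x[1] - 2.0)},  # g(x) <= 0
    {"type": "eq",   "fun": lambda x: x[0] - x[1]},           # h(x) = 0
]

result = minimize(f, x0=np.zeros(2), constraints=constraints)
print(result.x, result.fun)  # optimizer x* = (1, 1) and value f(x*) = 1
```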

Search Space

General Search Space

In an optimization problem, the search space, also referred to as the domain, is defined as the set X comprising all possible values that the decision variables x can assume. This set X serves as the universal collection of candidate solutions over which the objective function is evaluated, independent of any specific constraints that might further restrict feasible choices. For instance, in continuous optimization, X = \mathbb{R}^n represents the n-dimensional real vector space, allowing decision variables to take any real-valued coordinates. In discrete settings, X might be the integer lattice \mathbb{Z}^n or the binary hypercube \{0,1\}^n, where variables are restricted to integer or binary values, respectively.

The properties of the search space significantly influence the nature of optimization. It can be bounded, such as when confined to a compact subset like a box [x_{\min}, x_{\max}] \subset \mathbb{R}^n, or unbounded, as in the full \mathbb{R}^n or \mathbb{Z}^n. Connectivity varies: continuous spaces like \mathbb{R}^n are path-connected, enabling smooth traversals, while discrete spaces like \mathbb{Z}^n are disconnected, consisting of isolated points. Many search spaces possess a metric structure to quantify distances between points; for example, the Euclidean metric d(x, y) = \|x - y\|_2 applies naturally in \mathbb{R}^n, facilitating measures of proximity during exploration.

The search space provides the foundational "universe" for optimization algorithms to explore potential solutions. Its size and complexity directly impact solvability: in high-dimensional settings, the sheer extent of the space leads to the curse of dimensionality, where the volume increases dramatically with dimension n, making exhaustive search computationally infeasible even for simple uniform sampling. This phenomenon, first articulated by Bellman, arises because the number of points needed to cover the space adequately scales exponentially, complicating exploration and increasing sensitivity to initial conditions.

Examples of search spaces abound across problem types. Real vector spaces \mathbb{R}^n are common in engineering design, where variables represent continuous parameters like material densities. Integer lattices \mathbb{Z}^n appear in scheduling tasks with whole-number allocations. More abstract sets include the space of all permutations of n elements for problems like the traveling salesman, where solutions are orderings of cities, or the set of all spanning trees of a given graph for network optimization, encompassing all connected acyclic subsets of edges.

Distinguishing between infinite and finite search spaces has profound implications for solution methods. Finite spaces, such as the n! permutations in combinatorial problems, allow in principle for complete enumeration, though this is impractical for large n due to exponential growth. Infinite spaces, like \mathbb{R}^n, preclude enumeration entirely, necessitating approximation techniques or convergence guarantees based on the space's topology and metric properties.
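The exponential scaling behind the curse of dimensionality can be made concrete with a few lines of arithmetic. The sketch below counts the grid points needed to cover the unit box [0, 1]^n at a fixed resolution of k points per axis; the choice k = 10 is arbitrary and purely illustrative.

```python
# Curse of dimensionality: covering [0, 1]^n at fixed resolution
# requires k**n grid points, which grows exponentially in n.
k = 10  # grid points per dimension (spacing 0.1)
for n in (1, 2, 5, 10, 20):
    print(f"n = {n:2d}: {k**n:,} grid points")
# n = 20 already needs 10^20 points, so exhaustive search is infeasible.
```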

Feasible Search Space

In optimization problems, the feasible set, also referred to as the feasible region, is the subset of the overall search space that consists of all points satisfying the given constraints. It is formally defined as the set \{ x \in X \mid g_i(x) \leq 0 \ \forall i = 1, \dots, m, \ h_j(x) = 0 \ \forall j = 1, \dots, p \}, where X is the search space, g_i represent the inequality constraint functions, and h_j represent the equality constraint functions. This set delineates the valid solutions, excluding any points that violate the problem's restrictions.

The properties of the feasible set depend on the form of the constraints. If the inequality constraint functions g_i are convex and the equality constraints h_j are affine, the feasible set is convex, meaning the line segment between any two feasible points lies entirely within the set. Convexity ensures desirable geometric properties, such as the absence of local optima distinct from global ones in convex problems. The feasible set is compact if it is closed and bounded, guaranteeing the existence of optimal solutions for continuous objective functions over it, as per the extreme value theorem. If the feasible set is empty, the problem is deemed infeasible, indicating no valid solution exists that meets all constraints simultaneously.

Constraints shaping the feasible set are categorized as hard or soft. Hard constraints must be strictly satisfied, directly defining the boundaries of the feasible set, while soft constraints allow violations but incorporate penalties into the objective function to encourage compliance. They further divide into simple bounds, such as l \leq x \leq u on individual variables, and general constraints involving nonlinear or multivariable relations.

The feasible set plays a central role in optimization, as candidate solutions are evaluated for optimality solely within its confines; optima can arise at interior points or on the boundary. Detecting whether the feasible set is nonempty is a preliminary step in solving constrained problems; for instance, in linear programming, a phase I approach introduces auxiliary variables to ascertain feasibility without optimizing the original objective. In settings with uncertainty, such as parameter variations, robust feasibility extends the concept by requiring solutions to satisfy constraints across an entire uncertainty set, often via worst-case formulations to ensure reliability. This approach, prominent in robust optimization, addresses real-world variability absent in deterministic models.
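A minimal feasibility check in the linear case can be sketched as follows, in the spirit of the phase I idea above: with a zero objective, a linear programming solver only has to locate some point of the feasible set, or report that none exists. This assumes SciPy's linprog; the constraint data is illustrative, not taken from the sources.

```python
# Feasibility check via an LP with a zero objective: the solver either
# returns a feasible point or reports infeasibility (res.status != 0).
import numpy as np
from scipy.optimize import linprog

A_ub = np.array([[1.0, 1.0], [-1.0, 2.0]])  # inequality rows: A_ub @ x <= b_ub
b_ub = np.array([4.0, 2.0])
A_eq = np.array([[1.0, -1.0]])              # equality rows: A_eq @ x = b_eq
b_eq = np.array([0.0])

res = linprog(c=[0.0, 0.0], A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None), (0, None)])  # simple bounds x >= 0
print("feasible" if res.status == 0 else "infeasible", res.x)
```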

Continuous Optimization Problem

Definition and Features

Continuous optimization involves finding values for real-valued decision variables that minimize or maximize a continuous function, subject to constraints defining a feasible region, typically over real numbers in \mathbb{R}^n. Unlike discrete optimization, the search space is infinite and continuous, allowing variables to take any value within their domain, often enabling the use of derivative-based methods for solution.

Key features include the objective function f(x), which is often smooth (differentiable) and may be linear, quadratic, or nonlinear; equality or inequality constraints like c_i(x) = 0 or c_i(x) \geq 0; and bounds on variables. Problems can be unconstrained (no restrictions beyond the domain) or constrained, with convex formulations (a convex objective over a convex feasible set) guaranteeing global optima via local searches. Algorithms exploit gradients or Hessians for efficiency, contrasting with enumeration in discrete cases, and include interior-point methods for large-scale problems.

Common subtypes are linear programming (linear functions), quadratic programming (quadratic objectives with linear constraints), and nonlinear programming (general smooth functions). Challenges arise in nonconvex problems, where multiple local optima exist, requiring techniques like branch-and-bound or heuristics. The field emphasizes scalability for high-dimensional applications, with regularization terms often added to prevent overfitting in data-driven contexts.
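To make the role of derivatives concrete, the sketch below minimizes a smooth convex quadratic with an analytic gradient, assuming SciPy's BFGS implementation; because the problem is convex, the local search finds the global minimizer. The specific quadratic is an illustrative choice.

```python
# Derivative-based minimization of a convex quadratic with BFGS.
import numpy as np
from scipy.optimize import minimize

b = np.array([1.0, 2.0])

def f(x):
    return 0.5 * x @ x - b @ x  # convex quadratic, minimizer x* = b

def grad(x):
    return x - b                 # analytic gradient supplied to the solver

res = minimize(f, x0=np.zeros(2), jac=grad, method="BFGS")
print(res.x)  # -> [1. 2.], the unique global minimizer
```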

Common Examples

Continuous optimization problems often model real-world scenarios with smooth relationships, such as curve fitting or parameter estimation. Least squares regression minimizes the sum of squared residuals \min_x \frac{1}{2m} \|Ax - y\|_2^2 to fit a linear model to data points (a_j, y_j), widely used in statistics and machine learning for prediction. Ridge regression extends least squares by adding a penalty \min_x \frac{1}{2m} \|Ax - y\|_2^2 + \lambda \|x\|_2^2, where \lambda > 0 regularizes the solution to handle overfitting in applications like predictive modeling. Portfolio optimization, as in Markowitz's mean-variance model, minimizes risk \min_w w^T \Sigma w subject to expected return constraints and w^T 1 = 1, balancing return and volatility in finance. Support vector machines (SVM) solve \min_{x, \beta} \frac{1}{m} \sum_j \max(1 - y_j (a_j^T x - \beta), 0) + \frac{\lambda}{2} \|x\|_2^2 for classification, maximizing margins in supervised learning tasks. Nonlinear examples include minimizing drag in aerodynamic design via simulations or optimizing control systems in engineering, where objectives involve nonlinear dynamics.
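The least-squares and ridge objectives above admit closed-form solutions, which the sketch below computes directly with NumPy on synthetic data; the ridge normalization here uses \|Ax - y\|_2^2 + \lambda \|x\|_2^2 rather than the 1/(2m)-scaled version from the text, which changes only the effective \lambda.

```python
# Closed-form least squares and ridge regression on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 3))                         # design matrix (m x n)
y = A @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

x_ls, *_ = np.linalg.lstsq(A, y, rcond=None)          # min ||Ax - y||^2

lam = 0.1                                             # ridge penalty weight
x_ridge = np.linalg.solve(A.T @ A + lam * np.eye(3),  # (A^T A + lam I) x = A^T y
                          A.T @ y)

print(x_ls, x_ridge)  # ridge shrinks the coefficients slightly toward zero
```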

Combinatorial Optimization Problem

Definition and Features

Combinatorial optimization constitutes a subfield of mathematical optimization focused on selecting the best element from a finite set of discrete objects, where the feasible solutions correspond to structures such as subsets, permutations, or graphs. The search space X is inherently finite and discrete, often comprising configurations that grow exponentially with the problem size n, such as |X| = n! for permutations or 2^n for subsets. Many canonical problems in this domain, including the traveling salesman problem and the knapsack problem, are NP-hard, implying that exact solutions cannot be guaranteed in polynomial time unless P = NP.

Key features of combinatorial optimization include the absence of differentiability in the objective function, necessitating algorithms that avoid gradient-based methods and instead employ exact techniques like exhaustive enumeration for small instances or branch-and-bound for structured problems. Approximations and heuristics, such as metaheuristics or relaxations, are commonly used for scalability, with integrality constraints implicitly enforced by the discrete domain rather than explicit continuous bounds. The field draws from combinatorics, algorithms, and linear programming to navigate these challenges.

Prominent subtypes encompass integer programming, where decision variables are constrained to integers x \in \mathbb{Z}^n to model discrete choices; the set covering problem, which minimizes the number of selected sets to cover a universal set; and the matching problem, which maximizes pairings in a graph while avoiding shared vertices. These formulations highlight the reliance on finite enumerative structures, with the feasible set often representing discrete polyhedra that admit continuous relaxations for bounding purposes.

Central challenges arise from the exponential growth of |X|, rendering exact solutions intractable for large n, a difficulty underscored by the P versus NP implications for decision versions of these problems. Solution concepts center on identifying an optimal subset, permutation, or graph configuration that extremizes the objective, while for NP-hard cases, approximation algorithms guarantee solutions within a bounded ratio of optimality, such as the 2-approximation for vertex cover, a covering problem. Emerging quantum-inspired approaches, like the Quantum Approximate Optimization Algorithm (QAOA), leverage variational quantum circuits to approximate solutions for combinatorial instances on near-term quantum devices. Understanding these problems requires familiarity with basic combinatorics, including measures like binomial coefficients \binom{n}{k} to quantify search space scales.
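The exponential growth of |X| is easy to exhibit: plain exhaustive enumeration, the simplest exact technique mentioned above, touches (n-1)! tours for an n-city traveling salesman instance once one city is fixed as the start. The brute-force sketch below is workable only for tiny n; the distance matrix is made up for illustration.

```python
# Brute-force TSP by enumerating all (n-1)! tours starting at city 0.
from itertools import permutations

D = [[0, 2, 9, 10],
     [1, 0, 6, 4],
     [15, 7, 0, 8],
     [6, 3, 12, 0]]  # D[i][j] = cost of travelling from city i to city j
n = len(D)

def tour_cost(tour):
    # Sum edge costs around the cycle, including the return to the start.
    return sum(D[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

# Fixing city 0 as the start avoids counting rotations of the same cycle.
best = min(([0] + list(p) for p in permutations(range(1, n))), key=tour_cost)
print(best, tour_cost(best))
```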

Common Examples

Combinatorial optimization problems frequently arise in discrete settings where decisions involve selecting or arranging finite sets of elements. Classic examples illustrate the challenge of optimizing over combinatorial structures, such as permutations, subsets, or assignments, within finite search spaces. These problems are typically NP-hard, highlighting their computational complexity.

The Traveling Salesman Problem (TSP) requires finding a minimum-cost Hamiltonian cycle in a complete graph, where the vertices represent cities and the edge weights denote travel distances or costs, such that each city is visited exactly once before returning to the starting point. The decision variables correspond to permutations of the cities, emphasizing the discrete nature of sequencing in a finite set of possible tours. This problem exemplifies routing challenges in logistics and manufacturing.

The 0-1 knapsack problem involves selecting a subset of items, each with a value v_i and weight w_i, to maximize the total value \sum v_i x_i subject to the capacity constraint \sum w_i x_i \leq W, where x_i \in \{0,1\} indicates whether item i is included. The decisions reflect the combinatorial choice of packing resources without fractions, common in applications like resource allocation and budgeting.

Graph coloring seeks the minimum number of colors needed to color the vertices of a graph such that no two adjacent vertices share the same color, with the decision space consisting of color assignments to vertices under adjacency constraints. This discrete problem captures scheduling and resource partitioning tasks, where the finite palette of colors limits feasible configurations.

The Maximum Matching Problem in a graph aims to select the largest possible set of edges with no two edges sharing a vertex, often applied to bipartite scenarios like pairing workers to tasks. The objective is to maximize the cardinality of the edge set, with the search space exploring subsets of edges that respect vertex disjointness in the discrete structure.

The Vehicle Routing Problem (VRP) generalizes TSP to multiple vehicles serving a set of customers from a depot, minimizing total travel cost while satisfying capacity and time constraints; variants include the capacitated VRP, where vehicle loads are limited. In e-commerce, post-2020 demand surges have amplified VRP's relevance for last-mile delivery, involving discrete route assignments across customer locations and fleets.
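For the 0-1 knapsack problem described above, the classic dynamic program runs in O(nW) time, which is pseudo-polynomial since the capacity W enters the bound as a number rather than as an input length. A minimal sketch, with illustrative item data:

```python
# Classic dynamic program for the 0-1 knapsack problem.
def knapsack(values, weights, W):
    # best[c] = maximum value achievable with capacity c
    best = [0] * (W + 1)
    for v, w in zip(values, weights):
        # Iterate capacities downward so each item is used at most once.
        for c in range(W, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[W]

print(knapsack(values=[60, 100, 120], weights=[10, 20, 30], W=50))  # -> 220
```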
