
Stochastic programming

Stochastic programming is a framework designed to make optimal decisions in the presence of uncertainty, where some or all problem parameters are modeled as random variables following known probability distributions. Unlike deterministic optimization, it incorporates probabilistic elements to account for real-world variability in data such as demands, prices, or yields, aiming to minimize expected costs or maximize expected profits over possible scenarios. The field originated in the early 1950s as an extension of linear programming to handle uncertain coefficients, pioneered by George B. Dantzig through concepts like recourse and decomposition for solving large-scale problems. A fundamental distinction in stochastic programming is between two-stage and multistage models. In two-stage stochastic programs, decisions are divided into "here-and-now" choices made before uncertainty is revealed (first stage) and "wait-and-see" recourse actions taken afterward (second stage), with the objective minimizing first-stage costs plus the expected value of second-stage recourse costs. This approach approximates more complex dynamics by collapsing future uncertainties into a single revelation point, often using scenario trees or sample average approximations for computation. Multistage extensions generalize this to sequential decision-making across multiple periods, where uncertainties unfold gradually and decisions adapt at each stage, better capturing non-anticipativity constraints that prevent foresight of future outcomes. These models are typically formulated as large-scale linear or mixed-integer programs, solved via methods like the L-shaped method, stochastic decomposition, or progressive hedging. Stochastic programming finds broad applications in industries facing uncertainty, including power generation planning under variable demand and fuel prices, financial portfolio management amid market volatility, and supply chain management with fluctuating supplies. In process systems engineering, it optimizes designs for chemical plants accounting for raw material variability, while in transportation, it supports routing and scheduling under stochastic travel times. Recent advances emphasize risk measures beyond expected value, such as conditional value-at-risk, to address downside potential, and integration with machine learning for scenario generation from data. Despite computational challenges for high-dimensional uncertainties, the field's growth reflects its essential role in decision support for complex real-world systems.

Introduction

Definition and Basic Concepts

Stochastic programming is a mathematical optimization paradigm designed to make optimal decisions in the presence of uncertainty, where some or all problem parameters are modeled as random variables following known probability distributions. Unlike deterministic optimization, which assumes fixed parameters, stochastic programming incorporates probabilistic elements to evaluate decisions based on expected outcomes or risk-adjusted measures, enabling robust solutions for real-world applications such as supply chain management, energy planning, and financial planning. The general formulation of a stochastic program seeks to minimize the expected value of an objective function involving uncertain parameters, expressed as \min_x \mathbb{E}[f(x, \xi)], where x represents the decision variables, \xi is a random vector with a specified probability distribution, and \mathbb{E}[\cdot] denotes the expectation operator. This expectation integrates over all possible realizations of \xi, weighted by their probabilities, to capture the average performance of the decision x. Probabilistic constraints may also be included, such as \mathbb{P}[g(x, \xi) \leq 0] \geq 1 - \alpha, ensuring that constraints hold with high probability 1 - \alpha. These elements replace fixed parameters in deterministic models with distributional information, shifting the focus from point estimates to optimization that accounts for variability. Central to stochastic programming are key concepts like here-and-now decisions, which are first-stage choices made before the uncertainty \xi is realized, and wait-and-see decisions, or recourse actions, implemented after observing \xi. Non-anticipativity constraints enforce that decisions at each stage depend only on information available up to that point, preventing the use of future revelations in earlier choices and ensuring feasible, implementable policies. For instance, in a two-stage setting, here-and-now decisions commit resources upfront, while wait-and-see adjustments adapt to realized outcomes. Uncertainty in stochastic programming is modeled using discrete distributions, represented by finite scenarios each with associated probabilities, or continuous distributions approximated via sampling methods. Beyond expected values, risk measures such as conditional value-at-risk (CVaR) provide introductory tools to quantify tail risks by focusing on the expected loss exceeding a value-at-risk threshold, promoting more conservative decisions in volatile environments.
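As a concrete illustration of these definitions, the following minimal Python sketch (the cost function, constraint, and all numbers are hypothetical) estimates the expected objective \mathbb{E}[f(x, \xi)] and the satisfaction probability \mathbb{P}[g(x, \xi) \leq 0] for a fixed decision by Monte Carlo sampling:
```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, xi):
    # toy convex cost: ordering cost plus a shortage penalty (hypothetical)
    return 1.0 * x + 2.0 * np.maximum(xi - x, 0.0)

def g(x, xi):
    # toy constraint: g(x, xi) <= 0 means demand exceeds x by at most 30
    return xi - x - 30.0

x = 100.0                                             # candidate here-and-now decision
xi = rng.normal(loc=100.0, scale=20.0, size=100_000)  # sampled demand realizations

expected_cost = f(x, xi).mean()         # estimates E[f(x, xi)]
prob_feasible = (g(x, xi) <= 0).mean()  # estimates P[g(x, xi) <= 0]

print(f"E[f] ~ {expected_cost:.2f}, P[g <= 0] ~ {prob_feasible:.3f}")
```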

Historical Development

The origins of stochastic programming trace back to the mid-1950s, when researchers began addressing the inherent uncertainties in real-world optimization problems. George B. Dantzig, often regarded as the father of linear programming, highlighted the stochastic nature of practical applications in his seminal 1955 paper, "Linear Programming under Uncertainty," where he proposed initial approaches to incorporate probabilistic elements into linear programming. Independently, E. M. L. Beale contributed foundational work in the same year through his paper "On Minimizing a Convex Function Subject to Linear Inequalities," which explored methods for handling stochastic coefficients by approximating expected values and recourse actions. These efforts marked the field's emergence as a distinct extension of deterministic optimization, focusing on decision-making under uncertainty. Key milestones in the late 1950s and 1960s further solidified the theoretical framework. Dantzig and Albert Madansky formalized the two-stage recourse problem in their 1961 paper, "On the Solution of Two-Stage Linear Programs under Uncertainty," presented at the Fourth Berkeley Symposium on Mathematical Statistics and Probability, introducing supporting hyperplanes for the recourse function to enable practical computation. Chance-constrained programming, a subclass emphasizing probabilistic feasibility, was introduced by Abraham Charnes and William W. Cooper in 1959, allowing constraints to hold with a specified probability rather than deterministically. During the 1960s and 1970s, decomposition methods gained prominence for tackling larger-scale problems; notably, the L-shaped method, an adaptation of Benders decomposition for stochastic programs, was developed by Richard Van Slyke and Roger J.-B. Wets in 1969, facilitating iterative solution of two-stage models by generating optimality and feasibility cuts. The 1980s and 1990s saw significant growth in computational techniques and modeling sophistication, driven by advances in hardware and algorithms. Scenario trees emerged as a critical tool for representing multistage uncertainty, with early developments in the 1980s enabling non-anticipative decisions in dynamic settings, as explored in works like those by John R. Birge and François Louveaux in their 1997 book, which built on prior scenario-based approximations from the preceding decade. Mixed-integer stochastic programming also advanced, with theoretical and computational progress starting in the late 1980s, allowing integration of discrete decisions under uncertainty. The formation of the Stochastic Programming Society in 2012, evolving from the earlier Committee on Stochastic Programming (established in 1982) and the triennial International Conference on Stochastic Programming (initiated in 1986), fostered collaboration and dissemination of these innovations. In recent years, up to 2025, stochastic programming has integrated with machine learning for enhanced scenario generation, such as using generative models to create realistic uncertainty representations from data, as reviewed in studies on machine learning applications to stochastic optimization. Distributionally robust extensions have also proliferated, addressing ambiguity in probability distributions to improve solution robustness, with key formulations appearing in works like those on two-stage distributionally robust stochastic programs since 2020. These trends are particularly evident in applications to renewable energy planning, where multistage models optimize capacity expansion under variable demand and renewable outputs, as demonstrated in recent frameworks combining stochastic hydropower operations with expansion decisions.

Problem Formulations

Two-Stage Stochastic Programs

Two-stage stochastic programs represent a core paradigm in stochastic programming, modeling decision-making under uncertainty where first-stage decisions must be made before the realization of random variables, followed by second-stage corrective actions. Introduced by George B. Dantzig in his seminal 1955 paper, this framework captures "here-and-now" choices in the first stage and "wait-and-see" adjustments in the second stage once uncertainty is revealed. Such models are widely applied in areas like production planning and inventory management, where initial commitments cannot be easily revised without incurring additional costs. The standard mathematical formulation of a two-stage stochastic linear program with recourse is \min_{x} \ c^\top x + \mathbb{E}_\xi [Q(x, \xi)] subject to A x = b, \quad x \geq 0, where Q(x, \xi) = \min_y \{ q(\xi)^\top y \mid W y = h(\xi) - T(\xi) x, \ y \geq 0 \} is the recourse function capturing the minimum cost of second-stage adjustments for a given first-stage decision and uncertainty realization. Here x denotes the first-stage decision vector, y the second-stage decision vector, \xi the random vector representing uncertainty, and c the first-stage cost coefficients. The expectation \mathbb{E}_\xi integrates over the distribution of \xi, reflecting the probabilistic weighting of the recourse costs. This ensures that first-stage decisions hedge against potential future scenarios while minimizing total expected costs. Recourse in two-stage models refers to the feasibility and cost of second-stage actions taken to address deviations caused by uncertainty. Complete recourse assumes that for every first-stage x and every realization \xi, there exists a feasible second-stage y satisfying the constraints, ensuring Q(x, \xi) is finite and well-defined everywhere. Relatively complete recourse relaxes this by requiring feasibility only for first-stage decisions in the original feasible set. Simple recourse, a special case, models second-stage decisions as non-negative surplus and shortage variables to balance supply-demand imbalances, often with linear costs. Economically, recourse costs q(\xi) can be interpreted as penalties for corrective measures, such as overtime production or lost sales, incentivizing first-stage decisions that minimize expected adjustments. The random vector \xi is typically assumed to have a known distribution, often with finite discrete support for tractability—representing scenarios \xi_s with probabilities p_s > 0, \sum_s p_s = 1, so the expectation becomes \sum_s p_s Q(x, \xi_s). For continuous distributions, the expectation is approximated by generating a finite set of scenarios via sampling methods like Monte Carlo, enabling reformulation as a large-scale deterministic equivalent. This scenario-based approximation preserves the two-stage structure while facilitating computational solution. A representative example is inventory planning under random demand. In the first stage, a retailer decides an order quantity x \geq 0 at unit cost c, subject to a budget constraint x \leq b. Demand \xi follows a discrete distribution with scenarios \xi_s (e.g., low, medium, high demand) and probabilities p_s. In the second stage, if \xi_s > x, a shortage incurs penalty p (\xi_s - x); if \xi_s < x, excess inventory incurs holding cost h (x - \xi_s). The recourse function is Q(x, \xi_s) = p \max(\xi_s - x, 0) + h \max(x - \xi_s, 0), and the objective minimizes c x + \sum_s p_s Q(x, \xi_s).
For instance, with c=1, b=100, p=2, h=0.5, and scenarios \xi_1=80 (p_1=0.4), \xi_2=120 (p_2=0.6), the expected total cost is constant at 128 for every x in [80, 120]: on that interval the marginal ordering and holding costs exactly offset the expected savings in shortage penalties. Under the budget constraint x \leq 100, any x in [80, 100] is therefore optimal with expected total cost 128, balancing overstock and shortage risks.
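A short Python check of this toy instance (a direct evaluation of the formulas above, not drawn from any library) confirms the flat expected-cost region numerically:
```python
import numpy as np

c, b, p, h = 1.0, 100.0, 2.0, 0.5
scenarios = np.array([80.0, 120.0])
probs = np.array([0.4, 0.6])

def expected_cost(x):
    # first-stage ordering cost plus expected second-stage recourse cost
    shortage = p * np.maximum(scenarios - x, 0.0)
    holding = h * np.maximum(x - scenarios, 0.0)
    return c * x + probs @ (shortage + holding)

for x in [70, 80, 90, 100]:
    print(x, expected_cost(x))  # 138 at x=70, then 128 for all x in [80, 100]
```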

Multistage Stochastic Programs

Multistage stochastic programs extend the framework of stochastic optimization to sequential decision-making over multiple time periods, where uncertainty unfolds gradually and decisions are made adaptively based on partial information revelation. In this setting, decisions at each stage t are taken before the full realization of future uncertainties, leading to a dynamic structure that captures the value of information and recourse actions over time. This contrasts with simpler models by allowing for a richer representation of real-world planning problems, such as resource allocation under evolving market conditions or environmental factors. The general formulation of a multistage stochastic program involves minimizing a nested sequence of expectations over decision variables x_t at each stage t = 1, \dots, T, conditional on information available up to that point. Specifically, the objective is to solve \min \mathbb{E} \left[ \sum_{t=1}^T c_t^\top x_t \right], where the expectation is taken with respect to the filtration \{\mathcal{F}_t\} representing the information structure, subject to constraints like A_t x_t = b_t(\omega_t) for realizations \omega_t of the random process \xi_t. Non-anticipativity constraints ensure that each x_t is measurable with respect to the information available when the decision is taken, meaning decisions at stage t depend only on past observations and not on future outcomes, preventing the use of unattainable foresight. This recursive structure defines the value function at each stage via dynamic programming, where the cost-to-go from stage t onward is the conditional expectation of the remaining costs given the history up to t. For instance, the two-stage model emerges as a special case when T=2, reducing to a single here-and-now decision followed by wait-and-see recourse. To operationalize this formulation, uncertainties are typically discretized into scenario trees, which provide a discrete approximation of the underlying stochastic process. A scenario tree is constructed as a lattice with nodes representing partial histories of the random outcomes, where branches from a node denote possible evolutions at the next stage, incorporating both fan-out (divergence into multiple futures) and fan-in (convergence of paths to shared nodes) structures. Decisions are node-based, with the same action prescribed for all scenarios passing through a given node to enforce non-anticipativity; for example, the root node corresponds to the initial decision x_1, while leaf nodes terminate at stage T with associated probabilities summing to one along each full path (scenario). Construction methods, such as backward reduction or forward clustering of initial scenarios, aim to balance approximation accuracy with computational feasibility, often using distance metrics like the L_r-norm to bound errors in the tree's representation of the original distribution. A key simplifying assumption in many multistage models is stagewise independence, where the random increments \xi_t at stage t are independent of the past history \xi_{[t-1]}. This Markovian property decouples the conditional expectations in the dynamic program, allowing the cost-to-go function Q_t(x_{t-1}, \xi_t) to depend only on the current state rather than the entire path, which facilitates recursive computation and reduces the dimensionality of scenario dependencies. Without this assumption, the full history must be tracked, complicating tree structures and increasing storage requirements.
Despite these advancements, multistage stochastic programs face significant computational challenges, primarily the curse of dimensionality arising from the exponential growth in the number of nodes and scenarios as the horizon T and branching factors increase. For S scenarios per stage, the tree size scales as O(S^T), rendering exact solutions intractable for large horizons; approximation techniques must thus be employed to prune or aggregate nodes while preserving solution quality. A representative application is long-term hydropower planning, where operators must allocate water releases over multiple periods (e.g., weeks or months) amid uncertain inflows and electricity prices, balancing immediate generation against future reservoir levels—problems where even a modest horizon of T=7 stages can yield tens of thousands of nodes due to hydrological variability.
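The exponential growth is easy to quantify; the sketch below (branching numbers are hypothetical) counts the nodes and leaf scenarios of a stagewise-independent tree with S branches per stage:
```python
def tree_size(S, T):
    """Nodes and leaf scenarios of a scenario tree with S branches
    at each of T stages (root at stage 1, no branching after stage T)."""
    nodes = sum(S**t for t in range(T))  # 1 + S + S^2 + ... + S^(T-1)
    scenarios = S**(T - 1)               # one scenario per leaf
    return nodes, scenarios

for T in (3, 5, 7):
    nodes, scen = tree_size(S=5, T=T)
    print(f"T={T}: {nodes} nodes, {scen} scenarios")
# T=7 with 5 branches per stage already gives 19,531 nodes and 15,625 scenarios
```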

Chance-Constrained Programs

Chance-constrained programs address optimization problems under uncertainty by requiring that constraints hold with a specified probability, rather than deterministically, to model reliability in the presence of random parameters. The standard formulation minimizes a linear objective subject to probabilistic constraints on linear inequalities involving uncertain parameters, expressed as \min_{x} c^\top x subject to \mathbb{P}\{A(x, \xi) \leq b\} \geq 1 - \alpha, where x is the decision vector, \xi is a random vector with known distribution, A(x, \xi) is an affine function in x and \xi, b is a deterministic vector, and \alpha \in (0,1) is the risk tolerance level. Chance constraints can be classified as individual or joint based on how multiple constraints interact probabilistically. Individual chance constraints apply separately to each constraint, requiring \mathbb{P}\{A_i(x, \xi) \leq b_i\} \geq 1 - \alpha for each row i, which simplifies computation but may overestimate feasibility since violations can occur simultaneously across constraints. In contrast, joint chance constraints ensure simultaneous satisfaction of all constraints with the desired probability, \mathbb{P}\{A(x, \xi) \leq b\} \geq 1 - \alpha, providing a more conservative and realistic reliability guarantee but increasing computational complexity. Programs are further categorized as static or dynamic depending on the decision structure and uncertainty resolution. Static chance-constrained programs involve here-and-now decisions x made before observing \xi, with \xi drawn from a fixed known distribution, leading to single-stage optimization. Dynamic variants extend this to sequential decisions over time, where uncertainty unfolds gradually, often incorporating feedback or recourse actions, though the core probabilistic constraint framework remains similar. Solving general chance-constrained programs is challenging due to non-convexity arising from the probabilistic nature of the constraints, which can result in non-convex feasible regions even for linear deterministic counterparts. For arbitrary distributions, these problems are NP-hard, particularly with discrete or non-elliptical continuous distributions, complicating exact solutions. However, exact convex reformulations exist when \xi follows a normal distribution, transforming individual chance constraints into deterministic second-order cone constraints via the inverse cumulative distribution function of the standard normal. To address non-convexity, conservative approximations replace the chance constraint with a convex surrogate, such as one based on conditional value-at-risk (CVaR), which bounds the expected shortfall beyond the \alpha-quantile and yields a tractable semidefinite program for certain distributions. This CVaR approach provides a feasible but potentially suboptimal solution, with tightness improving as \alpha decreases. A representative application appears in reservoir operation, where the goal is to determine release policies that minimize costs while ensuring the probability of reservoir overflow (spill) remains below a threshold, such as \mathbb{P}\{\text{storage level} > \text{capacity}\} \leq \alpha, accounting for uncertain inflows.
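For the normal case, the convex reformulation can be written down directly. The sketch below (hypothetical data; it assumes the cvxpy and scipy packages are available) enforces \mathbb{P}\{a^\top x \leq b\} \geq 1 - \alpha for a \sim \mathcal{N}(\mu, \Sigma) via the equivalent second-order cone constraint \mu^\top x + z_{1-\alpha} \|\Sigma^{1/2} x\|_2 \leq b:
```python
import cvxpy as cp
import numpy as np
from scipy.stats import norm
from scipy.linalg import sqrtm

# hypothetical problem data
mu = np.array([1.0, 2.0])       # mean of the random constraint row a
Sigma = np.array([[0.5, 0.1],
                  [0.1, 0.3]])  # covariance of a
b, alpha = 10.0, 0.05
c = np.array([-1.0, -1.0])      # maximize x1 + x2 via min c^T x

Sigma_half = np.real(sqrtm(Sigma))
z = norm.ppf(1 - alpha)         # standard normal quantile z_{1-alpha}

x = cp.Variable(2, nonneg=True)
constraints = [mu @ x + z * cp.norm(Sigma_half @ x, 2) <= b]
prob = cp.Problem(cp.Minimize(c @ x), constraints)
prob.solve()
print(x.value)                  # decision satisfying the chance constraint
```
Note that the reformulated constraint is convex only for \alpha < 1/2 (so that z_{1-\alpha} > 0), which is the usual regime for reliability requirements.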

Solution Methods

Scenario-Based Methods

Scenario-based methods in stochastic programming approximate the continuous probability distribution of uncertain parameters, denoted \xi, by a finite discrete set of scenarios \omega = 1, \dots, S, where each scenario \omega is assigned a probability p_\omega > 0 such that \sum_{\omega=1}^S p_\omega = 1. This discretization transforms the original stochastic program into a deterministic equivalent problem that can be solved using standard optimization techniques, enabling practical computation for problems where the uncertainty space \Xi is infinite or continuous. The choice of scenarios is guided by the need to capture the essential features of the distribution, such as moments or support points, often derived from historical data or analytical approximations. For multistage stochastic programs, scenario-based methods employ non-anticipative scenario trees to model the evolution of uncertainty over time, ensuring that decisions at each stage depend only on information available up to that point. In such a tree, nodes represent decision points, and branches from a common ancestor node share the same history, enforcing non-anticipativity through constraints that identical histories lead to identical decisions. For instance, a three-period tree might branch into multiple paths at each stage, with probabilities propagating along the branches to reflect conditional distributions of \xi. This structure preserves the sequential nature of decisions under partial information revelation. The resulting deterministic equivalent for a two-stage stochastic program has a size that grows linearly with the number of scenarios S, featuring n + S m variables and m_1 + S m_2 constraints, where n and m denote the numbers of first- and second-stage variables and m_1 and m_2 the numbers of first- and second-stage constraints. For a two-stage formulation \min_x c^\top x + \mathbb{E}[Q(x, \xi)], the equivalent is \min_{x, \{y_\omega\}} c^\top x + \sum_{\omega=1}^S p_\omega q_\omega^\top y_\omega subject to A x = b and T_\omega x + W y_\omega = h_\omega for each \omega, with non-negativity. In multistage settings, the extensive form expands similarly across tree nodes, leading to exponential growth for deep trees, which poses computational challenges. To mitigate this, scenario reduction techniques such as fast-forward selection iteratively select a subset of scenarios by maximizing probabilistic similarity, using metrics like the Kantorovich distance to preserve distributional properties while reducing S (e.g., from thousands to dozens), thereby trading minor approximation error for significant decreases in solve time. A representative example is capacity planning under demand uncertainty, where scenarios are generated to model varying demand over planning periods. In a two-stage model, first-stage decisions set capacities, while second-stage recourse adjusts production or backorders for each \omega with probability p_\omega; for instance, discretizing a continuous demand distribution into five equiprobable scenarios allows solving the deterministic equivalent to balance expected costs. Scenarios can be generated via sampling methods to approximate the underlying distribution.
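A minimal sketch of fast-forward selection (a greedy heuristic written under simplifying assumptions—one-dimensional scenarios and absolute-value distances; all data are hypothetical) illustrates the selection and probability-redistribution steps:
```python
import numpy as np

def fast_forward(scenarios, probs, n_keep):
    """Greedily pick n_keep scenarios minimizing the probability-weighted
    distance of the full set to the selected subset (Kantorovich-style)."""
    S = len(scenarios)
    dist = np.abs(scenarios[:, None] - scenarios[None, :])  # pairwise |xi_i - xi_j|
    selected, remaining = [], list(range(S))
    for _ in range(n_keep):
        best, best_cost = None, np.inf
        for j in remaining:
            trial = selected + [j]
            # each scenario is "served" by its closest selected representative
            cost = (probs * dist[:, trial].min(axis=1)).sum()
            if cost < best_cost:
                best, best_cost = j, cost
        selected.append(best)
        remaining.remove(best)
    # reassign probabilities of dropped scenarios to the nearest kept one
    new_probs = np.zeros(n_keep)
    nearest = dist[:, selected].argmin(axis=1)
    for i in range(S):
        new_probs[nearest[i]] += probs[i]
    return scenarios[selected], new_probs

rng = np.random.default_rng(1)
xi = rng.lognormal(mean=4.0, sigma=0.4, size=200)  # e.g., demand scenarios
p = np.full(200, 1 / 200)
kept, kept_p = fast_forward(xi, p, n_keep=10)
print(np.sort(kept).round(1), kept_p.sum())        # 10 scenarios, probs sum to 1
```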

Decomposition Techniques

Decomposition techniques in stochastic programming exploit the block structure of the problem to divide large-scale formulations into smaller, more manageable subproblems, facilitating efficient computation while preserving optimality guarantees. These methods are particularly valuable for two-stage and multistage programs where the recourse function introduces nonlinearity that can be approximated through iterative cut generation or multiplier updates. By solving subproblems corresponding to scenarios—which represent possible realizations of the uncertainty—these algorithms generate supporting hyperplanes (cuts) that refine the first-stage decisions progressively. A foundational approach is Benders decomposition adapted to two-stage stochastic programs, known as the L-shaped method, which separates the here-and-now decisions x from the wait-and-see recourse decisions y. The two-stage problem is formulated as minimizing c^\top x + \mathbb{E}[Q(x, \xi)], where Q(x, \xi) = \min \{ q^\top y : W y = h - T x, \ y \geq 0 \} is the recourse function for realization \xi. The algorithm iteratively solves a master problem approximating the expected recourse via linear cuts and subproblems for each scenario to generate dual information. Optimality cuts are derived from extreme points \pi^i of the dual feasible set \{ \pi : W^\top \pi \leq q \}, taking the form \theta \geq \sum_k p_k (h_k - T_k x)^\top \pi_k^i, added to bound the recourse from below. Feasibility cuts, generated when a scenario subproblem is infeasible, use extreme rays \sigma^r of the dual cone to enforce (h_k - T_k x)^\top \sigma^r \leq 0 for the offending scenario. The process converges to the optimal solution under finite scenarios, with the master problem providing lower bounds and subproblem evaluations yielding upper bounds. Stochastic decomposition extends this framework by incorporating sampling to handle large or continuous uncertainty sets, generating sample-based cuts that approximate the expected recourse statistically. In the two-stage setting, the algorithm alternates between solving a master problem with cuts from a growing sample of scenarios and updating the sample to reduce variance in the approximation, ensuring almost sure convergence to the true optimum as the sample size increases. Variants for multistage programs build on this by sequentially sampling across stages, using forward and backward passes to propagate cuts and value functions, which bridges stochastic programming with approximate dynamic programming techniques. These multistage extensions incorporate variance reduction strategies, such as importance sampling or common random numbers, to accelerate convergence in high-dimensional settings with endogenous uncertainties. For instance, stochastic dual dynamic programming generates stagewise cuts based on empirical distributions, allowing parallel subproblem solves while maintaining statistical consistency. Progressive hedging provides an alternative dual strategy suited for scenario-based multistage programs, enforcing the non-anticipativity constraints—requiring decisions to be scenario-independent up to the information revelation—through penalty terms and aggregation. The algorithm solves scenario subproblems independently in each iteration, then updates Lagrange multipliers to penalize deviations from the average (hedged) policy, converging to a feasible non-anticipative solution under convexity assumptions. This approach is particularly effective for parallel computation, as subproblems can be distributed across processors, and it handles integer variables via relaxations or bundling. Recent enhancements include adaptive penalty updates to balance feasibility and optimality progress.
As an illustrative example, consider a two-stage newsvendor (inventory) model where the first-stage decision x is the order quantity, and the second-stage recourse accounts for overage and underage costs under uncertain demand \xi. The objective is to minimize \mathbb{E}[|\xi - x|] (assuming unit costs for excess and shortage), with 0 \leq x \leq 10 and scenarios \xi = 1, 2, 4, each with probability 1/3. The L-shaped method proceeds iteratively, solving the master problem and scenario subproblems to generate optimality cuts that tighten the approximation of the recourse function (feasibility cuts never arise here, since the recourse is complete). The process adds cuts based on dual (subgradient) information from the scenario subproblems, with upper bounds from subproblem costs and lower bounds from the master, converging after a few iterations to the optimal solution at x = 2 (the median of the demand scenarios), where Q(x) = 1.
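The following self-contained sketch implements this cutting-plane loop for the toy instance, using scipy's LP solver for the master problem; the cut bookkeeping is a standard Kelley-type construction rather than any particular package's API:
```python
import numpy as np
from scipy.optimize import linprog

xis = np.array([1.0, 2.0, 4.0])  # demand scenarios, each with probability 1/3
probs = np.full(3, 1 / 3)

def Q(x):
    """Expected recourse cost E|xi - x| and an expected subgradient."""
    value = probs @ np.abs(xis - x)
    subgrad = probs @ np.sign(x - xis)  # d/dx |xi - x| = sign(x - xi)
    return value, subgrad

cuts = []                               # (g_k, b_k) pairs: theta >= g_k*x + b_k
x_hat, theta = 0.0, -np.inf
for it in range(20):
    val, g = Q(x_hat)
    if val - theta <= 1e-8:             # upper bound meets master lower bound
        break
    cuts.append((g, val - g * x_hat))   # theta >= val + g*(x - x_hat)
    # master: min theta  s.t.  g_k*x - theta <= -b_k,  0 <= x <= 10, theta >= 0
    A_ub = [[gk, -1.0] for gk, bk in cuts]
    b_ub = [-bk for gk, bk in cuts]
    res = linprog(c=[0.0, 1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, 10), (0, None)])  # theta >= 0 is valid since Q >= 0
    x_hat, theta = res.x

print(f"optimal x = {x_hat:.4f}, Q(x) = {Q(x_hat)[0]:.4f}")  # x = 2, Q = 1
```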

Approximation and Sampling Methods

In stochastic programming, approximation methods rely on sampling techniques to estimate expectations involving random variables, enabling the solution of otherwise intractable problems. Monte Carlo sampling serves as a foundational approach, in which a set of N independent and identically distributed (i.i.d.) samples \xi^k, k=1,\dots,N, drawn from the underlying distribution, approximates the expectation \mathbb{E}[Q(x,\xi)] by the empirical mean \frac{1}{N} \sum_{k=1}^N Q(x,\xi^k). This method is particularly useful for high-dimensional or complex distributions, as it requires minimal assumptions on the probability measure beyond the ability to generate samples. The sample average approximation (SAA) method builds directly on Monte Carlo sampling by formulating and solving a deterministic counterpart that uses the sample-based estimate as a surrogate for the original program. In SAA, the true problem \min_x \mathbb{E}[Q(x,\xi)] is replaced by \min_x \frac{1}{N} \sum_{k=1}^N Q(x,\xi^k), which can be solved using standard optimization solvers, with the resulting solution serving as an approximation to the optimal stochastic decision. To mitigate the high variance inherent in crude Monte Carlo estimates, importance sampling reweights the sampling distribution to focus on regions of high contribution to the expectation, reducing the variance of the estimator without introducing bias when properly adjusted. For instance, samples are drawn from a proposal distribution q(\xi) and reweighted by the likelihood ratio \frac{p(\xi)}{q(\xi)}, where p is the target density, leading to more efficient approximations in problems with rare events or skewed uncertainties. Beyond basic Monte Carlo, specialized scenario construction methods enhance approximation quality by generating structured samples that better capture distributional properties. Quasi-Monte Carlo (QMC) methods employ low-discrepancy sequences, such as Sobol or Halton points, to produce samples that are more uniformly distributed than pseudorandom ones, yielding faster convergence rates—often O((\log N)^d / N) in dimension d—for smooth integrands typical in recourse functions. Latin hypercube sampling (LHS) divides the probability space into strata and samples one point from each, ensuring marginal coverage and reducing clustering, which is effective for scenario generation in multidimensional settings. Moment-matching techniques construct discrete scenarios by optimizing weights and support points to replicate specified statistical moments (e.g., mean, variance, skewness, and correlations) of the original distribution, providing a calibrated approximation suitable for multistage programs. These methods can be used as inputs to decomposition algorithms like Benders decomposition to improve subproblem evaluations. A representative application of SAA arises in portfolio selection under random asset returns, where the objective is to maximize expected return subject to risk constraints. Samples of return scenarios are generated via Monte Carlo or Latin hypercube sampling, and the SAA problem is solved to approximate the optimal portfolio; however, smaller sample sizes introduce bias toward overly optimistic objective estimates, while larger sizes reduce variance but increase computational cost, necessitating a trade-off analysis to balance estimation error and solvability.
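The sketch below (a toy newsvendor recourse function with hypothetical parameters; scipy's stats.qmc module supplies the Latin hypercube generator) compares the spread of crude Monte Carlo and LHS estimates of \mathbb{E}[Q(x, \xi)] across repeated trials:
```python
import numpy as np
from scipy.stats import norm, qmc

def Q(x, xi):
    # shortage penalty 2 per unit, holding cost 0.5 per unit (hypothetical)
    return 2.0 * np.maximum(xi - x, 0.0) + 0.5 * np.maximum(x - xi, 0.0)

x, N, reps = 100.0, 200, 500
rng = np.random.default_rng(0)
mc_estimates, lhs_estimates = [], []
for _ in range(reps):
    # crude Monte Carlo: i.i.d. normal demand samples
    xi_mc = rng.normal(100.0, 20.0, size=N)
    mc_estimates.append(Q(x, xi_mc).mean())
    # Latin hypercube: stratified uniforms mapped through the inverse CDF
    u = qmc.LatinHypercube(d=1, seed=rng).random(N).ravel()
    xi_lhs = norm.ppf(u, loc=100.0, scale=20.0)
    lhs_estimates.append(Q(x, xi_lhs).mean())

print(f"MC  std: {np.std(mc_estimates):.3f}")
print(f"LHS std: {np.std(lhs_estimates):.3f}")  # typically noticeably smaller
```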

Theoretical Foundations

Deterministic Equivalents

In stochastic programming, a deterministic equivalent reformulates the original problem into a large-scale deterministic program that preserves the optimality conditions of the stochastic formulation, allowing it to be solved using standard optimization solvers. This approach is particularly useful for scenario-based models where uncertainty is discretized into a finite set of scenarios, each with an associated probability. The resulting equivalent is often a linear or mixed-integer program, depending on the original problem structure. For a two-stage stochastic linear program with recourse, the deterministic equivalent takes the form of minimizing the expected total cost over scenarios. Specifically, consider a problem where first-stage decisions x are made before uncertainty is revealed, followed by second-stage recourse decisions y_\omega for each scenario \omega \in \Omega. The equivalent is given by \min_{x, \{y_\omega\}_{\omega \in \Omega}} \quad c^\top x + \sum_{\omega \in \Omega} p_\omega q_\omega^\top y_\omega subject to Ax = b, \quad T_\omega x + W y_\omega = h_\omega \quad \forall \omega \in \Omega, \quad x \geq 0, \quad y_\omega \geq 0 \quad \forall \omega \in \Omega, where p_\omega is the probability of scenario \omega, and the variables y_\omega are scenario-indexed to reflect recourse adapted to the realized outcome. This formulation expands the problem size linearly with the number of scenarios |\Omega|, making it tractable for moderate uncertainty but challenging for large scenario sets. In the multistage case, the deterministic equivalent adopts an extensive form that represents the scenario tree of evolving uncertainty over T stages. Here, decisions at stage t are replicated across all paths up to that point, with non-anticipativity enforced either through shared variables for decisions made under the same information history or via explicit linking constraints that ensure identical decisions for indistinguishable histories. The resulting problem size grows exponentially as O(S^T), where S is the number of branches per stage, leading to a curse of dimensionality that renders direct solution impractical for more than a few stages without reduction techniques. To mitigate this, scenario aggregation methods bundle similar scenarios into representative nodes, iteratively refining the approximation while preserving key distributional properties, as developed in progressive aggregation algorithms. A concrete example illustrates the two-stage deterministic equivalent for a simple linear program with three equally probable scenarios. Consider a farmer deciding on planting acres x_1 (wheat), x_2 (corn), and x_3 (sugar beets) on 500 acres to maximize expected profit while meeting uncertain yield-based feed requirements in the second stage. The first-stage objective includes planting costs: 150x_1 + 230x_2 + 260x_3. For each scenario—good (\omega=1), fair (\omega=2), and bad (\omega=3) yields—the second-stage recourse accounts for purchases (y, to cover feed requirements if production is insufficient) and sales (w, of excess production), with separate copies of y and w for each scenario, contributing -170w_1 - 150w_2 - 36w_3 - 10w_4 + 238y_1 + 210y_2 per scenario (where negative coefficients reflect revenues from sales and positive ones reflect purchase costs), subject to yield-dependent constraints like y_1 - w_1 \geq 200 - 3x_1 (good), y_1 - w_1 \geq 200 - 2.5x_1 (fair), and y_1 - w_1 \geq 200 - 2x_1 (bad) for wheat, with analogous forms for corn (requiring 240 tons) and beets (sales constraints without purchases, plus a 6000-ton quota separating the two beet sale prices), together with x_1 + x_2 + x_3 \leq 500 and non-negativity.
The full deterministic equivalent integrates these into a single linear program with second-stage contributions weighted by p_\omega = 1/3, yielding an optimal expected profit of 108,390 with x_1 = 170, x_2 = 80, x_3 = 250.
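A compact Pyomo sketch of this extensive form (using the classic Birge–Louveaux data quoted above; the set and variable names are illustrative) shows how scenario-indexed recourse variables enter the single linear program:
```python
from pyomo.environ import *

scenarios = ['good', 'fair', 'bad']
crops = ['wheat', 'corn', 'beets']
grains = ['wheat', 'corn']
prob = {s: 1.0 / 3 for s in scenarios}
plant_cost = {'wheat': 150, 'corn': 230, 'beets': 260}
yields = {'good': {'wheat': 3.0, 'corn': 3.6, 'beets': 24},
          'fair': {'wheat': 2.5, 'corn': 3.0, 'beets': 20},
          'bad':  {'wheat': 2.0, 'corn': 2.4, 'beets': 16}}
sell = {'wheat': 170, 'corn': 150}
buy = {'wheat': 238, 'corn': 210}
require = {'wheat': 200, 'corn': 240}

m = ConcreteModel()
m.x = Var(crops, within=NonNegativeReals)              # first stage: acres planted
m.y = Var(scenarios, grains, within=NonNegativeReals)  # purchases, per scenario
m.w = Var(scenarios, grains, within=NonNegativeReals)  # sales, per scenario
m.wb = Var(scenarios, within=NonNegativeReals)         # beet sales at 36 (quota)
m.wb2 = Var(scenarios, within=NonNegativeReals)        # beet sales at 10 (excess)

m.land = Constraint(expr=sum(m.x[c] for c in crops) <= 500)
m.feed = Constraint(scenarios, grains, rule=lambda m, s, c:
                    yields[s][c] * m.x[c] + m.y[s, c] - m.w[s, c] >= require[c])
m.beet = Constraint(scenarios, rule=lambda m, s:
                    m.wb[s] + m.wb2[s] <= yields[s]['beets'] * m.x['beets'])
m.quota = Constraint(scenarios, rule=lambda m, s: m.wb[s] <= 6000)

m.obj = Objective(sense=maximize, expr=
    -sum(plant_cost[c] * m.x[c] for c in crops)
    + sum(prob[s] * (sum(sell[c] * m.w[s, c] - buy[c] * m.y[s, c] for c in grains)
                     + 36 * m.wb[s] + 10 * m.wb2[s]) for s in scenarios))

# SolverFactory('glpk').solve(m)  # expected profit 108,390 at x = (170, 80, 250)
```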

Convergence and Statistical Inference

In stochastic programming, the sample average approximation (SAA) method replaces the expected value in the objective function with an empirical mean based on a sample of size N from the underlying probability distribution. Under suitable regularity conditions, such as the feasible set being compact and the objective function being continuous with finite moments, the SAA optimal value v_N converges almost surely to the true optimal value v^* as N \to \infty. Similarly, the SAA optimal solution x_N converges almost surely to the true optimal solution set, provided the latter is nonempty and the sample average function converges uniformly on the feasible set. The asymptotic distribution of the SAA optimal value is characterized by a central limit theorem: \sqrt{N} (v_N - v^*) converges in distribution to a normal random variable with zero mean and variance equal to the variance of the objective function evaluated at the optimal solution, \sigma^2 = \mathrm{Var}[f(x^*, \xi)], assuming a unique minimizer and sufficient regularity of the objective with respect to the decision variables. For more general cases without a unique minimizer, the limiting distribution involves the infimum of a Gaussian process over the optimal set. Variance estimation can be achieved using the sample variance of the objective function values at the SAA solution or, more precisely, through influence functions that account for the effect of the optimization on the estimator. Statistical inference in SAA leverages these asymptotic properties to construct confidence intervals for the optimal value, typically by standardizing the centered and scaled estimator with its estimated variance and applying normal quantiles; for instance, an approximate (1-\alpha)-level confidence interval is v_N \pm z_{\alpha/2} \hat{\sigma} / \sqrt{N}, where z_{\alpha/2} is the standard normal quantile and \hat{\sigma}^2 is a consistent estimator of the asymptotic variance. Hypothesis testing can assess model stability, such as comparing optimal values across competing distributions via Wald-type tests on the difference of SAA estimators, ensuring the selected model is robust to distributional assumptions. These tools provide quantifiable uncertainty measures essential for decision-making in applications with limited data. A representative example is the two-stage newsvendor problem, where the first-stage decision is the order quantity before demand realization, and the second-stage recourse adjusts for shortages or overstock. Applying SAA, the asymptotic normality of \sqrt{N} (v_N - v^*) holds under standard assumptions like a continuous demand distribution, with the limiting variance capturing the variability in profit due to demand uncertainty; numerical studies confirm that confidence intervals tighten rapidly with increasing N, often achieving reasonable precision at N \approx 1000.
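A minimal numerical sketch (a newsvendor with hypothetical costs; the SAA minimizer is available in closed form as an empirical quantile, which keeps the example short) computes v_N and its normal-approximation confidence interval:
```python
import numpy as np
from scipy.stats import norm

c, p, h = 1.0, 2.0, 0.5                      # order, shortage, holding costs
rng = np.random.default_rng(0)
N = 1000
xi = rng.normal(100.0, 20.0, size=N)         # demand sample

def f(x, xi):
    return c * x + p * np.maximum(xi - x, 0) + h * np.maximum(x - xi, 0)

# SAA minimizer: critical fractile (p - c)/(p + h) of the empirical distribution
x_N = np.quantile(xi, (p - c) / (p + h))
costs = f(x_N, xi)
v_N = costs.mean()
sigma_hat = costs.std(ddof=1)

z = norm.ppf(0.975)                          # 95% two-sided interval
half_width = z * sigma_hat / np.sqrt(N)
print(f"x_N = {x_N:.2f}, v_N = {v_N:.2f} +/- {half_width:.2f}")
```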

Applications

Financial and Portfolio Optimization

Stochastic programming plays a pivotal role in financial and portfolio optimization by modeling uncertainty in asset returns, market conditions, and economic factors through probabilistic scenarios or distributions, enabling robust decision-making under uncertainty. In portfolio contexts, it extends classical mean-variance frameworks to account for dynamic environments where future returns are random, allowing for sequential decisions that adapt to unfolding information. This approach is particularly valuable for institutional investors managing large-scale assets, as it incorporates real-world frictions like transaction costs, liquidity constraints, and regulatory limits. Multistage stochastic programs facilitate dynamic asset allocation by representing investment horizons as a sequence of stages, where decisions such as buying or selling assets are made at each stage based on evolving random returns, while incorporating transaction costs and rebalancing to maintain target exposures. For instance, in asset-liability management, these models optimize portfolios over multiple periods by hedging against interest-rate fluctuations and equity volatility, using scenario trees to approximate return distributions. Rebalancing in such frameworks minimizes deviation from strategic allocations, often solved via decomposition methods to handle the curse of dimensionality in large scenario sets. Risk-averse extensions of portfolio models integrate coherent risk measures like conditional value-at-risk (CVaR) into the objective function to penalize tail risks beyond simple variance, promoting portfolios that limit severe losses under adverse scenarios. CVaR, defined as the expected loss exceeding the value-at-risk threshold, can be optimized linearly within scenario-based programs, allowing investors to balance expected returns against downside exposure while respecting budget and diversification constraints. These measures ensure coherence properties such as subadditivity, making them suitable for multi-asset portfolios where correlations amplify risks. A representative example is the two-stage mean-variance model, where the first-stage decision allocates initial weights to assets under uncertainty, and the second-stage recourse adjusts holdings based on realized returns to minimize variance while targeting an expected return. Scenario-based returns are generated from historical or simulated data, such as daily index constituent prices. Post-2020 developments have integrated environmental, social, and governance (ESG) factors into multistage stochastic frameworks, treating ESG scores as constraints or objectives to align portfolios with sustainability goals amid volatile markets. For high-volatility assets like cryptocurrencies, chance-constrained stochastic programs optimize allocations by enforcing probabilistic bounds on losses, as seen in models for Bitcoin and other cryptocurrencies that incorporate conditional drawdown-at-risk (CDaR) to manage extreme drawdowns, yielding risk-adjusted returns superior to naive diversification. These advancements reflect growing demand for ethical investing under uncertainty.
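The linear CVaR optimization mentioned above follows the standard Rockafellar–Uryasev construction; a minimal cvxpy sketch (hypothetical simulated returns; it assumes cvxpy is installed) minimizes portfolio CVaR subject to a target expected return:
```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
S, n = 500, 4                              # scenarios and assets (hypothetical)
R = rng.normal(0.05, 0.15, size=(S, n))    # simulated scenario returns
alpha, target = 0.95, 0.04

w = cp.Variable(n, nonneg=True)            # portfolio weights (long-only)
z = cp.Variable()                          # value-at-risk auxiliary variable
u = cp.Variable(S, nonneg=True)            # per-scenario excess losses

loss = -R @ w                              # loss is negative portfolio return
constraints = [cp.sum(w) == 1,
               R.mean(axis=0) @ w >= target,
               u >= loss - z]              # u_s = max(loss_s - z, 0) at optimum
cvar = z + cp.sum(u) / ((1 - alpha) * S)
cp.Problem(cp.Minimize(cvar), constraints).solve()
print(w.value.round(3), float(cvar.value))
```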

Supply Chain and Process Systems

Stochastic programming plays a pivotal role in production planning and inventory management within supply chains, particularly through two-stage models that address random demand uncertainties. In these models, first-stage decisions involve determining initial production quantities or order levels without full knowledge of demand realization, while second-stage recourse actions adjust for shortages or excess based on observed scenarios. The objective is typically to minimize the expected total cost, encompassing holding costs for surplus and shortage penalties for unmet demand. For instance, in perishable goods supply chains such as fresh food products, two-stage stochastic programs optimize periodic review policies by generating scenarios for demand variability. Global supply chain network design leverages two-stage programming to enhance resilience against disruptions such as supplier failures or transportation delays. These models employ scenario trees to represent evolving uncertainties over multiple planning periods, enabling decisions on facility locations, capacities, and flows that balance expected costs with risk mitigation. By incorporating probabilistic scenarios for disruption events, firms can design networks that maintain service levels during adverse conditions, as demonstrated in four-echelon supply chains where stochastic formulations reduced expected costs by up to approximately 4% relative to static designs. Decomposition techniques, such as Benders decomposition, are occasionally applied to solve these large-scale problems efficiently. A representative application in process systems is the optimization of chemical production networks under feedstock volatility, where stochastic programming integrates capacity expansion decisions with operational planning. In such models, uncertain parameters are captured via scenario-based formulations, allowing for robust strategies that hedge against fluctuations while minimizing long-term costs. For example, stochastic pooling problems in natural gas operations use two-stage approaches to optimize blending and network design under quality and supply variability. Recent advances in stochastic programming for supply chains emphasize sustainability, particularly incorporating carbon emission uncertainties in network design. Multistage models now account for volatile carbon pricing and regulatory scenarios, optimizing for low-carbon pathways in global networks while balancing economic and environmental objectives. These frameworks have been applied to industrial supply chains, where scenario trees model uncertainties in demand and carbon prices alongside emissions, achieving reductions of over 3 ktons CO2-eq without sacrificing profitability.

Energy and Other Engineering Applications

Stochastic programming plays a crucial role in power generation scheduling, particularly in multistage hydrothermal scheduling, where uncertainties in inflows and electricity demand necessitate adaptive policies to minimize costs and ensure reliable supply. In these models, random inflows to reservoirs and fluctuating loads are represented through scenario trees or sampling, allowing for the coordination of hydroelectric and thermal generation over medium-term horizons, such as monthly planning periods. For instance, a multistage stochastic programming formulation using nested decomposition has been applied to optimize reservoir usage in hydrothermal systems, reducing reliance on expensive thermal generation during shortage periods by efficiently handling up to 350,000 scenarios in simulations spanning 12 months. This approach solves large-scale problems with millions of variables in under 20 minutes on standard hardware, demonstrating significant computational efficiency for real-world utility operations. The integration of renewable energy sources like wind and solar into power systems further leverages stochastic programming through chance-constrained unit commitment formulations, which account for the variability in renewable output while maintaining reliability constraints. These models incorporate probabilistic guarantees, such as ensuring that supply meets demand with a specified probability (e.g., 75% tolerance for wind deviations modeled via Weibull distributions), to schedule thermal units and energy storage alongside intermittent renewables. A mixed-integer linear programming-based stochastic unit commitment problem for wind-battery-thermal systems has shown that incorporating wind generation can reduce daily generation costs by approximately 3.5% (from $548,436 to $527,070 in a base case), while energy storage enables meeting up to 15% higher demand levels that would otherwise be infeasible with renewables alone. Such chance constraints provide a concise mechanism to enforce reliability in energy systems under variability, balancing operational risks without overly conservative deterministic approximations. Beyond energy, stochastic programming addresses uncertainties in other engineering domains, such as biological processes and telecommunications infrastructure. In vaccine production, dynamic stochastic models optimize the composition of influenza vaccines under random yield uncertainties, where production success rates for new strains can vary widely, leading to potential shortages if early decisions favor unproven formulations. The optimal policy derived from these models uses threshold-based rules to decide between retaining established compositions (with lower yield risk but possible mismatch to evolving strains) or updating them, improving overall social welfare by adapting to both yield variability and strain effectiveness risks in historical applications. Similarly, in telecommunications network design, stochastic programs model traffic demand uncertainty across multi-commodity flows, enabling capacity planning that minimizes costs while satisfying probabilistic service levels for links and nodes. These formulations distinguish design, dimensioning, and routing decisions under stochastic loads, providing robust topologies that handle variability from user behavior to network failures. A representative example of stochastic programming in energy applications is the two-stage formulation for electricity market bidding, where first-stage decisions commit bids in day-ahead markets under price uncertainty, and second-stage recourse adjusts for realized scenarios.
In a stochastic bi-level model for virtual energy storage merchants, price and renewable output scenarios (e.g., 7 scenarios including extreme ramps) are used to maximize profits while ensuring feasibility across thousands of sampled outcomes; case studies from Gansu Province, China, in the early 2020s demonstrate that strategic bidding increases merchant profits to over 219,000 yuan while reducing wind curtailment from 6.9% to 4.5%. This approach highlights the practical impact of stochastic methods in enhancing market efficiency and renewable utilization in modern power systems. Recent advances as of 2025 include multi-timescale stochastic programming frameworks for power systems decision-making under uncertainty.

Software Tools

Modeling Languages

Algebraic modeling languages (AMLs) such as GAMS and AMPL provide robust support for formulating stochastic programs by extending deterministic modeling paradigms to handle uncertainty through scenarios and probabilistic elements. In GAMS, the Extended Mathematical Programming (EMP) framework enables uncertainty modeling via annotations that define random variables with discrete or continuous distributions, such as normal or uniform, allowing for automatic scenario generation and evaluation. This includes support for loops over multiple realizations of uncertainty, where the solver processes each scenario independently before aggregating results. Expectation operators, like the ExpectedValue keyword, facilitate the computation of objective functions as weighted averages over scenarios, directly incorporating recourse decisions in multi-stage formulations. AMPL similarly supports stochastic programming through its algebraic syntax, emphasizing sample average approximation (SAA) for handling uncertainty by defining scenario-specific parameters, such as demand realizations drawn from a distribution. Users can loop over scenarios using indexing sets and compute expectations via summation, e.g., averaging profits or costs across sampled outcomes to approximate the stochastic objective. For recourse functions, AMPL allows declarative specification of second-stage variables contingent on first-stage decisions and scenario outcomes, enabling models like two-stage inventory problems with overage and underage costs. Both GAMS and AMPL integrate with external scenario generators for sampling from empirical or parametric distributions, streamlining the transition from deterministic prototypes to stochastic variants without altering core model structure. Python-based tools like Pyomo offer flexible extensions for stochastic programming, particularly through the now-deprecated PySP (Python Stochastic Programming) package or its active successor mpi-sppy (MPI-based Stochastic Programming in Python), which builds on Pyomo's object-oriented algebraic modeling to define multi-stage problems using scenario trees. These extensions allow users to specify stages explicitly, with first-stage decisions independent of uncertainty and subsequent stages adapting via recourse variables indexed by scenarios. Distributions are modeled via scenario probabilities in a tree structure, supporting discrete approximations of continuous uncertainties, while recourse functions are declared declaratively within the base model and extended per scenario. Integration with scenario generators is achieved through data files or Python scripts that populate the tree, enabling compatibility with sampling methods like Latin hypercube for efficient uncertainty representation. A key advantage of these languages is their declarative syntax, which separates model logic from data and solution algorithms, facilitating recourse modeling without explicit enumeration of all scenarios in the formulation. For instance, in Pyomo, a two-stage newsvendor model—where the first stage decides the order quantity under uncertain demand, and the second stage handles sales and salvage—can be expressed as follows, using an extensive form approximation with discrete demand scenarios:
```python
from pyomo.environ import *

model = ConcreteModel()

# First-stage decision: order quantity
model.Q = Var(bounds=(0, None))

# Scenarios for demand (e.g., low=80, medium=100, high=120 with probs 0.3, 0.4, 0.3)
model.S = Set(initialize=['low', 'medium', 'high'])
model.demand = Param(model.S, initialize={'low': 80, 'medium': 100, 'high': 120})
model.prob = Param(model.S, initialize={'low': 0.3, 'medium': 0.4, 'high': 0.3})
model.p = Param(initialize=5)  # selling price
model.c = Param(initialize=3)  # unit cost
model.s = Param(initialize=1)  # salvage value

# Second-stage: sales and leftover inventory per scenario
model.sales = Var(model.S, bounds=(0, None))
model.left = Var(model.S, bounds=(0, None))

# Constraints
def sales_rule(m, s):
    return m.sales[s] <= m.demand[s]
model.sales_con = Constraint(model.S, rule=sales_rule)

def left_rule(m, s):
    return m.left[s] == m.Q - m.sales[s]
model.left_con = Constraint(model.S, rule=left_rule)

# Objective: expected profit
def obj_rule(m):
    return sum(m.prob[s] * (m.p * m.sales[s] - m.c * m.Q + m.s * m.left[s]) for s in m.S)
model.obj = Objective(rule=obj_rule, sense=maximize)

# Solve with a solver like GLPK
solver = SolverFactory('glpk')
results = solver.solve(model)
```
This snippet illustrates the declarative definition of stages and recourse, with the objective maximizing expected profit over the scenarios; it can be extended to decomposition-based solving with extensions like mpi-sppy.

Specialized Solvers and Frameworks

Specialized solvers and frameworks for stochastic programming are designed to exploit the structure of stochastic models, such as scenario trees and multistage decisions, to handle computational challenges that general-purpose solvers face with large-scale deterministic equivalents. These tools implement decomposition methods like progressive hedging, L-shaped algorithms, and nested decompositions, often integrating with modeling languages to generate and solve extensive forms or decomposed subproblems efficiently. mpi-sppy, an open-source extension to the Pyomo optimization modeling package and successor to the deprecated PySP, provides dedicated support for formulating and solving stochastic programs through scenario-based decomposition and progressive hedging algorithms. It allows users to specify a deterministic base model and a scenario tree with uncertain parameters, enabling the generation of extensive forms for standard solvers or the application of progressive hedging—a heuristic that decomposes the problem by scenarios, solves subproblems independently, and iteratively enforces non-anticipativity constraints to approximate solutions for multi-stage mixed-integer problems. mpi-sppy has been applied in various domains, demonstrating scalability for problems where the extensive form exceeds memory limits of general solvers. For mixed-integer stochastic linear programs, the PIPS framework serves as a base layer for implementing branch-and-bound algorithms, extended in tools like PIPS-SBB to solve two-stage stochastic mixed-integer programs via dual-block angular decomposition. PIPS-SBB distributes scenario data across processors for LP relaxations using the parallel LP solver PIPS-S and employs branch-and-cut on scenario subproblems, achieving finite convergence to optimality while exploiting parallelism at multiple levels, such as node processing and subproblem solves. This approach is particularly effective for large-scale stochastic mixed-integer programs where deterministic equivalents are impractical due to size. SCIP extensions enhance its branch-and-cut capabilities for stochastic programs by incorporating Benders decomposition frameworks, as demonstrated in solving stochastic capacitated facility location problems through scenario-based subproblem generation and master problem updates. These extensions allow SCIP to handle mixed-integer stochastic LPs by iteratively refining cuts on the deterministic equivalent, improving performance over naive extensive-form solving for structured uncertainties. Julia's StochasticPrograms.jl offers an efficient, open-source framework for multistage stochastic programming, featuring nested decomposition algorithms that parallelize across stages and scenarios for recourse problems. It supports distributed computing environments, from single machines to clusters, with implementations of L-shaped, progressive hedging, and quasi-gradient methods, emphasizing user-friendly modeling and extensibility for research and industrial applications. Benchmarks show strong scaling, solving large multistage problems in seconds on multicore systems where equivalent serial approaches take minutes. As an example of progressive hedging in practice, consider a multistage stochastic portfolio optimization model for stock investments incorporating conditional value-at-risk (CVaR) constraints under uncertainty in returns, solved on 200 scenarios generated via moment-matching heuristics.
This approach yielded annualized returns of approximately 13% over an eight-year period in the stock market, outperforming buy-and-hold strategies by 1-2%. For such financial problems with many scenarios, progressive hedging reduces solve times significantly compared to generating and solving the extensive form with CPLEX; while CPLEX handles small equivalents quickly (e.g., seconds for 10 scenarios), it becomes memory-intensive and slow (hours or infeasible) for hundreds of scenarios, whereas progressive hedging converges in minutes on decomposed subproblems.
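To make the progressive hedging loop concrete, the following minimal sketch (a toy newsvendor with per-scenario decision copies; the penalty \rho and the bounded scalar subproblem solver are illustrative choices, not any package's API) shows the subproblem-solve and multiplier-update steps:
```python
import numpy as np
from scipy.optimize import minimize_scalar

xis = np.array([80.0, 100.0, 120.0])  # demand scenarios
probs = np.array([0.3, 0.4, 0.3])
c, p, h, rho = 1.0, 2.0, 0.5, 1.0     # costs and penalty parameter

def f(x, xi):
    # scenario cost: ordering plus shortage and holding penalties
    return c * x + p * max(xi - x, 0.0) + h * max(x - xi, 0.0)

x = np.array([100.0] * 3)             # per-scenario copies of the decision
w = np.zeros(3)                       # non-anticipativity multipliers
for it in range(200):
    x_bar = probs @ x                 # hedged (implementable) decision
    w += rho * (x - x_bar)            # multiplier update
    for s, xi in enumerate(xis):
        # scenario subproblem: cost + multiplier term + proximal penalty
        obj = lambda t, xi=xi, ws=w[s]: f(t, xi) + ws * t + rho / 2 * (t - x_bar) ** 2
        x[s] = minimize_scalar(obj, bounds=(0, 200), method='bounded').x
    if np.max(np.abs(x - x_bar)) < 1e-4:  # scenario copies agree -> converged
        break

print(f"converged x ~ {x.mean():.2f} after {it + 1} iterations")
```
Because each scenario subproblem in the loop is independent given the current multipliers and hedged decision, the inner loop is the part that frameworks such as mpi-sppy distribute across processors.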

  25. [25]
    Chance-constrained optimization under limited distributional ...
    Next, we introduce the taxonomy of CCPs. Constraint (1a) is said to be an individual chance constraint for m = 1 , and a joint chance constraint for m > 1 .
  26. [26]
    [PDF] 4 Chance constrained programming
    If r > 1, then we speak about joint chance constraint, whereas r = 1 corresponds to individual chance constraint. For a given x ∈ X, the probability of ξ ...
  27. [27]
    Chance Constrained Programming with Joint Constraints
    This paper considers the mathematical properties of chance constrained programming problems where the restriction is on the joint probability of a multivariate ...Missing: individual | Show results with:individual
  28. [28]
    [PDF] Chance-Constrained Optimization under Limited Distributional ...
    Most of the work in CCP can be seen as single-stage (i.e., static) decision-making problems where the decisions are made here and now, and there are no recourse ...
  29. [29]
    [PDF] Strong Inequalities for Chance–Constrained Programming
    The chance–constrained programs with discrete distributions has many applications, such as probabilistic lot-sizing [6, 32], health care [5, 25], probabilistic.
  30. [30]
    [PDF] Chance Constraints Within Linear Programming by Anna Jenkins ...
    Within linear programming, chance constraints offer a method for solving problems that contain random variables within the objective or constraints.
  31. [31]
    [PDF] Chance Constrained Process Optimization and Control under ...
    It means holding the inequality constraints with a predefined probability level (reliability of being feasible). Page 13. 2. Optimization Problems under ...
  32. [32]
    Convex Approximations of Chance Constrained Programs
    We consider a chance constrained problem, where one seeks to minimize a convex objective over solutions satisfying, with a given close to one probability, ...
  33. [33]
    (PDF) Reservoir operation for hydropower optimization: A chance ...
    Aug 9, 2025 · This paper presents a chance-constrained linear programming formulation for reservoir operation of a multipurpose reservoir.
  34. [34]
  35. [35]
    An Algorithm for Two-Stage Linear Programs with Recourse
    Higle, Suvrajeet Sen, (1991) Stochastic Decomposition: An Algorithm for Two-Stage Linear Programs with Recourse. Mathematics of Operations Research 16(3): ...Missing: Higle Sen 1991
  36. [36]
    [PDF] x - User pages - The University of Iowa
    “L-Shaped linear programs with applications to optimal control and stochastic programming.” SIAM Journal of Applied Mathematics 17: 638-663. L-Shaped (Benders') ...
  37. [37]
    Multistage Stochastic Decomposition - SIAM Publications Library
    In this paper, we propose such a sequential sampling method which is applicable to multistage stochastic linear programs, and we refer to it as the multistage ...
  38. [38]
  39. [39]
    Scenarios and Policy Aggregation in Optimization Under Uncertainty
    ... Progressive Hedging approach with scenario clustering. 2 September 2025 ... Wets, (1991) Scenarios and Policy Aggregation in Optimization Under Uncertainty.Missing: URL | Show results with:URL
  40. [40]
    [PDF] 1 The L-Shaped Algorithm
    The L-shaped algorithm approximates the non-linear term in the objective, named for its L-shaped structure, and uses outer linearization to approximate Q.Missing: seminal paper
  41. [41]
    Monte Carlo Sampling Methods - ScienceDirect.com
    In this chapter we discuss Monte Carlo sampling methods for solving large scale stochastic programming problems. We concentrate on the “exterior” approach ...
  42. [42]
    The Sample Average Approximation Method for Stochastic Discrete ...
    The basic idea of such methods is that a random sample is generated and the expected value function is approximated by the corresponding sample average function ...
  43. [43]
    The Sample Average Approximation Method Applied to Stochastic ...
    The sample average approximation (SAA) method is an approach for solving stochastic optimization problems by using Monte Carlo simulation.
  44. [44]
    Importance Sampling in Stochastic Programming: A Markov Chain ...
    Apr 24, 2015 · In this work, we introduce an importance sampling framework for stochastic programming that can produce accurate estimates of the recourse ...
  45. [45]
  46. [46]
    Quasi-Monte Carlo methods for linear two-stage stochastic ...
    Mar 28, 2015 · Quasi-Monte Carlo (QMC) algorithms are studied for generating scenarios to solve two-stage linear stochastic programming problems.
  47. [47]
    [PDF] The Impact of Sampling Methods on Bias and Variance in Stochastic ...
    We find that both sampling methods are effective at reducing mean squared error, with Latin Hypercube sampling outperforming antithetic variates. Whether the ...
  48. [48]
    A Heuristic for Moment-Matching Scenario Generation
    We present an algorithm that produces a discrete joint distribution consistent with specified values of the first four marginal moments and correlations.
  49. [49]
    [PDF] A Guide to Sample-Average Approximation - Cornell University
    Oct 12, 2011 · Abstract. We provide a review of the principle of sample-average approximation (SAA) for solving simulation- optimization problems.
  50. [50]
    Application of the scenario aggregation approach to a two-stage ...
    We explore the new possibilities made available by the scenario aggregation technique recently proposed by Rockafeller and Wets in work at the International ...
  51. [51]
    Dynamic stochastic programmingfor asset-liability management
    Multistage stochastic programming - in contrast to stochastic control - has found wideapplication in the formulation and solution of financial problems ...
  52. [52]
    Fifty years of portfolio optimization - ScienceDirect
    In this paper, we give a historically grounded overview of portfolio optimization which, as a field within operational research with roots in finance, is vast.
  53. [53]
    [PDF] Dynamic Asset Allocation by Stochastic Programming Methods.
    Within ALM models, we look at the limited setting of a multi-stage asset allo- cation problem and restrict ourselves to a multi-stage stochastic programming ...
  54. [54]
    Multistage stochastic portfolio optimisation in deregulated electricity ...
    In this paper, we present a multistage mean–variance model for the management of a hedging portfolio of electricity derivatives from the viewpoint of a price- ...
  55. [55]
    [PDF] A hybrid combinatorial approach to a two-stage stochastic portfolio ...
    In this work, we propose a two-stage stochastic portfolio optimization model with a comprehensive set of real-world trading constraints to address this issue.
  56. [56]
    [PDF] Optimal Portfolio Allocation with ESG Dynamics - Tilburg University
    This thesis develops a stochastic portfolio optimization model that integrates ESG preferences using a Hamilton-Jacobi-. Bellman framework. The model seeks to ...
  57. [57]
    [PDF] Chance-constrained programming for Cryptocurrency portfolio ...
    This paper utilizes a new risk measurement approach called. Conditional Drawdown at Risk (CDaR) in constructing portfolios within high-risk financial markets.
  58. [58]
  59. [59]
    Two-Stage Stochastic Program for Supply Chain Network Design ...
    In this paper, this paper proposed a two-stage stochastic programming model for a four-echelon global supply chain network design problem considering possible ...
  60. [60]
  61. [61]
    Sustainable supply chain design under correlated uncertainty in ...
    Aug 15, 2023 · This paper aims to provide an improvement in the modeling of supply chain designs by incorporating correlated uncertainty among multiple parameters.
  62. [62]
    [PDF] Medium Term Production Scheduling of a Hydro-thermal System ...
    We present a multistage stochastic programming formulation for monthly production plan- ning of a hydro-thermal system. The formulation explicitly considers ...
  63. [63]
    Stochastic Unit Commitment Problem, Incorporating Wind Power ...
    This paper presents a modified formulation for the wind-battery-thermal unit commitment problem that combines battery energy storage systems with thermal units.
  64. [64]
    The Optimal Composition of Influenza Vaccines Subject to Random ...
    Oct 2, 2009 · We derive an optimal dynamic policy for this decision. Because of the greater uncertainty in production yields of new vaccines, the optimal ...
  65. [65]
    The role of stochastic programming in communication network design
    This paper introduces the stochastic programming (SP) methodology to characterize the stochastic traffic. A multi-commodity network model is proposed.
  66. [66]
  67. [67]
    Stochastic Programming - GAMS
    Stochastic programs are mathematical programs that involve data that is not known with certainty. Deterministic programs are formulated with fixed parameters.
  68. [68]
    [PDF] Stochastic Programming in - GAMS
    1217 Potomac Street, NW. Washington, DC 20007. USA. Phone: +1 202 342 0180. Fax: +1 202 342 0181 sales@gams.com support@gams.com http://www.gams.com.
  69. [69]
    Solving simple stochastic optimization problems with AMPL
    Solving simple stochastic optimization problems with AMPL# · Manual solution · Maximize expected returns · Maximize worst case · Maximize worst alpha percentile.
  70. [70]
    2 stage stochastic problems - Support - AMPL Discourse
    Feb 7, 2024 · Exercise 4-5 in the AMPL book describes how to formulate a two-stage stochastic program with recourse. It is easy to work through, and when you ...
  71. [71]
    [PDF] PySP: Modeling and Solving Stochastic Programs in Python
    Sep 6, 2010 · Pyomo is written in the Python high-level programming language [Python, 2010a], which possesses several features that enable the development of ...
  72. [72]
    Stochastic Programming in Pyomo — Pyomo 6.8.0 documentation
    There are two extensions for modeling and solving Stochastic Programs in Pyomo. Both are currently distributed as independent Python packages.Missing: newsvendor | Show results with:newsvendor
  73. [73]
    PySP: modeling and solving stochastic programs in Python
    Mar 7, 2012 · PySP has been used by a number of research groups, including our own, to rapidly prototype and solve difficult stochastic programming problems.
  74. [74]
    Pyomo/pysp - Stochastic Programming in Python - GitHub
    Sep 4, 2024 · PySP is an extension to the Pyomo optimization modeling package for formulating and solving stochastic programming optimization problems.
  75. [75]
    coin-or/CHiPPS-ALPS: This is the Abstract Library for ... - GitHub
    Alps is a framework for implementing parallel graph search algorithms. Its methodology generalizes many of the notions of an LP-based branch-and-bound ...
  76. [76]
    multi-level parallelism for stochastic mixed-integer programs ...
    In this paper, we present two different parallelizations of Branch & Bound (B&B), implementing both as extensions of PIPS-SBB, thus adding an additional layer ...
  77. [77]
    [PDF] Parallel Solvers for Mixed Integer Linear Optimization
    May 4, 2017 · was introduced and implemented in PIPS-SBB [64], a distributed-memory parallel solver for. Stochastic Mixed Integer Programs (SMIPs).<|separator|>
  78. [78]
    Stochastic Capacitated Facility Location Problem - SCIP
    This is an example of using the Benders' decomposition framework of SCIP to solve the stochastic capacitated facility location problem (abbreviated to SCFLP).Missing: ALPS | Show results with:ALPS
  79. [79]
    Efficient Stochastic Programming in Julia - PubsOnLine
    Mar 1, 2022 · This paper presents StochasticPrograms.jl, an open-source framework for stochastic programming implemented in the Julia programming language.
  80. [80]
    Home · StochasticPrograms.jl
    StochasticPrograms.jl is a general purpose modeling framework for stochastic programming. The framework includes both modeling tools and structure-exploiting ...Summary · Citing
  81. [81]
    [PDF] Stock Investment with Stochastic Programming∗
    stock portfolio management. In multistage ... For solving the stochastic programming, we utilize the software python-based stochastic programming (PySP).