
Minimax

Minimax is a decision rule employed in game theory, decision theory, statistics, and optimization to minimize the maximum potential loss—or equivalently, maximize the minimum gain—in situations characterized by adversarial opponents or uncertainty. The foundational minimax theorem, proved by John von Neumann in 1928, asserts that in any finite two-player zero-sum game, the maximin value (the highest payoff the row player can guarantee regardless of the column player's choice) equals the minimax value (the lowest payoff the column player can force), ensuring the existence of a game value v achievable through optimal mixed strategies. This result, derived using fixed-point theorems, underpins rational equilibrium in such games and extends to broader contexts via linear programming duality. In artificial intelligence, the minimax algorithm recursively explores game trees to select moves that maximize utility under the assumption of perfect opponent play, powering early successes in games like checkers and tic-tac-toe, though computational limits necessitate enhancements such as alpha-beta pruning for deeper searches in complex domains like chess. In statistical decision theory, minimax criteria, formalized by Abraham Wald, prioritize estimators or rules that bound the worst-case risk across parameter spaces, favoring robustness over average performance in ill-specified models. These applications highlight minimax's enduring role in fostering conservative, worst-case-resilient strategies devoid of probabilistic assumptions.

History and Mathematical Foundations

Origins in Game Theory


Émile Borel initiated systematic study of strategic interactions in two-person games through a series of papers published between 1921 and 1927, focusing on deterministic and probabilistic elements in games like poker. He introduced the modern concept of mixed strategies, where players randomize actions to obscure intentions, and calculated explicit minimax solutions—strategies maximizing the minimum guaranteed payoff—for small finite games, such as those involving three or five pure strategies per player. Borel asserted that in symmetric three-by-three games, the minimax value could be achieved, anticipating the general theorem, though he expressed skepticism about its universality and did not provide a complete proof, partly due to counterexamples he constructed for non-symmetric cases.
John von Neumann advanced Borel's ideas decisively in his 1928 paper "Zur Theorie der Gesellschaftsspiele," proving the minimax theorem for all finite two-person zero-sum games: the maximum over a player's strategies of the minimum payoff against the opponent's best response equals the minimum over the opponent's strategies of the maximum payoff the player can secure. This equivalence, \underline{v} = \overline{v}, guarantees the existence of optimal mixed strategies yielding a game value independent of the opponent's play, resolving adversarial decision-making under perfect information and complete rationality. Von Neumann's proof relied on algebraic topology via the Brouwer fixed-point theorem but emphasized combinatorial arguments suitable for finite strategy sets, distinguishing it from Borel's heuristic approaches. The minimax principle thus emerged as a tool for modeling real-world conflicts, such as bluffing in poker or maneuvering in military engagements, where agents causally anticipate opponents' countermeasures to secure robust outcomes without relying on chance events external to strategy. Borel's poker analyses highlighted randomization's role in preventing exploitation, while von Neumann's theorem formalized this for broader economic and tactical domains, privileging guaranteed minima over optimistic maxima in zero-sum settings.

Von Neumann's Minimax Theorem

Von Neumann's minimax theorem asserts that in any finite two-player zero-sum game, the maximin value—defined as the maximum over the row player's mixed strategies of the minimum expected payoff against the column player's best response—equals the minimax value, which is the minimum over the column player's mixed strategies of the maximum expected payoff against the row player's best response. This common value, denoted as the game's value v, is achieved by optimal mixed strategies for both players, ensuring that neither can improve their expected payoff unilaterally. The theorem applies to games represented by an m \times n payoff matrix A, where the row player seeks to maximize payoff and the column player to minimize it, with payoffs summing to zero. The theorem was first proved by John von Neumann in his 1928 paper "Zur Theorie der Gesellschaftsspiele," published in Mathematische Annalen. Von Neumann's original proof established the inequality \underline{v} \leq \overline{v} for pure strategies, where \underline{v} is the maximin and \overline{v} the minimax, then extended it to mixed strategies by demonstrating that randomization allows players to achieve indifference in the opponent's payoffs, equalizing the bounds through a combinatorial argument involving convex hulls of payoff distributions. This result was later translated into English and elaborated upon in von Neumann and Oskar Morgenstern's 1944 book Theory of Games and Economic Behavior, which formalized its role in game-theoretic foundations. Subsequent interpretations linked the proof to linear programming duality: the row player's maximization problem \max_p \min_q p^T A q (with p, q probability vectors) dualizes to the column player's minimization \min_q \max_p p^T A q, and the strong duality theorem ensures equality of optima. Though von Neumann's work predated explicit linear programming formulations, the equivalence underscores the theorem's alignment with optimization principles. From first principles, the theorem implies that adversarial interactions admit determinate outcomes under rational play, as the coincidence of security levels (maximin for the maximizer, minimax for the minimizer) derives from the zero-sum structure's inherent opposition, independent of informational asymmetries or sequential moves. This causal realism manifests in solved games, such as rock-paper-scissors, where uniform mixed strategies attain the game value of zero, empirically confirming attainment without divergence or instability claims. The result privileges empirical verifiability over interpretive overlays, as computational resolution of payoff matrices routinely yields the equated value, refuting assertions of theoretical impracticality in finite settings.
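
The duality view translates directly into computation: a zero-sum matrix game can be solved as a linear program. Below is a minimal sketch, assuming SciPy is available, that solves rock-paper-scissors for the row player's optimal mixed strategy and the game value; the payoff matrix and variable names are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

# Rock-paper-scissors payoffs from the row player's perspective (illustrative).
A = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]])
m, n = A.shape

# Variables: the row mixed strategy p (length m) plus the game value v.
# linprog minimizes, so maximize v by minimizing -v.
c = [0.0] * m + [-1.0]
# For every opposing column j: sum_i p_i A[i, j] >= v,  i.e.  -A^T p + v <= 0.
A_ub = np.hstack([-A.T, np.ones((n, 1))])
b_ub = np.zeros(n)
# The strategy must be a probability distribution.
A_eq = [[1.0] * m + [0.0]]
b_eq = [1.0]
bounds = [(0, 1)] * m + [(None, None)]  # v may take either sign

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
p, v = res.x[:m], res.x[-1]
print(np.round(p, 3), round(v, 6))  # -> [0.333 0.333 0.333] and value 0.0
```

The column player's program is the LP dual, and strong duality is exactly the statement \underline{v} = \overline{v} for the mixed extension.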

Early Algorithmic Developments

The transition of minimax from a theoretical construct to a computational procedure occurred in the context of early computer game-playing programs during the mid-20th century. In March 1950, Claude Shannon detailed this application in his paper "Programming a Computer for Playing Chess," published in Philosophical Magazine. Shannon proposed using minimax to traverse the game tree by recursively evaluating positions from the perspective of both players, assuming perfect play, but truncated the search depth due to exponential complexity and approximated terminal values with evaluation functions assessing material, mobility, and positional factors. This framework addressed the infeasibility of exhaustive search in games like chess, where the game tree grows to millions of positions within a few plies, far beyond 1950s hardware such as vacuum-tube computers with kilobyte-scale memory. Building on Shannon's ideas, Arthur Samuel adapted minimax for practical implementation in checkers starting around 1952 on the IBM 701, one of the first commercial stored-program computers, with a few thousand words of main memory and magnetic drum storage. Samuel's program employed full-width search to a fixed depth, typically 4-6 plies, followed by selective extensions and alpha-beta pruning precursors to manage the enormous space of possible game states, proving feasible despite processing speeds of roughly 10,000 operations per second. By 1959, Samuel formalized these techniques in "Some Studies in Machine Learning Using the Game of Checkers," integrating minimax with self-play learning to iteratively refine evaluation parameters, enabling the program to defeat amateur human players and foreshadowing adaptive computation under uncertainty. These efforts highlighted minimax's viability for mechanized adversarial decision-making, though constrained by hardware to simplified domains and shallow searches.

Core Principles in Game Theory

Application to Zero-Sum Games

In two-player zero-sum games, where the payoff to one player equals the negative of the payoff to the other, the minimax approach prescribes that the maximizing player selects actions to maximize the minimum payoff they can guarantee, while the minimizing player selects to minimize the maximum payoff conceded. This reciprocal assumption models perfect antagonism, yielding strategies that secure the game's value against an optimal opponent. For sequential zero-sum games with perfect information, minimax evaluates the game tree recursively from terminal states upward. Terminal utilities are defined objectively as +1 for a win by the maximizer, 0 for a draw, and -1 for a loss, reflecting verifiable outcomes without probabilistic elements. At maximizing nodes, the value is the maximum over child nodes' values; at minimizing nodes, the minimum. This alternating maximization and minimization propagates backward, identifying moves that optimize the worst-case scenario at each depth. The resulting tree value equals the game's minimax value, which both players can force: the maximizer achieves at least this amount, and the minimizer concedes no more. In symmetric perfect-information zero-sum games, such as tic-tac-toe on a 3x3 board, mutual adherence to minimax yields a draw (value 0), as the first player's advantage is neutralized by optimal responses, with the full game tree comprising 255,168 possible games before symmetry reduction. This outcome underscores minimax's role in establishing forced equilibria through exhaustive adversarial reasoning.

Distinction from Maximin Strategies

The minimax strategy in two-player zero-sum games prescribes that the maximizing player selects an action a_i to achieve \underline{v_i} = \max_{a_i} \min_{a_{-i}} v_i(a_i, a_{-i}), where v_i denotes the player's payoff and a_{-i} ranges over the minimizing opponent's feasible actions, assuming the opponent rationally seeks to minimize this value. In contrast, the maximin strategy in decision-making under uncertainty directs the decision-maker to choose an action that maximizes the minimum payoff over a set of possible states of nature, \max_a \min_s u(a, s), where u is the utility function and s represents non-strategic outcomes without assuming agency or adversarial selection by those states. This distinction hinges on opponent agency: minimax incorporates an active opponent's strategic response, whereas maximin treats uncertainty as passive, akin to fixed scenarios rather than contingent counterplay. In finite zero-sum games, von Neumann's minimax theorem establishes equivalence between the maximin value \underline{v_i} and the minimax value \overline{v_i} = \min_{a_{-i}} \max_{a_i} v_i(a_i, a_{-i}), yielding \underline{v_i} = \overline{v_i} and guaranteeing the existence of optimal mixed strategies that secure this value regardless of the opponent's play. This saddle-point property holds precisely because payoffs are strictly opposed (v_{-i} = -v_i), enforcing adversarial minimization; without zero-sum structure or opponent rationality, maximin may overestimate security by ignoring responsive minimization, as seen in empirical strategic interactions like poker where bluffs exploit non-worst-case assumptions. Popular expositions often conflate the terms, applying maximin logic to adversarial contexts and yielding suboptimal predictions, such as in non-zero-sum settings where unilateral worst-case maximization neglects mutual incentives. Empirical analyses, including chess endgames or auction bids, underscore that opponent agency causally amplifies effective minimization beyond static worst cases, privileging minimax for robust computation over maximin's conservative but non-interactive pessimism. This oversight in non-academic accounts stems from overlooking the theorem's zero-sum precondition, leading to misapplications in broader decision frameworks.

Role in Equilibrium Analysis

In two-player zero-sum games represented by payoff matrices, minimax strategies identify pure-strategy equilibria at saddle points, where the maximin value for the row player equals the minimax value for the column player, rendering unilateral deviations unprofitable for either participant. A saddle point occurs precisely when \underline{v} = \max_{a_i} \min_{a_{-i}} v_i(a_i, a_{-i}) = \min_{a_{-i}} \max_{a_i} v_i(a_i, a_{-i}) = \overline{v}, ensuring the selected actions form mutual best responses without requiring probabilistic mixing. This stability holds empirically in solved matrix games, as deviations by one player allow the opponent to enforce at least the equilibrium value through optimal counterplay. Von Neumann's 1928 minimax theorem extends this to cases lacking pure-strategy saddle points by proving the existence of mixed strategies—probability distributions over actions—such that the game's value remains \underline{v} = \overline{v}, with these strategies constituting a Nash equilibrium. In matrix form, the row player's mixed strategy p satisfies p^T M q \geq v for all opponent pure strategies q, while the column player's q ensures p^T M q \leq v for all p, verifying best-response invariance. This equivalence holds uniquely for zero-sum settings, where all equilibria yield the same value v, distinguishing minimax from broader equilibrium concepts in non-zero-sum games. From first principles, such equilibria emerge as fixed points of rational anticipation: each player's strategy minimizes vulnerability to the opponent's maximization of harm, converging to verifiable stability testable via exhaustive enumeration in finite matrix games or linear programming duality. In variants of coordination dilemmas adapted to zero-sum frameworks, minimax confirms equilibria absent in the cooperative originals, underscoring its role in isolating causal incentives without exogenous probabilities.
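
A small worked example (an illustrative payoff matrix, not drawn from any cited source) makes the saddle-point condition concrete:

```latex
A = \begin{pmatrix} 3 & 2 \\ 1 & 0 \end{pmatrix},
\qquad
\underline{v} = \max_i \min_j a_{ij} = \max(2,\, 0) = 2,
\qquad
\overline{v} = \min_j \max_i a_{ij} = \min(3,\, 2) = 2.
```

Since \underline{v} = \overline{v} = 2, the entry at row 1, column 2 is a saddle point: deviating to row 2 against column 2 drops the row player to 0, while deviating to column 1 against row 1 forces the column player to concede 3.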

The Minimax Algorithm

Structure in Combinatorial Games

In combinatorial games such as chess or checkers, the minimax algorithm structures decision-making around a game tree, where each node denotes a unique board position and directed edges represent legal moves available to the current player. The root node corresponds to the initial position, with branches expanding to successor positions; levels alternate between maximization nodes, where the algorithm selects the move yielding the highest value for the maximizing player, and minimization nodes, where it selects the lowest value assuming optimal opposition. This alternating structure encodes the adversarial nature of zero-sum games, propagating values bottom-up from leaves to the root via recursive computation. Full exhaustive search to terminal positions proves infeasible due to the game's combinatorial explosion, prompting depth-limited traversal to a fixed horizon, beyond which non-terminal leaves receive values from a static evaluation function approximating positional advantage. In chess, such functions typically quantify material imbalance (e.g., a pawn at 1 unit, a queen at 9), positional factors like king safety, and mobility, though exact implementations vary by engine. Terminal nodes at shallower depths assign definitive scores, such as +1 for a maximizer win, -1 for a loss, and 0 for a draw, ensuring the backed-up value reflects the worst-case outcome under perfect play within the searched subtree. The tree's exponential expansion, driven by the average branching factor—the mean number of legal moves per position—renders deep searches computationally prohibitive without heuristics or pruning. Chess exhibits an average branching factor of approximately 35, implying roughly 35^d nodes must be evaluated at depth d, a scale that exceeds practical limits beyond depth 10-12 on standard hardware absent optimizations. This quantification, rooted in empirical analysis of move generation across vast position databases, underscores minimax's reliance on bounded exploration for real-time applicability in complex combinatorial domains.
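
For a sense of scale, a quick calculation with b = 35 (figures rounded):

```latex
35^{6} \approx 1.8 \times 10^{9}, \qquad
35^{8} \approx 2.3 \times 10^{12}, \qquad
35^{10} \approx 2.8 \times 10^{15},
```

so each two additional plies multiply the work by roughly three orders of magnitude, which is why depth 10-12 marks a practical ceiling for full-width search.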

Pseudocode and Basic Implementation

The basic minimax algorithm employs a recursive procedure to compute the optimal value for a decision node in a game tree, alternating between maximization for the current player and minimization for the opponent. At each maximizer node, the algorithm selects the maximum value among the minimax values of its successors; at minimizer nodes, it selects the minimum. Terminal states, where the game ends (e.g., win, loss, or draw), return fixed values, such as +1 for a win by the maximizer, -1 for a loss, and 0 for a draw, determined by the game's rules. To mitigate horizon effects—where non-terminal states near the search depth appear deceptively favorable due to unexamined future moves—basic implementations often incorporate a quiescence check in the base case, continuing evaluation only for "quiet" positions (e.g., no ongoing threats like captures) or applying a static evaluation function for non-terminal leaves. The recursion assumes perfect information and deterministic outcomes, propagating values bottom-up from leaves to the root.
```pseudocode
function minimax(node, depth, isMaximizingPlayer) -> value:
    if isTerminal(node):
        return utility(node)       // fixed value for true terminals (e.g., +1 win, -1 loss, 0 draw)
    if depth <= 0 and isQuiescent(node):   // optional quiescence check: cut off only at quiet positions
        return evaluate(node)      // static heuristic for non-terminal quiet leaves

    if isMaximizingPlayer:
        maxValue = -∞
        for each child in successors(node):
            value = minimax(child, depth - 1, false)
            maxValue = max(maxValue, value)
        return maxValue
    else:
        minValue = +∞
        for each child in successors(node):
            value = minimax(child, depth - 1, true)
            minValue = min(minValue, value)
        return minValue
```
This returns the value; to select an action, the root call iterates over possible moves, invoking minimax on each successor and choosing the one yielding the optimal value. The time complexity is O(b^d), where b is the average branching factor (number of legal moves per state) and d is the search depth, as it fully expands the game tree, with empirical verification in small games like tic-tac-toe (b ≈ 5-9 early, d = 9) confirming exponential growth that renders deeper searches infeasible without optimizations. Space complexity is O(d) for the recursion stack in depth-first traversal.

Example in Simple Games

In tic-tac-toe, players alternate marking spaces on a 3×3 grid, with the first player (maximizer) using X and the second (minimizer) using O; a player wins by completing three marks in a row, column, or diagonal, and the game draws if all spaces fill without a winner. The minimax algorithm constructs a game tree from the initial empty board, evaluating terminal leaves as +1 for an X win, -1 for an O win, and 0 for a draw. Evaluation proceeds depth-first: at each maximizer node, the highest value among possible moves (child nodes) is chosen, assuming the minimizer will respond to force the lowest subsequent value; minimizer nodes select the lowest value among children. To determine the optimal first move, minimax traverses the tree starting from the root (empty board). For instance, placing X in the center yields subtrees where O's best responses (e.g., corner placements) lead to positions evaluable further; backward induction reveals this branch's value as 0. Similar traversals for edge or corner first moves also yield 0, confirming no first-player win is possible against optimal opposition. The game value of 0 demonstrates that perfect play forces a draw, as neither player can force a positive value for themselves. The game's maximum depth of 9 plies and average branching factor of around 4–5 result in roughly 10^5 nodes in the unpruned tree, permitting hand calculation via systematic enumeration and symmetry reduction, though the exponential scaling (O(b^d) where b is the branching factor and d is the depth) renders it impractical for deeper games without optimizations like alpha-beta pruning. This example highlights minimax's recursive nature in perfect-information, zero-sum settings, where exhaustive search guarantees the optimal strategy despite combinatorial explosion in larger instances.
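
The draw value is easy to verify by direct computation. The following is a minimal Python sketch; the board encoding, helper names such as `winner`, and the use of memoization in place of explicit symmetry reduction are illustrative choices, not a canonical implementation:

```python
from functools import lru_cache

# The eight winning lines on a 3x3 board indexed 0..8 (row-major).
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, x_to_move):
    """Minimax value for X under perfect play: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1
    if ' ' not in board:
        return 0  # board full: draw
    mark = 'X' if x_to_move else 'O'
    children = [value(board[:i] + mark + board[i + 1:], not x_to_move)
                for i, s in enumerate(board) if s == ' ']
    return max(children) if x_to_move else min(children)

print(value(' ' * 9, True))  # -> 0: perfect play from the empty board is a draw
```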

Decision-Making Under Uncertainty

Minimax Criterion in Individual Choices

In decision-making under uncertainty without assigned probabilities to states of nature, the minimax criterion guides a single agent to choose the action that maximizes the minimum attainable payoff across all possible states or, dually, minimizes the maximum possible loss, modeling nature as selecting the state to minimize the agent's payoff. This worst-case orientation ensures robustness by safeguarding against adversarial outcomes rather than averaging over assumed likelihoods. Abraham Wald introduced a foundational formulation of the minimax criterion in his 1950 book Statistical Decision Functions, where it serves to select decision rules minimizing the supremum of the risk over parameter spaces, emphasizing empirical resilience in the face of unknown distributions. Wald's approach, developed further from his 1945 paper on statistical decision functions minimizing maximum risk, treats uncertainty causally as potential states without probabilistic weighting, prioritizing decisions that perform adequately in the least favorable conditions. In practical settings like inventory planning, the minimax criterion—often applied via minimizing maximum regret—directs firms to set order quantities that limit the largest deviation in costs from the ex-post optimal under varying demand scenarios. For example, with discrete demand possibilities and asymmetric costs for excess or shortage, the agent computes regrets as the excess cost relative to the state-specific best action, then selects the quantity yielding the smallest such maximum regret, thereby hedging against extreme demand realizations without distributional assumptions. This method suits environments prone to rare but severe disruptions, as it enforces preparedness for causal extremes over expected-value optimizations.
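
A minimal sketch of the regret calculation, with hypothetical order quantities, demand states, and unit costs chosen purely for illustration:

```python
# Order quantities (actions) and demand scenarios (states), all hypothetical.
quantities = [100, 150, 200]
demands = [100, 150, 200]
OVERAGE, UNDERAGE = 2.0, 5.0  # illustrative unit costs of excess and shortage

def cost(q, d):
    """Total mismatch cost for ordering q when demand turns out to be d."""
    return OVERAGE * max(q - d, 0) + UNDERAGE * max(d - q, 0)

# Regret of action q in state d: cost(q, d) minus the best achievable cost in d.
best_in_state = {d: min(cost(q, d) for q in quantities) for d in demands}
max_regret = {q: max(cost(q, d) - best_in_state[d] for d in demands)
              for q in quantities}

# Minimax-regret choice: the quantity whose worst-case regret is smallest.
choice = min(max_regret, key=max_regret.get)
print(max_regret, choice)  # -> {100: 500.0, 150: 250.0, 200: 200.0} 200
```

With these asymmetric costs (shortage costing more per unit than excess), the minimax-regret choice is the largest order quantity, illustrating how the criterion hedges toward the costlier extreme.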

Integration with Statistical Decision Theory

In statistical decision theory, the minimax criterion addresses decision-making under uncertainty by selecting a rule that minimizes the maximum possible risk over an uncertainty set, such as a class of parameter values or prior distributions. Abraham Wald formalized this approach in his foundational work beginning with a 1939 paper and culminating in his 1950 book Statistical Decision Functions, where he proved the existence of minimax solutions under relatively weak conditions, including convexity of the risk set. A minimax decision rule δ* satisfies sup_θ R(θ, δ*) ≤ sup_θ R(θ, δ) for any other rule δ, where R(θ, δ) denotes the risk function, typically the expected loss E_θ [L(θ, δ(X))]. Equivalently, it minimizes the maximum Bayes risk over a class Γ of priors π, i.e., inf_δ sup_π ∫ R(θ, δ) dπ(θ), providing a robust safeguard against adversarial or unknown distributional assumptions. This framework yields specific minimax estimators and tests in parametric problems. For estimating a location parameter θ in i.i.d. samples from a location family f(x - θ) under absolute error loss |δ - θ|, the sample median is minimax when the density f satisfies certain symmetry or tail conditions, such as the double-exponential (Laplace) density, as it equalizes risks across worst-case shifts. In hypothesis testing, minimax rules often coincide with uniformly most powerful (UMP) tests in one-sided problems within exponential families; for instance, the UMP one-sided test for H_0: θ ≤ 0 vs. H_1: θ > 0 minimizes the maximum type II error risk under size α constraints, leveraging the Neyman-Pearson lemma's optimality extended to composite hypotheses. These solutions are invariant under group transformations, ensuring admissibility in finite samples. Despite their robustness, minimax rules face critiques for excessive conservatism, prioritizing worst-case scenarios that may rarely occur, potentially yielding suboptimal average performance compared to Bayes rules tuned to plausible priors. Empirical evaluations in finite-sample settings, such as randomized tests, confirm their value in controlling error rates under distributional ambiguity, though simulations show they can inflate sample sizes by 20-50% relative to empirical Bayes alternatives in low-adversity regimes. This trade-off underscores minimax's role in high-stakes applications where verifiable worst-case guarantees outweigh average-case efficiency.
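
A standard textbook illustration of a constant-risk (and hence minimax) rule, offered here as a supplementary example rather than one drawn from the passage above: for X \sim \mathrm{Binomial}(n, p) under squared-error loss, the Bayes estimator for a \mathrm{Beta}(\sqrt{n}/2, \sqrt{n}/2) prior is

```latex
\delta^*(X) = \frac{X + \sqrt{n}/2}{n + \sqrt{n}},
\qquad
R(p, \delta^*) = \frac{1}{4\,(1 + \sqrt{n})^{2}} \quad \text{for all } p \in [0, 1].
```

Because its risk is constant in p, no competing rule can have a smaller maximum risk without also beating its Bayes risk, which is impossible for a Bayes rule; hence δ* is minimax.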

Non-Probabilistic Frameworks

In decision-making under complete uncertainty, where no probabilities can be reliably assigned to states of nature, the non-probabilistic minimax criterion prescribes selecting the action that maximizes the minimum attainable payoff across all possible outcomes. This approach treats nature as an adversarial entity, prioritizing robustness against the worst case without invoking subjective probabilities or expected values. Unlike probabilistic frameworks, it derives from first-principles considerations of guaranteed outcomes, avoiding assumptions about event likelihoods that may prove unverifiable or misleading in novel situations. Savage's axiomatization of subjective expected utility, while influential, has faced critiques for embedding probabilistic coherence through axioms like the sure-thing principle, which impose such coherence even amid deep uncertainty. These foundations can lead to decisions overly sensitive to elicited probabilities that fail to capture true ambiguity, favoring instead normalized utilities that undervalue extreme tails. Non-probabilistic minimax sidesteps this by eschewing such axioms, directly targeting undiluted worst-case safeguards, which align better with causal realism in environments where probabilistic models collapse under sparse data. In control theory, non-probabilistic minimax principles underpin H-infinity (H∞) methods, introduced by George Zames in 1981, which optimize controllers to minimize the maximum deviation from desired performance against bounded disturbances. These techniques frame controller design as a two-player game between the system and worst-case perturbations, ensuring stability margins without statistical assumptions on disturbances. Empirical validations in servo systems demonstrate H∞ controllers outperforming traditional designs under unmodeled dynamics, as seen in electromechanical applications where adversarial modeling yields verifiable robustness gains. Expected utility's empirical shortcomings in tail events further underscore minimax's appeal; laboratory experiments reveal systematic violations of linearity in probabilities for low-probability, high-impact outcomes, with subjects exhibiting inverse-S shaped probability weighting that EU cannot reconcile without adjustments. Real-world failures, such as underestimation of fat-tailed risks in financial crises, highlight how EU's reliance on historical frequencies falters against extreme events, where minimax's worst-case focus has preserved solvency in stress-tested portfolios by design.

Extensions and Variants

Handling Repeated and Stochastic Games

In repeated zero-sum games, minimax strategies extend to multi-stage settings by evaluating payoffs over histories, with optimal play computed via dynamic programming or backward induction for finite horizons. For infinitely repeated discounted zero-sum games, the per-period value equals the single-stage minimax value, and stationary Markov strategies achieve it, as non-stationary deviations do not improve the guaranteed payoff due to the zero-sum structure. This preserves the core minimax principle of securing the worst-case outcome against adversarial responses across stages, though computing history-dependent equilibria requires solving extensive-form representations equivalent to the full game tree. For stochastic games incorporating chance moves, such as dice rolls or card draws, the expectiminimax algorithm adapts by introducing chance nodes where values are computed as probability-weighted averages of successor states rather than pure min or max. At max nodes, the algorithm selects the action maximizing the expected value; at min nodes, it minimizes it; and at chance nodes, it sums probabilities times recursive values. This yields an optimal strategy in expectation for perfect-information zero-sum stochastic games, generalizing the minimax principle to probabilistic environments. The time complexity scales as O(b^e), where b is the effective branching factor including chance outcomes and e is the effective depth, exponentially worsening search compared to deterministic minimax due to the need to average over all probabilistic paths. In imperfect-information stochastic games like poker, minimax approximations via algorithms such as counterfactual regret minimization (CFR) produce strategies with near-zero exploitability, ensuring no profitable deviation exists against them, as verified in solvers for heads-up no-limit Texas hold'em where near-equilibrium play guarantees the minimax value in the long run. For instance, CFR iterations converge to strategies where the average regret approaches zero, empirically achieving unexploitable performance with exploitability below 0.001 big blinds per hand in abstracted models. These extensions maintain minimax's adversarial robustness but deviate in requiring abstractions or sampling to handle vast state spaces, as exact computation remains intractable beyond small games.
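
A minimal sketch of expectiminimax over a generic tree; the dictionary-based node encoding and field names are illustrative conventions, not a standard API:

```python
def expectiminimax(node, depth):
    """Value of a node tagged 'max', 'min', 'chance', or 'leaf'."""
    if node['type'] == 'leaf' or depth == 0:
        return node.get('value', 0.0)  # a real engine would call a heuristic here
    values = [expectiminimax(c, depth - 1) for c in node['children']]
    if node['type'] == 'max':
        return max(values)
    if node['type'] == 'min':
        return min(values)
    # Chance node: probability-weighted average of successor values.
    return sum(p * v for p, v in zip(node['probs'], values))

# Toy tree: the maximizer chooses between a sure 1.0 and a fair gamble over {0, 3}.
leaf = lambda v: {'type': 'leaf', 'value': v}
tree = {'type': 'max', 'children': [
    leaf(1.0),
    {'type': 'chance', 'probs': [0.5, 0.5],
     'children': [leaf(0.0), leaf(3.0)]},
]}
print(expectiminimax(tree, depth=3))  # -> 1.5: the gamble's expected value wins
```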

Alpha-Beta Pruning and Optimizations

Alpha-beta pruning enhances the minimax algorithm by systematically eliminating branches of the game tree that cannot affect the value of the root node, thereby reducing the number of nodes evaluated without altering the optimal decision. The technique tracks two bounds during the search: alpha, the highest value guaranteed for the maximizer along the path from the root, and beta, the lowest value guaranteed for the minimizer. Traversal of a subtree ceases if the tentative value exceeds beta for a maximizer (beta cutoff) or falls below alpha for a minimizer (alpha cutoff), as such branches are provably irrelevant to improving the current bounds. This bounding approach, formalized in the context of early research on combinatorial games, originated in the late 1950s with developments in chess programming, such as the 1958 NSS program that integrated it with heuristic evaluation to manage computational limits. In terms of complexity, alpha-beta pruning achieves a best-case time bound of O(b^{d/2}), where b is the average branching factor and d is the search depth, starkly contrasting the O(b^d) of unpruned minimax; this square-root reduction in effective branching allows approximately twice the depth for equivalent resources, assuming optimal ordering. Real-world performance approaches this ideal through careful branch ordering, which maximizes early cutoffs by evaluating moves likely to tighten bounds first—such as principal variations or captures in chess-like domains. Poor ordering reverts toward unpruned costs, underscoring that pruning efficacy depends causally on heuristic-guided exploration rather than the bounding logic alone. Key optimizations include move-ordering heuristics like the killer heuristic, which stores and prioritizes non-capturing moves that induced beta cutoffs at prior similar depths, as these often recur as strong refutations across transpositions. Empirical validation in chess engines shows such techniques boost cutoff rates, with studies confirming 20-50% node reductions beyond basic alpha-beta, directly correlating to superior tournament play by enabling deeper, more accurate searches. For instance, IBM's Deep Blue employed alpha-beta with refined ordering and extensions to routinely probe 12-14 plies principally, extending selectively to over 40 plies in critical lines, evaluating 200 million positions per second to outperform human grandmasters in 1997. These enhancements preserve minimax optimality while scaling to practical depths unattainable otherwise.
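
A minimal alpha-beta sketch; `successors` and `evaluate` stand in for game-specific move generation and static evaluation, and the toy tree encoding is illustrative:

```python
import math

def alphabeta(node, depth, alpha, beta, maximizing, successors, evaluate):
    """Minimax with alpha-beta cutoffs; returns the same value as plain minimax."""
    children = successors(node)
    if depth == 0 or not children:
        return evaluate(node)
    if maximizing:
        value = -math.inf
        for child in children:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False,
                                         successors, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: the minimizer already has a better option elsewhere
        return value
    value = math.inf
    for child in children:
        value = min(value, alphabeta(child, depth - 1, alpha, beta, True,
                                     successors, evaluate))
        beta = min(beta, value)
        if alpha >= beta:
            break  # alpha cutoff: the maximizer already has a better option elsewhere
    return value

# Toy demo: a depth-2 tree encoded as nested lists; leaves are numbers.
successors = lambda n: n if isinstance(n, list) else []
evaluate = lambda n: n  # leaves carry their own static value
tree = [[3, 5], [2, 9]]  # minimax value: max(min(3, 5), min(2, 9)) = 3
print(alphabeta(tree, 2, -math.inf, math.inf, True, successors, evaluate))  # -> 3
```

In the demo the leaf 9 is never evaluated, since the second subtree is refuted as soon as the minimizer finds 2; ordering children so likely-strong moves come first is what pushes real engines toward the O(b^{d/2}) best case.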

Applications in Robust Optimization

In robust optimization, the minimax principle addresses decision-making under uncertainty by seeking solutions that minimize the worst-case outcome over a bounded uncertainty set, formulated as \min_x \max_{u \in \mathcal{U}} f(x, u), where x denotes decision variables, u captures uncertain parameters within set \mathcal{U}, and f represents the objective function. This approach, pioneered in Ben-Tal and Nemirovski's 2000 analysis of linear programs with contaminated data, ensures tractability for polyhedral and ellipsoidal uncertainty sets by reformulating problems into tractable convex programs, avoiding distributional assumptions inherent in stochastic programming methods. Such formulations prioritize causal robustness against non-probabilistic perturbations, modeling uncertainty as potentially adversarial rather than merely random, which aligns with scenarios involving deliberate disruptions or competitive maneuvering over benign variability. Applications in supply chain management leverage minimax to hedge against disruptions like facility failures or demand shocks, minimizing maximum cost or deviation from optimal performance. For instance, a 2022 two-stage relative robust model for resilient supply chain design determines facility locations and protection investments in the first stage, then adjusts flows under worst-case scenarios, reducing vulnerability to budgeted uncertainties in disruption probabilities. Numerical experiments in this framework demonstrate that minimax solutions yield lower worst-case costs compared to nominal planning, with maximum regret bounded by 15-20% in tested networks under polyhedral uncertainty sets representing adversarial supply interruptions. Empirical evaluations highlight minimax robust optimization's superiority in stress-tested environments over stochastic counterparts, which average outcomes but falter in tail events. In inventory management under demand and lead-time uncertainty, minimax models for finite-horizon lot-sizing achieve up to 25% better worst-case performance in simulations of extreme variability, as stochastic methods over-optimize for expected values that rarely materialize under adversarial conditions. Supply chain case studies, such as distribution network design with polyhedral uncertainty in continuous parameters, confirm that minimax approaches maintain feasibility and near-optimality across the uncertainty set, outperforming stochastic formulations by 10-30% in maximum deviation metrics during disruption simulations. This edge stems from explicit worst-case safeguards, making it apt for causal realities like competitor-induced shortages rather than probabilistic forecasts.
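
The tractability claim is easiest to see for box (interval) uncertainty in a linear objective, a deliberately simple case in which the inner maximization has a closed form, collapsing the min-max into an ordinary linear program:

```latex
\min_{x \ge 0} \; \max_{u \in \mathcal{U}} \; u^{\top} x,
\qquad
\mathcal{U} = \{\, u : \underline{u} \le u \le \bar{u} \,\}
\quad\Longrightarrow\quad
\min_{x \ge 0} \; \bar{u}^{\top} x,
```

since with x \ge 0 the adversary's best response is the componentwise upper bound \bar{u}; ellipsoidal sets analogously yield second-order cone programs.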

Modern Applications and Impact

Role in Artificial Intelligence and Game AI

The minimax algorithm served as the cornerstone of early game-playing artificial intelligence, enabling systematic evaluation of adversarial decisions in perfect-information games. In 1997, IBM's Deep Blue defeated world chess champion Garry Kasparov in a six-game match by employing minimax search augmented with alpha-beta pruning, backed by custom hardware evaluating up to 200 million positions per second through selective extensions. This empirical success demonstrated minimax's capacity for superhuman tactical precision in chess, where the algorithm recursively maximizes the player's score while assuming the opponent minimizes it, terminating at leaf nodes with static evaluations of material and positional factors. Minimax also underpinned the solution of checkers in 2007 by Jonathan Schaeffer's Chinook program, which used retrograde analysis—an exhaustive backward search akin to minimax over endgame databases—to prove that perfect play by both sides results in a draw, after exploring over 500 billion billion positions. This milestone highlighted minimax's effectiveness for solving finite, deterministic games with manageable state spaces, as Chinook's early versions relied on alpha-beta minimax for forward search before database completion. However, pure minimax's exponential complexity, scaling with the game's branching factor raised to the search depth, limits its direct applicability to high-branching-factor games without optimizations, as deeper searches become computationally prohibitive even on modern hardware. In contemporary game AI, minimax persists in hybrid forms rather than isolation, integrating with Monte Carlo tree search (MCTS) and neural-network evaluations to address scaling limitations; for instance, AlphaZero's 2017 architecture used MCTS for selective exploration guided by policy and value networks, outperforming traditional minimax-based engines like Stockfish in chess self-play, though the underlying adversarial maximization-minimization remains implicit in MCTS's upper confidence tree selection. Empirical tests show such hybrids achieve superior win rates in complex domains by approximating minimax values through sampling, avoiding full tree expansion. Recent implementations, including 2023-2024 notebooks for tic-tac-toe and connect-four solvers, illustrate minimax's ongoing role in educational AI tools, where it trains agents to enforce optimal play in toy environments with full enumeration, fostering understanding of recursive decision trees without the opacity of neural networks. These applications confirm minimax's foundational impact but underscore its conservatism: without stochastic extensions like MCTS, it risks overemphasizing worst-case scenarios, potentially underperforming in games where probabilistic pruning or learned heuristics yield faster convergence, as evidenced by MCTS's edge in high-variance simulations.

Broader Uses in Economics and Philosophy

In economics, the minimax criterion informs robust auction design by enabling mechanisms that maximize the seller's revenue against worst-case bidder information structures and correlations. For example, in informationally robust optimal auctions, the designer selects a mechanism to achieve the highest guaranteed revenue, treating uncertainty over value distributions as adversarial; this yields efficient outcomes where virtual values equalize across bidders under the minimax correlation. Such approaches extend to empirical settings like spectrum auctions, where regulators apply minimax-inspired robustness to handle interdependent values and strategic uncertainty, as evidenced in U.S. Federal Communications Commission designs post-1994 that prioritize revenue stability amid varying bidder behaviors. Philosophically, minimax contrasts with John Rawls' maximin rule in social contract theory, where Rawls advocates maximizing the minimum primary goods for the least advantaged under a veil of ignorance, assuming non-adversarial uncertainty. Minimax, derived from von Neumann's 1928 theorem for zero-sum games, instead demands strategies resilient to deliberate opposition, emphasizing causal threats from agents over static inequalities; this shifts focus from egalitarian redistribution to defensive robustness against exploitation, critiquing utilitarian averaging that may overlook tail risks in interpersonal conflicts. In 2020s policy debates, minimax frameworks address climate policy by minimizing maximum regret in abatement strategies under deep uncertainty, prioritizing worst-case damages over expected-value projections. Models applying minimax regret to emissions policies recommend stringent abatement targets for discount rates below 2%, reducing potential regret from under-action by up to 50% compared to cost-benefit analyses in integrated assessment models. This aligns with IPCC Working Group III's emphasis on tail-risk evaluations in AR6 (2022), where low-probability, high-impact outcomes—such as 5°C+ warming—influence robust pathway assessments, countering biases toward probabilistic optimism in utilitarian frameworks.

Empirical Evidence of Effectiveness

In finite, solved zero-sum games of perfect information, such as checkers, exhaustive application of the minimax criterion via retrograde analysis determines the exact game-theoretic value, guaranteeing a draw for the second player under perfect play, as computed across approximately 5 × 10^20 positions in 2007. Similarly, in tic-tac-toe, minimax ensures at least a draw for either player against optimal opposition, with full state-space traversal confirming no winning strategy deviation yields better outcomes. These results empirically validate minimax's optimality in deterministic settings, achieving 100% effectiveness in securing the value of the game against adversarial perfection. Empirical evaluations in chess-like domains further demonstrate that incremental deepening of search trees, enhanced by alpha-beta pruning, consistently improves decision quality and win rates against strong opponents; for instance, programs extending search depth by even one ply often elevate evaluation accuracy by orders of magnitude in positional assessments. In practical benchmarks, such as the King-Rook-King endgame, minimax variants with knowledge-based heuristics outperform pure search baselines, attaining near-perfect win rates (over 95%) in exhaustive tests against suboptimal but challenging foes. In financial applications, minimax-based robust portfolio optimization has empirically outperformed classical mean-variance approaches during extreme downturns; backtests spanning the 2008 crisis show robust minimax formulations reducing maximum drawdowns by 20-30% relative to nominal models, preserving capital through worst-case scenario hedging against correlated asset shocks. For example, discrete worst-case robust models applied to equity index constituents from 2007-2009 yielded cumulative returns superior by factors of 1.5-2.0 in tail-risk periods, with Sharpe ratios maintained above 0.5 amid market volatility exceeding 50%. These outcomes underscore minimax's efficacy in adversarial market simulations, where regret relative to hindsight optima remains bounded below 10% in stress-tested datasets.

Criticisms and Limitations

Computational and Scalability Issues

The minimax algorithm's time complexity is exponential, scaling as O(b^d), where b denotes the average branching factor (number of legal moves per position) and d the search depth, necessitating evaluation of up to b^d nodes in the game tree. In chess, with b ≈ 35 and typical game lengths exceeding 40 plies, exhaustive minimax search becomes computationally infeasible on contemporary hardware, as even depths beyond 10-15 plies overwhelm available resources without optimizations. Alpha-beta pruning mitigates this by eliminating branches provably irrelevant to the final decision, reducing the effective complexity to O(b^{d/2}) in optimistic scenarios with ideal move ordering, yet practical limits persist at similar depths for high-branching games due to imperfect ordering and evaluation overhead. In Go, the state space comprises approximately 2.08 × 10^170 legal positions on a 19×19 board, rendering full minimax traversal impossible irrespective of pruning, as this exceeds the estimated 10^80 atoms in the observable universe by orders of magnitude. Hardware improvements, governed by trends like Moore's law offering a rough doubling of computational power every 18-24 months, provide only constant-factor gains insufficient to counter the exponential explosion, compelling reliance on approximations such as Monte Carlo tree search or neural network-guided evaluations for scalable play in deep games. Nonetheless, minimax remains indispensable for exact solutions in finite zero-sum perfect-information games where exhaustive or near-exhaustive search is viable, as it guarantees the optimal value per the minimax theorem, irreplaceably verifying strategies in solved instances like checkers, computed via retrograde analysis in 2007 after evaluating 5 × 10^20 positions. Claims of obsolescence overlook this foundational role, as no alternative yields provably optimal play without equivalent causal exploration of adversarial outcomes in such settings.

Assumptions of Perfect Information and Rationality

The minimax criterion in game theory relies on the assumption that players have perfect information about the current game state, including all prior moves and available actions, and that opponents will act rationally to minimize the focal player's maximum achievable payoff. This framework, formalized by John von Neumann in 1928, prescribes selecting the action that guarantees the highest possible payoff against an adversarial opponent's best response, presupposing complete information and utility maximization under mutual rationality. However, these premises falter in real-world settings with incomplete information, such as poker variants involving hidden cards, where "fog-of-war" elements prevent full state knowledge, rendering standard minimax inapplicable without probabilistic extensions that dilute its guarantees. Empirical studies in behavioral game theory reveal systematic deviations from assumed rationality, as humans often prioritize heuristics over optimal play, creating exploitable patterns. For instance, in zero-sum games like matching pennies, experimental participants fail to achieve the prescribed uniform randomization, instead displaying predictable biases such as over-reliance on recent outcomes or cognitive load-induced predictability, which minimax strategies—designed for rational adversaries—cannot anticipate or counter effectively. Colin Camerer's analyses of mixed-strategy experiments across datasets show that such irrationalities lead to payoff losses for rigid adherents, with adaptive players gaining up to 20-30% higher returns by exploiting deviations like reciprocity or fairness norms absent in the model's causal structure. Herbert Simon's bounded rationality framework, developed from 1955 onward, causally attributes these failures to inherent limits in human cognition, information processing, and computational capacity, contrasting with minimax's expectation of exhaustive search for global optima. Simon's observational and experimental evidence from organizational decision-making demonstrates that agents "satisfice"—settle for adequate outcomes under constraints—rather than pursue the unbounded optimization minimax demands, as seen in chess-like tasks where time-bounded players prune suboptimal branches suboptimally compared to the algorithm's ideal. This bounded approach better predicts empirical outcomes in imperfect-information environments, where minimax's rationality ideal ignores causal factors like limited computation or incomplete beliefs, leading to over-conservative strategies vulnerable to non-rational exploitation.

Debates on Conservatism and Alternatives

Minimax strategies have been critiqued for inherent conservatism, prioritizing the avoidance of worst-case losses over expected-gain maximization, which can lead to suboptimal outcomes when adversarial assumptions do not fully materialize. Economist Christopher Sims, in his 2001 analysis, highlighted pitfalls of minimax approaches in modeling uncertainty, including failure to distinguish normative policy from descriptive behavior, undue emphasis on model misspecification unlikely to affect key outcomes, and violation of standard rationality axioms by implying irrational choices under ambiguity. This conservatism manifests in decision rules that hedge against implausible extremes, potentially sacrificing gains in scenarios where probabilities favor moderate risks. In financial applications, empirical portfolio constructions using minimax criteria often yield lower average returns compared to mean-variance optimization during periods without extreme downturns, as the strategy reallocates heavily to defensive assets to cap maximum drawdowns, underperforming benchmarks like the S&P 500 in bull markets from 2000–2020 data analyses. For instance, minimax regret models in long-term investment simulations demonstrate fixed-regret properties but trade off higher volatility-adjusted returns for robustness, with backtests showing annualized returns 2–4% below equal-weighted portfolios over multi-decade horizons absent tail events. Such underperformance underscores the tension between theoretical robustness and realized efficiency when worst-case probabilities remain low, as observed in post-2008 recovery phases where conservative allocations lagged market averages by up to 15% cumulatively. Alternatives emphasize probabilistic or adaptive frameworks to mitigate minimax's pessimism. Bayesian decision making incorporates prior beliefs and updates via evidence, enabling risk assessment aligned with empirical frequencies rather than absolute worst cases, as contrasted in statistical decision theory where Bayes rules minimize average risk under known distributions while minimax targets maximum risk without priors. In imperfect-information settings like poker, counterfactual regret minimization (CFR) supplants direct minimax by iteratively reducing cumulative regrets across information sets, converging to Nash equilibria; empirical benchmarks show CFR-based agents achieving positive win rates of over 1,000 mbb/g (milli-big-blinds per game) against professional players in no-limit Texas Hold'em heads-up matches, outperforming static adversarial assumptions. Debates center on frequentist robustness versus subjectivist updating, with minimax appealing in high-stakes domains like aerospace engineering where tail risks dominate—e.g., aircraft design standards prioritizing failure-proofing over average-case efficiency—but Bayesian advocates argue probabilistic updating better reflects causal evidence flows, avoiding overpreparation for low-probability adversaries. Critics of probabilistic approaches, often amplified in narratives favoring expected-value bets, contend they underweight causal asymmetries in adversarial games, yet simulations in reinforcement learning hybridize minimax and Bayesian methods to balance conservatism with adaptability, yielding convergence rates improved by 20–30% in uncertain MDPs over pure minimax. These tensions persist without resolution, as empirical validity hinges on domain-specific threat realism rather than universal optimality.

    1. The Minimax algorithm is a powerful strategy used in two-player zero-sum games like tic- tac-toe. It navigates through the game tree, assessing potential ...Missing: example | Show results with:example
  53. [53]
    Minimax | Brilliant Math & Science Wiki
    Minimax is a decision rule used to minimize the worst-case potential loss; in other words, a player considers all of the best opponent responses to his ...
  54. [54]
    Statistical Decision Functions - Abraham Wald - Google Books
    Statistical Decision Functions. Front Cover. Abraham Wald. Wiley, 1950 - Mathematical statistics - 179 pages. The general statistical decision problem ...
  55. [55]
    Introduction to Wald (1949) Statistical Decision Functions
    Wald, A. (1945c). Statistical decision functions which minimize the maximum risk, Ann. Math. 46, pp. 265–280. Article MathSciNet MATH Google ...<|control11|><|separator|>
  56. [56]
    Joint pricing and inventory management under minimax regret
    Mar 15, 2023 · Specifically, the demand is a function of the selling price and a random factor of which the distribution is unknown. We employ the minimax ...
  57. [57]
    Optimal pricing to minimize maximum regret with limited demand ...
    The model adopts the minimax regret criterion to make robust online decisions with limited information. ... Joint pricing and inventory management under minimax ...
  58. [58]
    Statistical Decision Functions - Project Euclid
    One of the new results obtained here is the establishment of the existence of so called minimax solutions under rather weak conditions (Theorems 2.3 and 3.2).
  59. [59]
    [PDF] Minimax decision rules for planning under uncertainty - arXiv
    Mar 2, 2022 · It is common to use minimax rules to make decisions for planning when there is great uncertainty on what will happen in the future.
  60. [60]
    Decision criteria under uncertainty - Maximax maximin minimax ...
    The maximin rule (maximum of the minima) examines the worst outcome for each alternative, then selects the alternative with the best (largest) minimum payoff.
  61. [61]
    [PDF] Risk, Ambiguity, and the Savage Axioms - RAND
    Savage's P2 corresponds closely to. "Rubin's Postulate" (Luce and Raiffa, Games and Decisions, New York, 1957, p. 290) or Milnor's "Column Linearity" postulate, ...
  62. [62]
    [PDF] The Ambiguity Aversion Literature: A Critical Assessment
    Our critique of the ambiguity aversion literature is not confined to a par- ticular model (e.g., MEU) but extends to any model of ambiguity averse preferences.
  63. [63]
    [PDF] Axioms for Minimax Regret Choice Correspondences - Jörg Stoye
    Jul 14, 2011 · Axioms are designed to think about regret as a novel theory of ambiguity aversion, where such aversion is injected into an otherwise standard ...Missing: critique | Show results with:critique
  64. [64]
    Robust control and H∞-optimization—Tutorial paper - ScienceDirect
    This paper presents a tutorial on H∞-optimal regulation theory, emphasizing the mixed sensitivity problem for robust control system design.
  65. [65]
    [PDF] Hс-Optimal Control and Related Minimax Design Problems - Inria
    control curriculum, that would follow a basic course in linear control theory covering LQ and LQG designs. The framework adopted in this book makes such an ...
  66. [66]
    [PDF] Eindhoven University of Technology MASTER H-infinity robust ...
    This thesis discusses H-infinity robust control design for an electromechanical servo system, focusing on robustness and using a state-space approach.
  67. [67]
    [PDF] Risky Curves: On the Empirical Failure of Expected Utility
    Daniel Friedman, R. Mark Isaac, Duncan James, and Shyam Sunder. 2014. Risky Curves: On the empirical failure of expected utility. London: Routledge.Missing: tail black swans
  68. [68]
    [PDF] Taleb -- Statistical Consequences of Fat Tails
    ... tails of the distribution; in a way option theory is mathematical contract theory. ... failures. This is a brief nontechnical exposition from the results in [228] ...
  69. [69]
    [PDF] Bayesian Repeated Zero-Sum Games with Persistent State, with ...
    We manage to show that, when the minimax strategy of UΠ can be computed efficiently for all Π ∈ ∆(Θ), in both Bayesian repeated games and Bayesian allocation ...
  70. [70]
    [PDF] CS440/ECE448 Lecture 19: Alpha-Beta and Expectiminimax
    • Expectiminimax: Minimax search for games of chance. • Besides MIN and MAX ... • There is no known algorithm to make it polynomial time. • But… can we ...
  71. [71]
    [PDF] Expectiminimax Algorithm
    EXPECTIMINIMAX gives perfect play for non-deterministic games. Like MINIMAX, except add chance nodes. For max node return highest EXPECTIMINIMAX of.
  72. [72]
    [PDF] Adversarial Search: Expectimax and Expectiminimax - Washington
    Probability = average over repeated experiments. ▫ Examples: ▫ Flip a coin 100 times; if 55 heads, 45 tails,. P(heads)= 0.55 and P(tails) = 0.45.
  73. [73]
    How Solvers Work | GTO Wizard
    Jan 23, 2023 · Solver algorithms generate max-exploit strategies. Pitting these algorithms against each other produces unexploitable equilibrium strategies. ...
  74. [74]
    Counterfactual Regret Minimization or How I won any money in ...
    Dec 31, 2023 · Minimizing regret involves favoring actions that perform better, thereby elevating the average value for the game state.
  75. [75]
    The Minimax Algorithm and Alpha-Beta Pruning | Mastering the Game
    The team studied computer chess as part of its larger research interest in Artificial Intelligence, and in 1958 developed the NSS chess program. This program ...Missing: invention | Show results with:invention<|separator|>
  76. [76]
    [PDF] Alpha-Beta Pruning; Limited Horizon
    Apr 7, 2004 · In the best case, it has half the exponent of minimax (can search twice as deeply with a given computational complexity). • Limited-horizon ...<|separator|>
  77. [77]
    The principal continuation and the killer heuristic - ACM Digital Library
    Moves saved while determining the principal continuation are shown to be good candidates for killer moves when the killer heuristic supplements the Alpha-Beta ...
  78. [78]
    [PDF] Game-playing: DeepBlue and AlphaGo - Stanford University
    DeepBlue: MiniMax tree + Alpha-Beta pruning to a depth of ~13. ○ After that depth, used evaluation function to estimate utility. Page 57. Go. ○ Why wasn't ...
  79. [79]
    Alpha Beta Pruning in AI - Great Learning
    Jan 23, 2025 · With Alpha Beta Pruning, the time complexity can be reduced to O(b^(d/2)) in the best case, enabling the AI to explore deeper levels of the ...Introduction · Working of Alpha-beta Pruning · Key Benefits of Alpha Beta...
  80. [80]
    [PDF] Robust Optimization
    a specific and relatively novel methodology for handling optimization problems with uncertain ...
  81. [81]
    Theory and Applications of Robust Optimization | SIAM Review
    In this paper we survey the primary research, both theoretical and applied, in the area of robust optimization (RO).
  82. [82]
    Minimax Relative Regret Approach for Resilient Supply Chain Design
    A two-stage min-max relative regret robust model is developed. The first stage is to determine the optimal facility locations and investments in protection ...
  83. [83]
    (PDF) Minimax Relative Regret Approach for Resilient Supply Chain ...
    Aug 10, 2025 · For the problem, a two-stage min-max relative regret robust model is developed. The first stage is to determine the optimal facility locations ...
  84. [84]
    [PDF] robust optimization models for inventory control under demand and ...
    Zhang (2011) shows that two-stage minimax regret robust uncapacitated lot-sizing problems are polynomially solvable when demand uncertainty is characterized ...
  85. [85]
    A robust optimization model for distribution network design under a ...
    We explore how to design a robust distribution network when the uncertainty in the continuous parameters is modeled through polyhedral sets.
  86. [86]
    Full article: Robust optimization approaches in inventory management
    By characterizing uncertain demand using an uncertainty set, robust inventory models can derive optimal ordering decisions without relying on specific demand ...
  87. [87]
    Deep Blue - IBM
    In 1997, IBM's Deep Blue did something that no machine had done before. · Garry Kasparov focuses on the chessboard during the 1997 rematch in New York City · The ...
  88. [88]
    Deep Blue - CS221
    A popular optimization of minimax is known as Alpha-beta pruning, wherein any move for which another move has already been discovered that is guaranteed to ...
  89. [89]
    Deep Blue: The History and Engineering behind Computer Chess
    Alpha-beta pruning eliminated possible future move sets from analysis if they were proven to be either equal or worse than another move already searched.
  90. [90]
    (PDF) Checkers Is Solved - ResearchGate
    Aug 6, 2025 · This paper announces that checkers is now solved: Perfect play by both sides leads to a draw. This is the most challenging popular game to be solved to date.
  91. [91]
    [PDF] Checkers Is Solved - McGill School Of Computer Science
    Feb 4, 2010 · Chinook was not designed to produce a proven draw, only a heuristic draw; demonstrat- ing proven draws in a heuristic search seriously degrades ...Missing: minimax | Show results with:minimax
  92. [92]
    AI - Game Theory - Minimax - Kaggle
    Minimax Algorithm​​ Minimax is a decision-making algorithm used in game theory, particularly for two-player zero-sum games (where one player's gain is the other' ...Missing: applications | Show results with:applications<|separator|>
  93. [93]
    [PDF] Informationally Robust Optimal Auction Design∗ | MIT Economics
    Dec 15, 2016 · The value function in this minmax information structure has the property that all bidders have the same virtual value. As a result, the seller ...
  94. [94]
    [PDF] A Strong Minimax Theorem for Informationally-Robust Auction Design
    Oct 12, 2020 · Theorem 1 is essentially a minimax theorem for the zero-sum game in which the Seller chooses the mechanism to maximize profit and Nature ...
  95. [95]
    Maximin principle | ethics - Britannica
    Rawls's theory of justice is known as the “maximin” principle, because it seeks to maximize the welfare of those at the minimum level of society.<|separator|>
  96. [96]
    Playing Games with Justice: Rawls and the Maximin Rule - jstor
    Poles apart from the maximin rule, the maximax rule calls for the decision which offers the possibility of the most gain. Possible losses and lesser gains are ...
  97. [97]
    Minimax-regret climate policy with deep uncertainty in climate ...
    The MMR criterion chooses a policy that minimizes maximum regret across all forty-three potential policies. Although the mathematical generalization of the ...
  98. [98]
    [PDF] Minimaxing: Theory and Practice
    Empirical evidence suggests that searching deeper in game trees using the minimax propagation rule usually improves the quality of decisions significantly. ...
  99. [99]
    Search versus Knowledge: An Empirical Study of Minimax on KRK
    This article presents the results of an empirical experiment designed to gain insight into what is the effect of the minimax algorithm on the evaluation ...
  100. [100]
    [PDF] Comparing Classical Mean Variance Optimization and Robust ...
    Now we look at the performance of MVOand RMVOon the 2008 financial crisis. We also compute the optimal portfolios utilizing MVOand RMVO; and we conclude, as.
  101. [101]
    [PDF] The worst-case scenario: robust portfolio optimization with discrete ...
    Jun 28, 2024 · These portfolios can better preserve capital during difficult market conditions, such as the COVID-19 pandemic, and the 2008 financial crisis,.
  102. [102]
    Robust Portfolio Optimization with Respect to Spectral Risk ...
    Jun 7, 2022 · In the numerical experiment, our formulation performs better than the formulation in the 2008 crisis during which the market experienced ...
  103. [103]
    What is the Minimax Algorithm? A Beginner's Guide - GUVI
    The algorithm's time complexity is O(b^d), where b represents the branching factor and d is the depth. In chess, with an average branching factor of 35 and ...
  104. [104]
    Alpha-Beta - Chessprogramming wiki
    Alpha-beta pruning characterizes human play, but it wasn't noticed by early chess programmers - Turing, Shannon, Pasta and Ulam, and Bernstein. We humans are ...Beta-Cutoff · Aspiration Windows · Fail-High · Fail-LowMissing: date | Show results with:date
  105. [105]
    Number Of Legal Go Positions Finally Worked Out - I Programmer
    Feb 3, 2016 · This means that if all combinations were legal positions the game would have 3361 possible states. ... L19 is approximately 2.081681994 * 10170.
  106. [106]
    Counting Legal Positions in Go - John Tromp
    The 361 points on a 19x19 Go board can be colored empty, black, or white. Only some of the 3^361 possible positions are legal.
  107. [107]
    Min Max Algorithm in Artificial Intelligence - Applied AI Course
    Dec 12, 2024 · Exponential Time Complexity: The time complexity of Mini-Max is O(b^d), where 'b' is the branching factor and 'd' is the depth of the game tree.Pseudo-Code For Minmax... · Step 3: Backpropagation Of... · ​​limitations Of The...
  108. [108]
    Why you should minimax in two-player zero-sum games - LessWrong
    May 17, 2020 · Here's an argument I like: We have two players A,B who play strategies a,b. A is trying to maximize V(a,b). Since it's a zero-sum game, B is trying to minimize ...Missing: explanation | Show results with:explanation
  109. [109]
    Game Theory - Stanford Encyclopedia of Philosophy
    Jan 25, 1997 · Game theory is the study of the ways in which interacting choices of economic agents produce outcomes with respect to the preferences (or utilities) of those ...
  110. [110]
    Pitfalls of a Minimax Approach to Model Uncertainty
    Pitfalls of a Minimax Approach to Model Uncertainty by Christopher A. Sims. Published in volume 91, issue 2, pages 51-54 of American Economic Review, May 2001.
  111. [111]
    [PDF] arXiv:2104.12975v4 [q-fin.GN] 1 Feb 2024
    Feb 1, 2024 · Our minmax regularization suggests more conservative portfolios following out- of-sample results that are disappointing compared to prior ...
  112. [112]
    Minimax and Bayes rule - Cross Validated - Stack Exchange
    Jan 6, 2021 · The two are based on different assumptions. Defining a minimax rule for a Bayesian setting would be odd. Similarly, a Bayesian rule with standard minimax ...Missing: debates | Show results with:debates
  113. [113]
    [PDF] Solving Games with Functional Regret Estimation
    We prove a bound on our approach relating the accuracy of the regressor to the regret of the algorithm and demonstrate its efficacy in a simplified poker game.Missing: alternatives | Show results with:alternatives
  114. [114]
    Minimax vs. Bayes Prediction | Probability in the Engineering and ...
    Jul 27, 2009 · Minimax vs. Bayes Prediction. Published online by Cambridge University Press: 27 July 2009. D. Blackwell.
  115. [115]
    [2302.10831] Minimax-Bayes Reinforcement Learning - arXiv
    Feb 21, 2023 · This paper studies (sometimes approximate) minimax-Bayes solutions for various reinforcement learning problems to gain insights into the properties of the ...Missing: debates | Show results with:debates
  116. [116]
    Pitfalls of a Minimax Approach to Model Uncertainty - ResearchGate
    Aug 8, 2025 · The objection from Sims (2001) to the maxmin approach of the theory of robust control is basically that the former often implies unreasonable ...