
Decision theory

Decision theory is an interdisciplinary field spanning mathematics, statistics, economics, and philosophy that formalizes the principles and processes for making rational choices, particularly under conditions of uncertainty or incomplete information, by integrating probabilistic reasoning with evaluations of outcomes via utility functions. At its core, decision theory distinguishes between normative approaches, which prescribe how decisions ought to be made to maximize expected utility, such as selecting the option with the highest probability-weighted value of potential consequences, and descriptive approaches, which analyze how decisions are actually made, often uncovering systematic deviations like cognitive biases and heuristics. A third category, prescriptive decision theory, bridges the gap by offering practical strategies to improve real-world decision-making based on normative ideals adjusted for human limitations. These frameworks rely on key concepts including acts (available choices), states of the world (possible scenarios), outcomes (results of act-state combinations), and representations of beliefs (via probabilities) and preferences (via utilities). The field's modern development traces back to early 20th-century work on probability and utility, with foundational advances by John von Neumann and Oskar Morgenstern in their 1944 book Theory of Games and Economic Behavior, which axiomatized expected utility theory for decisions under risk through a set of postulates ensuring consistency in preferences. Leonard J. Savage built on this in 1954 with The Foundations of Statistics, extending the framework to decisions under uncertainty by deriving both subjective probabilities and utilities from behavioral axioms, thus establishing a subjective expected utility model that treats probabilities as personal degrees of belief rather than objective frequencies. Earlier influences include Frank Ramsey's 1926 contributions to subjective probability and Bruno de Finetti's 1937 work on exchangeability, which emphasized coherence in betting odds as a criterion for rational belief. Decision theory encompasses several branches, including statistical decision theory, which applies statistical methods to minimize risk or loss in estimation and hypothesis testing; Bayesian decision theory, which updates beliefs with new evidence using Bayes' theorem; and game theory, a subfield addressing strategic decisions where outcomes depend on others' choices. It has profoundly influenced economics (e.g., in welfare analysis), artificial intelligence (e.g., in reinforcement learning algorithms), and policy-making (e.g., in cost-benefit analysis), while descriptive insights from behavioral studies continue to challenge and refine normative models.

Historical Development

Early Philosophical and Economic Roots

The foundations of decision theory can be traced to ancient philosophical inquiries into rational choice and ethical action under uncertainty. In ancient Greek philosophy, Aristotle introduced the concept of phronesis, or practical wisdom, as an intellectual virtue essential for deliberating and deciding on actions that promote human flourishing in specific contexts. Aristotle described phronesis as the ability to perceive the particular circumstances of a situation and apply general ethical principles to achieve the good life, distinguishing it from theoretical knowledge by its focus on contingent, practical matters. This notion laid early groundwork for understanding decision-making as a deliberative process balancing virtues and situational demands. Stoic philosophy further developed ideas of decision-making under constraints, particularly through the teachings of Epictetus in the 1st and 2nd centuries CE. As a former slave, Epictetus emphasized distinguishing between what is within one's control, such as judgments, intentions, and choices, and what is not, like external events or outcomes. He argued that rational decisions arise from aligning one's will with nature and accepting constraints, thereby achieving inner freedom and ethical consistency regardless of circumstances. This Stoic framework influenced later conceptions of resilient choice-making in the face of unavoidable limitations. In the 18th century, economic thought began formalizing probabilistic elements of choice. Daniel Bernoulli, in his 1738 paper "Exposition of a New Theory on the Measurement of Risk," addressed the St. Petersburg paradox, a gamble with infinite expected monetary value for which people are willing to pay only a finite price, by proposing "moral expectation" as a measure of value weighted by the diminishing marginal utility of wealth. Bernoulli's approach resolved the paradox by shifting focus from raw monetary expectation to an individual's subjective valuation, introducing a precursor to utility-based decision-making. Jeremy Bentham's utilitarianism, articulated in his 1789 work An Introduction to the Principles of Morals and Legislation, provided a normative criterion for decisions centered on maximizing aggregate pleasure and minimizing pain. Bentham defined utility as the tendency of an action to produce pleasure or prevent pain, measured by the balance of pleasures and pains across intensity, duration, certainty, and extent. This "greatest happiness principle" served as a decision rule for individuals and legislators, influencing economic and ethical evaluations of choices by prioritizing net welfare outcomes. Early 20th-century contributions bridged these ideas toward modern frameworks. In his 1926 essay "Truth and Probability," Frank Ramsey developed qualitative notions of probability as degrees of belief, arguing for their coherence through Dutch book arguments: inconsistent beliefs would allow an adversary to construct a set of bets guaranteeing loss. Ramsey's insights linked subjective probabilities to rational betting behavior, emphasizing avoidance of sure-loss scenarios as a criterion for belief calibration. Similarly, Bruno de Finetti, in his 1937 work, advanced subjective probability by emphasizing exchangeability and the coherence of betting odds as a standard for rational beliefs, further solidifying the subjective Bayesian approach to uncertainty. These philosophical and economic roots informed subsequent formalizations of rational choice in decision models.

20th-Century Formalization

The 20th-century formalization of decision theory marked a shift from philosophical and economic intuitions to rigorous mathematical frameworks, driven by interdisciplinary efforts in mathematics, economics, and statistics. A foundational contribution came from John von Neumann and Oskar Morgenstern's 1944 book Theory of Games and Economic Behavior, which developed an axiomatic theory of expected utility for strategic interactions and individual choices under risk. This work demonstrated that preferences satisfying completeness, transitivity, continuity, and independence axioms could be represented by a numerical utility function, enabling the analysis of expected outcomes in games and decisions. Von Neumann and Morgenstern's approach extended earlier ideas, such as Daniel Bernoulli's 1738 moral expectation, by providing a formal structure for risk attitudes in collective and personal contexts. Building on this axiomatic base, Leonard J. Savage advanced subjective decision making in his 1954 book The Foundations of Statistics, where he formulated subjective expected utility theory. Savage's system integrated personal probabilities with utilities, using axioms of ordering, cancellation, and the sure-thing principle to justify decisions based on subjective beliefs about states of the world, rather than objective frequencies. He also introduced the minimax regret criterion, where decisions minimize the maximum possible regret relative to the best alternative across unknown states, providing a robust alternative to expected utility for adversarial or highly uncertain environments. This framework emphasized state-dependent utilities and Bayesian updating, establishing decision theory as a normative tool for statistical inference and rational choice under incomplete information. Parallel developments in statistical decision theory were led by Abraham Wald's 1950 Statistical Decision Functions, which introduced formal criteria for optimal actions in the face of uncertainty. Wald proposed the minimax risk criterion, providing a robust approach to estimation and hypothesis testing and influencing sequential analysis and admissibility concepts in statistics. In the post-1950s period, decision theory intersected with operations research, exemplified by George B. Dantzig's 1947 invention of the simplex method for linear programming. This algorithm solved optimization problems by iteratively improving feasible solutions to linear objective functions subject to constraints, enabling practical decision support in logistics and resource allocation. Such integrations expanded decision theory's applicability to complex systems. By the 1950s and 1960s, institutions like the RAND Corporation applied these tools in systems analysis, conducting studies on defense planning and resource allocation that shaped U.S. government decision processes. RAND's work, including assessments of military budgeting under uncertainty, demonstrated decision theory's role in bridging theoretical models with real-world policy evaluation.

Fundamental Principles

Preferences and Utility

In decision theory, preferences over alternatives form the basis for rational choice, represented by a binary relation \succsim on a set of outcomes X, where x \succsim y indicates that x is at least as preferred as y. A preference relation is complete if for any x, y \in X, either x \succsim y or y \succsim x (or both), ensuring all pairs of alternatives are comparable. It is transitive if x \succsim y and y \succsim z imply x \succsim z, preventing cycles in rankings. Additionally, preferences are continuous if for any x \succsim y \succsim z, there exists \lambda \in [0,1] such that y \sim \lambda x + (1-\lambda) z, allowing intermediate mixtures to bridge strict preferences without abrupt jumps. These axioms enable the representation of preferences by a utility function. Ordinal utility captures only the ranking of preferences, where a function u: X \to \mathbb{R} satisfies u(x) > u(y) if and only if x \succ y, but allows for arbitrary strictly increasing transformations since only the relative ordering matters. Cardinal utility assigns numerical values that preserve both order and intensity differences, requiring a more restrictive scale invariant under positive affine transformations u' = a u + b with a > 0. The von Neumann-Morgenstern (vNM) utility representation theorem extends this to choices involving risk, stating that if preferences over lotteries (probability distributions on X) satisfy completeness, transitivity, independence, and continuity, then there exists a function u: X \to \mathbb{R} such that for lotteries p, q, p \succsim q if and only if \sum_{x \in X} p(x) u(x) \geq \sum_{x \in X} q(x) u(x). The independence axiom requires that if p \succsim q, then for any lottery r and \alpha \in (0,1), \alpha p + (1-\alpha) r \succsim \alpha q + (1-\alpha) r, ensuring preferences are unaffected by common components in mixtures. This theorem, proven in the seminal work on game theory by von Neumann and Morgenstern, justifies expected utility maximization under risk by linking preferences directly to expected utility. The independence axiom ensures the linearity of the expected utility form EU(p) = \sum_{i} p_i u(x_i) for a lottery p with outcomes x_i and probabilities p_i. To derive this, consider simple lotteries: for a degenerate lottery on x, EU(\delta_x) = u(x). For mixtures with a common component q, independence implies \alpha p + (1-\alpha) q \succsim \alpha p' + (1-\alpha) q whenever p \succsim p', mirroring the comparison of p and p'. Iterating over finite-support lotteries via induction shows EU must be affine in probabilities, yielding the linear form; non-linearity would violate independence by allowing mixtures to alter rankings inconsistently. Continuity ensures the representation extends to all probability distributions. Risk attitudes arise from the curvature of the vNM utility function. A decision maker is risk-averse if u is concave (u'' < 0), preferring a sure outcome to a risky lottery with the same expected value, as in Jensen's inequality: u(E[x]) > E[u(x)]. For example, individuals purchase insurance even at premiums above the actuarially fair level because the concave utility function values loss avoidance more than equivalent gain potential. Risk-seeking behavior corresponds to a convex u (u'' > 0), where u(E[x]) < E[u(x)], such as gambling on lotteries with negative expected returns. Risk neutrality holds for linear u, equating sure and expected values. Violations of transitivity undermine rational choice, as illustrated by the money pump argument: suppose preferences cycle with A \succ B \succ C \succ A. Starting from A, the decision maker would pay a small fee \epsilon > 0 to trade A for C (since C \succ A), another \epsilon to trade C for B, and another to trade B for A, returning to the starting point 3\epsilon poorer; repeating the cycle extracts arbitrarily large amounts from anyone holding such preferences.
This pragmatic argument, rooted in early experimental decision studies, demonstrates that intransitive preferences invite exploitation and thus fail as a basis for consistent action.
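To make these definitions concrete, the following minimal Python sketch (with an assumed square-root utility, chosen purely for illustration) computes EU(p) = \sum_i p_i u(x_i) for a simple lottery and shows Jensen's inequality at work: a concave utility values the sure expected value above the gamble, while a linear utility is indifferent.

```python
import math

def expected_utility(lottery, u):
    """Expected utility of a lottery given as (probability, outcome) pairs."""
    return sum(p * u(x) for p, x in lottery)

# Illustrative utilities: concave (risk-averse) square root vs. linear (risk-neutral).
u_concave = math.sqrt
u_linear = lambda x: x

# A 50/50 gamble between 0 and 100 versus receiving its expected value of 50 for sure.
gamble = [(0.5, 0.0), (0.5, 100.0)]
sure_thing = [(1.0, 50.0)]

print(expected_utility(gamble, u_concave))      # 5.0
print(expected_utility(sure_thing, u_concave))  # ~7.07: u(E[x]) > E[u(x)], sure amount preferred
print(expected_utility(gamble, u_linear))       # 50.0
print(expected_utility(sure_thing, u_linear))   # 50.0: a risk-neutral agent is indifferent
```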

Normative Frameworks

Expected Utility Theory

Expected utility theory provides the foundational normative model in decision theory for rational choice under risk, where probabilities of outcomes are objectively known. It posits that a decision maker should select the action that maximizes the expected value of utility, where utility represents the subjective value of outcomes. This framework assumes that preferences over lotteries (probability distributions over outcomes) can be represented by a function that is linear in probabilities. Formally, for an action a leading to outcomes depending on states of nature s \in S, the expected utility is given by EU(a) = \sum_{s \in S} p(s) \, u(\text{outcome}(a, s)), where p(s) is the known probability of state s, and u is the von Neumann-Morgenstern utility function. Rational decisions select the action a that maximizes EU(a). This representation derives from the von Neumann-Morgenstern (vNM) axioms applied to preferences over lotteries: completeness (all lotteries are comparable), transitivity, continuity (preferences are continuous in probabilities), and independence (preferences between lotteries are unaffected by mixing with a third lottery in the same proportions). The independence axiom ensures the linearity of the utility representation in probabilities. To sketch the proof: the continuity axiom allows assigning utilities to outcomes by interpolating between the best and worst sure outcomes using probability mixtures, establishing a cardinal scale unique up to affine transformations. Independence then implies that preferences over compound lotteries reduce to weighted sums, yielding the expected utility form; for L_1 \succ L_2, mixing each with an identical lottery L_3 preserves the ordering, enforcing additivity over probability mixtures. In applications, expected utility theory underpins portfolio choice under risk, where investors allocate assets to maximize the expected utility of returns, balancing mean returns against variance via utility functions reflecting risk aversion. This leads to the capital asset pricing model (CAPM), which derives equilibrium asset prices from mean-variance optimization under expected utility, implying that expected returns compensate for systematic risk as measured by beta. The theory also resolves the St. Petersburg paradox, where a game with infinite expected monetary value (a fair coin is flipped until heads appears, with payoff $2^n on the nth flip) yields finite expected utility under concave utility functions such as logarithmic utility, as marginal utility diminishes with wealth. Normatively, expected utility serves as the benchmark for rationality in decisions under risk, where probabilities are known and objective, contrasting with uncertainty where probabilities are unknown or subjective.
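As a brief numerical check of this resolution, the sketch below (assuming zero initial wealth and truncating the infinite sum at a finite number of flips) approximates the expected utility of the St. Petersburg game: under logarithmic utility the series converges to about 2 ln 2 ≈ 1.39, whereas the expected monetary value grows without bound as more terms are added.

```python
import math

def st_petersburg_expected_utility(u, wealth=0.0, max_flips=200):
    """Approximate E[u(wealth + payoff)] for the St. Petersburg game,
    where the payoff 2**n occurs with probability 2**(-n)."""
    return sum(2.0**-n * u(wealth + 2.0**n) for n in range(1, max_flips + 1))

print(st_petersburg_expected_utility(math.log))                   # ~1.386 = 2*ln(2), finite
print(st_petersburg_expected_utility(lambda x: x, max_flips=50))  # 50.0: each extra flip adds 1, diverging
```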

Axiomatic Foundations

The axiomatic foundations of decision theory provide rigorous logical systems that underpin normative models of rational choice under uncertainty. A cornerstone is the framework developed by John von Neumann and Oskar Morgenstern (vNM), which justifies expected utility representation for decisions involving objective probabilities. The vNM axioms include completeness (every pair of lotteries is comparable), transitivity (preferences are consistent across comparisons), continuity (preferences are preserved under continuous mixtures), and independence. The independence axiom states that if p is preferred to q, then for any lottery r and mixing probability \alpha \in (0,1], the mixture \alpha p + (1-\alpha) r is preferred to \alpha q + (1-\alpha) r. This axiom ensures that preferences over mixtures are preserved, implying that the utility function must be linear in probabilities, leading to an expected utility representation V(p) = \sum_{x} \pi(x) u(x), where \pi are objective probabilities and u is the utility function over outcomes. The proof of this representation involves constructing a utility scale via binary lotteries over best and worst outcomes and showing uniqueness up to positive affine transformations (i.e., u' = a u + b with a > 0), since such transformations preserve the ordering of expected values. Extending vNM to subjective uncertainty, Savage's framework incorporates states of the world without objective probabilities, using a state-act-consequence space where acts map states to consequences. Savage's axioms are: P1 (completeness and transitivity, forming a weak order over acts); P2 (the sure-thing principle), which states that if two acts f and g yield the same consequences outside an event E, and f is preferred to g (due to differences on E), then replacing the common consequences outside E with those of a third act h preserves the preference; P3 (event-wise independence), requiring that preferences between constant acts (yielding fixed consequences on given events) depend only on the events' comparative likelihoods; and P4 (non-triviality), ensuring some events are neither null nor certain. These axioms yield a subjective expected utility representation V(a) = \sum_{s} \pi(s) u(c(a,s)), where \pi is a unique subjective probability measure over states s and u is unique up to positive affine transformation. The derivation of subjective probabilities from these qualitative axioms is particularly notable: P2 (sure-thing) implies additivity of probabilities for disjoint events, as preferences between acts that isolate event comparisons behave as if probabilities sum, while P3 ensures monotonicity and qualitative consistency akin to probability orderings. Together, they embed a unique subjective probability measure derived solely from preference comparisons over acts, without presupposing numerical probabilities. A key challenge to Savage's framework arises in the Ellsberg paradox, where individuals prefer options with known probabilities over those with ambiguous (unknown) probabilities, even when expected utilities are equal. This behavior suggests aversion to ambiguity and violates Savage's sure-thing principle (P2), indicating limitations in applying subjective expected utility to real-world uncertainty. Savage's system, however, faces challenges in "small worlds" where acts must be fully specified across all states, potentially leading to inconsistencies in large or hypothetical state spaces. Anscombe and Aumann (1963) resolve this by hybridizing the framework: consequences are objective lotteries (acts become "horse lotteries" over roulette lotteries with known probabilities), acts map states to these lotteries, and the axioms include vNM-style completeness, transitivity, and independence over mixtures of acts, plus Savage-like sure-thing and non-triviality principles adapted to events.
This setup derives both subjective probabilities over states and a vNM utility over lotteries, yielding V(a) = \sum_{s} \pi(s) EU(a(s)), where EU is the expected utility of the objective lottery assigned to state s, thus separating belief formation from utility while avoiding small-world paradoxes through the objective lottery primitive. For robustness in infinite outcome or state spaces, Debreu's (1959) continuous extension generalizes the vNM axioms by replacing finite-support continuity with topological continuity (preferences are continuous in the underlying topology) and an Archimedean condition (no "infinitesimal" gaps in preferences). Under weak ordering, continuity, and a connectedness assumption on the outcome space, this yields a continuous utility representation unique up to monotonic transformation, ensuring the expected utility form holds for uncountable mixtures without discreteness restrictions. These extensions address representation failures beyond finite models, such as non-representability due to discontinuities, by leveraging topological methods to guarantee existence and robustness in broader domains.
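To illustrate the Anscombe-Aumann separation of beliefs from tastes, the sketch below evaluates acts that map states to objective lotteries via V(a) = \sum_s \pi(s)\, EU(a(s)). The states, lotteries, prior, and risk-neutral utility are hypothetical choices made for this example, not part of the original axiomatization.

```python
def lottery_eu(lottery, u):
    """vNM expected utility of an objective lottery given as (probability, outcome) pairs."""
    return sum(p * u(x) for p, x in lottery)

def subjective_value(act, prior, u):
    """Anscombe-Aumann evaluation V(a) = sum_s pi(s) * EU(a(s)),
    where `act` maps each state to an objective lottery."""
    return sum(prior[s] * lottery_eu(act[s], u) for s in prior)

u = lambda x: x                      # assumed risk-neutral utility over money
prior = {"rain": 0.3, "sun": 0.7}    # subjective probabilities over states

umbrella = {"rain": [(1.0, 5)], "sun": [(1.0, 2)]}             # constant (riskless) act
picnic   = {"rain": [(0.5, -4), (0.5, 0)], "sun": [(1.0, 8)]}  # risky in the rain state

print(subjective_value(umbrella, prior, u))  # 0.3*5 + 0.7*2 = 2.9
print(subjective_value(picnic, prior, u))    # 0.3*(-2) + 0.7*8 = 5.0, so picnic is chosen
```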

Descriptive Approaches

Behavioral Decision Making

Behavioral decision making focuses on descriptive models of how individuals actually choose under risk and uncertainty, revealing systematic deviations from normative frameworks like expected utility theory due to psychological factors such as reference dependence and emotional responses. These models emphasize that people evaluate outcomes relative to a subjective reference point rather than final wealth, leading to behaviors that prioritize avoiding losses over acquiring equivalent gains. A cornerstone of this approach is prospect theory, introduced by Kahneman and Tversky in 1979, which posits a value function v(x) that is reference-dependent, S-shaped, and exhibits loss aversion. Specifically, v(x) is concave for gains (indicating risk aversion) and convex for losses (indicating risk seeking), with losses looming larger than gains; empirical estimates place the loss aversion coefficient \lambda \approx 2.25, meaning losses are felt about twice as intensely as gains. Prospect theory also incorporates a probability weighting function \pi(p) that overweights small probabilities and underweights moderate to high ones, distorting perceived likelihoods in decision processes. The overall prospect value is computed as V = \sum_i \pi(p_i) v(x_i), aggregating weighted values across outcomes in a prospect. This formulation was refined in cumulative prospect theory by Tversky and Kahneman in 1992, which replaces separate weighting of probabilities with rank-dependent cumulative weights to handle both gains and losses more coherently, avoiding violations of stochastic dominance. In this extension, positive and negative rank-ordered outcomes are weighted separately using cumulative distribution functions, ensuring the model applies to decisions under both risk and uncertainty while preserving the core insights of reference dependence and probability distortion. Framing effects illustrate how reference points influence choices, where logically identical options lead to different decisions based on their presentation. A seminal demonstration is the Asian disease problem: when framed positively as "saving 200 out of 600 lives" with a certain option, most participants choose the risk-averse path; reframed negatively as "400 out of 600 will die" with the same certain option, preferences shift toward the risky gamble. This sensitivity to framing underscores how gains and losses are defined contextually, amplifying prospect theory's descriptive power over normative models. Related phenomena include the endowment effect and status quo bias, both rooted in loss aversion and reference dependence. The endowment effect manifests as a gap between willingness-to-accept (WTA) and willingness-to-pay (WTP) for the same good, with WTA exceeding WTP because selling an owned item frames the transaction as a loss relative to the reference point of ownership. Similarly, status quo bias arises when individuals disproportionately prefer maintaining the current state over alternatives of equal value, as changes are evaluated as losses from the reference point of the existing arrangement. Experimental evidence shows this bias persists even when transaction costs are absent, confirming its psychological origins. Neuroeconomic research using functional magnetic resonance imaging (fMRI) provides neural evidence supporting prospect theory's mechanisms in reward processing. For instance, studies post-2000 have identified amygdala activation specifically linked to framing-induced biases, where emotional responses in this region correlate with shifts from rational to biased choices. Complementary fMRI work reveals asymmetric encoding of gains and losses in the ventral striatum and insula, with stronger responses to potential losses reflecting the neural basis of loss aversion during mixed-gamble decisions.
These findings validate prospect theory's predictions at the neural level, showing how motivational and emotional factors shape value computation beyond abstract utility.
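The sketch below evaluates a simple mixed gamble under a prospect-theory value function and an inverse-S probability weighting function. The exponents (α ≈ 0.88, γ ≈ 0.61) are commonly cited 1992 parameter estimates and λ = 2.25 follows the text; they are used only as illustrative assumptions, and the separable weighting is applied to a simple two-outcome prospect for brevity.

```python
def value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function: concave for gains, convex and steeper for losses."""
    return x**alpha if x >= 0 else -lam * (-x)**alpha

def weight(p, gamma=0.61):
    """Inverse-S probability weighting: overweights small p, underweights moderate-to-large p."""
    return p**gamma / (p**gamma + (1 - p)**gamma)**(1 / gamma)

def prospect_value(prospect):
    """V = sum_i w(p_i) * v(x_i) for a simple prospect of (probability, outcome) pairs."""
    return sum(weight(p) * value(x) for p, x in prospect)

# A fair 50/50 gamble to win or lose 100 relative to the reference point.
mixed_gamble = [(0.5, 100.0), (0.5, -100.0)]
print(prospect_value(mixed_gamble))  # about -30: loss aversion makes the fair gamble unattractive
print(weight(0.01), weight(0.99))    # ~0.06 vs ~0.91: small probabilities overweighted, large ones underweighted
```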

Heuristics and Biases

Heuristics are mental shortcuts that individuals employ to simplify complex judgment and choice processes under uncertainty, often leading to systematic biases that deviate from rational norms. Pioneering research by Amos Tversky and Daniel Kahneman identified key heuristics that influence probability judgments and predictions, revealing how these cognitive strategies, while efficient, can produce predictable errors in assessing likelihoods and outcomes. This work, grounded in cognitive psychology, demonstrated that people rely on intuitive rules of thumb rather than comprehensive statistical analysis, resulting in biases that affect everyday decisions from financial choices to professional judgments. The representativeness heuristic involves evaluating the probability of an event or category membership based on how closely it resembles a typical prototype, often neglecting base rates or prior probabilities. For instance, in the classic "lawyer-engineer" problem, participants are told that 70% of a group are engineers and 30% lawyers, then given a description of a person that is neutral or stereotypical; most judge the probability of the person being an engineer based on the description's similarity to an engineer stereotype, ignoring the 70:30 base rate. This leads to base-rate neglect, where essential statistical information is overlooked in favor of superficial resemblance, as shown in experiments where judgments violated Bayesian updating principles. The availability heuristic causes people to estimate event frequencies or probabilities based on the ease with which examples come to mind, rather than objective data. Vivid or recent events are more mentally accessible, leading to overestimation of their likelihood; for example, after the September 11, 2001, terrorist attacks, public fear of flying surged despite statistically lower risks compared to driving, resulting in an estimated 1,500 additional U.S. traffic deaths in the following year as people avoided air travel. This bias is exacerbated by media coverage, which amplifies recall of sensational incidents over mundane but more probable ones. Anchoring and adjustment occurs when decision-makers start from an initial value (the anchor) and make insufficient adjustments to reach a final estimate, even if the anchor is arbitrary. In a seminal experiment, participants spun a roulette wheel rigged to show 10 or 65, then estimated the percentage of African countries in the United Nations; those anchored at 10 guessed around 25%, while those at 65 guessed about 45%, demonstrating how random anchors skew numerical judgments. This affects negotiations, pricing, and forecasting, where initial figures unduly influence outcomes despite their irrelevance. Confirmation bias manifests as a tendency to seek, interpret, or recall information that confirms preexisting beliefs while ignoring disconfirming evidence, hindering objective hypothesis testing. Demonstrated in the Wason selection task, participants are shown cards with a letter on one side and a number on the other, tasked with verifying the rule "if a card has a vowel on one side, it has an even number on the other"; most select cards that could confirm the rule (e.g., the vowel and the even-numbered card) but neglect those that could falsify it (e.g., the odd-numbered card), succeeding only about 10-20% of the time. This bias persists across domains, from scientific inquiry to everyday beliefs, promoting selective evidence gathering. Overconfidence bias refers to unwarranted certainty in one's judgments, where subjective confidence exceeds actual accuracy. Calibration studies reveal that when individuals provide 80% confidence intervals for answers to general-knowledge questions, these intervals contain the true value only about 50% of the time, indicating systematic overprecision.
Research by Sarah Lichtenstein and Baruch Fischhoff showed this effect across trivia and probabilistic forecasts, with experts often more overconfident than novices due to illusions of validity. Such biases contribute to poor decision-making in fields like finance and medicine. These heuristics and biases form the foundation of descriptive models in behavioral decision theory, highlighting deviations from normative rationality without prescribing corrections.
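A short Bayes-rule sketch clarifies what base-rate-respecting judgment would look like in the lawyer-engineer problem; the likelihood values are illustrative assumptions for a description that is equally diagnostic of either profession.

```python
def posterior_engineer(prior_engineer, p_desc_given_engineer, p_desc_given_lawyer):
    """Bayes' rule: P(engineer | description) from the base rate and description likelihoods."""
    prior_lawyer = 1.0 - prior_engineer
    numerator = prior_engineer * p_desc_given_engineer
    return numerator / (numerator + prior_lawyer * p_desc_given_lawyer)

# An uninformative description (equally likely for either profession) should leave the
# judgment at the base rate, yet subjects typically report about 0.5 in both conditions.
print(posterior_engineer(0.70, 0.5, 0.5))  # 0.70 under the 70% engineer base rate
print(posterior_engineer(0.30, 0.5, 0.5))  # 0.30 under the 30% engineer base rate
```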

Decision Contexts

Choices Under Uncertainty

In decision theory, choices under uncertainty arise when the probabilities of outcomes are unknown or ambiguous, distinct from situations of risk where probabilities are objectively known. This distinction was formalized by economist Frank Knight in his 1921 book Risk, Uncertainty and Profit, where risk refers to measurable uncertainties that can be quantified probabilistically, such as through statistical frequencies or odds, while uncertainty involves unmeasurable or subjective probabilities that cannot be reliably estimated, often due to novel or unique events. Under such conditions, decision makers cannot rely on expected utility calculations that assume precise probabilities, prompting alternative normative strategies to guide rational choice. One pessimistic approach is the maximin rule, which selects the action that maximizes the minimum possible payoff, thereby safeguarding against the worst-case scenario. This criterion assumes extreme caution, prioritizing security over potential gains, and is particularly suited to environments where the decision maker believes adverse outcomes are likely. For instance, in planning under deep uncertainty, a planner might choose the option guaranteeing the highest floor level of utility regardless of the state of nature. The rule traces back to statistical decision theory, notably Abraham Wald's work on minimax principles, and is critiqued for being overly conservative in non-hostile settings. Another strategy, minimax regret, addresses the opportunity cost of suboptimal decisions by minimizing the maximum potential regret. Regret for an action is defined as the difference between the payoff of the best action in a given state and the payoff of the chosen action in that state, forming a regret matrix from the original payoff table. The decision maker then selects the action with the smallest maximum regret. Consider a simple example with two actions (invest or not) and two states (boom or recession), yielding payoffs as follows:
Action/State    Boom    Recession
Invest          100     -50
Not Invest      20      10
The regret matrix is constructed by subtracting each payoff from the maximum in its column:
Action/State    Boom Regret    Recession Regret    Max Regret
Invest          0              60                  60
Not Invest      80             0                   80
Here, investing minimizes the maximum regret at 60, making it the preferred action under this rule. This approach, formalized in decision theory texts, balances pessimism with sensitivity to foregone opportunities but can lead to counterintuitive selections when compared to probabilistic methods. Empirical evidence reveals that people often exhibit ambiguity aversion, preferring options with known probabilities over those with ambiguous ones even when expected values are equal, as demonstrated by the Ellsberg paradox. In Ellsberg's 1961 experiment, subjects faced an urn with 90 balls: 30 red and 60 in an unknown mix of black and yellow (known risk for red versus ambiguity for black and yellow). Most preferred betting on red (known 1/3 probability) over black (ambiguous, 1/3 on average), yet also preferred betting on black-or-yellow (known 2/3 probability) over red-or-yellow (ambiguous) in the complementary bets, violating the Savage axioms of subjective expected utility, which require consistent probabilistic beliefs. This paradox highlights how ambiguity (unquantifiable uncertainty) triggers aversion beyond standard risk attitudes. To model such behavior, ambiguity aversion frameworks extend expected utility by incorporating multiple or non-additive probability measures. A seminal model is the maxmin expected utility proposed by Gilboa and Schmeidler in 1989, where the decision maker maximizes over actions the minimum expected utility over a set of possible priors: \max_a \min_{p \in \Pi} \mathbb{E}_p [u(a)]. Here, \Pi represents the set of plausible probability distributions reflecting ambiguity, and u(a) is the utility of outcomes from action a. This captures pessimism by focusing on the worst-case prior within \Pi. Related models employ the Choquet integral to handle non-additive capacities, where beliefs are represented by a capacity function v rather than a probability measure, allowing for ambiguity through sub- or super-additivity; the expected utility becomes \int u \, dv, integrating outcomes weighted by the capacity over events. Schmeidler's 1989 work axiomatizes this Choquet expected utility, providing a foundation for non-probabilistic ambiguity attitudes while preserving continuity and monotonicity.
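The sketch below applies the criteria discussed above to the invest/not-invest table, taking utility as linear in the monetary payoffs; the set of priors used for the Gilboa-Schmeidler maxmin rule is an illustrative assumption.

```python
# Payoffs by action and state, matching the invest / not-invest example above.
payoffs = {
    "invest":     {"boom": 100, "recession": -50},
    "not_invest": {"boom": 20,  "recession": 10},
}
states = ["boom", "recession"]

# Maximin: pick the action with the best worst-case payoff.
maximin_choice = max(payoffs, key=lambda a: min(payoffs[a][s] for s in states))

# Minimax regret: regret = best payoff in the state minus the chosen action's payoff.
best_in_state = {s: max(payoffs[a][s] for a in payoffs) for s in states}
max_regret = {a: max(best_in_state[s] - payoffs[a][s] for s in states) for a in payoffs}
minimax_regret_choice = min(max_regret, key=max_regret.get)

# Gilboa-Schmeidler maxmin expected utility over an assumed set of priors for P(boom).
priors_on_boom = [0.3, 0.5, 0.7]
def worst_case_eu(action):
    return min(p * payoffs[action]["boom"] + (1 - p) * payoffs[action]["recession"]
               for p in priors_on_boom)
maxmin_eu_choice = max(payoffs, key=worst_case_eu)

print(maximin_choice)          # not_invest (worst case 10 vs -50)
print(max_regret)              # {'invest': 60, 'not_invest': 80}
print(minimax_regret_choice)   # invest
print(maxmin_eu_choice)        # not_invest (worst-case expected payoff 13 vs -5)
```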

Intertemporal Choices

Intertemporal choice in decision theory involves evaluating trade-offs between outcomes that occur at different points in time, where individuals must decide whether to prioritize immediate rewards or delay gratification for larger future benefits. This area examines how preferences evolve over time, often revealing inconsistencies that challenge classical models of rational choice. Utility functions over outcomes, extended to temporal dimensions, form the basis for modeling these decisions. The discounted utility (DU) model, introduced by Samuelson in 1937, provides a foundational normative framework for intertemporal choices by assuming that future utilities are discounted exponentially at a constant rate. In this model, total utility U is given by U = \sum_{t=0}^{\infty} \delta^t u(c_t), where u(c_t) is the utility of consumption c_t at time t, and \delta (with 0 < \delta < 1) is the discount factor reflecting time preference. This exponential discounting implies time-consistent preferences; a \delta below 1 captures impatience, whereby immediate outcomes are valued more highly than equivalent delayed ones, leading to lower savings or higher consumption in the present. The model has been widely adopted in economics for its analytical tractability and alignment with rational choice axioms. However, empirical observations often deviate from exponential discounting, prompting the development of hyperbolic discounting models that better capture time-inconsistent preferences. Proposed by Ainslie in 1975, hyperbolic discounting values a reward of amount A delayed by time \tau at V(\tau) = \frac{A}{1 + k \tau}, where k > 0 is a parameter determining the steepness of discounting. Unlike exponential models, hyperbolic discounting produces declining discount rates over longer horizons, resulting in preference reversals: for example, an individual might prefer $100 today over $110 tomorrow but prefer $110 in 31 days over $100 in 30 days, as the relative value of immediacy diminishes. This dynamic inconsistency arises because short-term temptations dominate when decisions are proximate, explaining phenomena like procrastination or inconsistent saving plans. To address time inconsistency, decision theory distinguishes between naive and sophisticated agents in self-control problems. Naive agents fail to anticipate their future selves' inconsistencies and thus do not plan for them, often leading to suboptimal outcomes like repeated preference reversals without corrective action. In contrast, sophisticated agents recognize their future biases and employ game-theoretic strategies to self-regulate, treating future selves as adversaries in a sequential-game framework, as analyzed by Strotz in 1956. Commitment devices, such as Ulysses contracts (precommitments that bind future actions, like automating savings withdrawals), enable sophisticated agents to achieve outcomes closer to their long-term preferences by restricting impulsive choices. Applications of these models extend to savings behavior and addiction, where present bias undermines long-term goals. In savings, hyperbolic discounters may plan to save aggressively but consume more in the present due to time inconsistency, reducing wealth accumulation. Laibson's 1997 quasi-hyperbolic discounting model refines this by applying an additional parameter \beta < 1 to all delayed rewards: immediate utility is weighted fully, while utility t periods in the future is weighted by \beta \delta^t.
This framework explains undersaving in liquid assets and reliance on illiquid ones like retirement accounts as commitment tools, and in addiction models it accounts for cycles of indulgence followed by regret, as immediate rewards are disproportionately valued. Empirical evidence for these concepts comes from studies on delay of gratification, such as the marshmallow experiments conducted by Mischel and colleagues in the early 1970s. In this test, children aged 4-6 were offered a choice between one marshmallow immediately or two if they waited about 15 minutes; the original follow-up suggested that those who delayed longer showed better life outcomes, including higher SAT scores and self-regulation. However, subsequent replications and analyses, such as Watts et al. (2018) and Sperber et al. (2024), have found little evidence for strong long-term predictive validity, attributing much of the original effect to socioeconomic factors rather than delay ability alone. Follow-up research confirmed that attentional strategies, like distracting oneself from the reward, facilitated delay, supporting present-bias models over purely exponential ones.
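The numerical sketch below reproduces the $100-versus-$110 reversal under hyperbolic discounting and shows that exponential discounting cannot generate it; the parameter values k = 0.2 and δ = 0.99 per day are illustrative assumptions.

```python
def hyperbolic_value(amount, delay, k=0.2):
    """Hyperbolic present value V = A / (1 + k * delay)."""
    return amount / (1 + k * delay)

def exponential_value(amount, delay, delta=0.99):
    """Exponential (discounted-utility) present value V = A * delta**delay."""
    return amount * delta**delay

# Near choice: $100 now vs. $110 tomorrow -- the hyperbolic agent grabs the $100.
print(hyperbolic_value(100, 0) > hyperbolic_value(110, 1))    # True
# Distant choice: $100 in 30 days vs. $110 in 31 days -- the same agent now prefers to wait.
print(hyperbolic_value(100, 30) > hyperbolic_value(110, 31))  # False: preference reversal

# Exponential discounting ranks the pair identically at both horizons (no reversal).
print(exponential_value(100, 0) > exponential_value(110, 1))    # False
print(exponential_value(100, 30) > exponential_value(110, 31))  # False
```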

Interactive and Complex Decisions

Multi-Agent Interactions

Multi-agent interactions in decision theory examine situations where the outcomes of a decision depend not only on an individual's choices but also on the actions of other agents, introducing strategic interdependence. This framework, central to game theory, models how rational agents anticipate and respond to others' decisions, often leading to equilibria where no agent benefits from unilateral deviation. Unlike single-agent decisions under uncertainty, where uncertainty arises from nature's moves, multi-agent settings involve strategic uncertainty about opponents' potential strategies. Normal-form games represent these interactions through payoff matrices that specify each agent's possible strategies and the resulting payoffs for all players. In a normal-form game, players simultaneously choose actions without observing others' choices, and the payoff for each player depends on the strategy profile selected by all. A Nash equilibrium emerges as a strategy profile where each player's strategy is a best response to the strategies of others, ensuring mutual optimality given the fixed choices. John Nash proved the existence of at least one such equilibrium, in mixed strategies if necessary, for finite games. Games are classified as zero-sum or non-zero-sum based on whether one player's gains equal another's losses. In zero-sum games, the total payoff is fixed, leading to pure antagonism; John von Neumann's minimax theorem guarantees a value v such that the row player can secure at least v by choosing \max_{\sigma} \min_{\tau} u(\sigma, \tau), while the column player can limit the row player to at most v via \min_{\tau} \max_{\sigma} u(\sigma, \tau), with equality holding in mixed strategies. Non-zero-sum games allow for mutual gains or losses, enabling cooperation but also defection incentives, as payoffs sum to a variable total. The prisoner's dilemma exemplifies a non-zero-sum game with a suboptimal Nash equilibrium. Two suspects, interrogated separately, each choose to confess (defect) or remain silent (cooperate). The payoff matrix is:
                     Cooperate (Silent)    Defect (Confess)
Cooperate (Silent)   (-1, -1)              (-3, 0)
Defect (Confess)     (0, -3)               (-2, -2)
Here, payoffs represent years in prison (lower is better). Defecting is the dominant strategy for each, yielding (-2, -2), despite mutual cooperation offering the collectively superior (-1, -1). This structure illustrates social dilemmas where individual rationality leads to collective inefficiency. In repeated games, agents interact over multiple periods, allowing history-dependent strategies to sustain cooperation beyond one-shot outcomes. The folk theorem states that, for sufficiently patient players (high discount factor), any feasible payoff profile above the minmax level can be sustained as an equilibrium payoff, often through trigger strategies that reward cooperation and punish deviation. This result highlights how repetition fosters outcomes closer to joint optimization in games like the prisoner's dilemma. Bayesian games extend the framework to incomplete information, where players hold private types (e.g., valuations or costs) drawn from probability distributions, and strategies condition on beliefs about others' types. A perfect Bayesian equilibrium refines this by requiring sequential rationality: at every information set, players' actions maximize expected utility given updated beliefs consistent with prior probabilities and equilibrium strategies. John C. Harsanyi introduced the framework of Bayesian games in 1967–1968, incorporating private types drawn from probability distributions and allowing strategies to depend on beliefs about others' types, analyzed via Bayesian Nash equilibria on an expanded type space.
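A brute-force check of the payoff matrix above confirms that mutual defection is the unique pure-strategy Nash equilibrium; the sketch simply tests every profile for profitable unilateral deviations.

```python
from itertools import product

# Payoffs (years in prison, as negative utilities), keyed by (row action, column action).
payoffs = {
    ("cooperate", "cooperate"): (-1, -1),
    ("cooperate", "defect"):    (-3,  0),
    ("defect",    "cooperate"): ( 0, -3),
    ("defect",    "defect"):    (-2, -2),
}
actions = ["cooperate", "defect"]

def is_nash(row, col):
    """A profile is a pure Nash equilibrium if neither player gains by deviating alone."""
    row_ok = all(payoffs[(row, col)][0] >= payoffs[(r, col)][0] for r in actions)
    col_ok = all(payoffs[(row, col)][1] >= payoffs[(row, c)][1] for c in actions)
    return row_ok and col_ok

equilibria = [profile for profile in product(actions, actions) if is_nash(*profile)]
print(equilibria)  # [('defect', 'defect')]: unique equilibrium, worse for both than mutual cooperation
```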

Multi-Attribute and Dynamic Decisions

Multi-attribute utility theory (MAUT) extends expected utility theory to decisions involving multiple, often conflicting, objectives by constructing a utility function that aggregates preferences across attributes. Under mutual preferential independence, the overall utility u(\mathbf{x}) for an outcome \mathbf{x} = (x_1, \dots, x_n) is typically additive: u(\mathbf{x}) = \sum_{i=1}^n w_i u_i(x_i), where u_i is the single-attribute utility for the i-th criterion and w_i are scaling weights summing to 1 that reflect relative importance. This decomposition relies on assessing weights through methods like direct weighting or indifference elicitation, enabling rational choice by maximizing the composite utility. The Analytic Hierarchy Process (AHP), developed by Thomas Saaty, provides a structured methodology for multi-attribute decisions by decomposing the problem into a hierarchy of goals, criteria, and alternatives, then deriving priorities via pairwise comparisons. Decision-makers rate the relative importance of elements on a 1-9 scale, forming reciprocal matrices whose principal eigenvectors yield normalized weights via the eigenvector method, with consistency checked through a consistency ratio based on the principal eigenvalue. AHP handles both tangible and intangible factors, making it suitable for complex prioritization where attributes are not easily quantified. Dynamic programming addresses sequential decisions in evolving environments, particularly Markov decision processes (MDPs), where the state transitions probabilistically based on actions. The value function V(s) for state s satisfies Bellman's equation: V(s) = \max_a \left[ r(s,a) + \gamma \sum_{s'} P(s'|s,a) V(s') \right], with r(s,a) as the immediate reward, \gamma the discount factor, and P the transition probabilities; solutions iterate via value or policy iteration to find the optimal policy. This approach optimizes long-term expected reward in dynamic settings by backward induction over stages. MAUT and AHP find applications in public-sector planning, such as prioritizing infrastructure investments by balancing cost, environmental impact, and social benefits, while dynamic programming aids in optimizing decision sequences like inventory management under uncertain demand. In environmental policy, these methods support trade-off analysis, for instance in cost-benefit assessments for pollution control strategies that weigh economic costs against health and ecological gains. However, dynamic programming faces the curse of dimensionality, where the state space grows exponentially with the number of state variables, rendering exact solutions infeasible for high-dimensional problems like large-scale resource planning.
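To show the Bellman equation in action, the sketch below runs value iteration on a small two-state, two-action MDP; the states, rewards, and transition probabilities are invented for illustration and stand in for a toy inventory-style problem.

```python
# A minimal value-iteration sketch for a hypothetical two-state MDP.
states = ["low_demand", "high_demand"]
actions = ["hold", "restock"]
gamma = 0.9  # discount factor

# transitions[s][a] = list of (probability, next_state); rewards[s][a] = immediate reward
transitions = {
    "low_demand":  {"hold":    [(0.8, "low_demand"), (0.2, "high_demand")],
                    "restock": [(0.5, "low_demand"), (0.5, "high_demand")]},
    "high_demand": {"hold":    [(0.6, "low_demand"), (0.4, "high_demand")],
                    "restock": [(0.1, "low_demand"), (0.9, "high_demand")]},
}
rewards = {
    "low_demand":  {"hold": 0.0, "restock": -2.0},
    "high_demand": {"hold": 5.0, "restock": 8.0},
}

def q_value(s, a, V):
    """One-step lookahead: r(s, a) + gamma * sum_s' P(s'|s, a) * V(s')."""
    return rewards[s][a] + gamma * sum(p * V[s2] for p, s2 in transitions[s][a])

V = {s: 0.0 for s in states}
for _ in range(500):  # iterate the Bellman update until it has effectively converged
    V = {s: max(q_value(s, a, V) for a in actions) for s in states}

policy = {s: max(actions, key=lambda a, s=s: q_value(s, a, V)) for s in states}
print(V)       # converged state values
print(policy)  # greedy policy with respect to the converged values
```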

Alternatives and Criticisms

Non-Probabilistic Approaches

Non-probabilistic approaches in decision theory provide frameworks for reasoning under uncertainty that deviate from the additive probability measures of Bayesian methods, often to handle incomplete information, vagueness, or non-additive beliefs more flexibly. These alternatives address scenarios where assigning precise probabilities is impractical or undesirable, such as when evidence is partial or preferences are imprecise, by employing structures like belief functions, fuzzy memberships, similarity relations, worst-case scenarios, or quantum-inspired interference effects. Unlike standard probabilistic models, which rely on a full probability distribution over states, these methods prioritize evidential support, graded memberships, or robust guarantees without assuming probabilistic independence or additivity. Recent developments as of 2025 include hybrid quantum-classical models that integrate quantum interference with classical decision processes to better explain human cognition in uncertain environments. Dempster-Shafer theory, also known as evidence theory, models uncertainty using belief functions that assign masses to subsets of possible states, enabling the combination of evidence from multiple sources without committing to full probabilistic additivity. A belief function \operatorname{Bel}(A) represents the total evidence for a set A as the sum of basic probability assignments to subsets contained within A, yielding upper and lower probabilities \overline{P}(A) = 1 - \operatorname{Bel}(\overline{A}) and \underline{P}(A) = \operatorname{Bel}(A) that bound plausible probability intervals. This approach, foundational in Dempster's work on multivalued mappings and formalized by Shafer, facilitates decision-making in expert systems and risk assessment by accommodating ignorance explicitly, as uncommitted mass can remain on the universal set. Fuzzy decision theory extends classical decision-making to environments with imprecise or linguistic information by representing alternatives and criteria via fuzzy sets, where membership functions \mu(x) \in [0,1] capture degrees of belonging rather than binary truths. Introduced by Zadeh for fuzzy sets and applied to decisions through max-min compositions, in which the strength of a decision rule is the minimum of antecedent and consequent memberships, this framework handles vague preferences, such as "somewhat preferable," in multi-criteria problems like supplier selection or policy evaluation. Bellman and Zadeh's seminal model integrates fuzzy goals and constraints by maximizing the intersection of membership functions, providing a non-probabilistic basis for optimizing under vagueness without distributional assumptions. Case-based decision theory posits that choices under uncertainty arise from recalling and generalizing past cases based on similarity, bypassing explicit probability assignments in favor of aspiration levels and comparative evaluations. In Gilboa and Schmeidler's model, an act is evaluated by the similarity-weighted sum of utilities from past cases, where similarity s(\mathbf{x}, \mathbf{x'}) decreases with distance in problem-state space, and decisions aim to exceed an aspiration threshold derived from historical outcomes. This approach, axiomatized for uniqueness, explains phenomena like reference dependence and context effects in economic choices without probabilistic beliefs, applying to consumer behavior and legal reasoning where precedents guide non-quantified judgments.
Robust optimization in decision theory focuses on solutions that perform well against the worst-case realization of uncertainty, formulating problems as min-max optimizations over ambiguity sets without relying on probabilistic distributions. For instance, in an uncertain linear program \min_x \max_{u \in \mathcal{U}} \left( c^\top x + u^\top A x \right), the decision x hedges against adversarial perturbations u within a bounded uncertainty set \mathcal{U}, ensuring feasibility and bounded regret in applications like engineering design and portfolio management. Pioneered in operations research for static and adjustable decisions, this method prioritizes distributional robustness over expected utility, offering guarantees such as achieving at least a specified fraction \alpha of the optimal performance in hindsight. Quantum decision theory models preference reversals and conjunction fallacies using Hilbert-space representations, where probabilities emerge from projections and interference terms capture non-commutative belief updates. In this framework, beliefs are vectors in a complex Hilbert space, with subjective probabilities P(A) = \langle \psi | P_A | \psi \rangle and an interference term q(A,B) = 2 \operatorname{Re} \langle \psi | P_B P_A (I - P_B) | \psi \rangle that explains violations of classical additivity, such as order effects in surveys. Developed after 2000 to reconcile decision theory with cognitive paradoxes, it applies to dynamic choices in finance and psychology, treating beliefs as superpositions rather than additive measures; recent extensions as of 2025 explore applications in computational cognitive science and hybrid models.
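The short sketch below computes belief and plausibility for a three-element frame of discernment with illustrative (assumed) mass assignments, showing how the interval [Bel(A), Pl(A)] brackets the unknown probability of A while leaving ignorance on the full frame.

```python
def belief(mass, event):
    """Bel(A): total mass committed to focal sets contained in A."""
    return sum(m for focal, m in mass.items() if focal <= event)

def plausibility(mass, event):
    """Pl(A) = 1 - Bel(complement of A): total mass not ruling A out."""
    return sum(m for focal, m in mass.items() if focal & event)

# Frame of discernment {a, b, c}; mass left on the whole frame represents ignorance.
frame = frozenset({"a", "b", "c"})
mass = {
    frozenset({"a"}):      0.5,  # evidence pointing specifically to a
    frozenset({"b", "c"}): 0.2,  # evidence for b-or-c without distinguishing them
    frame:                 0.3,  # uncommitted (ignorant) mass
}

A = frozenset({"a"})
print(belief(mass, A), plausibility(mass, A))  # 0.5 0.8: the interval [0.5, 0.8] bounds P(a)
```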

Key Limitations and Fallacies

Decision theory, while foundational for rational choice under uncertainty, faces significant limitations stemming from its idealized assumptions about human cognition, probabilistic modeling, and ethical neutrality. One prominent critique is the ludic fallacy, which describes the error of treating real-world decision problems as if they were structured games with well-defined rules and probabilities, thereby underestimating the impact of rare, unpredictable events known as "black swans" that follow fat-tailed distributions. This fallacy arises because traditional decision models, such as expected utility theory, rely on known payoff structures and ignore the opacity and non-stationarity of complex systems like financial markets or geopolitical events. Another key limitation involves the problem of interpersonal comparability in utility functions, where aggregating individual utilities for social decisions requires interpersonal comparisons that cannot be made objectively without invoking ethical assumptions. This issue is highlighted by Arrow's impossibility theorem, which demonstrates that no social welfare function can simultaneously satisfy basic fairness conditions like non-dictatorship, Pareto efficiency, and independence of irrelevant alternatives when interpersonal comparisons are infeasible. As a result, decision theory struggles to provide a neutral framework for collective choices, often leading to arbitrary or value-laden resolutions. Bounded rationality further undermines the theory's prescriptive power by recognizing that human decision-makers operate under severe cognitive and informational constraints, making full optimization computationally infeasible. Introduced by Herbert Simon, this concept posits that agents pursue satisficing, selecting satisfactory rather than maximally optimal options, due to limited time, attention, and processing capacity. Heuristics serve as partial adaptive responses to these bounds, enabling faster decisions but introducing systematic deviations from theoretical rationality. Ethical critiques highlight how expected utility maximization prioritizes aggregate outcomes over their distribution, neglecting equity considerations in policy-making. For instance, it may endorse policies that benefit the majority at the expense of the worst-off, contrasting with Rawlsian maximin principles that advocate maximizing the minimum welfare level to ensure fairness under a veil of ignorance. This oversight renders decision theory ethically incomplete for applications in public health or social welfare, where fairness demands protecting vulnerable groups rather than solely optimizing expected gains. Finally, decision theory exhibits gaps in addressing contemporary computational paradigms, with limited integration of post-2010s advancements in machine learning, such as reinforcement learning, which extends Bayesian decision frameworks to dynamic, sequential environments through trial-and-error optimization. Traditional models have not fully incorporated these tools, leaving them less applicable to high-dimensional problems like autonomous systems or adaptive control, where empirical learning from interaction outperforms static probabilistic assumptions, though recent Bayesian RL surveys as of 2024 highlight growing convergence.

    Dec 1, 2023 · It is common to use minimax rules to make planning decisions when there is great uncertainty about what may happen in the future.
  47. [47]
    Risk, Ambiguity, and the Savage Axioms - jstor
    F. P. Ramsey, "Truth and Probability" (1926) in The Foundations of. Mathematics and Other Logical Essays, ed. R. B. Braithwaite (New York: Har- court Brace ...
  48. [48]
    Maxmin expected utility with non-unique prior - ScienceDirect.com
    1989, Pages 141-153. Journal of Mathematical Economics. Maxmin expected utility with non-unique prior☆. Author links open overlay panelItzhak Gilboa, David ...
  49. [49]
    Modeling attitudes towards uncertainty and risk through the use of ...
    The aim of this paper is to present in a unified framework a survey of some results related to Choquet Expected Utility (CEU) models.
  50. [50]
    the folk theorem in repeated games with discounting or with ... - jstor
    [8] FUDENBERG, D., AND E. MASKIN: "Nash and Perfect Equilibrium Payoffs in Discounted. Repeated Games," mimeo, Harvard University, 1986. [9] : " ...
  51. [51]
    Perfect Bayesian equilibrium and sequential equilibrium
    6. J Harsanyi. Games of incomplete information played by Bayesian players. Management Sci., 14 (1967), ...Missing: citation | Show results with:citation
  52. [52]
    The analytic hierarchy process—what it is and how it is used
    Here we introduce the Analytic Hierarchy Process as a method of measurement with ratio scales and illustrate it with two examples.
  53. [53]
    [PDF] THE THEORY OF DYNAMIC PROGRAMMING - Richard Bellman
    To begin with, the theory was created to treat the mathe- matical problems arising from the study of various multi-stage decision processes, which may roughly ...
  54. [54]
    A review of 20-year applications of multi-attribute decision-making in ...
    Feb 18, 2021 · A review of 20-year applications of multi-attribute decision-making in environmental and water resources planning and management. Review ...
  55. [55]
    Decision-Making in a Fuzzy Environment - jstor
    By decision-making in a fuzzy environment is meant a decision process in which the goals and/or the constraints, but not necessarily the system under ...
  56. [56]
    Case-Based Decision Theory* | The Quarterly Journal of Economics
    This paper suggests that decision-making under uncertainty is, at least partly, case-based. We propose a model in which cases are primitive.
  57. [57]
    Quantum probability and quantum decision-making - Journals
    Jan 13, 2016 · Our proposed definition of quantum probability makes it possible to describe quantum measurements and quantum decision-making on the same common mathematical ...Quantum Probability And... · Abstract · (b) Quantum Logic Of Events<|control11|><|separator|>
  58. [58]
    [PDF] INTERPERSONAL COMPARISONS OF UTILITY - Stanford University
    Interpersonal comparisons of utility (ICU's) have to be made if there is to be any satisfactory escape from Arrow's impossibility theorem, with its implication.
  59. [59]
    Bounded Rationality - Stanford Encyclopedia of Philosophy
    Nov 30, 2018 · Herbert Simon introduced the term 'bounded rationality' (Simon 1957b: 198; see also Klaes & Sent 2005) as shorthand for his proposal to ...The Emergence of Procedural... · The Emergence of Ecological...
  60. [60]
    Using large-scale experiments and machine learning to ... - Science
    Jun 11, 2021 · Peterson et al. leverage machine learning to evaluate classical decision theories, increase their predictive power, and generate new theories of decision- ...