Discrete choice

Discrete choice refers to a class of statistical models used to analyze and predict decisions made by individuals or entities among a finite set of mutually exclusive and collectively exhaustive alternatives, such as selecting a transportation mode, a product brand, or a healthcare provider. These models are grounded in the theory of random utility maximization, where the chosen alternative is assumed to provide the highest utility to the decision-maker, with utility comprising an observable component (based on attributes of alternatives and individual characteristics) and an unobserved random component capturing idiosyncratic preferences or measurement errors. The probability of selecting a particular alternative is derived as an integral over the distribution of these random components, often requiring simulation methods in complex cases.

The theoretical foundation of discrete choice models traces back to early work in psychology and psychophysics, with Louis Leon Thurstone introducing the binary probit model in 1927 to represent choices as comparisons of latent utilities disturbed by errors. In the 1960s and 1970s, economists like Jacob Marschak adapted these ideas to economic contexts, emphasizing utility maximization under uncertainty. The modern framework was revolutionized by Daniel McFadden, who in 1973-1974 established the connection between multinomial logit models and the extreme value distribution of errors, proving the global concavity of the log-likelihood function for efficient estimation and linking the models rigorously to random utility theory. McFadden's contributions earned him the Nobel Memorial Prize in Economic Sciences in 2000 for developing theory and methods for analyzing discrete choice, transforming the field from ad hoc probabilistic approaches to a unified econometric paradigm.

Key variants of discrete choice models include the multinomial logit, which assumes independence of irrelevant alternatives (IIA) and yields closed-form choice probabilities as P_{nj} = \frac{\exp(V_{nj})}{\sum_k \exp(V_{nk})}, where V is the systematic utility; the multinomial probit, using normally distributed errors for correlated alternatives; and nested logit or generalized extreme value models, which relax IIA within groups of similar options. Advanced extensions, such as mixed logit and generalized multinomial logit, incorporate unobserved heterogeneity by allowing parameters to vary randomly across individuals, often estimated via simulation-based maximum likelihood to handle integration over high-dimensional distributions. Data for these models come from revealed preferences (observed behaviors) or stated preferences (hypothetical scenarios), enabling predictions of choice probabilities as functions of attributes like price, quality, or socioeconomic factors.

Discrete choice models find broad applications across disciplines, including transportation economics for forecasting mode shares and ridership (e.g., predicting BART usage with 6.3% market share versus actual 6.2%), marketing for brand selection and willingness-to-pay estimation, and health economics for analyzing healthcare decisions like rural practice preferences among physicians. In environmental and energy economics, they assess consumer responses to pricing or efficiency standards, such as fuel type choices; in labor economics, they model job or migration decisions; and in urban economics, they evaluate residential location or site selections. These applications often involve welfare analysis, computing changes in consumer surplus as \Delta CS = \frac{1}{\alpha} \ln \left( \frac{\sum_j \exp(V_{nj}')}{\sum_j \exp(V_{nj})} \right), where \alpha is the marginal utility of income, to inform policy impacts.

Fundamentals

Definition and Overview

Discrete choice models are statistical frameworks used to predict and analyze decisions in which individuals or entities select one option from a finite set of mutually exclusive alternatives, incorporating attributes of the alternatives and characteristics of the decision-makers as well as unobserved factors that introduce randomness into the choice process. These models are grounded in economic theory and are particularly suited for scenarios where outcomes are categorical rather than numerical, such as selecting a transportation mode or a product brand.

The foundations of discrete choice modeling emerged in the field of econometrics during the 1960s and 1970s, building on earlier work in psychology and transportation research. A pivotal contribution came from economist Daniel McFadden, whose development of theory and methods for analyzing discrete choice was recognized with the Nobel Memorial Prize in Economic Sciences in 2000, jointly awarded with James Heckman for their contributions to microeconometrics. McFadden's innovations, including the conditional logit model, provided rigorous tools to estimate choice probabilities from observed data, transforming how economists and social scientists model individual behavior.

In contrast to continuous choice models, such as linear regression, which predict unbounded numerical outcomes like prices or quantities, discrete choice models address selections among distinct, ordered or unordered categories without imposing an inherent ranking unless explicitly modeled, such as in ordered logit models for Likert scales. At their core lies the random utility maximization (RUM) framework, where the utility U_{ij} that individual i derives from alternative j is expressed as the sum of a deterministic component V_{ij}, which captures observable influences like cost and other attributes, and a stochastic error term \varepsilon_{ij} representing unobserved heterogeneity: U_{ij} = V_{ij} + \varepsilon_{ij}. The individual chooses alternative j if U_{ij} > U_{ik} for all other alternatives k \neq j. This setup assumes that decision-makers are rational maximizers who select the option providing the highest perceived utility, with error terms typically assumed to be independently and identically distributed (IID) across alternatives to derive tractable probabilistic predictions, though relaxations exist for more complex dependencies. A classic example is modeling travel mode selection, where a commuter chooses among car, bus, or train based on factors like travel time, cost, and comfort; the model estimates the likelihood of each mode being selected by incorporating these attributes into the utility function while accounting for random tastes through the error term.
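The choice rule can be made concrete with a small simulation. The sketch below is illustrative only: the mode attributes, taste coefficients, and the Gumbel error assumption are all hypothetical. It draws random utilities for car, bus, and train, tallies how often each mode maximizes utility, and compares the simulated shares with the closed-form logit probabilities implied by the same error assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical systematic utilities V_ij = ASC + beta_time*time + beta_cost*cost
modes = ["car", "bus", "train"]
time = np.array([30.0, 45.0, 40.0])   # minutes (assumed)
cost = np.array([4.0, 2.0, 3.0])      # dollars (assumed)
asc = np.array([0.0, -0.5, -0.3])     # alternative-specific constants (assumed)
beta_time, beta_cost = -0.05, -0.30   # illustrative taste parameters

V = asc + beta_time * time + beta_cost * cost

# Random utility maximization: add IID Gumbel errors and pick the argmax
n_draws = 100_000
eps = rng.gumbel(size=(n_draws, 3))
choices = np.argmax(V + eps, axis=1)
sim_shares = np.bincount(choices, minlength=3) / n_draws

# Closed-form logit probabilities implied by the same Gumbel assumption
logit_probs = np.exp(V) / np.exp(V).sum()

for m, s, p in zip(modes, sim_shares, logit_probs):
    print(f"{m:>5}: simulated share {s:.3f}, logit probability {p:.3f}")
```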

Choice Sets and Alternatives

In discrete choice models, the choice set refers to the finite collection of mutually exclusive alternatives available to a decision-maker at a given point in time, ensuring that the options are exhaustive and can be explicitly enumerated. These alternatives must cover all possible decisions without overlap, such as redefining bundled options (e.g., "gas alone" versus "electricity alone" for household heating) to maintain mutual exclusivity. The universal choice set encompasses all theoretically possible alternatives in a given context, providing an exhaustive framework that includes even unlikely options to ensure completeness. In contrast, individual choice sets are often subsets of the universal set, tailored to specific decision-makers based on factors like availability, awareness, or personal constraints; for instance, a household may exclude certain heating fuels if it is not connected to the relevant supply network. This variation allows models to reflect realistic heterogeneity, where the effective options differ across individuals while still summing to unity in probability terms.

Each alternative in the choice set is characterized by intrinsic attributes, such as price, quality, or travel time, which directly influence the decision-maker's evaluation. These attributes can interact with individual-specific characteristics, such as income modulating the perceived importance of cost, thereby personalizing the assessment within the broader utility maximization framework. When choice sets are incomplete due to unavailable alternatives, models address this through methods like excluding non-viable options to normalize probabilities or employing sampling techniques to approximate the full set efficiently. For large universal sets, subset sampling—leveraging properties like the independence of irrelevant alternatives (IIA)—allows estimation using a representative portion of alternatives, including the chosen one, while maintaining consistency. Inclusive value corrections, often used in nested structures, further adjust for unobserved subsets by incorporating log-sum terms that capture expected utility from excluded options, preventing biased substitution patterns.

A representative example occurs in transportation mode choice, where the choice set might include walking, biking, driving, or public transit, with exclusions applied based on contextual factors like car ownership or weather conditions that render certain modes unavailable to specific individuals. Challenges arise from the endogeneity of choice sets, where self-selection—such as individuals opting into options based on unobserved preferences or constraints—can correlate availability with unobservables, leading to biased estimates if unaddressed. Models must also accommodate dynamic or context-dependent sets, formed through processes like sequential search or external influences (e.g., advertising), which alter availability over time or across situations. Robust approaches, such as those allowing arbitrary dependence between choice sets and preferences, help mitigate these issues without restrictive assumptions.
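The sampling-of-alternatives idea mentioned above can be sketched in a few lines. The helper below is a hypothetical illustration (the function name and set sizes are invented): it builds an estimation choice set that always contains the chosen alternative plus a uniform random sample of non-chosen ones, under which the MNL sampling correction terms are identical across alternatives and cancel.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_choice_set(chosen, universal_size, n_sampled):
    """Build an estimation choice set containing the chosen alternative
    plus a simple random sample of non-chosen alternatives.

    Under uniform sampling the MNL correction term ln(q_j) is the same for
    every alternative and cancels out of the probabilities; non-uniform
    sampling protocols would require adding ln(q_j) to each V_j.
    """
    others = [j for j in range(universal_size) if j != chosen]
    sampled = rng.choice(others, size=n_sampled - 1, replace=False)
    return np.concatenate(([chosen], sampled))

# Example: destination choice among 1,000 zones, estimated with 10 alternatives per observation
choice_set = sample_choice_set(chosen=42, universal_size=1000, n_sampled=10)
print(choice_set)
```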

Utility Maximization Framework

The random utility maximization (RUM) paradigm forms the foundational principle of discrete choice models, under which decision-makers are assumed to select the alternative that yields the highest utility from a finite set of options, with utility itself being a latent construct that is only partially observable to the analyst. This framework, originally formalized in econometric analyses of qualitative choices, posits that observed choices reveal preferences through the maximization of this unobserved utility function. Utility for individual i from alternative j, denoted U_{ij}, is decomposed into a deterministic systematic component V_{ij} and a stochastic error term \varepsilon_{ij}, such that U_{ij} = V_{ij} + \varepsilon_{ij}. The systematic component V_{ij} represents factors observable to the researcher and is typically specified as a linear function of alternative-specific attributes x_{ij} (e.g., price or travel time) and individual-specific socioeconomic characteristics z_i (e.g., income or age), given by V_{ij} = \beta' x_{ij} + \alpha' z_i, where \beta and \alpha are parameters to be estimated that capture the marginal utility of these attributes. The random error term \varepsilon_{ij} encapsulates all unobserved influences on utility, including idiosyncratic tastes, measurement inaccuracies, or omitted variables affecting the choice.

The distribution of the error terms \varepsilon_{ij} is a critical assumption that determines the form of the choice model; common specifications include the type I extreme value (Gumbel) distribution for logit models, which ensures closed-form probability expressions, or the normal distribution for probit models, which allows for more flexible correlations among errors but requires numerical integration. These errors introduce randomness into the model, reflecting the analyst's incomplete information about the decision process. Because the full utility U_{ij} remains unobserved due to \varepsilon_{ij}, RUM models cannot predict individual choices deterministically but instead generate probabilities of choice at the population level, aggregating over the distribution of errors across decision-makers. Sources of heterogeneity in preferences are twofold: observed heterogeneity, incorporated through individual covariates in z_i to account for systematic differences across people, and unobserved heterogeneity, which arises from the stochastic nature of \varepsilon_{ij} or, in more advanced specifications, from random parameters that vary across individuals according to a distribution (e.g., normal or lognormal). As an illustrative example, consider a job choice scenario where individual i evaluates multiple employment options; the systematic utility V_{ij} might depend on observable attributes like salary and commute distance for job j, while \varepsilon_{ij} captures unmeasured factors such as intrinsic job satisfaction or workplace culture.

Defining Choice Probabilities

In the random utility maximization (RUM) framework, the choice probability for alternative j by decision-maker i, denoted P_{ij}, is defined as the probability that the utility of alternative j exceeds the utility of all other alternatives in the choice set:
P_{ij} = \Pr(U_{ij} > U_{ik} \ \forall k \neq j).
This formulation captures the probabilistic nature of choices arising from unobserved components of utility, assuming decision-makers select the alternative providing the highest utility.
Mathematically, P_{ij} can be expressed in integral form over the joint distribution of the random error terms \varepsilon:
P_{ij} = \int I(U_{ij} > U_{ik} \ \forall k \neq j) \, f(\varepsilon) \, d\varepsilon,
where I(\cdot) is the indicator function that equals 1 if the condition holds and 0 otherwise, and f(\varepsilon) is the joint density of the errors \varepsilon_{i1}, \dots, \varepsilon_{iJ}. Closed-form expressions for this probability emerge only under specific distributional assumptions for the errors, such as the extreme value distribution or other particular parametric forms; otherwise, it requires numerical integration or simulation. The observable components of utility, captured in the systematic part V_{ij}, influence probabilities solely through differences across alternatives, i.e., P_{ij} depends on V_{ij} - V_{ik} for k \neq j, ensuring invariance to additive shifts in utility levels.
This probability P_{ij} is interpreted as the expected share of the population—or a representative sample of decision-makers under similar observable conditions—who would choose alternative j; when aggregated across individuals, it corresponds to observed market shares or choice frequencies in data. For a simple two-alternative case (e.g., choosing between options 1 and 2), the probability simplifies to P_{i1} = F_{\varepsilon}(V_{i1} - V_{i2}), where F_{\varepsilon} is the cumulative distribution function of the difference \varepsilon_{i2} - \varepsilon_{i1}. This highlights how relative advantages in systematic utility translate into choice likelihoods. The framework assumes no ties in utility, meaning the probability of exact equality U_{ij} = U_{ik} is zero; if ties occur with positive probability, they can be handled by randomizing the choice among tied alternatives, though this is rarely emphasized in standard derivations.
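The integral above can be approximated by simulation when no closed form exists. The sketch below uses hypothetical utilities and assumes IID standard normal errors purely for illustration: it estimates P_ij by drawing errors and counting how often alternative j attains the maximum utility, and checks the binary case against the analytic CDF formula (the difference of two independent standard normals is N(0, 2)).

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
V = np.array([1.0, 0.5, 0.2])                  # hypothetical systematic utilities

# Monte Carlo evaluation of P_ij = Pr(V_j + eps_j > V_k + eps_k for all k != j)
n_draws = 200_000
eps = rng.standard_normal(size=(n_draws, 3))   # IID N(0,1) errors (assumed)
probs_mc = np.bincount(np.argmax(V + eps, axis=1), minlength=3) / n_draws
print("simulated probabilities:", probs_mc.round(3))

# Binary case: P_1 = F(V_1 - V_2), where F is the CDF of eps_2 - eps_1 ~ N(0, 2)
V1, V2 = 1.0, 0.5
p1_analytic = norm.cdf((V1 - V2) / np.sqrt(2.0))
p1_mc = np.mean(V1 + rng.standard_normal(n_draws) > V2 + rng.standard_normal(n_draws))
print(f"binary P_1: analytic {p1_analytic:.3f}, simulated {p1_mc:.3f}")
```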

Key Properties

Utility Differences and Invariance

In discrete choice models based on the random utility maximization (RUM) framework, absolute levels of utility are unidentifiable because choice probabilities depend solely on the differences between utilities across alternatives. For an individual n choosing among alternatives j and k, the probability of selecting j over k is determined by \Delta U_{nj} = U_{nj} - U_{nk}, where U_{nj} = V_{nj} + \varepsilon_{nj} represents the total utility of alternative j, with V_{nj} as the systematic component and \varepsilon_{nj} as the unobserved error. This principle implies that models estimate relative preferences rather than absolute values, aligning with the ordinal nature of choice behavior in economic theory.

A key consequence is the invariance of probabilities to certain transformations of the utility function. Adding a constant to all utilities across alternatives does not alter the differences \Delta U_{nj}, and thus leaves probabilities unchanged; similarly, multiplying all utilities by a positive scalar preserves the ranking of alternatives without affecting choices. This invariance ensures that the model's predictions remain robust to arbitrary shifts or positive scalings in the utility specification, focusing on economically meaningful relative effects. These properties have direct implications for model specification, particularly regarding alternative-specific constants (ASCs), which capture unobserved mean differences in utilities across options. Since ASCs are defined only up to an additive constant, just J-1 of them are identifiable in a model with J alternatives, and one is conventionally normalized to zero to achieve identification. For instance, in a transportation mode choice model, the ASC for driving might be set to zero, allowing estimation of ASCs for bus and train relative to it.

Consider a brand choice example where utilities for two brands include an income effect: U_{1} = \beta_1 x_1 + \gamma \cdot \text{income} + \varepsilon_1 and U_{2} = \beta_2 x_2 + \gamma \cdot \text{income} + \varepsilon_2. The income term cancels in the difference \Delta U = (\beta_1 x_1 - \beta_2 x_2) + (\varepsilon_1 - \varepsilon_2), so choices depend only on attribute differences, not income levels. This focus on utility differences ensures consistency with revealed preference theory, where observed choices reveal comparative advantages between alternatives rather than absolute levels, grounding discrete choice models in economic principles of utility maximization. However, identification of these differences requires sufficient variation in attributes across alternatives in the data; without it, parameters may not be uniquely recoverable, leading to estimation challenges.
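A quick numerical check of this invariance, using logit probabilities and made-up utilities, shows that adding the same constant to every alternative (for example, a common income term) leaves choice probabilities untouched:

```python
import numpy as np

def logit_probs(V):
    """Multinomial logit probabilities from systematic utilities."""
    expV = np.exp(V - V.max())        # subtract the max for numerical stability
    return expV / expV.sum()

V = np.array([0.8, 0.3, -0.2])        # hypothetical brand utilities
income_effect = 1.7                   # gamma * income, identical for all brands (assumed)

print(logit_probs(V))                 # baseline probabilities
print(logit_probs(V + income_effect)) # identical: the common term cancels in differences
```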

Scale Normalization

In discrete choice models, the scale of the utility function is arbitrary because the unobserved error terms \varepsilon introduce indeterminacy in the absolute level and variance of utility, such that only differences in utility across alternatives influence choice probabilities. Models therefore estimate parameters as \beta / \mu, where \beta represents the systematic utility coefficients and \mu is the scale parameter, inversely related to the standard deviation of the error terms. This arbitrariness arises from the random utility maximization framework, where the choice probability depends solely on the relative magnitudes of utilities, rendering absolute scaling unidentifiable without normalization.

A common normalization convention fixes \mu = 1 for the multinomial logit model, which assumes independent and identically distributed Gumbel (extreme value type I) error terms with a fixed variance of \pi^2/6. For the binary probit model, normalization typically sets the variance of the difference in error terms between alternatives to 2, often achieved by assuming each error term has unit variance under a standard normal distribution. These conventions ensure parameter identification while preserving the model's probabilistic structure, as derived in McFadden's foundational work on conditional logit analysis. Changing the scale rescales all estimated parameters proportionally—for instance, multiplying utilities by a constant c multiplies \beta by c and divides \mu by c, leaving choice probabilities unchanged but altering the interpretation of coefficients. This has direct implications for marginal effects, which scale with the parameters and thus vary across models unless adjusted for the error variance; for example, direct elasticities in logit models are invariant to scale, but cross-elasticities depend on the normalized variance. In the probit model, normalizing the standard deviation of \varepsilon to 1 (\sigma_\varepsilon = 1) ensures that estimated \beta coefficients represent standardized effects relative to the error scale, facilitating comparisons with other specifications.

Heteroscedasticity occurs when the error variance varies across observations, alternatives, or groups, leading to differing scales that violate standard assumptions and require generalized models for identification. Such variation can be accommodated through heteroskedastic extreme value models, where one alternative's variance is normalized to \pi^2/6 and others are estimated relative to it, or via mixed logit and mixed probit frameworks that incorporate random coefficients to capture individual-specific scale heterogeneity. McFadden's contributions, particularly in developing the conditional logit and generalized extreme value models, emphasized scale normalization as essential for consistent estimation and comparability across discrete choice specifications.
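The role of the scale parameter can be seen numerically: multiplying the coefficients and the error scale by the same factor leaves logit probabilities unchanged, so only the ratio \beta / \mu is recoverable. A minimal sketch with invented numbers:

```python
import numpy as np

def logit_probs(V, mu=1.0):
    """Logit probabilities when errors have scale 1/mu (variance pi^2 / (6 mu^2))."""
    expV = np.exp(mu * (V - V.max()))
    return expV / expV.sum()

beta = np.array([-0.05, -0.30])                 # hypothetical taste coefficients
x = np.array([[30.0, 4.0], [45.0, 2.0]])        # attributes of two alternatives (assumed)
V = x @ beta

# Doubling the systematic utilities while doubling the error scale (halving mu) is
# observationally equivalent: the probabilities match, so only beta/mu is identified.
print(logit_probs(V, mu=1.0))
print(logit_probs(2.0 * V, mu=0.5))
```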

Independence Assumptions

In the multinomial logit model, the independence of irrelevant alternatives (IIA) property states that the ratio of choice probabilities for any two alternatives is independent of the attributes or availability of other alternatives in the choice set. This implies that all alternatives are equally similar substitutes to one another, leading to symmetric cross-substitution patterns across options. The IIA property arises from the assumption that the error terms in the random utility function are independently and identically distributed (IID) according to a type I extreme value (Gumbel) distribution. Under this setup, the choice probability for alternative j is given by P_j = \frac{\exp(V_j)}{\sum_{m \in C} \exp(V_m)}, where V_j is the observable utility component for alternative j and C is the choice set. Consequently, the ratio of probabilities for alternatives j and m simplifies to \frac{P_j}{P_m} = \frac{\exp(V_j)}{\exp(V_m)}, which depends solely on the utilities of j and m, unaffected by other alternatives.

To test the validity of the IIA assumption, the Hausman-McFadden test compares parameter estimates from a restricted model (where a subset of alternatives is excluded) against an unrestricted full model, checking for significant differences that would indicate IIA violation. This test leverages the asymptotic properties of maximum likelihood estimators to assess whether the exclusion of irrelevant alternatives alters the relative probabilities consistently with IIA. Violations of IIA occur when error terms are correlated across alternatives, often due to unobserved similarities (e.g., shared attributes not captured in the model), resulting in biased parameter estimates and unrealistic substitution patterns. Such violations necessitate models with correlated errors, like nested logit structures, to account for hierarchical substitution. A classic example is in transportation mode choice, where bus and train may be close substitutes due to similar characteristics, but car is not; adding a new train option under IIA would predict equal proportional diversion from bus and car, which contradicts observed behavior where diversion primarily comes from bus users.

Beyond cross-alternative correlation, discrete choice models typically assume errors are independent across individuals or choice occasions, enabling aggregation of probabilities. This assumption can be relaxed in panel data settings through random coefficients or error-component models to capture unobserved heterogeneity and correlations over time.
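The substitution problem created by IIA can be made concrete. In the sketch below (invented utilities), adding a train alternative that closely resembles the bus draws proportionally from both car and bus under the MNL, even though intuition suggests it should draw mainly from bus riders:

```python
import numpy as np

def logit_probs(V):
    expV = np.exp(V - V.max())
    return expV / expV.sum()

# Hypothetical utilities: car and bus only
V_two = np.array([1.0, 0.0])              # [car, bus]
p_two = logit_probs(V_two)

# Add a train alternative nearly identical to the bus
V_three = np.array([1.0, 0.0, 0.0])       # [car, bus, train]
p_three = logit_probs(V_three)

# Under IIA the car/bus probability ratio is unchanged by the new alternative,
# so car loses the same proportional share as bus does.
print("two alternatives:  ", p_two.round(3))
print("three alternatives:", p_three.round(3))
print("car/bus ratio:", p_two[0] / p_two[1], "->", p_three[0] / p_three[1])
```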

Applications

Transportation and Urban Planning

Discrete choice models play a central role in transportation and urban planning by predicting travelers' selections among alternatives like car, bus, rail, or walking for a given trip, incorporating attributes such as time, monetary cost, reliability, and comfort. These models are essential for travel demand forecasting, enabling planners to simulate how changes in infrastructure or services affect overall system usage and congestion levels. McFadden's pioneering application of conditional logit models to travel behavior in the 1970s established this framework, demonstrating how individual preferences could inform demand predictions for transit modes in cities like San Francisco.

Route and destination choices extend discrete choice analysis to spatial decisions, where alternatives represent specific paths through a transportation network or potential trip endpoints like shopping districts or workplaces, subject to constraints such as network connectivity and congestion. These models account for network effects, such as link capacities and travel impedances, to forecast flows and optimize layouts. In urban planning, they support decisions on land-use integration with transport, ensuring balanced development across zones. Policy applications leverage discrete choice models to assess interventions like congestion pricing through tolls, which shift mode or route selections to reduce peak-hour traffic, as seen in evaluations of cordon-based systems that vary charges by time or location. For public investments, such as new rail lines or expanded bus services, models quantify shifts in ridership and mode shares, aiding cost-benefit analyses for funding allocations. McFadden's early studies exemplified this by simulating policy impacts on commuter choices, influencing modern evaluations of transit expansions.

Data for these models primarily derive from revealed preference surveys, which capture actual travel behaviors through household diaries or GPS tracking to reveal how attributes influence real decisions. Integration with activity-based models enhances this by linking choices to daily schedules, allowing simulation of chained trips and time budgets across an individual's routine. A prominent example is the San Francisco Bay Area's travel demand model, which employs multinomial logit specifications to estimate mode shares for work and non-work trips, incorporating variables like in-vehicle time and out-of-pocket costs to forecast regional demand under various scenarios. Recent advancements incorporate dynamic elements, such as time-of-day choices, where models account for scheduling interdependencies and forward-looking behavior in selecting departure times or modes to align with activity constraints. Equity analysis has also advanced, using discrete choice frameworks to evaluate how policies disproportionately affect low-income or minority groups in access to modes and destinations, informing inclusive planning.

Marketing and Consumer Choice

In marketing, discrete choice models are widely applied to predict consumer selections among competing brands and products, incorporating factors such as price, product features, and advertising exposure to simulate real-world purchase decisions. These models, rooted in random utility theory, enable firms to forecast market shares and understand how attribute changes influence demand. A seminal application is the multinomial logit model calibrated on scanner panel data, which demonstrated that loyalty to brands, measured via purchase history, significantly affects choice probabilities in categories like ground coffee, revealing how past behavior moderates sensitivity to promotions.

Conjoint analysis serves as a key stated-preference method within this framework, where respondents evaluate and rank hypothetical product profiles to estimate part-worth utilities for individual attributes, such as brand name, price, or packaging. Developed as a practical tool for market research, it decomposes overall preferences into additive components, allowing quantification of trade-offs consumers make between attributes. For instance, in new product development, conjoint results guide feature prioritization by simulating market responses to attribute combinations, ensuring designs align with consumer valuations. This approach has been instrumental in applications like packaging optimization and assortment planning, where utilities inform which variants maximize appeal across segments.

Market segmentation enhances these models by incorporating individual-specific attributes, such as demographics or purchase history, to capture heterogeneity in preferences and reveal distinct consumer groups with varying sensitivities to marketing-mix elements. Techniques like mixed logit models allow parameters to vary across individuals, enabling segmentation based on unobserved taste differences rather than solely observable traits, which improves targeting accuracy in campaigns. In practice, this supports tailored strategies, such as positioning premium features for high-income segments while emphasizing value for price-sensitive ones.

Applications of discrete choice in marketing extend to optimal pricing and promotion planning, where logit-based models compute own- and cross-price elasticities to evaluate impacts of price adjustments. For example, analysis of grocery scanner data for products like ketchup or coffee has shown that price cuts can increase a brand's share, depending on competitive intensity, while highlighting substitution patterns that inform promotional timing. These insights drive pricing strategies that balance volume and margin, often integrating promotion effects to predict uplift in purchase probabilities. Advancements include discrete choice experiments (DCEs), which refine conjoint methods by presenting choice sets mimicking actual purchase scenarios to estimate willingness-to-pay (WTP) for attributes like sustainability certifications or innovative features. DCEs provide robust WTP measures by accounting for realistic trade-offs, such as paying a premium for eco-friendly packaging, and have been applied to assess market potential for new variants in consumer goods. This evolution supports more dynamic applications, like real-time pricing in e-commerce, where WTP distributions inform personalized offers.

Health Economics and Policy

Discrete choice models have been extensively applied in health economics to analyze patient preferences for treatments, incorporating factors such as efficacy, side effects, and costs to inform clinical decision-making and resource allocation. These models help quantify trade-offs patients make when selecting among therapies, surgeries, or no treatment, often using discrete choice experiments (DCEs) to elicit stated preferences under hypothetical scenarios that reflect real-world risks and outcomes. For instance, in modeling antidepressant selection, DCEs reveal that patients prioritize higher efficacy and lower side effect severity over cost reductions. Such analyses support personalized medicine by identifying heterogeneous preferences across patient subgroups, like those with varying depression severity.

In health insurance decisions, discrete choice frameworks evaluate how individuals select plans based on premiums, coverage breadth, provider networks, and out-of-pocket costs, revealing that provider access and premium affordability often outweigh comprehensive coverage in choice probabilities. Revealed preference data from actual enrollment choices, combined with stated preferences from surveys, demonstrate that consumers exhibit inertia toward incumbent plans but respond strongly to changes in quality. This approach aids policymakers in designing subsidies or mandates to enhance market competition and equity, particularly for underserved populations facing asymmetric information. For policy evaluation, DCEs assess interventions such as vaccination programs and access reforms by simulating uptake scenarios, where attributes like efficacy, safety, and delivery convenience drive preferences. Studies on vaccination policies show that mandates, incentives, high efficacy, and low side effect risks can increase predicted uptake and willingness to vaccinate. These models evaluate cost-effectiveness of reforms, informing decisions on equitable distribution and behavioral nudges.

Data in health DCEs often blend stated preferences, which allow controlled attribute variation but risk hypothetical bias, with revealed preferences from observational choices, achieving predictive accuracy of 70-85% for real behaviors when calibrated properly. Ethical constraints, including informed consent for sensitive health scenarios, are addressed through review board approvals and transparent attribute framing to ensure participant understanding without coercion. Advancements in incorporating discrete choice into health technology assessments (HTA) enable robust valuation of innovations by integrating preferences into cost-utility analyses, such as weighting quality-adjusted life years against treatment attributes. Agencies like NICE and CADTH increasingly use DCE-derived utilities for submissions, with evidence showing these methods improve equity in appraisals by capturing non-clinical outcomes such as convenience, leading to more patient-centered reimbursement decisions.

Environmental and Energy Choices

Discrete choice models have been extensively applied to analyze household decisions on adopting energy-efficient technologies, such as selecting between electric vehicles (EVs) and gasoline-powered cars, where attributes like operating costs, range, charging infrastructure, and government subsidies influence preferences. For instance, studies show that higher gasoline prices significantly boost EV demand relative to electricity prices, with subsidies further accelerating adoption by reducing perceived financial barriers. In appliance choices, models reveal that energy-efficiency ratings and upfront costs drive selections toward energy-saving options like LED lighting or high-efficiency refrigerators, often integrated with logit frameworks to predict market shares under policy scenarios.

Environmental policy support is another key domain, where discrete choice experiments gauge willingness-to-pay (WTP) for measures like carbon taxes or emissions trading schemes, capturing trade-offs between economic costs and environmental benefits. Research indicates average annual WTP for a U.S. carbon tax at around $177 per household, with preferences favoring revenue recycling toward clean energy investments over general funds. Similarly, European studies estimate WTP per ton of CO2 avoided at €94–133, highlighting heterogeneity based on income and political views, such as support for rebates to mitigate regressive impacts.

Applications extend to forecasting the diffusion of renewable energy technologies, where dynamic discrete choice models project adoption rates for solar panels or wind systems by incorporating learning effects and network externalities. For example, household-level models predict photovoltaic diffusion probabilities based on installation costs and peer adoption, aiding policymakers in targeting subsidies for faster market penetration. Integration with agent-based models enhances realism by simulating interactions among heterogeneous agents, drawing on discrete choice estimates to parameterize individual utilities while accounting for spatial and social dynamics in energy transitions.

Data challenges in these applications include hypothetical bias in stated preference surveys, where respondents overstate WTP for green options due to social desirability, leading to estimates inflated by 20–50% compared to revealed preferences. Mitigation strategies involve cheap talk scripts or incentive-compatible designs, while combining discrete choice with behavioral insights addresses issues like present bias or inattention in energy decisions. For household recycling, discrete choice experiments demonstrate that convenience (e.g., curbside collection) and financial incentives (e.g., $14 monthly rebates) increase participation rates by up to 2.12 times, outweighing effort costs like sorting. Advancements incorporate social norms into hybrid models, revealing that descriptive norms (e.g., neighbors' adoption) boost WTP for renewables by 10–15%, enhancing predictive accuracy for collective behaviors. Additionally, models now account for long-term climate impacts by including attributes like emissions reductions or temperature projections, as seen in experiments where framing future risks increases support for stringent policies by 25%.

Model Specifications

Binary Choice Models

Binary choice models analyze decisions between two mutually exclusive alternatives, such as participation versus non-participation or selecting one option over another, where the outcome is a binary indicator denoting the chosen alternative. These models assume that individuals select the alternative providing the highest utility, with utility comprising an observable component influenced by covariates and an unobservable random error term. The probability of choosing alternative 1 over alternative 0 is derived from the cumulative distribution function (CDF) of the difference in error terms, leading to specifications that link covariates to choice probabilities.

The binary logit model, introduced by McFadden, specifies the probability of choosing alternative 1 as P_1 = \frac{1}{1 + \exp(-\beta' x)}, where \beta is a vector of parameters and x represents covariates. This form arises from assuming that the error terms follow a type I extreme value (Gumbel) distribution, ensuring a closed-form expression for estimation. Specifications can incorporate person-specific attributes, such as income or demographics, which do not vary by alternative and directly shift the overall probability; for example, higher income might increase the likelihood of labor force participation without comparing alternatives. Alternatively, for alternative-varying attributes like prices or travel times, the model uses differences in covariates, yielding P_1 = 1 / (1 + \exp(-\beta' (x_1 - x_0))), akin to a conditional logit setup that conditions on the choice being between the two options. Interpretation focuses on odds ratios, where \exp(\beta_j) indicates the multiplicative change in odds of choosing alternative 1 for a unit increase in x_j, holding other factors constant; marginal effects, given by \beta_j P_1 (1 - P_1), vary across covariate values and are often evaluated at means.

The binary probit model offers an alternative specification, defining P_1 = \Phi(\beta' x), where \Phi is the CDF of the standard normal distribution. This model assumes normally distributed errors, which may align better with data-generating processes exhibiting normality; estimation is slightly more computationally demanding than the logit because the normal CDF and its derivatives must be evaluated numerically, but both models use standard maximum likelihood without simulation. Like the logit, probit accommodates person-specific or alternative-varying attributes through similar covariate structures, with interpretation emphasizing changes in probabilities via marginal effects \phi(\beta' x) \beta_j, where \phi is the standard normal density function; these effects also depend on covariate levels.

A representative application is modeling married women's labor force participation, as in Mroz's analysis of 1975 U.S. data, where the binary outcome is whether a woman works (1) or not (0), with covariates including her potential wage, education, experience, number of children, and husband's income. In this setup, person-specific factors like education positively influence participation probability, while alternative-varying elements, such as comparing offered wages against non-market opportunities, can be incorporated via differences. Empirical results typically show wages and education increasing participation odds, with children exerting a negative effect.
Logit and probit models yield similar parameter estimates and predictions in binary settings due to the resemblance between the logistic and normal CDFs, but the logit benefits from analytical tractability stemming from its Gumbel errors, while probit may be preferred when normality is theoretically justified or for consistency with broader multivariate extensions. Both avoid the independence of irrelevant alternatives (IIA) issue in a trivial sense for two alternatives, focusing instead on robust probability estimation.
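To make the interpretation concrete, the sketch below uses purely illustrative coefficients for a labor-force-participation-style model (none of the numbers are estimates): it converts a logit coefficient to an odds ratio and compares logit and probit marginal effects evaluated at the covariate means, using a common rule-of-thumb rescaling between the two models.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical coefficients on [constant, education, young children]
beta_logit = np.array([-1.0, 0.20, -0.60])
x_mean = np.array([1.0, 12.0, 1.0])            # covariate means (assumed)

# Logit: probability, odds ratio for one more year of education, marginal effect at means
xb = x_mean @ beta_logit
p = 1.0 / (1.0 + np.exp(-xb))
odds_ratio_educ = np.exp(beta_logit[1])        # multiplicative change in the odds
mfx_logit_educ = beta_logit[1] * p * (1.0 - p) # dP/dx evaluated at the means

# Probit with roughly comparable coefficients (logit ~ 1.6 * probit, a rule of thumb)
beta_probit = beta_logit / 1.6
mfx_probit_educ = norm.pdf(x_mean @ beta_probit) * beta_probit[1]

print(f"P(participate) = {p:.3f}")
print(f"odds ratio for education: {odds_ratio_educ:.3f}")
print(f"marginal effect of education: logit {mfx_logit_educ:.4f}, probit {mfx_probit_educ:.4f}")
```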

Uncorrelated Multinomial Models

Uncorrelated multinomial models in discrete choice analysis primarily encompass the multinomial logit (MNL) model and, to a lesser extent, the multinomial probit (MNP) model under the assumption of uncorrelated error terms. These models extend binary choice frameworks to scenarios with J > 2 alternatives, assuming that the random components are independent across options. The MNL, developed by McFadden, dominates due to its analytical tractability and closed-form probabilities. The core of the MNL is the probability that decision-maker n selects alternative j from a choice set of J options: P_{nj} = \frac{\exp(V_{nj})}{\sum_{k=1}^J \exp(V_{nk})}, where V_{nj} represents the observable systematic utility for alternative j by n, typically linear in parameters: V_{nj} = \beta' x_{nj}, with x_{nj} denoting relevant attributes and \beta the vector of coefficients. This formulation arises from a random utility maximization framework where the error terms follow an independent and identically distributed (IID) type I extreme value (Gumbel) distribution, yielding the logit form. The model inherits the independence of irrelevant alternatives (IIA) property, whereby the relative probability of two alternatives is unaffected by others in the choice set.

Specifications of the MNL vary based on the attributes incorporated into V_{nj}. In cases relying solely on person-specific attributes, utility takes the form V_{nj} = \alpha_j + \beta_j' z_n, where \alpha_j are alternative-specific constants and \beta_j are alternative-specific coefficients on individual characteristics z_n like income or demographics (with those for one alternative normalized to zero for identification), since characteristics that do not vary across alternatives can only affect choices through alternative-specific effects. This setup suits scenarios where choices reflect inherent preferences modulated by individual traits. Alternatively, the conditional logit specification focuses on alternative-varying attributes: V_{nj} = \beta' x_{nj}, where x_{nj} includes features of the options (e.g., price, travel time) that may interact with person traits but initially lack person-alternative interactions. More flexible mixed specifications combine elements, such as V_{nj} = \alpha_j + \beta' z_n + \gamma' (z_n \odot x_{nj}), incorporating generic coefficients \beta for attributes common across alternatives, alternative-specific constants, and interaction terms \gamma for person-alternative varying effects.

A key feature of the MNL is the inclusive value, or logsum term, given by \ln \sum_{k=1}^J \exp(V_{nk}), which represents the expected maximum utility across the choice set and serves as a measure of overall attractiveness. This term appears in the model's expected consumer surplus calculation: E(CS_n) = \frac{1}{\mu} \ln \left( \sum_{k=1}^J \exp(V_{nk}) \right) + C, where \mu is the scale parameter of the errors (often normalized to 1) and C is a constant. The logsum facilitates aggregation and welfare analysis in applications like policy evaluation.

For illustration, consider market share prediction among three brands (A, B, C) of a consumer good, where utilities depend on price p_j and a promotion dummy prom_j: V_{nj} = \alpha_j - \beta p_{nj} + \gamma prom_{nj}, with \beta > 0 capturing price sensitivity and \gamma > 0 the promotion effect. Choice probabilities are then computed via the MNL formula, enabling forecasts of shares (e.g., if prices are $10, $12, $11 and promotions apply to B only, shares might approximate 40%, 35%, 25% under estimated parameters). This setup highlights the model's utility in marketing for simulating demand responses to pricing or promotions.
Despite its advantages, the MNL's IIA assumption imposes limitations by restricting substitution patterns; for instance, it implies uniform cross-elasticities, so a change in one alternative's attributes draws proportionally from all others, which may not hold if unobserved factors correlate (e.g., two similar brands drawing disproportionately from a third). However, the closed-form probabilities enable straightforward maximum likelihood estimation and rapid computation, even for large datasets. A variant is the uncorrelated MNP model, where errors follow a multivariate normal distribution with zero correlations (i.e., independent univariate normals across alternatives). With unit error variances, the choice probability can be written as P_{nj} = \int_{-\infty}^{\infty} \left[ \prod_{k \neq j} \Phi(V_{nj} - V_{nk} + \epsilon) \right] \phi(\epsilon) \, d\epsilon, which lacks a closed form, making estimation computationally intensive via simulation or quadrature. Such specifications are rare in practice, as the MNL's simplicity prevails under independence, and correlated MNP extensions are preferred when more realism is needed.
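The brand example can be written out directly. The sketch below uses invented (not estimated) coefficient values to compute MNL probabilities, the logsum, and the predicted shares for brands A, B, and C under the stated prices and promotion pattern:

```python
import numpy as np

# Hypothetical parameters: alternative-specific constants, price and promotion effects
alpha = np.array([0.0, 0.2, 0.1])     # ASC for A (normalized to zero), B, C
beta_price, gamma_promo = 0.25, 0.8

price = np.array([10.0, 12.0, 11.0])
promo = np.array([0.0, 1.0, 0.0])     # promotion applies to brand B only

V = alpha - beta_price * price + gamma_promo * promo

expV = np.exp(V - V.max())
shares = expV / expV.sum()            # MNL choice probabilities = predicted shares
logsum = np.log(np.exp(V).sum())      # inclusive value / expected maximum utility (mu = 1)

for brand, s in zip("ABC", shares):
    print(f"brand {brand}: predicted share {s:.3f}")
print(f"logsum (expected consumer surplus up to a constant): {logsum:.3f}")
```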

Correlated Multinomial Models

Correlated multinomial models extend the standard multinomial logit framework by incorporating correlations in the error terms across alternatives, allowing for more realistic substitution patterns that violate the independence of irrelevant alternatives (IIA) assumption. These models are essential when alternatives are grouped by shared attributes or when unobserved heterogeneity leads to interdependent choices, such as in travel mode selection where transit options may correlate more strongly among themselves than with driving.

The nested logit model structures alternatives into hierarchical nests to capture correlations within groups while maintaining IIA conditionally within each nest. The choice probability for alternative j in nest m is given by P_j = P_m \cdot P(j|m), where P(j|m) = \frac{\exp(V_j / \lambda_m)}{\sum_{k \in m} \exp(V_k / \lambda_m)} is the conditional probability within the nest, and P_m = \frac{\exp(\lambda_m I_m)}{\sum_{n} \exp(\lambda_n I_n)} is the marginal nest probability incorporating the inclusive value I_m = \ln \sum_{k \in m} \exp(V_k / \lambda_m), with \lambda_m (0 < \lambda_m \leq 1) measuring the degree of independence within nest m—values closer to 0 indicate stronger within-nest correlation. The parameter \lambda_m provides a test for IIA: if \lambda_m = 1, the model reduces to the multinomial logit.

The generalized extreme value (GEV) family encompasses the nested logit and further generalizations, deriving choice probabilities from a joint extreme value distribution for the random utilities that satisfies specific consistency conditions. Within this family, the cross-nested logit model allows alternatives to belong to multiple overlapping nests, accommodating partial similarities across groups, such as when certain transportation modes share attributes with both transit and private-vehicle options. This flexibility is achieved through a generating function that incorporates cross-nest allocation and dissimilarity parameters, enabling estimation of complex substitution patterns without strict partitioning.

The multinomial probit model specifies error terms as multivariate normally distributed with a full covariance matrix \Sigma, directly modeling arbitrary correlations across all alternatives without restrictive structures like nests. Unlike logit models, it avoids IIA by allowing off-diagonal elements of \Sigma to capture inter-alternative dependencies, but choice probabilities lack closed form and require simulation-based estimation, such as the Geweke-Hajivassiliou-Keane (GHK) simulator, which draws from truncated multivariate normals to approximate the integrals.

The mixed logit model, also known as the random coefficients logit, accounts for correlations through individual-specific random parameters \beta drawn from a distribution f(\beta | \theta), capturing unobserved taste heterogeneity that induces correlations over alternatives. The choice probability is P_j = \int \frac{\exp(V_j(\beta))}{\sum_k \exp(V_k(\beta))} f(\beta | \theta) d\beta, where the integral is approximated via methods like maximum simulated likelihood, often using Halton sequences for efficiency. This approach nests the multinomial logit as a special case when \beta is degenerate and can approximate any random utility discrete choice model arbitrarily closely.

A representative application is vehicle type choice, where alternatives are nested by size (e.g., compact, midsize, SUV) to reflect correlated preferences within categories, and mixed logit incorporates random coefficients on attributes like fuel efficiency to account for taste variation across consumers. These models offer advantages over uncorrelated multinomial logit models by enabling flexible substitution patterns, such as stronger cross-elasticities within nests or across individuals due to heterogeneity, thus providing more accurate predictions in empirical settings like transportation demand forecasting.
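A small numeric sketch of the nested logit formulas above, with invented utilities and dissimilarity parameters and a two-nest structure (transit = {bus, rail}, private = {car}), computes the within-nest, nest, and unconditional probabilities:

```python
import numpy as np

# Hypothetical systematic utilities and nesting structure
V = {"car": 0.5, "bus": 0.0, "rail": 0.1}
nests = {"private": ["car"], "transit": ["bus", "rail"]}
lam = {"private": 1.0, "transit": 0.5}   # dissimilarity parameters (0 < lambda <= 1)

# Inclusive values I_m = ln sum_k exp(V_k / lambda_m)
I = {m: np.log(sum(np.exp(V[j] / lam[m]) for j in alts)) for m, alts in nests.items()}

# Nest probabilities P_m and conditional probabilities P(j|m)
denom = sum(np.exp(lam[m] * I[m]) for m in nests)
for m, alts in nests.items():
    P_m = np.exp(lam[m] * I[m]) / denom
    for j in alts:
        P_j_given_m = np.exp(V[j] / lam[m]) / np.exp(I[m])
        print(f"P({j}) = P({m}) * P({j}|{m}) = {P_m * P_j_given_m:.3f}")
```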

Ordered Choice Models

Ordered choice models address situations where the alternatives possess a natural ordering, such as satisfaction levels (low, medium, high) or severity grades (mild, moderate, severe), where the underlying latent propensity is assumed to increase monotonically across categories. In these models, an unobserved latent variable U^* = \beta' x + \epsilon represents the propensity for higher categories, with the observed outcome y = j occurring if the latent variable falls between category-specific thresholds \tau_{j-1} < U^* \leq \tau_j, where \tau_0 = -\infty and \tau_J = \infty for J ordered categories.

The ordered logit model, also known as the proportional odds model, assumes logistic-distributed errors \epsilon with CDF \Lambda(z) = \frac{1}{1 + e^{-z}}. The probability of observing category j is given by P(y = j | x) = \Lambda(\tau_j - \beta' x) - \Lambda(\tau_{j-1} - \beta' x), where thresholds \tau_j and parameters \beta are estimated via maximum likelihood. Similarly, the ordered probit model employs normally distributed errors with CDF \Phi(z), yielding P(y = j | x) = \Phi(\tau_j - \beta' x) - \Phi(\tau_{j-1} - \beta' x), and is particularly suited for applications assuming Gaussian disturbances; it originates from extensions of the binary probit to multiple ordered thresholds.

A key assumption in both models is the parallel lines or proportional odds condition, which posits a single set of \beta coefficients across all category transitions, implying constant effects of covariates on the log-odds of being above each threshold. This can be tested using the Brant test, which assesses coefficient equality across binary logits for cumulative categories; violations are addressed by the generalized ordered logit model, relaxing the assumption for specific covariates while retaining the ordinal structure. Parameter interpretation focuses on marginal effects, which quantify changes in category probabilities from a unit shift in x_k: a positive \beta_k shifts probability mass toward higher categories, with effects varying by outcome level and typically computed at covariate means. For instance, in rating hotel quality on a 1-5 star scale, an ordered logit model might use covariates like price and review scores to predict the probability of each rating, revealing how lower prices increase the probability of higher ratings while the thresholds capture inherent rating boundaries. Applications include modeling credit ratings (e.g., AAA to D) based on firm financials, where the ordered probit helps assess rating progression, and severity scales in health surveys (e.g., none to severe), informing treatment decisions by linking patient characteristics to symptom severity levels. The binary choice model emerges as a special case with two categories and a single threshold.
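The threshold mechanics can be sketched numerically. Below, hypothetical coefficients and cutpoints for a 1-5 star hotel rating model translate a covariate vector into the probability of each rating category:

```python
import numpy as np

def ordered_logit_probs(xb, cutpoints):
    """P(y = j) = Lambda(tau_j - x'b) - Lambda(tau_{j-1} - x'b) for ordered categories."""
    logistic = lambda z: 1.0 / (1.0 + np.exp(-z))
    # Lambda(-inf) = 0 and Lambda(inf) = 1 bracket the interior cutpoints
    cdf = np.concatenate(([0.0], logistic(cutpoints - xb), [1.0]))
    return np.diff(cdf)

beta = np.array([-0.02, 0.9])                 # hypothetical effects of price and review score
x = np.array([120.0, 4.2])                    # nightly price, average review score (assumed)
cutpoints = np.array([-1.5, 0.0, 1.5, 3.0])   # tau_1 < ... < tau_4 for five categories

probs = ordered_logit_probs(x @ beta, cutpoints)
for stars, p in enumerate(probs, start=1):
    print(f"P({stars} stars) = {p:.3f}")
```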

Estimation Techniques

Parameter Estimation from Revealed Choices

Revealed choice data in discrete choice models consist of observed selections made by individuals in actual behavior, such as travel mode choices in real-world surveys, which are used to infer underlying preferences through random utility maximization. For binary choice models, the likelihood function is given by L = \prod_{i=1}^N P_{i}^{y_{i}} (1 - P_{i})^{1 - y_{i}}, where P_{i} is the probability that individual i chooses the alternative in question, and y_{i} is a binary indicator equal to 1 if the choice is made and 0 otherwise. This formulation generalizes to multinomial models as L = \prod_{i=1}^N \prod_{j=1}^J P_{ij}^{y_{ij}}, where J is the number of alternatives, reflecting the product of choice probabilities across observations.

Maximum likelihood estimation (MLE) seeks to maximize the log-likelihood function \log L = \sum_{i=1}^N \left[ y_{i} \log P_{i} + (1 - y_{i}) \log (1 - P_{i}) \right] for binary cases, or its multinomial extension \log L = \sum_{i=1}^N \sum_{j=1}^J y_{ij} \log P_{ij}, to obtain parameter estimates \hat{\beta}. Parameters are estimated via numerical optimization algorithms, such as the Newton-Raphson method, which iteratively updates \beta_{t+1} = \beta_t - H^{-1} g_t, where g_t is the gradient (score) and H is the Hessian evaluated at iteration t. This approach ensures consistent and asymptotically efficient estimates under standard regularity conditions. Parameter identification requires sufficient variation in the covariates across alternatives and individuals, with scale normalized by fixing the variance of the error terms (e.g., variance set to \pi^2 / 6 in logit models) since only differences in utilities are observable. Standard errors of the estimates are derived from the inverse of the negative Hessian of the log-likelihood at the maximum, providing measures of parameter precision. In cases of choice-based sampling, where observations are oversampled for specific chosen alternatives (e.g., only transit users), corrections such as weighted maximum likelihood are applied to avoid bias, as in the weighted exogenous sample estimator that reweights the log-likelihood by choice frequencies in the population.

A representative example is estimating mode choice parameters \beta from travel survey data, such as the Bay Area Rapid Transit (BART) study involving 771 commuters, where a multinomial logit model predicts a 6.3% BART usage rate closely matching the observed 6.2%, with coefficients reflecting trade-offs between travel time, cost, and access. Common software tools for these estimations include Biogeme, an open-source Python package for maximum likelihood estimation of parametric discrete choice models, which efficiently handles large datasets. Another tool is the Apollo package for R, an open-source suite supporting advanced specifications like mixed logit models and large-scale computations. NLOGIT, a commercial suite extending LIMDEP, was widely used until its discontinuation in 2024 following the closure of Econometric Software. The core assumptions underlying MLE include independent and identically distributed (IID) observations across individuals, with errors following an extreme value distribution in logit models; violations, such as correlation due to clustering (e.g., repeated choices in panel data), can lead to underestimated standard errors and require adjustments like clustered robust standard errors.
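As a minimal sketch of this estimation machinery (synthetic data and a conditional logit with a single generic cost coefficient, not any published specification), the code below simulates choices and recovers the parameter by maximizing the log-likelihood with a quasi-Newton optimizer:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
N, J = 2000, 3
beta_true = -0.4

# Synthetic alternative-varying attribute (e.g., cost) and simulated RUM choices
cost = rng.uniform(1.0, 10.0, size=(N, J))
U = beta_true * cost + rng.gumbel(size=(N, J))
y = np.argmax(U, axis=1)

def neg_log_likelihood(beta):
    V = beta[0] * cost
    V -= V.max(axis=1, keepdims=True)              # numerical stability
    logP = V - np.log(np.exp(V).sum(axis=1, keepdims=True))
    return -logP[np.arange(N), y].sum()

res = minimize(neg_log_likelihood, x0=np.array([0.0]), method="BFGS")
se = np.sqrt(np.diag(res.hess_inv))                # approximate s.e. from the BFGS inverse Hessian
print(f"estimated beta = {res.x[0]:.3f} (true {beta_true}), approx. s.e. = {se[0]:.3f}")
```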

Handling Ranked Data

Ranked data in discrete choice analysis consist of observations where respondents provide full or partial orderings of alternatives, such as ranking the top three options from a set of products or policies. This approach captures ordinal preferences more comprehensively than single choices, enabling richer information on utility differences among alternatives. The exploded logit model, also known as the rank-ordered logit (ROL), treats full rankings as a sequence of conditional choices, where the first-ranked alternative is selected from the full choice set, the second from the remaining alternatives, and so on. Under the independence of irrelevant alternatives (IIA) assumption of the multinomial logit (MNL), the probability of a specific ranking—say, alternative j first and k second—is the product of the unconditional probability for the first choice and the conditional probability for the second: P(\text{rank } j=1, k=2) = P_j \cdot \frac{P_k}{1 - P_j}, where P_i = \frac{\exp(V_i)}{\sum_{m} \exp(V_m)} denotes the MNL choice probability for alternative i with systematic utility V_i. The full likelihood for a complete ranking is the product of such terms across all positions, yielding a globally concave log-likelihood that ensures unique maximum likelihood estimates.

This method offers advantages in efficiently utilizing all ranking information to estimate utilities, producing parameters consistent with those from standard choice models while improving precision through the additional observations implied by each ranking. For instance, in conjoint surveys for product design, respondents rank attribute combinations (e.g., price, features), and the exploded logit derives part-worth utilities from the ordering, as demonstrated in analyses of preferences for household appliances where rankings revealed stronger attribute trade-offs than choice-only data. However, the model's reliance on IIA limits its applicability when alternatives exhibit correlation in unobserved utilities, prompting alternatives like the rank-ordered probit, which accommodates general error structures but requires simulation for estimation. Extensions to partial rankings, such as top-J orderings, apply successive conditionals by treating the ranking as complete for the specified positions and ignoring lower ranks, maintaining the sequential logit structure while adjusting the choice sets accordingly.
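The "explosion" of a ranking into sequential choices can be written compactly. The sketch below uses hypothetical utilities to compute the probability of one complete ranking of three alternatives as the product of successive MNL choices over shrinking choice sets:

```python
import numpy as np

def ranking_probability(V, ranking):
    """Exploded-logit probability of observing a complete ranking (best first).

    V: array of systematic utilities; ranking: indices ordered from most to least preferred.
    """
    prob = 1.0
    remaining = list(range(len(V)))
    for j in ranking[:-1]:                       # the last position is implied
        expV = np.exp(V[remaining])
        prob *= np.exp(V[j]) / expV.sum()        # MNL choice from the remaining alternatives
        remaining.remove(j)
    return prob

V = np.array([0.9, 0.4, -0.1])                   # hypothetical utilities for A, B, C
print(f"P(rank A > B > C) = {ranking_probability(V, [0, 1, 2]):.3f}")
print(f"P(rank C > B > A) = {ranking_probability(V, [2, 1, 0]):.3f}")
```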

Computational Considerations

Estimating discrete choice models often involves computationally intensive tasks, particularly when dealing with intractable integrals that arise in models like the mixed logit or multinomial probit, where closed-form solutions for choice probabilities are unavailable. In such cases, simulation-based methods, such as Monte Carlo integration, approximate these integrals by drawing random samples from the distribution of random coefficients or error terms, typically using 100 to 1000 draws per observation to balance accuracy and computational cost. Maximum simulated likelihood (MSL) estimation then maximizes the simulated log-likelihood function, providing consistent parameter estimates as the number of draws increases, though simulation bias can occur with finite draws.

To enhance simulation efficiency, quasi-Monte Carlo methods like Halton sequences generate low-discrepancy draws that reduce correlation across dimensions compared to standard pseudo-random numbers, leading to faster convergence and more stable estimates in high-dimensional settings. Importance sampling further improves efficiency by weighting draws toward regions of the error distribution that contribute most to the probability, minimizing variance in the simulator and allowing fewer draws for equivalent precision, as demonstrated in applications to probit models. These techniques are particularly valuable in correlated multinomial models, where the need for simulation arises from the integration over random parameters.

Scalability challenges intensify in models with high-dimensional integrals, such as the multinomial probit, where the curse of dimensionality can make even simulation-based estimation prohibitive for large choice sets or datasets. Approximations like variational inference address this by optimizing a lower bound on the likelihood, enabling scalable Bayesian estimation that avoids full Markov chain Monte Carlo sampling while maintaining reasonable accuracy, especially in contexts with thousands of observations. For validation, the bootstrap method resamples the data to compute standard errors of parameters, accounting for simulation variability in MSL estimators, while k-fold cross-validation assesses out-of-sample predictive fit by partitioning data into training and holdout sets, helping evaluate model generalization beyond in-sample metrics.

A practical illustration is the estimation of a mixed logit model for vehicle choices, where simulating choice probabilities with 500 Halton draws per observation approximates the integral over random coefficients for attributes like fuel cost and price, yielding reliable willingness-to-pay estimates from stated preference data. Emerging trends integrate machine learning techniques, such as neural networks used to flexibly specify the systematic utility V, capturing complex nonlinearities and interactions that traditional parametric forms overlook, while handling large datasets through scalable algorithms like stochastic gradient descent. These advancements, including hybrid neural-discrete choice frameworks, promise improved predictive power for large-scale applications in transportation and marketing, though they require careful validation to ensure interpretability.
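A brief sketch of the simulation step, under assumed settings (a single random coefficient with a normal mixing distribution, Halton draws via scipy's quasi-Monte Carlo module), shows how a mixed logit choice probability might be approximated by averaging conditional logit probabilities over draws:

```python
import numpy as np
from scipy.stats import qmc, norm

# Hypothetical setup: 3 alternatives, one attribute with a random coefficient
x = np.array([2.0, 3.5, 5.0])               # e.g. cost of each alternative (assumed)
b_mean, b_sd = -0.5, 0.3                    # assumed mixing distribution: beta ~ N(-0.5, 0.3^2)

# Scrambled Halton draws transformed to normal variates (scrambling avoids the degenerate 0 point)
n_draws = 500
halton = qmc.Halton(d=1, scramble=True, seed=4)
draws = norm.ppf(halton.random(n_draws)).ravel()
beta_draws = b_mean + b_sd * draws

# Simulated probability: average the conditional logit probability over the draws
V = np.outer(beta_draws, x)                 # (n_draws, 3) systematic utilities
expV = np.exp(V - V.max(axis=1, keepdims=True))
P = expV / expV.sum(axis=1, keepdims=True)
print("simulated mixed logit probabilities:", P.mean(axis=0).round(3))
```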

References

  1. [1]
    [PDF] Discrete Choice Methods with Simulation
    approach and methods of choice analysis. The early models have now. been supplemented by a variety of more powerful and more flexible.
  2. [2]
    [PDF] Discrete Choice Modeling - NYU Stern
    Discrete choice modeling models outcomes from discrete choices, like voting or purchasing, based on individual maximizing behavior.
  3. [3]
    Discrete Choice Model and Analysis | Columbia Public Health
    Discrete choice models are used to explain or predict a choice from a set of two or more discrete (ie distinct and separable; mutually exclusive) alternatives.
  4. [4]
    [PDF] Daniel L. McFadden - Nobel Lecture
    Estimating Willingness-to-Pay in Discrete Choice Models. Applications of discrete choice models to economic policy problems often call for estimation of ...
  5. [5]
    [PDF] Econometric Models of Probabilistic Choice - Daniel McFadden
    The problem of economic discrete choice parallels the decision context in which psychophysical models have been applied successfully. On the other hand,.
  6. [6]
    [PDF] Discrete choice modelling - RAND
    Discrete choice modelling provides an analytical framework with which to analyse and predict how people's choices are influenced by their personal ...
  7. [7]
    Discrete Choice Model - an overview | ScienceDirect Topics
    Discrete choice models refer to a class of probabilistic choice models that analyze individual preferences when selecting between two or more distinct ...
  8. [8]
    Daniel L. McFadden – Facts - NobelPrize.org
    Daniel McFadden's work combines economic theory, statistical methods and empirical applications toward the resolution of social problems.
  9. [9]
    The 2000 Prize in Economic Sciences - Popular information
    Oct 11, 2000 · McFadden's Contributions McFadden's theory of discrete choice emanates from microeconomic theory, according to which each individual chooses ...
  10. [10]
    [PDF] Part I Behavioral Models
    This chapter describes the features that are common to all discrete choice models. We start by discussing the choice set, which is the set of options that are ...
  11. [11]
    [PDF] Heterogeneous Choice Sets and Preferences - Francesca Molinari
    THE PRIMITIVES OF ANY DISCRETE CHOICE MODEL include two sets: a known universal set of feasible alternatives—the feasible set—and the finite subset of the ...
  12. [12]
    [PDF] Conditional Logit Analysis of Qualitative Choice Behavior
    Conditional logit analysis is a procedure for modeling population choice behavior, especially for qualitative choices, and is used in areas like choice of ...
  13. [13]
    [PDF] Mixed MNL models for discrete response - NYU Stern
    Discrete choice models continuous in their arguments can be approximated by MNL models in which the scale value of each alternative is a general function of ...
  14. [14]
    [PDF] Discrete Choice Models for Estimating Labor Supply
    Apr 4, 2021 · Discrete choice models analyze tax and transfer policy effects on labor supply, distinguishing participation from hours worked and capturing ...
  15. [15]
    [PDF] Lecture 3 Discrete Choice Models
    Random utility maximization (RUM). Assumption: Revealed preference. The decision maker selects the alternative that provides the highest utility. That is ...
  16. [16]
    Specification Tests for the Multinomial Logit Model - jstor
    We consider a generalization of the multinomial logit model which is called the nested logit model.
  17. [17]
    [PDF] Discrete Choice Modeling for Transportation - UC Irvine
    Of course, the IIA property is also implied by any discrete choice model with independent and identically distributed unobserved utility terms. McFadden's ...
  18. [18]
    The Path to Discrete-Choice Models - ACCESS Magazine
    Oct 3, 2022 · 303–328, 1974. Daniel McFadden, “The Theory and Practice of Disaggregate Demand Forecasting for Various Modes of Urban Transportation,” in ...
  19. [19]
    Section 2.5 - Guidebook on Methods to Estimate Non-Motorized ...
    A discrete choice model predicts a decision made by an individual (choice of mode, choice of route, etc.) as a function of any number of variables, including ...
  20. [20]
    Comparative Analysis of Discrete Choice Approaches for Modeling ...
    Nov 30, 2023 · This study compares the RUM and RRM approaches to model the destination choice of urban home-based trips to work, with a case study in the city ...
  21. [21]
    Discrete choice models of traveler participation in differential time of ...
    Tolls that vary based on time of day or congestion are gaining attention around the world as a potential travel demand management strategy that can shift ...
  22. [22]
    The Value of Having a Public Transit Travel Choice - ScienceDirect
    A full understanding of economic impacts is important in investment evaluation and in making policy decisions regarding investment levels for transportation.
  23. [23]
    [PDF] Discrete choice modelling - RAND
    In an ideal situation we would build discrete choice models using information from choices that people are observed to make, i.e., revealed preference (RP) ...
  24. [24]
    A Dynamic Discrete Choice Activity-Based Travel Demand Model
    Oct 17, 2019 · This paper presents a dynamic discrete choice model (DDCM) for daily activity–travel planning. A daily activity–travel pattern is ...
  25. [25]
    [PDF] Disaggregate Travel Demand Models for the San Francisco Bay Area
    models could be extended using multinomial logit (MNL) to estimate equations representing a wide range of travel decisions. These models were designed to pre-.
  26. [26]
    A dynamic discrete choice modelling approach for forward-looking ...
    In this paper, we present a systematic approach based on dynamic discrete choice models (DDCM) to investigate individuals' forward-looking mode choice ...
  27. [27]
    Activity-Based Travel Models and Transportation Equity Analysis
    Jan 1, 2012 · The first objective of this paper is to present a research framework for the equity analysis of long-range transportation plans.
  28. [28]
    Conjoint Analysis in Consumer Research: Issues and Outlook - jstor
    Carmone, Frank J., Green, Paul E., and Jain, Arun K. (1978), "The Robustness of Conjoint Analysis: Some. Monte Carlo Results," Journal of Marketing Research ...
  29. [29]
    Discrete Choice Experiments in Health Economics - NIH
    The aim of this paper was to update prior reviews (1990–2012), to identify all health-related DCEs and to provide a description of trends, current practice and ...
  30. [30]
    Eliciting Health Care Preferences With Discrete Choice Experiments
    Apr 25, 2022 · Discrete choice experiments are a valuable method for quantifying preferences, comparing competing options, and understanding preference variations among ...
  31. [31]
    Preferences of patients with depression for medication management
    Sep 3, 2025 · A discrete choice experiment (DCE) was conducted to elicit and quantify the preferences of people with depression for medication management, and ...
  32. [32]
    A discrete-choice experiment to assess treatment modality ...
    Materials and methods: In a discrete-choice experiment, patients chose repetitively between two hypothetical depression treatments that varied in four treatment ...
  33. [33]
    What Is the Right Price in Discrete Choice Models of Health Plan ...
    In estimating discrete choice models for health plans, it is important to specify correctly the equation to be estimated. If we believe that consumers derive ...
  34. [34]
    Preferences for private health insurance in China: A discrete choice ...
    Sep 5, 2022 · A discrete choice experiment was conducted to explore potential clients' preferences for a type of government-involved private supplementary health insurance.
  35. [35]
    Patient Preferences for Provider Choice: A Discrete Choice ... - AJMC
    Jul 15, 2020 · The authors used a discrete choice experiment to analyze patient preferences for attributes of provider choice, including wait time, breadth, travel time, ...
  36. [36]
    Public Preferences for Policies to Promote COVID-19 Vaccination ...
    We conducted a discrete choice experiment in which 747 respondents were asked to choose between policies to promote vaccination uptake and their impacts on the ...
  37. [37]
    Discrete Choice Analysis of COVID-19 Vaccination Decisions for ...
    Jan 30, 2023 · This survey study assesses preferences for attributes of adult and pediatric COVID-19 vaccination among US adults.
  38. [38]
    Using discrete choice experiments to measure preferences for hard ...
    Jun 11, 2020 · This paper outlines the advantages of using SP data in health services research to inform policy making and describes how and when to use a Discrete Choice ...
  39. [39]
    OP02 The Use Of Discrete Choice Experiments For Measuring ... - NIH
    Dec 23, 2022 · Incorporating DCEs into HTA is inexpensive and provides robust data for improving HTA. Articles from International Journal of Technology ...
  40. [40]
    [PDF] Energy Prices and Electric Vehicle Adoption
    This paper presents evidence that gasoline prices have a larger effect on demand for electric vehicles (EVs) than electricity prices in California. We match a ...
  41. [41]
    Technology advancement is driving electric vehicle adoption - PNAS
    May 30, 2023 · This study provides insight into current and past drivers of battery electric vehicle (BEV) adoption as well as the future for BEV adoption ...
  42. [42]
    Modelling interest in co-adoption of electric vehicles and solar ...
    Apr 24, 2024 · In this paper, consumer choice to adopt PV and/or EV is explained using the exogenous explanatory variables and endogenous latent variables ...
  43. [43]
    [PDF] Public willingness to pay for a US carbon tax and preferences for ...
    Sep 13, 2017 · The average willingness to pay for a carbon tax is $177/year. Americans support clean energy, infrastructure, and assisting displaced coal ...
  44. [44]
    What Is the Willingness to Pay to Reduce CO 2 Emissions?
    Based on the discrete choice responses, our preferred estimates of the willingness to pay per ton of CO2 emissions avoided are €133 for the Italians and €94 ...
  45. [45]
    The social acceptability of a personal carbon allowance: a discrete ...
    Our study aims at eliciting preferences for a specific PCA scheme and explores their heterogeneity, using a discrete choice experiment among Belgian citizens.
  46. [46]
    Household level innovation diffusion model of photo-voltaic (PV ...
    We focus on predicting the adoption time probabilities of photo-voltaic solar panels by households using discrete choice experiments and an innovation ...
  47. [47]
    [PDF] A Discrete Choice Framework for Modeling and Forecasting ... - arXiv
    Apr 27, 2025 · The technology adoption model predicts the probability that a certain individual will adopt the service at a certain time period, and is ...
  48. [48]
    Linking of a multi-country discrete choice experiment and an agent ...
    In this paper, we link findings from a demographically representative discrete choice experiment (DCE) in eight European countries on the adoption of smart ...
  49. [49]
    Enhancing Agent-Based Models with Discrete Choice Experiments
    Integrating empirical data in the agents' decision model can improve the validity of agent-based models (ABMs). We present an approach of using discrete choice ...
  50. [50]
    Unraveling hypothetical bias in discrete choice experiments
    Hypothetical bias—the difference between stated and real values—has attracted significant attention in stated preference studies across several fields in ...
  51. [51]
    Discrete choice modeling in environmental and energy decision ...
    Apr 4, 2022 · Discrete choice models (DCMs) are stated preference techniques used to investigate policy effects in environmental and energy decisions, ...
  52. [52]
    Hybrid discrete choice models: Gained insights versus ... - PubMed
    Oct 15, 2016 · Hybrid choice models expand the standard models in discrete choice modelling by incorporating psychological factors as latent variables.
  53. [53]
    The effect of the intensive margin on waste separation behavior
    Jan 10, 2025 · Providing a $14 monthly incentive increased willingness to sort waste by 2.12 times, while adding 3 min to the walking distance reduced ...
  54. [54]
    Households' preferences for door-to-door recycling service: A choice ...
    This study designs and implements a discrete choice experiment to elicit urban households' preferences for various attributes of such a system.
  55. [55]
    Assessing the impacts of social norms on low-carbon mobility options
    Mobility choices and climate change : assessing the effects of social norms and economic incentives through discrete choice experiments. Transportation and ...
  56. [56]
    The effect of social and personal norms on stated preferences for ...
    Jan 15, 2022 · For example, Ndebele (2020) employs a discrete choice experiment to examine stated preferences for green electricity in New Zealand and includes ...
  57. [57]
    Social norms and individual climate protection activities: A survey ...
    This paper empirically examines the causal effect of related information interventions on climate protection activities, measured through incentivized ...
  58. [58]
    [PDF] The Sensitivity of an Empirical Model of Married Women's Hours of ...
    Apr 2, 2001 · Indeed, most of the studies of female labor supply over the past ten years have focused on statistical controls for these sample selection ...
  59. [59]
    A Conditional Probit Model for Qualitative Choice - jstor
    THE SOCIAL SCIENCES attempt to explain and predict the behavior of individu- als. In practice, this often requires that they predict individual decisions or.
  60. [60]
    [PDF] Modelling the Choice of Residential Location
    The model represented by equations. (9) and (11), termed the nested logit model, was first used with the estimation procedure described above, but with an ...
  61. [61]
    Application of Cross-Nested Logit Model to Mode Choice in Tel Aviv ...
    The cross-nested logit model is a generalization of nested logit, accounting for cross similarities between pure and combined modes, unlike nested logit's ...
  62. [62]
    A statistical model for the analysis of ordinal level dependent variables
    Aug 26, 2010 · This paper develops a model, with assumptions similar to those of the linear model, for use when the observed dependent variable is ordinal.
  63. [63]
    Regression Models for Ordinal Data - McCullagh - 1980
    This paper develops regression models for ordinal data, using stochastic ordering. Proportional odds and proportional hazards models are discussed.
  64. [64]
    [PDF] Efficient Estimation of Discrete-Choice Models - Stephen R. Cosslett
    If a logit model is used for the choice probabilities, and if the model contains alternative-specific dummy variables, then the coefficients of the model are ...
  65. [65]
    Biogeme 3.3.1
    It was written in Borland C++, and was the first discrete choice estimation software with a graphical user interface. It was designed for the estimation of ...
  66. [66]
    Assessing the potential demand for electric cars - ScienceDirect
    An ordered logit specification for use on ranked individual data is used to analyze survey data on potential consumer demand for electric cars.
  67. [67]
    [PDF] Using Conjoint Analysis to Estimate Consumers' Willingness to Pay ...
    When all the conjoint data were collected, the card rankings were regressed against the attribute levels on the cards using an exploded logit model.
  68. [68]
    [PDF] on the use of probit based models for ranking data analysis
    The tendency for the ROL and ROP models to show coefficient variation across rank levels is studied for the case of two empirical datasets by comparing ...
  69. [69]
    [PDF] Quasi-random simulation of discrete choice models
    Halton sequences are created in multiple dimensions by using a different base for each dimension. For example, a Halton sequence in two dimensions can be ...
  70. [70]
    [PDF] A New Use of Importance Sampling to Reduce Computational ...
    4.0.4 Choice of g​​ As mentioned, the traditional use of importance sampling is to reduce the variance of simulation estimators. An appropriate choice of g can ...
  71. [71]
    [PDF] Fast variational Bayes methods for multinomial probit models - arXiv
    Oct 15, 2022 · In this paper we propose a variational Bayes (VB) method for estimation in the MNP model, which is accurate and fast even when applied to choice ...
  72. [72]
    [PDF] A cross-validation approach for discrete choice models with ...
    Apr 8, 2015 · A performance measure based on predicted choice probabilities is used to estimate the model's (out-of-sample) fit on the validation set. The ...
  73. [73]
    Representing Random Utility Choice Models with Neural Networks
    Jul 26, 2022 · We propose a class of neural network-based discrete choice models, called RUMnets, inspired by the random utility maximization (RUM) framework.