Multinomial probit

The multinomial probit model is a statistical framework for analyzing discrete choice data in which decision-makers select one unordered alternative from a set of multiple options, extending the binary probit model to handle more than two categories. It is based on a latent utility representation, where the observed choice corresponds to the alternative with the highest unobserved utility, modeled as y_{ij}^* = x_i' \beta_j + \epsilon_{ij}, with the error terms \epsilon_i following a multivariate normal distribution that permits correlations across alternatives. Because choice probabilities lack closed-form solutions when there are three or more alternatives, the model estimates them through simulation methods such as the Geweke-Hajivassiliou-Keane (GHK) simulator. Developed as an alternative to the multinomial logit model, the multinomial probit addresses the latter's restrictive independence of irrelevant alternatives (IIA) assumption by incorporating a non-diagonal covariance matrix for the errors, enabling more realistic modeling of substitution patterns among choices. Early applications emerged in transportation and econometrics during the late 1970s, with key advancements in estimation techniques, including maximum likelihood via simulation and Bayesian methods using Markov chain Monte Carlo (MCMC). For identification, the model normalizes the scale of the errors (often setting the variance of differenced errors to 2) and identifies parameters relative to a base alternative, ensuring the location of the utility function is fixed. The multinomial probit is widely applied in fields such as marketing for product selection, transportation for mode choice, and political science for voting behavior, particularly when survey data reveal correlated preferences across options. Its flexibility supports both individual-specific covariates (e.g., demographics) and alternative-specific variables (e.g., prices), though computational demands have historically limited its use compared to simpler variants; modern software implementations, including Bayesian approaches, have mitigated these challenges.

Introduction

Definition and Purpose

The multinomial probit model represents an extension of the binary probit model to situations involving more than two unordered categorical alternatives, where observed choices are modeled as the outcome of an underlying process in which individuals select the alternative yielding the highest latent utility. The utility of each option incorporates systematic components based on observable attributes, such as individual characteristics or alternative features, along with random error terms that capture unobserved heterogeneity and are assumed to follow a multivariate normal distribution. The primary purpose of the multinomial probit model is to derive probabilities of choosing one alternative from a set of mutually exclusive options, enabling researchers to quantify how observed factors influence choices while allowing for correlations among the error terms across alternatives. This flexibility makes it particularly valuable in econometrics and statistics for applications requiring nuanced modeling of substitution patterns, as opposed to models imposing stricter independence assumptions. Within the context of discrete choice modeling, the multinomial probit serves as a key tool for analyzing behaviors where individuals face multiple distinct options, providing insights into preferences and trade-offs without relying on closed-form probability expressions that might oversimplify real-world interdependencies. For instance, it has been employed to examine voter selections among competing parties or candidates, accounting for correlated unobserved influences on preferences. Similarly, in marketing, the model aids in understanding consumer decisions across brands, such as yogurt varieties, by incorporating variables like price and promotions.

Historical Development

The multinomial probit model emerged within the framework of random utility maximization theory, which Daniel McFadden formalized in his seminal 1973 work on conditional logit analysis of qualitative choice behavior. This theory posits that individuals select alternatives to maximize their utility, with unobserved components introducing probabilistic elements to choice predictions. Building on the earlier binary probit model developed for dichotomous outcomes in the mid-20th century, the multinomial extension addressed multi-alternative scenarios in discrete choice analysis. In the 1970s, the model was introduced as a flexible alternative to the multinomial logit, particularly to accommodate general correlations among error terms across alternatives, which the logit assumes away via the independence of irrelevant alternatives (IIA) property. Hausman and Wise (1978) proposed a conditional probit formulation in their Econometrica paper, recognizing interdependence and heterogeneous preferences in discrete decisions, such as labor force participation choices. This innovation stemmed from the need to relax restrictive assumptions in earlier logit-based models while maintaining consistency with utility maximization. Carlos Daganzo further elaborated on the full multinomial probit in his 1979 book, applying it to demand forecasting in transportation and emphasizing its theoretical advantages for correlated utilities. Early adoption highlighted significant computational challenges, as choice probabilities required evaluating high-dimensional multivariate normal integrals, rendering maximum likelihood estimation intractable without approximations for more than a few alternatives. These issues limited practical use throughout the late 1970s and early 1980s, prompting innovations in estimation techniques. A key milestone came in the late 1980s with the development of simulation-based methods; McFadden (1989) introduced the method of simulated moments, which used Monte Carlo simulation to approximate the integrals and enable feasible parameter estimation for complex specifications. This advancement, along with subsequent simulators like the Geweke-Hajivassiliou-Keane algorithm, revitalized the model's applicability in empirical research.

Model Formulation

Latent Utility Specification

The multinomial probit model originates from the random utility maximization paradigm in discrete choice theory. For each individual i and alternative j, the utility U_{ij} is expressed as the sum of an observable deterministic component V_{ij} and an unobservable random error term \epsilon_{ij}, such that U_{ij} = V_{ij} + \epsilon_{ij}. The deterministic utility is typically specified as a linear function V_{ij} = x_{ij}' \beta, where x_{ij} is a vector of observed attributes specific to individual i and alternative j, and \beta is a vector of parameters. This formulation allows the model to accommodate both individual- and alternative-specific characteristics in predicting preferences. The vector of error terms for individual i across all J alternatives, \epsilon_i = (\epsilon_{i1}, \dots, \epsilon_{iJ})^\top, is assumed to follow a multivariate normal distribution with zero mean and a full covariance matrix \Sigma, denoted \epsilon_i \sim \text{MVN}(0, \Sigma). Unlike models assuming independence, this structure permits nonzero off-diagonal elements in \Sigma, enabling the capture of correlations between unobserved factors influencing choices across alternatives, such as shared tastes or substitution patterns. The observed choice y_i for individual i is determined by selecting the alternative that maximizes utility, given by y_i = \arg\max_j U_{ij}. This latent variable interpretation links the probabilistic model to the underlying decision process, where the probability of choosing j arises from the event U_{ij} > U_{ik} for all k \neq j. A key challenge in the model is identification, stemming from the invariance of choice probabilities to affine transformations of the utilities. To address scale indeterminacy, the covariance matrix \Sigma is normalized by fixing at least one diagonal element (e.g., a variance) to 1, ensuring unique parameter estimates. Location normalization is often achieved by differencing utilities relative to a base alternative, further restricting the covariance matrix to a (J-1) \times (J-1) form for identification.
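
As a concrete illustration of this latent utility specification, the following sketch simulates choices from the data-generating process just described, with correlated normal errors. The sample size, number of alternatives, parameter values, and covariance matrix are illustrative assumptions rather than values from any cited study.

```python
import numpy as np

# Minimal sketch of the MNP data-generating process: U_ij = x_ij' beta + eps_ij,
# with eps_i drawn from a multivariate normal with a non-diagonal covariance.
# All numeric values below are assumed for illustration.
rng = np.random.default_rng(0)

n, J, K = 500, 3, 2              # individuals, alternatives, attributes per alternative
beta = np.array([1.0, -0.5])     # taste parameters

Sigma = np.array([[1.0, 0.5, 0.2],   # non-diagonal covariance: unobserved factors
                  [0.5, 1.0, 0.3],   # are correlated across alternatives
                  [0.2, 0.3, 1.0]])

X = rng.normal(size=(n, J, K))                              # observed attributes x_ij
V = X @ beta                                                # deterministic utilities V_ij
eps = rng.multivariate_normal(np.zeros(J), Sigma, size=n)   # correlated error terms
U = V + eps                                                 # latent utilities U_ij
y = U.argmax(axis=1)                                        # observed choice y_i = argmax_j U_ij
```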

Choice Probabilities

In the multinomial probit model, the probability that decision-maker i chooses alternative j from a set of J alternatives, denoted P(y_i = j), is derived from the latent utility framework, where choice j occurs if the utility of j exceeds that of all other alternatives. This probability is expressed as the integral of the multivariate normal density function over the region of the error space that corresponds to the choice conditions. Specifically, P(y_i = j) = \int_{R_j} \phi(\boldsymbol{\epsilon}; \mathbf{0}, \boldsymbol{\Sigma}) \, d\boldsymbol{\epsilon}, where \phi(\boldsymbol{\epsilon}; \mathbf{0}, \boldsymbol{\Sigma}) is the probability density function of a J-dimensional multivariate normal distribution with mean vector \mathbf{0} and covariance matrix \boldsymbol{\Sigma}, and the integration region R_j is defined by the inequalities \epsilon_j > \epsilon_k - (V_{ij} - V_{ik}) for all k \neq j. Here, V_{ij} represents the systematic (deterministic) component of utility for alternative j and individual i, typically a function of observed covariates. The region R_j captures the set of error realizations \boldsymbol{\epsilon} = (\epsilon_1, \dots, \epsilon_J) under which alternative j provides the highest utility, ensuring that the random disturbances align to favor j relative to competitors after accounting for systematic utility differences. This formulation arises directly from the choice rule: y_i = j if V_{ij} + \epsilon_j > V_{ik} + \epsilon_k for every k \neq j, which rearranges to the boundary conditions defining R_j. To reduce redundancy, the integral can often be expressed in a (J-1)-dimensional form by differencing errors relative to a reference alternative (e.g., setting one \epsilon to zero), as the absolute levels do not affect choice probabilities. This probability integral is inherently multidimensional, requiring evaluation over a (J-1)-dimensional region for J alternatives, which renders it analytically intractable for J > 2. For the binary case (J = 2), it simplifies to a univariate normal cumulative distribution function, but for more alternatives no closed-form solution exists, owing to the difficulty of integrating the multivariate normal density over the irregular polyhedral region R_j. Consequently, numerical methods such as simulation are essential for computation, though the theoretical structure preserves the model's flexibility in handling error correlations. The choice probabilities satisfy the fundamental property that \sum_{j=1}^J P(y_i = j) = 1 for each individual i, as the regions \{R_j\}_{j=1}^J form a partition of the entire \mathbb{R}^J error space under the multivariate normal distribution. Moreover, P(y_i = j) depends solely on the differences V_{ij} - V_{ik} across alternatives, emphasizing the model's focus on relative utilities driven by covariates, which allows for realistic substitution patterns without the independence of irrelevant alternatives restriction found in simpler models.
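
Because the integral over R_j has no closed form for J > 2, a natural approximation is Monte Carlo: draw error vectors from the multivariate normal and record how often each alternative attains the highest utility. The sketch below is a crude frequency simulator with illustrative values for V_i and \Sigma; production code would typically use the GHK simulator discussed later.

```python
import numpy as np

def mnp_choice_probs(V_i, Sigma, R=100_000, seed=0):
    """Frequency-simulator estimate of P(y_i = j) for one decision-maker."""
    rng = np.random.default_rng(seed)
    eps = rng.multivariate_normal(np.zeros(len(V_i)), Sigma, size=R)  # error draws
    winners = (V_i + eps).argmax(axis=1)           # alternative with highest utility per draw
    return np.bincount(winners, minlength=len(V_i)) / R

# Illustrative systematic utilities and error covariance (assumed values)
V_i = np.array([0.4, 0.0, -0.2])
Sigma = np.array([[1.0, 0.5, 0.2],
                  [0.5, 1.0, 0.3],
                  [0.2, 0.3, 1.0]])
print(mnp_choice_probs(V_i, Sigma))   # estimated probabilities; they sum to 1
```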

Comparison to Multinomial Logit

Shared Foundations

Both the multinomial probit and multinomial logit models are grounded in the theory of random utility maximization, formalized by Daniel McFadden, which posits that individuals choose the alternative that maximizes their latent utility, comprising a deterministic component observable to the researcher and a stochastic component representing unobserved factors. This framework assumes that the observed choice reveals the highest utility among a finite set of alternatives, enabling probabilistic modeling of decision-making under uncertainty. In the framework shared by both models, the alternatives are unordered and mutually exclusive, meaning the decision-maker selects exactly one option from a set, with utilities potentially varying by individual-specific covariates such as socioeconomic characteristics or by attributes of the choices. This setup is particularly suited to modeling behaviors like transportation mode selection or product choices, where the focus is on predicting choice probabilities based on explanatory variables. Regarding independence assumptions, the multinomial logit model imposes the independence of irrelevant alternatives (IIA) property because of its assumption of independently and identically distributed extreme value error terms, while the multinomial probit model accommodates general error correlations through a full covariance matrix; nevertheless, both models specify the deterministic utility as a linear function of observed covariates. The probit structure, involving latent variables with correlated normal errors, aligns with this linear observed component but extends flexibility beyond the IIA constraint. Empirically, the two models often produce similar predictions, especially when error correlations in the specification are low, as the logit serves as a reasonable approximation to the probit in such cases, facilitating comparable insights across datasets like travel demand surveys.

Computational and Interpretational Differences

One key distinction between the multinomial probit (MNP) and multinomial logit (MNL) models lies in their computational demands during estimation. The MNP lacks a closed-form expression for choice probabilities, requiring the evaluation of high-dimensional multivariate normal integrals to compute the probability that one alternative's utility exceeds all others. This necessitates simulation-based methods, such as the Geweke-Hajivassiliou-Keane (GHK) simulator or accept-reject algorithms, which approximate these integrals through repeated draws from the multivariate normal distribution, leading to significantly higher computational intensity, often orders of magnitude slower than MNL estimation, especially with more than three alternatives. In contrast, the MNL derives choice probabilities directly via the closed-form logit formula, enabling straightforward maximum likelihood estimation without simulation. Interpretational differences stem from the underlying error structures and their implications for parameter meanings. In the MNP, coefficients capture marginal effects on the latent utilities of alternatives, assuming normally distributed errors with a flexible covariance matrix that allows for heteroskedasticity and correlations across options; these effects influence choice probabilities through the cumulative distribution function of the multivariate normal, providing a scale relative to the error variance (often normalized to 1 for identification). The MNL, however, interprets coefficients as changes in the log-odds of choosing one alternative over another, under the assumption of independent and identically distributed extreme value errors, yielding relative risk ratios that are intuitive for odds-based reasoning but constrained by the independence of irrelevant alternatives (IIA) property. While both models share a random utility maximization foundation, MNP parameters offer more nuanced insights into substitution patterns by avoiding IIA's restrictive implication that the relative attractiveness of two alternatives is unaffected by others, as the probit naturally accommodates cross-alternative error correlations. The MNP is preferable when error correlations are empirically relevant, such as in transportation mode choice where alternatives like bus and rail may share unobserved factors, allowing the model to capture realistic elasticities without IIA restrictions. Conversely, the MNL is favored for its simplicity and reliability in scenarios where IIA holds or computational efficiency is prioritized, as simulations indicate it often yields more accurate and stable estimates even under moderate IIA violations.
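
For contrast, the MNL closed form requires no integration at all: given the systematic utilities, the choice probabilities follow directly from the logit (softmax) formula, as in this small sketch (the utility values are assumed for illustration).

```python
import numpy as np

def mnl_choice_probs(V_i):
    """Closed-form multinomial logit probabilities from systematic utilities V_i."""
    expV = np.exp(V_i - V_i.max())   # subtract the max for numerical stability
    return expV / expV.sum()

print(mnl_choice_probs(np.array([0.4, 0.0, -0.2])))   # no simulation needed
```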

Estimation Techniques

Maximum Likelihood Framework

The maximum likelihood estimation (MLE) of the multinomial probit (MNP) model involves maximizing the log-likelihood function with respect to the parameters \beta (the vector of coefficients on observed covariates) and \Sigma (the covariance matrix of the error terms). The log-likelihood is given by L(\beta, \Sigma) = \sum_{i=1}^N \log P(y_i \mid x_i; \beta, \Sigma), where N is the number of observations, y_i is the observed choice for individual i, x_i are the covariates, and P(y_i \mid x_i; \beta, \Sigma) is the probability of the observed choice under the probit specification, computed as the multivariate normal integral over the region where the latent utility for the chosen alternative exceeds those for all others. This function arises directly from the random utility maximization framework underlying the MNP, with the choice probabilities derived from the joint distribution of the multivariate normal errors. Optimization of this log-likelihood is a nonlinear problem because of the multidimensional integrals required to evaluate the probabilities, necessitating numerical methods such as the Newton-Raphson algorithm or the Berndt-Hall-Hall-Hausman (BHHH) procedure. These methods iteratively update the parameter estimates by computing the gradient (score) and Hessian (or an outer-product approximation thereof) of the log-likelihood, which in turn requires repeated evaluations of the choice probabilities for each observation. The computational demands stem from the lack of a closed-form expression for the probabilities, though the process converges to the maximum under standard regularity conditions, providing point estimates of \beta and \Sigma. A key challenge in MNP estimation is identification, as the parameters \beta and \Sigma are only identified up to scale; specifically, only ratios such as \beta / \sigma (where \sigma is the scale of the errors) and the correlations within \Sigma are estimable, while absolute levels and the overall scale must be normalized (e.g., by setting one error variance to 1 or fixing an element of \Sigma). This normalization exploits the invariance of choice probabilities to affine transformations of the utilities, focusing estimation on utility differences and error correlations that capture substitution patterns across alternatives. Failure to impose such restrictions can lead to non-identification, particularly in models with flexible covariance structures. Under correct model specification and standard assumptions (e.g., independent observations, correctly specified error distribution), the MLE for the MNP is consistent, asymptotically efficient, and asymptotically normally distributed, with the asymptotic covariance matrix given by the inverse of the information matrix (or a robust sandwich estimator if misspecification is suspected). These properties ensure that, as the sample size N grows, the estimates converge in probability to the true parameters and their sampling distribution approaches normality, enabling inference via Wald tests, likelihood ratio tests, or standard errors derived from the information matrix.
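
The scale-identification issue can be checked numerically: multiplying the systematic utilities by a constant c and the error scale by the same c (equivalently, the covariance by c^2) leaves simulated choice probabilities unchanged, so only scale-normalized parameters are recoverable from the data. The sketch below uses assumed values throughout.

```python
import numpy as np

# Scale invariance: choices depend only on which utility is largest, so scaling
# V and the error standard deviations by the same constant changes nothing.
rng = np.random.default_rng(0)
V = np.array([0.4, 0.0, -0.2])
Sigma = np.array([[1.0, 0.5, 0.2],
                  [0.5, 1.0, 0.3],
                  [0.2, 0.3, 1.0]])
Z = rng.standard_normal((200_000, 3)) @ np.linalg.cholesky(Sigma).T  # errors ~ MVN(0, Sigma)

for c in (1.0, 2.0, 5.0):
    probs = np.bincount((c * V + c * Z).argmax(axis=1), minlength=3) / len(Z)
    print(c, probs.round(4))   # identical rows: the scale c is not identified
```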

Simulation Methods

The estimation of multinomial probit (MNP) models faces significant computational challenges due to the need to evaluate high-dimensional multivariate normal integrals for choice probabilities, which lack closed-form solutions. Simulation methods address this by approximating these integrals through Monte Carlo techniques, enabling practical implementation of maximum likelihood and other estimators. These approaches trade computational cost for accuracy, with the number of simulation draws balancing bias reduction against increased processing time. The Geweke-Hajivassiliou-Keane (GHK) simulator is a cornerstone method for MNP estimation, employing recursive conditioning to draw from the truncated multivariate normal distributions defined by the choice-specific truncation regions. It proceeds sequentially: starting from the first dimension, it samples from a univariate truncated normal based on the lower and upper bounds implied by the observed choice, then conditions on that draw to truncate the next dimension, and so on until all dimensions are simulated. This yields an unbiased estimator of the choice probability with low variance, even for models with many alternatives, as the sequential conditioning avoids the curse of dimensionality in direct sampling. The GHK method was developed independently by Geweke, Hajivassiliou and McFadden, and Keane in the early 1990s, and has become the standard simulator for MNP owing to its efficiency and reliability in empirical applications. Maximum simulated likelihood (MSL) integrates simulators like GHK into the maximum likelihood framework by replacing exact choice probabilities in the log-likelihood with averages over R simulation draws. For individual i choosing alternative j, the approximated probability is \hat{P}_{ij}(\beta, \Sigma) = \frac{1}{R} \sum_{r=1}^{R} \mathbf{1} \left( y_{ij}^{*(r)} > y_{ik}^{*(r)} \ \forall k \neq j \right), where y_{i\ell}^{*(r)} are simulated latent utilities for alternatives \ell under parameters \beta and covariance \Sigma, typically generated via GHK to ensure accuracy. The parameter estimates are then obtained by maximizing the simulated log-likelihood \sum_i \log \hat{P}_{ij}(\beta, \Sigma), which converges to the true maximum likelihood estimator as R \to \infty. This approach, while introducing simulation noise that diminishes with more draws, allows consistent estimation without closed-form probabilities and is particularly effective when combined with GHK because of its smoothness and derivative properties. Alternative simulation-based techniques include the method of simulated moments (MSM), which avoids full likelihood maximization by matching simulated moments of the data to empirical moments, such as choice shares or covariances, using draws from the model's error distribution. MSM is computationally lighter than MSL for high-dimensional MNP models and provides consistent estimates under mild conditions, though it may be less efficient unless optimal moments are selected. In Bayesian settings, Markov chain Monte Carlo (MCMC) methods simulate the posterior distribution of the parameters by augmenting the latent utilities and sampling from full conditional distributions, often using Gibbs sampling to handle the multivariate normal structure. This approach, pioneered for the MNP by McCulloch and Rossi, yields full posterior inference, including credible intervals for \Sigma, but requires careful tuning to achieve convergence in correlated choice settings.
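
The following sketch implements maximum simulated likelihood with the crude frequency simulator defined by the formula above, treating \Sigma as known for simplicity and reusing a fixed set of error draws across evaluations (common random numbers). A practical implementation would instead use the GHK or a smoothed simulator and would estimate the free elements of \Sigma under the normalizations discussed earlier; the function and variable names are illustrative assumptions, and X and y can come from a simulation like the one in the model formulation section.

```python
import numpy as np
from scipy.optimize import minimize

def simulated_loglik(beta, X, y, draws):
    """Simulated log-likelihood using a frequency simulator with fixed error draws."""
    V = X @ beta                               # deterministic utilities, shape (n, J)
    U = V[None, :, :] + draws                  # simulated latent utilities, shape (R, n, J)
    wins = U.argmax(axis=2) == y[None, :]      # did the observed choice win in each draw?
    P_hat = wins.mean(axis=0).clip(1e-10, 1.0) # \hat{P}_{ij}, bounded away from zero
    return np.log(P_hat).sum()

def fit_msl(X, y, Sigma, R=200, seed=0):
    """Maximize the simulated log-likelihood over beta (Sigma assumed known)."""
    rng = np.random.default_rng(seed)
    n, J, K = X.shape
    draws = rng.multivariate_normal(np.zeros(J), Sigma, size=(R, n))  # common random numbers
    objective = lambda b: -simulated_loglik(b, X, y, draws)
    return minimize(objective, x0=np.zeros(K), method="Nelder-Mead")  # derivative-free search
```

Because the plain frequency simulator is a step function in \beta, a derivative-free optimizer is used here; GHK-based or logit-smoothed probabilities would restore differentiability and allow gradient-based maximization.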
Across these methods, convergence properties hinge on the number of simulation draws R: increasing R reduces simulation variance (scaling as 1/\sqrt{R}) and, for smoothed simulators, simulation bias, improving accuracy, but it escalates computational demands, often requiring hundreds to thousands of draws per likelihood evaluation for reliable results in medium-sized MNP models with 5–10 alternatives. Empirical studies show that GHK-based MSL achieves near-exact likelihood values with R = 100–500 draws, while MSM and MCMC offer robustness to misspecification at the cost of slightly higher variance.

Applications and Limitations

Empirical Uses

In transportation research, the multinomial probit model has been widely applied to mode choice problems, where it accommodates correlated error terms across alternatives such as car, bus, or rail, allowing for realistic substitution patterns among options. For instance, empirical analyses of commuting decisions have used the model to estimate preferences based on factors like travel time and cost, demonstrating its value in forecasting demand under correlated utilities. In a study of Shanghai commuters, the model revealed significant correlations between public transit modes, improving predictions relative to specifications that assume independent alternatives. In marketing, multinomial probit models support brand choice analyses by incorporating unobserved heterogeneity through flexible covariance structures, enabling researchers to assess the impact of variables like price and promotion on purchase decisions. A key application involves scanner panel data for consumer packaged goods, where the model captures cross-brand correlations and individual-specific preferences, outperforming simpler specifications in explaining brand choice dynamics. For example, in modeling brand selections, the approach highlighted the role of demographic factors and marketing efforts in driving choice heterogeneity. Political science applications of the multinomial probit model focus on voting behavior in multi-candidate elections, where it models choices among parties or candidates while accounting for correlated voter preferences across options. Empirical studies of multi-party systems, such as in the Netherlands, have employed the model to test spatial voting theories, revealing how issue positions and demographics influence vote shares without imposing independence restrictions. Comparisons with multinomial logit models in U.S. and European election data underscore the probit's advantage in handling realistic error correlations for accurate prediction of electoral outcomes. In health economics, the model aids in analyzing treatment selection among discrete options, such as choice of delivery sites or medical providers, by jointly estimating selection and outcome equations to address selection bias. An application to rural health care data used a multinomial probit to evaluate preferences for public versus private health facilities, incorporating latent attributes like perceived quality to explain utilization patterns. Similarly, in rural Bénin, the framework modeled birthing location choices (e.g., accredited health units versus public clinics or private providers), highlighting socioeconomic determinants and correlations in unobserved preferences. Implementation of multinomial probit models is facilitated by statistical software, including Stata's mprobit command, which fits MNP models for unordered categorical outcomes by maximum likelihood, and R's MNP package, which employs Bayesian Markov chain Monte Carlo methods for flexible model fitting to choice data.

Advantages and Challenges

The multinomial probit (MNP) model offers several advantages over simpler discrete choice models like the multinomial logit (MNL), primarily due to its greater flexibility in capturing underlying behavioral processes. One key strength is its ability to accommodate arbitrary correlations among the error terms across alternatives through a general covariance matrix \Sigma, allowing for realistic modeling of substitution patterns that reflect unobserved similarities between options. Unlike the MNL, the MNP does not impose the independence of irrelevant alternatives (IIA) restriction, which assumes that the relative probabilities of two alternatives are unaffected by the presence of others; this makes the MNP more suitable for scenarios where cross-alternative dependencies are empirically relevant. Additionally, the MNP's multivariate normal error structure facilitates natural extensions to panel data settings, where repeated observations per individual can be incorporated via random effects or parameter expansion techniques to account for unobserved heterogeneity over time. Despite these benefits, the MNP faces significant challenges that limit its widespread adoption. Computationally, it is demanding, particularly for models with a large number of alternatives, as evaluating choice probabilities requires multidimensional integration over the multivariate normal distribution, often necessitating simulation-based methods like the Geweke-Hajivassiliou-Keane (GHK) simulator; this can result in estimation times orders of magnitude higher than for the closed-form MNL. The model's results are also sensitive to the specification of \Sigma, with identification issues arising from scale invariance that typically require fixing covariance elements (e.g., setting the (1,1) entry to 1), potentially leading to biased inferences if the prior or normalization is misspecified. Furthermore, MNP estimation often exhibits slower convergence in Bayesian or maximum likelihood frameworks compared to the MNL, due to high posterior dependence and autocorrelation in MCMC samplers. Given these trade-offs, the MNP is typically avoided in favor of the MNL or nested logit models when computational simplicity is prioritized and error correlations are not central to the research question, as the added flexibility may not justify the increased complexity in many applied contexts. Looking ahead, ongoing research aims to enhance the MNP's scalability through integration with machine learning and dimension-reduction techniques, such as factor structures or shrinkage priors that reduce the dimensionality of \Sigma, enabling efficient estimation for high-dimensional choice sets. For example, data augmentation samplers for multinomial probit Bayesian additive regression trees (MPBART) have been developed to accommodate complex nonlinear covariate effects.

References

  1. [1]
    [PDF] MNP: R Package for Fitting the Multinomial Probit Model - Kosuke Imai
    The multinomial probit model is often used to analyze the discrete choices made by individuals recorded in survey data. Examples where the multinomial probit ...
  2. [2]
    [PDF] A Short Introduction into Multinomial Probit Models
    Nested Logit models introduced in the previous lecture are one way to avoid the IIA assumption. Another one is the use of multinomial Probit models.
  3. [3]
    [PDF] mprobit — Multinomial probit regression - Description Quick start Menu
    mprobit fits a multinomial probit (MNP) model for a categorical dependent variable with outcomes that have no natural ordering. The actual values taken by ...
  4. [4]
    A Conditional Probit Model for Qualitative Choice - jstor
    THE SOCIAL SCIENCES attempt to explain and predict the behavior of individu- als. In practice, this often requires that they predict individual decisions or.
  5. [5]
    [PDF] Discrete Choice Methods with Simulation
    Some models, such as mixed logit and pure probit in ad- dition of course to ... Multinomial logit (in chapter 3), nested logit (chapter 4), and ordered ...
  6. [6]
    a comparison of choice models for voting research - ScienceDirect
    We compare the MNP and MNL models and argue that the simpler logit is often preferable to the more complex probit for the study of voter choice in multi-party ...
  7. [7]
    Estimating a Multinomial Probit Model of Brand Choice Using the ...
    Marketing researchers have used the multinomial logit model to study the effects of marketing mix and demographic variables on households' choice probabilities ...
  8. [8]
    [PDF] Conditional Logit Analysis of Qualitative Choice Behavior
    Conditional logit analysis of qualitative choice behavior. DANIEL MCFADDEN. UNIVERSITY OF CALIFORNIA AT BERKELEY. BERKELEY, CALIFORNIA. 1. Preferences and ...
  9. [9]
    Multinomial Probit - ScienceDirect.com
    1979. Author: Carlos Daganzo. Multinomial Probit. The Theory and its Application to Demand Forecasting. A volume in Economic Theory, Econometrics, and ...
  10. [10]
    On the estimation of the multinomial probit model - ScienceDirect
    This paper discusses the problems both of calculating the multinomial probit choice functions and in estimating the model parameters. Four techniques for ...
  11. [11]
    A Method of Simulated Moments for Estimation of Discrete ... - jstor
    The method of simulated moments (MSM) replaces numerical integration with Monte Carlo simulation for discrete response models, using the law of large numbers.
  12. [12]
    [PDF] Classical Estimation Methods for LDV Models Using Simulation
    Example 1 (Multinomial Probit) The multinomial probit model is a leading illustration of the ... discussed in Hausman and Wise (1978), p.310. An example of the ...
  13. [13]
    [PDF] A conditional probit model for qualitative choice - DSpace@MIT
    by Berndt, Hall, Hall, and Hausman [1974]. It requires only first derivatives, each iteration is guaranteed to increase the value of the likelihood function and ...
  14. [14]
    [PDF] What can we learn about correlations from multinomial probit ...
    It is well known that, in a multinomial probit, only the covariance matrix of the location and scale normalized utilities are identified.
  15. [15]
    [PDF] Daniel L. McFadden - Nobel Lecture
    probabilities of maximization of utilities that contained random elements. Marschak called this the Random Utility Maximization (RUM) model. An influential ...
  16. [16]
    The accuracy of the multinomial logit model as an approximation to ...
    The multinomial probit model of travel demand is considerably more general but much less tractable than the better-known multinomial logit model.
  17. [17]
    Specification Tests for the Multinomial Logit Model - jstor
    the multinomial probit model does not provide a convenient specification test for ... for a random utility model. This proposition is proven together with a ...
  18. [18]
    [PDF] Choosing Between Multinomial Logit and Multinomial Probit Models ...
    MNL is simpler but assumes IIA, while MNP is computationally intensive but doesn't. MNL nearly always provides more accurate results than MNP.
  19. [19]
    A Conditional Probit Model for Qualitative Choice
    Mar 1, 1978 · Econometrica: Mar, 1978, Volume 46, Issue 2. A Conditional Probit Model for Qualitative Choice: Discrete Decisions Recognizing ...
  20. [20]
    A practical technique to estimate multinomial probit models in ...
    The Multinomial Probit (MNP) formulation provides a very general framework to allow for inter-dependent alternatives in discrete choice analysis.
  21. [21]
    Multinomial Probit and Qualitative Choice: A Computationally ...
    The purpose of this paper is to introduce such a technique and to demonstrate the feasibility of forecasting with multinominal probit models. Our limited ...
  22. [22]
    A multinomial probit analysis of shanghai commute mode choice
    In this paper, we apply a fully flexible multinomial probit (MNP) model for the analysis of commute mode choice behavior, and compare this MNP model with more ...
  23. [23]
    Estimating a Multinomial Probit Model of Brand Choice Using the ...
    This paper presents an application of the method of simulated moments, a new methodology that enables easy estimation of probit models with a large number of ...
  24. [24]
    Investigating the effects of marketing variables and unobserved ...
    J. Hausman, D.A. Wise. A conditional probit model for qualitative choice: Discrete decisions recognizing interdependence and heterogenous preferences.
  25. [25]
    Modeling Toothpaste Brand Choice: An Empirical Comparison of ...
    The purpose of this study is to compare the performances of Artificial Neural Networks (ANN) and Multinomial Probit (MNP) approaches in modeling the choice ...
  26. [26]
    [PDF] Voter Choice in Multi-Party Democracies: A Test of Competing ...
    Here, we test two competing explanations of voting behavior in the. Netherlands with special emphasis on obtaining both a reasonable functional form as well as ...
  27. [27]
    [PDF] Multinomial probit and multinomial logit: a comparison of choice ...
    Several recent studies of voter choice in multiparty elections point to the advantages of multinomial probit (MNP) relative to multinomial/conditional logit ...
  28. [28]
    The choice of medical providers in rural Bénin - ScienceDirect.com
    We formulated a multinomial probit model that distinguished between three mutually exclusive sites for delivering a baby: a health unit specifically accredited ...
  29. [29]
    A Multinomial Probit Model with a Discrete Endogenous Variable - NIH
    Methods. We formulated a multinomial probit model that distinguished between three mutually exclusive sites for delivering a baby: a health unit specifically ...