
Inverse probability weighting

Inverse probability weighting (IPW), also known as inverse probability of treatment weighting (IPTW), is a statistical method employed in observational studies to adjust for confounding and estimate causal effects by reweighting sample units inversely proportional to their estimated probability of receiving the observed treatment given measured covariates. This approach creates a pseudo-population in which treatment assignment is independent of the measured confounders, allowing for unbiased estimation of marginal effects through techniques such as marginal structural models. The technique traces its origins to survey sampling, where the Horvitz-Thompson estimator, introduced in 1952, used inverse inclusion probabilities to obtain unbiased estimates of population totals from samples drawn with unequal probabilities. In causal inference, IPW gained prominence in the late 1990s through its integration with marginal structural models, particularly in epidemiology, to handle time-varying exposures and confounders affected by prior treatment, as developed by James M. Robins and colleagues. Propensity scores—the conditional probability of treatment given covariates—form the basis for estimating these weights, with unstabilized weights calculated as 1 / \hat{e}(X) for treated units and 1 / (1 - \hat{e}(X)) for untreated units, where \hat{e}(X) is the estimated propensity score. Stabilized weights, which incorporate marginal treatment probabilities, are often preferred to reduce variance while preserving consistency. IPW is widely applied in fields such as epidemiology, economics, and the social sciences for analyzing longitudinal data with time-dependent confounding, such as in studies of antiretroviral therapy effects on HIV progression or policy interventions on health outcomes. Key advantages include its ability to accommodate nonlinear relationships between covariates and outcomes without strong assumptions on the outcome model, making it more flexible than traditional regression adjustment in certain scenarios.
However, its validity relies on critical assumptions: no unmeasured confounding (exchangeability), positivity (nonzero probability of each treatment level across all covariate values), consistency (observed outcomes match counterfactuals under the observed treatment), and correct specification of the propensity score model. Violations, such as extreme weights arising from near-violations of positivity or from model misspecification, can lead to bias or high variance, often necessitating sensitivity analyses or doubly robust extensions like augmented IPW.

Introduction

Definition and Motivation

Inverse probability weighting (IPW) is a technique in causal inference that adjusts for confounding in observational data by assigning weights to individual observations equal to the inverse of the estimated probability of receiving the observed treatment, conditional on measured covariates. These probabilities, known as propensity scores, are typically estimated using models like logistic regression that predict treatment assignment based on baseline characteristics such as age, sex, and other potential confounders. By up-weighting individuals who are underrepresented in their treatment group relative to their covariates, IPW creates a balanced pseudo-population where treatment assignment is independent of those covariates. The motivation for IPW arises from the challenge of confounding in non-experimental studies, where treatments are not randomly assigned, leading to imbalances in covariate distributions between treated and untreated groups that can distort causal effect estimates. Unlike randomized trials, observational data often reflect real-world treatment selection, and IPW addresses this by emulating randomization through weighting, thereby enabling unbiased estimation of effects like the average treatment effect (ATE). This method contrasts with direct adjustment approaches, such as outcome regression, which model the relationship between covariates, treatment, and outcomes but risk bias if the model is misspecified; IPW avoids such outcome model assumptions, focusing instead on correct specification of the treatment assignment model. To illustrate, consider an example evaluating the effect of smoking cessation on weight gain using data from the National Health and Nutrition Examination Survey I Epidemiologic Follow-up Study (NHEFS). Quitters and continuing smokers may differ systematically in covariates like age, race, and education; IPW estimates propensity scores for quitting based on these factors and applies inverse weights to balance their distributions across groups.
This reweighting yields a pseudo-population where covariate imbalances are minimized, facilitating an unbiased estimate of the ATE, such as an average weight gain roughly 3.4 kg greater among quitters than if they had continued smoking. Key benefits of IPW include its capacity to reduce bias in ATE estimation without requiring assumptions about the outcome model, making it robust in settings with complex or unknown outcome relationships. It also preserves nearly all data points in the analysis, avoiding the information loss common in matching techniques, and supports estimation of effects in specific subpopulations by restricting or adapting the weights accordingly.

Historical Context

A pivotal development occurred in 1952 with the Horvitz-Thompson estimator, proposed by Daniel G. Horvitz and Donovan J. Thompson, which generalized unbiased estimation in survey sampling by weighting observations inversely proportional to their inclusion probabilities, providing a foundational method for handling unequal sampling probabilities that later extended to non-randomized settings. This approach gained traction in the 1980s through the propensity score literature, where Paul R. Rosenbaum and Donald B. Rubin highlighted the central role of estimated treatment assignment probabilities in balancing covariates for causal effects in observational studies, bridging weighting ideas to broader inference problems. The technique was popularized in epidemiology during the 1990s and early 2000s by James M. Robins and colleagues, who adapted inverse probability weighting to marginal structural models for addressing time-dependent confounding in longitudinal data, enabling robust estimation of causal effects in complex observational settings. Hernán and Robins further integrated these methods into modern causal inference frameworks throughout the 2000s, emphasizing their utility in epidemiology and beyond, with practical software implementations emerging in tools like R's ipw package and Stata's teffects commands by around 2010 to facilitate widespread adoption.

Background Concepts

Propensity Scores

The propensity score, denoted e(\mathbf{X}), is defined as the conditional probability of treatment assignment given a vector of observed baseline covariates \mathbf{X}, formally e(\mathbf{X}) = P(A=1 \mid \mathbf{X}) for a treatment indicator A. This scalar summary balances the distribution of covariates across treatment groups, facilitating causal inference in observational studies by enabling adjustments that mimic randomization. Introduced in 1983 in the foundational work by Rosenbaum and Rubin, the propensity score serves as a key tool for reducing confounding bias when estimating treatment effects under the potential outcomes framework. Logistic regression remains the most common method for estimating propensity scores, modeling the treatment probability as a function of covariates through a logit link to produce probabilities between 0 and 1. For datasets with complex, nonlinear covariate-treatment relationships or high-dimensional features, alternatives such as random forests offer improved flexibility by aggregating many decision trees to predict treatment assignment without assuming a specific parametric form. These machine learning approaches can enhance score accuracy in scenarios where logistic models underperform, such as with strong interactions or rare events. A core property of the propensity score is its role as a balancing score: conditional on e(\mathbf{X}), the distribution of observed covariates \mathbf{X} is independent of treatment assignment, allowing for unbiased covariate adjustment via weighting, matching, or stratification. Effective application requires the common support condition, which mandates substantial overlap in the propensity score distributions between treated and untreated units to avoid unreliable extrapolation and the generation of extreme weights that amplify variance. Practical computation of propensity scores follows a structured workflow to ensure validity. Covariate selection begins by including all variables that influence both treatment assignment and outcomes (confounders), together with predictors of the outcome alone; variables that predict treatment but not the outcome (instruments) are best excluded, with choices guided by subject-matter knowledge.
The selected model is then fitted to the data, followed by diagnostics such as plotting propensity score densities to verify overlap and computing standardized mean differences in covariates before and after weighting to confirm balance, with absolute differences below 0.1 typically indicating adequate balance.
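The workflow above can be sketched in a small simulation. The following is a minimal, self-contained example (assuming numpy; the logistic fit is a hand-rolled Newton/IRLS routine rather than a statistics library, and the data-generating model is hypothetical) that estimates propensity scores and checks covariate balance via standardized mean differences before and after weighting:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 2))                    # two baseline covariates
logit = 0.4 * X[:, 0] - 0.6 * X[:, 1]          # true treatment model
A = rng.binomial(1, 1 / (1 + np.exp(-logit)))  # treatment assignment

def fit_logistic(X, y, iters=25):
    """Logistic regression via Newton / IRLS iterations."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta = np.zeros(Xd.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-Xd @ beta))
        beta += np.linalg.solve(Xd.T @ ((p * (1 - p))[:, None] * Xd),
                                Xd.T @ (y - p))
    return beta

beta = fit_logistic(X, A)
e_hat = 1 / (1 + np.exp(-(np.column_stack([np.ones(n), X]) @ beta)))

# ATE weights: 1/e for treated units, 1/(1-e) for controls
w = np.where(A == 1, 1 / e_hat, 1 / (1 - e_hat))

def smd(x, A, w=None):
    """Standardized mean difference of x between arms (pooled unweighted SD)."""
    if w is None:
        w = np.ones_like(x)
    m1 = np.average(x[A == 1], weights=w[A == 1])
    m0 = np.average(x[A == 0], weights=w[A == 0])
    s = np.sqrt((x[A == 1].var() + x[A == 0].var()) / 2)
    return abs(m1 - m0) / s

smd_before = [smd(X[:, j], A) for j in range(2)]
smd_after = [smd(X[:, j], A, w) for j in range(2)]
```

In this simulated setup the raw standardized differences sit well above the 0.1 rule of thumb, while the weighted ones should fall below it, which is exactly the diagnostic described above.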

Potential Outcomes Framework

The potential outcomes framework, also known as the Rubin causal model, provides the foundational counterfactual approach for defining and estimating causal effects in observational and experimental studies. Under this model, each unit in the population is conceptualized as having potential outcomes corresponding to every possible treatment level, allowing researchers to contrast what would have happened under different interventions. Central to the framework is the notation for potential outcomes: for a binary treatment A (where A=1 indicates treatment and A=0 indicates control), the potential outcome under treatment level a is denoted Y(a), with the observed outcome Y equaling Y(A) for the treatment actually received by the unit. This setup formalizes the individual causal effect as Y(1) - Y(0), though it remains unobservable for any unit due to the fundamental problem of causal inference, where only one potential outcome is realized per unit. Identification of causal effects from observed data relies on three key assumptions. The consistency assumption states that if a unit receives A = a, then the observed outcome Y = Y(a), linking counterfactuals to observables. The positivity assumption requires that the probability of receiving each treatment level given covariates X satisfies 0 < P(A=a \mid X) < 1, ensuring no covariate strata lack exposure to any treatment. Exchangeability, or conditional ignorability, posits that potential outcomes are independent of treatment assignment given covariates, Y(a) \perp A \mid X; the covariate information required for this condition is often summarized through propensity scores that estimate P(A=1 \mid X). The primary target parameter in this framework is the average treatment effect (ATE), defined as E[Y(1) - Y(0)], representing the population-level causal impact of treatment versus control. Under the identifiability assumptions, the ATE can be expressed in terms of observed data as E[E[Y \mid A=1, X]] - E[E[Y \mid A=0, X]], enabling estimation via standardization or methods such as inverse probability weighting that reweight the sample to mimic a randomized experiment.
The framework extends to time-varying treatments in longitudinal settings, where potential outcomes depend on treatment histories \bar{A}_k up to time k. Here, identifiability requires sequential randomization, assuming that treatment at each time is independent of future potential outcomes given the observed history of treatments and covariates.
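The identification argument for point treatments can be checked in a toy simulation (a hypothetical setup, assuming numpy): potential outcomes are generated explicitly, the naive treated-versus-control contrast is confounded, and standardization over a binary confounder recovers the ATE.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
X = rng.binomial(1, 0.5, n)                 # binary confounder
pA = np.where(X == 1, 0.8, 0.3)             # positivity: 0 < P(A=1|X) < 1
A = rng.binomial(1, pA)
Y1 = 3 + 1.5 * X + rng.normal(size=n)       # potential outcome under treatment
Y0 = 1 + 1.5 * X + rng.normal(size=n)       # potential outcome under control
Y = np.where(A == 1, Y1, Y0)                # consistency: Y = Y(A)

true_ate = (Y1 - Y0).mean()                 # ~2; unobservable in real data

# Naive contrast is confounded: X raises both treatment uptake and Y
naive = Y[A == 1].mean() - Y[A == 0].mean()

# Standardization: E_X[ E[Y|A=1,X] - E[Y|A=0,X] ]
standardized = sum(
    (X == x).mean()
    * (Y[(A == 1) & (X == x)].mean() - Y[(A == 0) & (X == x)].mean())
    for x in (0, 1)
)
```

Here the naive difference overstates the true effect of 2 because X = 1 units are both more likely to be treated and have higher outcomes, while the standardized estimate lands near the truth.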

Core Methods

Inverse Probability Weighted Estimator (IPWE)

The inverse probability weighted estimator (IPWE), also known as the inverse probability of treatment weighting (IPTW) estimator, is a fundamental method in causal inference for estimating the average treatment effect (ATE) from observational data by reweighting observations to balance covariates between treatment groups. This approach relies on the propensity score, defined as the probability of receiving treatment given observed covariates, to create a pseudo-population where treatment assignment is independent of those covariates, thereby adjusting for measured confounding under the assumptions of exchangeability and positivity. The IPWE for the ATE is given by \hat{\tau}_{\text{IPW}} = \frac{1}{n} \sum_{i=1}^n \left( \frac{A_i Y_i}{\hat{e}(X_i)} - \frac{(1 - A_i) Y_i}{1 - \hat{e}(X_i)} \right), where n is the sample size, A_i \in \{0, 1\} is the binary treatment indicator for unit i, Y_i is the observed outcome, X_i are the covariates, and \hat{e}(X_i) is the estimated propensity score \Pr(A_i = 1 \mid X_i). This formula represents the difference between the weighted average outcome under treatment and under control, effectively estimating the counterfactual mean difference E[Y(1) - Y(0)]. To construct the IPWE, first estimate the propensity scores \hat{e}(X_i) using a model such as logistic regression fitted to predict treatment from covariates X_i. Next, compute the inverse probability weights for each unit as w_i = \frac{A_i}{\hat{e}(X_i)} + \frac{1 - A_i}{1 - \hat{e}(X_i)}, which upweight units underrepresented in their received treatment arm relative to the covariate distribution. Finally, apply these weights to the outcomes by calculating the weighted means separately for treated (A_i = 1) and untreated (A_i = 0) groups, then subtract to obtain \hat{\tau}_{\text{IPW}}.
In practice, stabilized weights, formed by multiplying each unit's weight by the marginal probability of its received treatment (\Pr(A_i = 1) for treated units, \Pr(A_i = 0) for untreated units), can be used to reduce variability when propensity scores are extreme, though the basic form suffices for consistent estimation. The interpretation of the IPWE centers on the pseudo-population formed by the weights, where each original unit is replicated w_i times, resulting in balanced covariate distributions across treatment arms as if treatment were randomly assigned conditional on X_i. This reweighting emulates the covariate structure of a target population (often the full sample) while preserving the outcome distribution within treatment levels. For statistical inference, standard errors of \hat{\tau}_{\text{IPW}} can be estimated using the robust sandwich variance estimator, which accounts for the estimation of \hat{e}(X_i) and the heteroskedasticity induced by the weights, providing asymptotically valid confidence intervals under standard regularity conditions.
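The three construction steps can be put together in a short numpy sketch (illustrative only: simulated data, with the logistic fit done by bare-bones Newton iterations rather than a production routine):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000
X = rng.normal(size=n)
e_true = 1 / (1 + np.exp(-0.8 * X))
A = rng.binomial(1, e_true)
Y = 2 * A + 1.5 * X + rng.normal(size=n)   # true ATE = 2

# Step 1: estimate propensity scores by logistic regression (Newton steps)
Xd = np.column_stack([np.ones(n), X])
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-Xd @ beta))
    beta += np.linalg.solve(Xd.T @ ((p * (1 - p))[:, None] * Xd),
                            Xd.T @ (A - p))
e_hat = 1 / (1 + np.exp(-Xd @ beta))

# Step 2: inverse probability weights
w = A / e_hat + (1 - A) / (1 - e_hat)

# Step 3: IPW (Horvitz-Thompson form) and weighted-mean (Hajek) estimates
tau_ipw = np.mean(A * Y / e_hat - (1 - A) * Y / (1 - e_hat))
tau_hajek = (np.average(Y[A == 1], weights=w[A == 1])
             - np.average(Y[A == 0], weights=w[A == 0]))

naive = Y[A == 1].mean() - Y[A == 0].mean()  # confounded comparison
```

Both weighted estimates should land near the true effect of 2, while the unweighted naive contrast absorbs the confounding by X.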

Augmented Inverse Probability Weighted Estimator (AIPWE)

The Augmented Inverse Probability Weighted Estimator (AIPWE) extends the Inverse Probability Weighted Estimator (IPWE) by integrating outcome regression models, which enhances estimation efficiency without sacrificing the robustness provided by propensity score weighting. This approach combines inverse probability weighting with predictions of the conditional mean outcome under each treatment level, allowing for bias correction even if one of the models is misspecified. When the outcome models are omitted or set to zero, the AIPWE reduces to the standard IPWE. The AIPWE is constructed by estimating the average treatment effect \hat{\tau}_{AIPW} using the following formula: \hat{\tau}_{AIPW} = \frac{1}{n} \sum_{i=1}^n \left[ \left( \frac{A_i (Y_i - \hat{m}_1(X_i))}{\hat{e}(X_i)} + \hat{m}_1(X_i) \right) - \left( \frac{(1-A_i) (Y_i - \hat{m}_0(X_i))}{1-\hat{e}(X_i)} + \hat{m}_0(X_i) \right) \right], where A_i is the binary treatment indicator, Y_i is the observed outcome, X_i are covariates, \hat{e}(X_i) is the estimated propensity score (probability of treatment given covariates), and \hat{m}_a(X_i) denotes the predicted outcome under treatment a \in \{0,1\}, typically obtained via regression models such as linear regression or, for binary outcomes, logistic regression, or more flexible machine learning techniques. A key feature of the AIPWE is its bias correction mechanism, embodied in the augmentation terms \frac{A_i (Y_i - \hat{m}_1(X_i))}{\hat{e}(X_i)} and \frac{(1-A_i) (Y_i - \hat{m}_0(X_i))}{1-\hat{e}(X_i)}, which offset errors arising from misspecification in either the propensity score model or the outcome regression models. This structure ensures that the estimator remains consistent if at least one model is correctly specified. In practice, implementing the AIPWE requires careful estimation of the nuisance parameters \hat{e}(X) and \hat{m}_a(X) to prevent overfitting, particularly when using data-driven methods like machine learning.
Cross-fitting addresses this by partitioning the sample into K folds, fitting the models on K-1 folds and evaluating on the held-out fold, then averaging across folds to compute the final estimate. This technique mitigates the bias from using the same data for both model fitting and estimation.
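A compact sketch of the AIPWE with two-fold cross-fitting follows (simulated data, numpy only; the logistic propensity model is fit with Newton steps and the outcome regressions by least squares — an illustration under these assumptions, not a production implementation):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000
X = rng.normal(size=(n, 1))
e_true = 1 / (1 + np.exp(-0.8 * X[:, 0]))
A = rng.binomial(1, e_true)
Y = 2 * A + 1.5 * X[:, 0] + rng.normal(size=n)   # true ATE = 2

def fit_logit(X, y, iters=25):
    Xd = np.column_stack([np.ones(len(y)), X])
    b = np.zeros(Xd.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-Xd @ b))
        b += np.linalg.solve(Xd.T @ ((p * (1 - p))[:, None] * Xd),
                             Xd.T @ (y - p))
    return b

def fit_ols(X, y):
    Xd = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(Xd, y, rcond=None)[0]

idx = rng.permutation(n)
folds = np.array_split(idx, 2)
psi = np.empty(n)  # per-unit AIPW influence-function values
for k in (0, 1):
    test_idx, train = folds[k], folds[1 - k]
    # nuisance models are fitted on the other fold only (cross-fitting)
    b_e = fit_logit(X[train], A[train])
    b1 = fit_ols(X[train][A[train] == 1], Y[train][A[train] == 1])
    b0 = fit_ols(X[train][A[train] == 0], Y[train][A[train] == 0])
    Xt = np.column_stack([np.ones(len(test_idx)), X[test_idx]])
    e_hat = 1 / (1 + np.exp(-Xt @ b_e))
    m1, m0 = Xt @ b1, Xt @ b0
    At, Yt = A[test_idx], Y[test_idx]
    psi[test_idx] = ((At * (Yt - m1) / e_hat + m1)
                     - ((1 - At) * (Yt - m0) / (1 - e_hat) + m0))

tau_aipw = psi.mean()
se = psi.std(ddof=1) / np.sqrt(n)   # plug-in standard error
```

Averaging the per-unit influence values across the held-out folds gives the final estimate, and its standard error follows directly from the sample variance of those values.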

Properties and Analysis

Assumptions and Derivations

Inverse probability weighting (IPW) relies on several core assumptions to ensure the validity of causal estimates. The positivity assumption requires that the propensity score e(X) = P(A=1 \mid X) satisfies 0 < e(X) < 1 for all observed covariate values X in the support of the population distribution, ensuring that every individual has a non-zero probability of receiving each treatment level given their covariates. The exchangeability assumption, also known as no unmeasured confounding, posits that treatment assignment is independent of potential outcomes conditional on X, i.e., \{Y(1), Y(0)\} \perp A \mid X, which implies all confounders are measured and conditioned upon. Additionally, the consistency assumption states that the observed outcome Y equals the potential outcome under the received treatment, Y = A Y(1) + (1-A) Y(0). For the IPW estimator to be consistent, the propensity score model must be correctly specified; misspecification leads to biased estimates. The unbiasedness of the IPW estimator for the average treatment effect (ATE), \hat{\tau}_{IPW} = n^{-1} \sum_{i=1}^n \left( \frac{A_i Y_i}{e(X_i)} - \frac{(1-A_i) Y_i}{1-e(X_i)} \right), can be derived under the above assumptions when the true propensity score e(X) is known. Taking the expectation, E[\hat{\tau}_{IPW}] = E\left[ \frac{A Y}{e(X)} - \frac{(1-A) Y}{1-e(X)} \right]. By the law of iterated expectations, E\left[ \frac{A Y}{e(X)} \right] = E\left[ E\left[ \frac{A Y}{e(X)} \mid X \right] \right] = E\left[ \frac{E[A Y \mid X]}{e(X)} \right]. Since E[A \mid X] = e(X) and under exchangeability E[Y \mid A=1, X] = E[Y(1) \mid X], this simplifies to E\left[ \frac{e(X) E[Y(1) \mid X]}{e(X)} \right] = E[ E[Y(1) \mid X] ] = E[Y(1)]. A symmetric argument yields E\left[ \frac{(1-A) Y}{1-e(X)} \right] = E[Y(0)], so E[\hat{\tau}_{IPW}] = E[Y(1)] - E[Y(0)] = \tau, the true ATE.
IPW estimators can exhibit high variance due to extreme weights when e(X) is close to 0 or 1, particularly in samples with limited overlap. To mitigate this, techniques such as weight trimming—capping weights at upper and lower thresholds (e.g., the 95th and 5th percentiles)—are employed to reduce the influence of outliers while preserving approximate unbiasedness under mild conditions. Stabilized weights address variance by scaling the inverse propensity scores with marginal treatment probabilities: for treated units, w_i^* = P(A=1) / e(X_i), and for control units, w_i^* = P(A=0) / (1 - e(X_i)); these maintain the same estimating equations as standard IPW but yield more stable variance estimates. Under the stated assumptions and with a correctly specified propensity score model, the IPW estimator is consistent as the sample size n \to \infty, converging in probability to the true ATE. Asymptotically, \sqrt{n} (\hat{\tau}_{IPW} - \tau) follows a normal distribution with mean 0 and variance equal to that of the influence function, \text{Var}\left( \frac{A (Y - E[Y(1) \mid X])}{e(X)} - \frac{(1-A) (Y - E[Y(0) \mid X])}{1-e(X)} + E[Y(1) \mid X] - E[Y(0) \mid X] - \tau \right), though the simple IPW influence function is \frac{A Y}{e(X)} - \frac{(1-A) Y}{1-e(X)} - \tau when outcome models are not incorporated. This normality enables inference via standard errors derived from the influence function or bootstrap methods.
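These stabilization and trimming devices can be seen in a small numpy experiment with strong selection (a hypothetical data-generating model; the true propensity score is used so that only the weighting behavior varies):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000
X = rng.normal(size=n)
e = 1 / (1 + np.exp(-2.5 * X))      # strong selection -> some extreme weights
A = rng.binomial(1, e)
Y = 2 * A + X + rng.normal(size=n)  # true ATE = 2

# Unstabilized weights using the true propensity score
w = A / e + (1 - A) / (1 - e)

# Stabilized weights: scale by the marginal treatment probabilities
pA = A.mean()
sw = A * pA / e + (1 - A) * (1 - pA) / (1 - e)

# Trimming: cap weights at the 1st and 99th percentiles (trades a little
# bias for removing the most variance-inflating observations)
lo, hi = np.percentile(w, [1, 99])
w_trim = np.clip(w, lo, hi)

tau_ht = np.mean(A * Y / e - (1 - A) * Y / (1 - e))   # Horvitz-Thompson IPW
tau_trim = (np.average(Y[A == 1], weights=w_trim[A == 1])
            - np.average(Y[A == 0], weights=w_trim[A == 0]))
```

Even with the true propensity score, individual weights in this design can be an order of magnitude larger than average; the stabilized weights have visibly smaller dispersion, which is the variance benefit described above.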

Double Robustness and Efficiency

The augmented inverse probability weighted estimator (AIPWE) exhibits a double robustness property, providing consistent estimates of the average causal effect if at least one of the two nuisance models—the propensity score model or the outcome regression model—is correctly specified, even if the other is misspecified. This property stems from the estimator's construction as an orthogonal estimating equation, which ensures that the bias from model misspecification in one component is offset by the augmentation term derived from the other. In the potential outcomes framework, this leads to consistency for the average treatment effect without requiring joint correctness of both models. A sketch of the proof involves decomposing the bias of the AIPWE through its influence function or estimating equation, typically expressed as: \hat{U}(\mu) = n^{-1} \sum_{i=1}^n \left[ \frac{A_i (Y_i - \hat{m}(X_i))}{\hat{e}(X_i)} + \hat{m}(X_i) - \mu \right] = 0, where A_i is the treatment indicator, Y_i is the outcome, \hat{e}(X_i) is the estimated propensity score, and \hat{m}(X_i) is the estimated outcome regression. The first term is the inverse probability weighted residual, while the second is the augmentation. If the propensity score \hat{e}(X_i) is correct, the weighted residual term is unbiased, and the augmentation has expectation zero; conversely, if the outcome model \hat{m}(X_i) is correct, the augmentation corrects any bias in the weighting term, yielding an unbiased estimating equation overall. This cancellation mechanism ensures the estimator converges to the true parameter despite errors in one model. Regarding efficiency, the AIPWE demonstrates variance reduction compared to the inverse probability weighted estimator (IPWE) when both nuisance models are correctly specified, as the augmentation incorporates additional information from the outcome model to stabilize estimates.
Simulations in foundational work illustrate this, with AIPWE variance approximately 57% lower than IPWE in scenarios with correct models (e.g., 0.09 versus 0.21 in standardized units). Under correct specification of both models, the AIPWE attains the semiparametric efficiency bound, achieving the lowest possible asymptotic variance among regular estimators in the nonparametric model. In comparison, the IPWE is efficient only when the propensity score model is correctly specified and lacks this bound otherwise, while the AIPWE's enhanced robustness comes at the expense of greater computational demands from fitting dual models.
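The cancellation argument can be checked numerically. In the hypothetical simulation below (numpy only), the propensity model is deliberately misspecified as intercept-only while the outcome regressions are correctly specified; plain IPW then collapses to the confounded naive contrast, but the AIPW augmentation restores consistency:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20_000
X = rng.normal(size=n)
e = 1 / (1 + np.exp(-1.5 * X))            # true propensity depends on X
A = rng.binomial(1, e)
Y = 2 * A + 1.5 * X + rng.normal(size=n)  # true ATE = 2

# Deliberately misspecified propensity model: intercept only
e_bad = np.full(n, A.mean())

def ols_pred(Xa, ya, Xall):
    """Fit y ~ 1 + x by least squares, predict for all units."""
    Xd = np.column_stack([np.ones(len(ya)), Xa])
    b = np.linalg.lstsq(Xd, ya, rcond=None)[0]
    return np.column_stack([np.ones(len(Xall)), Xall]) @ b

m1 = ols_pred(X[A == 1], Y[A == 1], X)    # correct outcome model, arm A=1
m0 = ols_pred(X[A == 0], Y[A == 0], X)    # correct outcome model, arm A=0

# Plain IPW with the bad propensity reduces to the confounded naive contrast
tau_ipw_bad = np.mean(A * Y / e_bad - (1 - A) * Y / (1 - e_bad))

# AIPW with the same bad propensity but correct outcome models
tau_aipw = np.mean((A * (Y - m1) / e_bad + m1)
                   - ((1 - A) * (Y - m0) / (1 - e_bad) + m0))
```

By the double robustness property, the converse also holds: with a correct propensity model and a misspecified outcome model, the AIPW estimate would again remain consistent.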

Applications and Extensions

Use in Causal Inference

Inverse probability weighting (IPW) is widely applied in causal inference to estimate the average treatment effect (ATE) and the average treatment effect in the treated (ATT) from observational data, particularly in epidemiology, where randomized trials are often infeasible. By weighting observations inversely to their probability of receiving the observed treatment—estimated via propensity score models such as logistic regression—IPW balances covariates between treatment groups, mimicking a randomized experiment and reducing confounding bias. In scenarios involving time-varying exposures, IPW forms the basis of marginal structural models (MSMs) to address time-dependent confounding, where prior treatments influence subsequent confounders. MSMs use time-specific IPW to construct a stabilized pseudo-population free of such confounding, enabling consistent estimation of causal effects across multiple treatment periods. This approach is essential in longitudinal studies, such as those tracking dynamic interventions in chronic diseases. Practical implementation of IPW for causal estimation is supported by accessible software tools. The 'ipw' package in R facilitates propensity score estimation, weight calculation, and fitting of MSMs for both point and time-varying treatments. Similarly, SAS provides procedures for propensity-based weighting, including PROC CAUSALTRT for IPW estimation of the ATE in observational data. A seminal case study illustrating IPW's application in causal inference is Hernán et al. (2000), who employed MSMs with IPW to estimate the causal effect of zidovudine (an early antiretroviral therapy) on survival among HIV-positive men. Their analysis of data from the Multicenter AIDS Cohort Study revealed a protective effect (hazard ratio of 0.7, conservative 95% CI: 0.6–1.0, for treated vs. untreated), with IPW substantially reducing bias from time-dependent confounding compared to naive adjustments (unadjusted hazard ratio of 3.6, 95% CI: 3.0–4.3). This work demonstrated IPW's ability to uncover valid causal inferences in complex, real-world settings with evolving treatments.
IPW estimators can be augmented with outcome regression models to achieve double robustness, providing consistent estimates if either the propensity score or outcome model is correctly specified, which enhances practical reliability in epidemiological applications.

Variations in Specific Contexts

In survival analysis, inverse probability of censoring weighting (IPCW) adapts IPW to address right-censoring, where observations may be incomplete because subjects drop out before the event of interest occurs. This method estimates the survival function by reweighting observed data to account for the probability of remaining uncensored, assuming censoring is independent of the outcome conditional on covariates (a missing-at-random-type condition). A common IPCW estimator for the survival function at time t is given by \hat{S}(t) = \frac{\sum_{i=1}^n \hat{w}_i I(T_i > t)}{\sum_{i=1}^n \hat{w}_i}, where \hat{w}_i is the estimated inverse probability of not being censored by time T_i, and I(\cdot) is the indicator function. This approach, introduced in foundational work on adjustment for dependent censoring, enables unbiased estimation of survival probabilities even when censoring depends on observed covariates. In survey sampling, IPW extends the Horvitz-Thompson estimator to adjust for unit nonresponse, where selected units fail to provide data, potentially biasing estimates. The Horvitz-Thompson estimator originally weights sampled units by the inverse of their inclusion probabilities to obtain unbiased totals; for nonresponse, weights are further adjusted by the inverse of estimated response probabilities, often modeled via logistic regression on auxiliary covariates. This adaptation ensures that the weighted sample represents the full target population, reducing nonresponse bias under the assumption that response depends only on observed variables. Such methods are routinely applied in large-scale surveys to correct for differential nonresponse rates across subgroups. For longitudinal data, stabilized IPW is employed in marginal structural models to handle time-varying treatments and censoring, allowing estimation of causal effects while adjusting for time-dependent confounders affected by prior treatment.
Stabilized weights are constructed as the ratio of a numerator (the marginal probability of the observed treatment and censoring history) to a denominator (the corresponding conditional probability given past covariates and treatments), which stabilizes the variance compared to unstabilized weights, whose values can become extreme. This technique, key to estimating the parameters of marginal structural models, has been widely used to assess dynamic treatment regimens, such as in studies evaluating sequential therapies. In high-dimensional settings where the number of covariates exceeds the sample size, IPW incorporates regularized propensity score estimation, such as lasso penalization, to select relevant confounders and avoid overfitting. The lasso applies L1 regularization to the propensity score model, shrinking irrelevant coefficients to zero while estimating inverse probabilities for weighting, thereby enabling consistent estimation under sparsity assumptions. This approach mitigates the curse of dimensionality in observational data with abundant variables, improving efficiency over unregularized methods in scenarios like electronic health records analysis.
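The IPCW survival estimator \hat{S}(t) above can be illustrated with a minimal numpy simulation using exponential event and censoring times. For simplicity this sketch plugs in the known censoring survival function K(u) = P(C > u) rather than an estimated one (a simplifying assumption; in practice K is itself estimated, e.g., from a censoring model):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 50_000
T = rng.exponential(1.0, n)     # event times, true S(t) = exp(-t)
C = rng.exponential(2.0, n)     # censoring times, K(u) = P(C > u) = exp(-u/2)
T_obs = np.minimum(T, C)
delta = (T <= C).astype(float)  # 1 if the event is observed, 0 if censored

t = 1.0
true_S = np.exp(-t)

# Naive complete-case estimate is biased: censoring removes long survivors
naive_S = (T_obs[delta == 1] > t).mean()

# IPCW: weight each uncensored subject by 1 / K(T_i)
w = delta / np.exp(-T_obs / 2)
S_ipcw = np.sum(w * (T_obs > t)) / np.sum(w)
```

Restricting to uncensored subjects over-represents early events, so the naive estimate understates survival, while the IPCW estimate recovers a value near exp(-1).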

References

  1. [1] Practice of Epidemiology: Constructing Inverse Probability Weights for Marginal Structural Models
  2. [2] A Primer on Inverse Probability of Treatment Weighting and Marginal Structural Models
  3. [3] Inverse Probability Weighting: from Survey Sampling to Evidence Estimation
  4. [4] An introduction to inverse probability of treatment weighting in ...
  5. [5] John Snow, Cholera, the Broad Street Pump; Waterborne Diseases ...
  6. [6] On the Application of Probability Theory to Agricultural Experiments ... (Neyman)
  7. [7] A Generalization of Sampling Without Replacement From a Finite Universe (Horvitz and Thompson, 1952)
  8. [8] The Central Role of the Propensity Score in Observational Studies for Causal Effects (Rosenbaum and Rubin)
  9. [9] Marginal structural models and causal inference in epidemiology
  10. [10] teffects ipw — Inverse-probability weighting (Stata manual)
  11. [11] Causal Inference: What If (Hernán and Robins)
  12. [12]
  13. [13] On Variance of the Treatment Effect in the Treated When Estimated ...
  14. [14] Constructing Inverse Probability Weights for Marginal Structural Models
  15. [15] Inverse Probability Weighting
  16. [16] STA 640 — Causal Inference, Chapter 3.4: Propensity Score Weighting
  17. [17] Clarifying its role, assumptions, and estimand in real-world studies
  18. [18] Use of stabilized inverse propensity scores as weights to directly ...
  19. [19] Lecture 15: Inverse Probability Weighted Estimators
  20. [20] Doubly Robust Estimation in Missing Data and Causal Inference Models
  21. [21] Augmented Inverse Probability Weighting and the Double ...
  22. [22] Causal inference of general treatment effects using neural networks ...
  23. [23] Marginal structural models to estimate the causal effect of zidovudine on the survival of HIV-positive men
  24. [24] Recovery of Information and Adjustment for Dependent Censoring ... (Robins, Rotnitzky, and Zhao, 1995)