
Pooled analysis

Pooled analysis is a statistical technique in epidemiology and biostatistics that combines individual-level data from multiple independent studies to enhance statistical power and derive more precise estimates of associations between exposures and outcomes. This approach is particularly valuable when individual studies lack sufficient sample size to detect modest effects or explore subgroups, allowing for standardized variable definitions, robust confounder adjustment, and evaluation of effect modification across datasets. Unlike meta-analysis, which aggregates summary statistics such as odds ratios or risk estimates from published results, pooled analysis requires access to raw, participant-level data, enabling more flexible modeling and direct computation of overall effects using techniques like fixed- or random-effects models to account for between-study variability. This distinction provides greater analytical depth, as it facilitates the incorporation of study-specific covariates and reduces biases from selective reporting in summaries. Methodological guidelines emphasize careful data harmonization, including resolution of discrepancies in measurement scales or follow-up periods, to ensure comparability before pooling.

Pooled analyses have been instrumental in fields like molecular epidemiology and clinical research, where they address questions involving rare outcomes or complex interactions. For instance, in vaccine trials, pooling data from multiple randomized controlled trials has assessed efficacy endpoints such as maternal immunization outcomes across diverse populations. Similarly, in infectious disease research, combined datasets exceeding 5,000 participants have quantified risk reductions, such as a 30% lower risk of HSV-2 acquisition with consistent condom use (HR 0.70, 95% CI 0.40–0.94). These applications underscore the method's role in informing policy and advancing evidence-based interventions, though challenges like data privacy and heterogeneity persist.

Introduction

Definition

Pooled analysis is a statistical technique that combines raw, individual-level data from multiple independent studies to conduct a single, unified analysis, commonly applied in epidemiology and clinical research to enhance statistical power and generalizability. This approach treats the aggregated dataset as originating from one large study, enabling the examination of effects that may be undetectable in smaller, individual datasets. A defining characteristic of pooled analysis is its reliance on access to original participant-level data, such as individual covariates, exposure measures, and outcomes, rather than aggregated summary statistics. This facilitates more nuanced investigations, including subgroup analyses and adjustments for confounding factors at the individual level. True pooled analysis emphasizes individual participant data (IPD) pooling, where detailed records for each study participant are harmonized and analyzed together; in contrast, aggregate data pooling simply combines summary measures and is considered a less robust variant. For effective implementation, studies must have compatible data structures, including overlapping variables and measurement protocols, to ensure valid integration. Unlike meta-analysis, which synthesizes summary statistics from published reports, pooled analysis requires raw data sharing to achieve its analytical depth.

Historical Development

Pooled analysis originated in the late 1980s and early 1990s as an extension of meta-analytic approaches in epidemiology, building on efforts to quantitatively synthesize results from multiple studies while addressing limitations in summary-level combinations. This development was driven by the need for greater statistical power and precision in observational research, particularly in fields like cancer epidemiology where individual-level data allowed for more nuanced investigations of risk factors. Early discussions, such as those by Begg and Berlin in 1988 on publication bias in interpreting combined medical data, highlighted challenges in aggregating study results and paved the way for refined pooling techniques. A pivotal advancement came with the 1993 paper by Friedenreich, which outlined systematic methods for pooling and analyzing individual participant data from prior epidemiologic studies, emphasizing data harmonization to minimize heterogeneity and bias. This work formalized pooled analysis as distinct from traditional meta-analysis by leveraging raw data for direct statistical modeling. In 1998, Petitti's review further clarified the method's position in epidemiologic synthesis, contrasting it with narrative reviews and meta-analyses of published summaries, and underscoring its utility for deriving more reliable effect estimates.

The early 2000s marked key milestones in standardization, including the 2000 MOOSE guidelines, which provided reporting recommendations for meta-analyses of observational studies and explicitly incorporated pooled individual-level approaches to enhance transparency and reproducibility. Adoption accelerated through large consortia, exemplified by the Pooling Project of Prospective Studies of Diet and Cancer in the mid-2000s, which harmonized data across cohorts to explore dietary influences on cancer risk with unprecedented scale. By the 2010s, pooled analysis had transitioned from ad-hoc integrations to protocol-driven practices, propelled by data-sharing consortia and computational innovations that facilitated secure handling of distributed datasets without full disclosure. This evolution enabled broader collaborative efforts, such as international research networks, improving the method's applicability to complex, multifactor research questions.

Versus Meta-Analysis

Pooled analysis and meta-analysis both aim to synthesize evidence from multiple studies but differ fundamentally in their approach to data handling and statistical modeling. Pooled analysis involves combining individual participant data (IPD) from multiple studies and treating the dataset as a single, large study for direct statistical modeling. In contrast, meta-analysis typically aggregates summary statistics, such as odds ratios or risk ratios, from each study and combines them using methods like fixed- or random-effects models to derive an overall effect estimate. This core distinction arises because pooled analysis requires access to raw, participant-level data, enabling more granular analyses, whereas meta-analysis relies on published or extracted aggregate results, which are more readily available but limit the depth of investigation.

The analytical advantages of pooled analysis stem from its use of IPD, which permits subgroup analyses at the individual level, adjustment for confounders not reported in published summaries, and exploration of interactions between variables that cannot be reliably assessed with aggregated data alone. For instance, IPD allows standardization of variable definitions and outcome measurements across studies, reducing heterogeneity and enabling multivariate adjustments that enhance the validity of causal inferences. Meta-analysis, while efficient for synthesizing broad trends, often faces limitations in handling such complexities, as aggregate data may obscure variations in participant characteristics or study protocols.

Meta-analysis is preferable for rapid evidence synthesis when IPD is unavailable or when the goal is a high-level overview of effect sizes across diverse studies, as it can be conducted using publicly available summary data. Pooled analysis, however, is ideal for deeper insights into heterogeneity and effect modifiers when collaborators grant access to raw data, though it demands more resources for data harmonization and ethical approvals.

To illustrate, consider multiple cohort studies examining the association between an exposure and disease risk. In a pooled analysis, IPD enables direct modeling of interactions, such as how an individual-level characteristic modifies the exposure-outcome relationship within a unified model, as shown in the sketch below. By comparison, a meta-analysis would pool summary risk estimates from each study using fixed- or random-effects models, providing an overall estimate but without the ability to explore participant-specific interactions or adjust for cross-study confounders at the individual level.
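To make the contrast concrete, here is a minimal Python sketch (assuming numpy, pandas, and statsmodels are available) that simulates IPD from three hypothetical cohorts, fits a one-stage pooled logistic model with an individual-level interaction, and then mimics the two-stage route by inverse-variance pooling of study-specific estimates. All variable names, sample sizes, and effect values are illustrative assumptions, not drawn from any real study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulate IPD from three hypothetical cohorts with a true
# exposure-by-modifier interaction and study-specific baseline risk.
frames = []
for study_id in range(3):
    n = 2000
    exposure = rng.binomial(1, 0.4, n)
    modifier = rng.binomial(1, 0.5, n)  # hypothetical binary effect modifier
    logit = -2.0 + 0.5 * exposure + 0.4 * exposure * modifier + 0.2 * study_id
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
    frames.append(pd.DataFrame({"y": y, "exposure": exposure,
                                "modifier": modifier, "study": study_id}))
ipd = pd.concat(frames, ignore_index=True)

# One-stage pooled analysis: a single logistic model on the combined IPD,
# with study-specific intercepts and an individual-level interaction term.
one_stage = smf.logit("y ~ exposure * modifier + C(study)", data=ipd).fit(disp=0)
print(one_stage.params[["exposure", "exposure:modifier"]])

# Two-stage route: fit each study separately, then pool the study-level
# log odds ratios with inverse-variance (fixed-effect) weights.
betas, variances = [], []
for _, grp in ipd.groupby("study"):
    fit = smf.logit("y ~ exposure", data=grp).fit(disp=0)
    betas.append(fit.params["exposure"])
    variances.append(fit.bse["exposure"] ** 2)
w = 1 / np.array(variances)
pooled_log_or = np.sum(w * np.array(betas)) / np.sum(w)
print("two-stage pooled OR:", np.exp(pooled_log_or))
```

Note that the interaction coefficient is only directly estimable in the one-stage IPD model; an analyst working from published marginal odds ratios alone could not recover it.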

Versus Traditional Literature Reviews

Traditional literature reviews, often referred to as narrative reviews, consist of qualitative summaries of published study findings without any statistical integration of data. These reviews typically provide descriptive overviews of the literature, synthesizing key themes and conclusions from multiple sources in a narrative format. However, they are inherently subjective, relying on the reviewer's interpretation and selection of studies, which can introduce biases such as selective reporting and the exclusion of conflicting evidence.

A major limitation of traditional narrative reviews is their inability to produce quantitative estimates of overall effect sizes, such as relative risks or odds ratios, or to formally test for heterogeneity across studies. This approach often results in vague conclusions, like "most studies suggest an association," without providing precise measures of uncertainty or the magnitude of effects. Consequently, these reviews are prone to publication bias, where positive or significant findings are overrepresented, and they cannot adjust for variations in study design, population, or methodology.

In contrast, pooled analysis addresses these shortcomings by quantitatively integrating raw individual-level data from multiple studies, yielding a formal and reproducible estimate of effects through standardized statistical models. This method reduces subjective bias by employing objective criteria for study inclusion and analysis, allowing for adjustments that account for confounding factors and heterogeneity, thereby providing more robust evidence synthesis. For instance, pooled analyses can generate precise confidence intervals around effect estimates, enhancing the reliability of conclusions compared to the descriptive nature of narrative reviews.

Historically, narrative reviews dominated epidemiological literature synthesis prior to the 1980s, serving as the primary means to consolidate knowledge amid growing research volumes. The shift toward quantitative methods like pooled analysis gained momentum in the late 1980s and 1990s, driven by the need for more rigorous evidence to inform public health decisions, particularly for assessing weak associations in large-scale studies. This evolution marked a transition from subjective overviews to data-driven approaches, paralleling the rise of meta-analysis as another quantitative alternative.

Methodology

Data Collection and Preparation

In pooled analysis, the initial phase involves selecting studies suitable for combining their individual participant data (IPD), which consists of raw, participant-level information from multiple primary studies addressing a similar research question. Inclusion criteria typically emphasize similarity in study design, such as randomized controlled trials with comparable interventions, outcome measures, and participant populations, to ensure data comparability and minimize heterogeneity. Collaboration is often facilitated through consortia or data-sharing agreements, where investigators from eligible studies are contacted systematically via literature searches in databases like PubMed, and multiple reviewers independently assess eligibility to promote transparency.

Data acquisition follows study selection and centers on obtaining IPD directly from original investigators or sponsors, as this is essential for the pooling process. Requests are typically made through formal letters or emails outlining the analysis protocol, with secure methods like encrypted file transfers used to protect participant confidentiality. Ethical approvals, such as institutional review board (IRB) compliance, are required where applicable, alongside data use agreements that specify terms for access, analysis, and publication to address privacy concerns under regulations like GDPR. Efforts aim to include data from upwards of 90% of eligible participants to reduce selection bias, with platforms like Vivli or the Yale Open Data Access (YODA) Project aiding access when direct collaboration is challenging.

Once acquired, the harmonization process standardizes disparate datasets into a unified format to enable pooling. This includes recoding variables for consistency, such as aligning exposure categories across studies or converting measurement units (for instance, standardizing body mass index calculations using WHO guidelines), and resolving discrepancies in definitions like disease staging through consultation with original investigators. Missing data is handled via methods like multiple imputation for moderate levels (e.g., under 50% missingness per variable) or complete case exclusion for higher rates, with sensitivity analyses to evaluate impact; original datasets are archived, and all transformations are logged to maintain traceability.

Quality checks are integral to verify the prepared data's reliability before analysis, encompassing assessments of completeness, validity, and comparability. Investigators perform checks for duplicates, outliers, and implausible values (e.g., ages outside reasonable ranges), often replicating published results to confirm accuracy within a 10% threshold of standardized differences. Key summary statistics are cross-verified against original publications, with any inconsistencies resolved through queries, and documentation follows guidelines like PRISMA-IPD to report all steps, ensuring reproducibility and transparency in the pooling effort.
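The harmonization and quality-check steps above can be illustrated with a short Python sketch using pandas. Every file name, column name, unit mapping, and plausibility threshold here is a hypothetical placeholder chosen for illustration, not a prescribed standard.

```python
import numpy as np
import pandas as pd

# Load study-specific datasets (paths and layouts are hypothetical):
# cohort A stores weight in kg and height in cm; cohort B stores
# weight in pounds and height in inches under different column names.
a = pd.read_csv("cohort_a.csv")   # columns: age, weight_kg, height_cm, smoker (yes/no)
b = pd.read_csv("cohort_b.csv")   # columns: age, wt_lb, ht_in, smoke_status (0/1)

# Step 1: align variable names, codings, and unit conventions.
b = b.rename(columns={"smoke_status": "smoker",
                      "wt_lb": "weight_kg", "ht_in": "height_cm"})
b["weight_kg"] = b["weight_kg"] * 0.453592   # pounds -> kilograms
b["height_cm"] = b["height_cm"] * 2.54       # inches -> centimeters
a["smoker"] = a["smoker"].map({"yes": 1, "no": 0})

# Step 2: derive standardized variables, e.g., BMI = kg / m^2.
for df in (a, b):
    df["bmi"] = df["weight_kg"] / (df["height_cm"] / 100) ** 2

# Step 3: pool, tagging each record with its source study for later modeling.
a["study"], b["study"] = "A", "B"
pooled = pd.concat([a, b], ignore_index=True)

# Step 4: quality checks - flag implausible values and quantify missingness
# before choosing between imputation and complete-case exclusion.
implausible = pooled[(pooled["age"] < 0) | (pooled["age"] > 110)]
missing_rate = pooled.isna().mean()
print(f"{len(implausible)} implausible ages; missingness by variable:\n{missing_rate}")
```

In practice each transformation would also be logged and the original files archived, so the harmonized dataset remains traceable back to its sources.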

Statistical Analysis Techniques

In pooled analysis, two main statistical approaches are used: one-stage and two-stage. The one-stage approach treats the combined individual participant data from multiple studies as a single dataset, enabling the application of standard regression models while accounting for study-specific effects. This method incorporates all data simultaneously and handles complex interactions, and is often preferred when exploring subgroups or effect modification. The two-stage approach first analyzes IPD within each study separately to obtain study-specific estimates (e.g., treatment effects and variances), then pools these aggregate results using meta-analytic techniques, such as fixed- or random-effects models, to account for between-study heterogeneity. It is computationally simpler and commonly used when one-stage models are difficult to fit.

For binary outcomes, such as disease presence or absence, pooled logistic regression is commonly employed in the one-stage framework. The model estimates the log-odds of the outcome as a function of covariates, incorporating study-specific intercepts to account for differences across studies:

\text{logit}(P(Y_{ij} = 1)) = \beta_0 + \beta_1 X_{ij} + \gamma_i

where Y_{ij} is the outcome for participant j in study i, X_{ij} represents covariates (e.g., treatment), and \gamma_i is a study-specific fixed effect. If substantial between-study heterogeneity in covariate effects is anticipated, random effects can be added to the slope terms, such as \beta_1 + u_i with u_i \sim N(0, \tau^2), where \tau^2 quantifies between-study variation.

In two-stage approaches for binary outcomes, study-specific logistic regressions yield odds ratios, which are then pooled using methods like the Mantel-Haenszel procedure, a fixed-effects weighted average assuming a common effect:

\hat{OR}_{MH} = \frac{\sum_k w_k \hat{OR}_k}{\sum_k w_k}

where the Mantel-Haenszel weights are w_k = b_k c_k / n_k, with b_k and c_k the off-diagonal cell counts and n_k the total sample size of study k. Random-effects extensions incorporate \tau^2 via methods like DerSimonian-Laird (see the worked example at the end of this section).

For time-to-event outcomes, like survival times, the Cox proportional hazards model is the primary technique in the one-stage framework. It assumes proportional hazards and models the hazard function stratified by study or with study indicators:

h_{ij}(t) = h_{0i}(t) \exp(\beta_1 X_{ij} + \gamma_i)

Here, h_{0i}(t) is the study-specific baseline hazard, and \gamma_i accounts for clustering; random frailty terms u_i can replace \gamma_i for random-effects modeling of heterogeneity. Proportionality is typically assessed via Schoenfeld residuals or time-dependent covariates. In two-stage approaches, study-specific hazard ratios are pooled similarly.

Continuous outcomes, such as blood pressure measurements, are analyzed using linear regression models on the pooled data in the one-stage approach:

Y_{ij} = \beta_0 + \beta_1 X_{ij} + \gamma_i + \epsilon_{ij}

with \epsilon_{ij} \sim N(0, \sigma^2) and study effects \gamma_i treated as fixed or random to address clustering. Two-stage methods involve study-specific linear models followed by meta-analysis of mean differences.

Inference in these models adjusts standard errors for clustering within studies using robust or sandwich estimators to ensure valid confidence intervals and p-values. Maximum likelihood or restricted maximum likelihood estimation is standard, implemented in software like R (e.g., the coxme package) or Stata. Sensitivity analyses, such as multiple imputation for missing data or exclusion of individual studies, evaluate robustness to assumptions like missing at random.
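As a worked example of the two-stage formulas above, the following Python sketch computes the fixed-effects Mantel-Haenszel odds ratio and a DerSimonian-Laird random-effects estimate from hypothetical 2×2 study tables; all counts are invented for illustration.

```python
import numpy as np

# Hypothetical 2x2 tables per study: (events_treat, n_treat, events_ctrl, n_ctrl)
studies = [(30, 200, 45, 200), (12, 150, 20, 150), (55, 400, 70, 400)]

log_ors, variances, mh_num, mh_den = [], [], 0.0, 0.0
for e1, n1, e0, n0 in studies:
    a, b = e1, n1 - e1          # treated: events / non-events
    c, d = e0, n0 - e0          # control: events / non-events
    n = a + b + c + d
    mh_num += a * d / n         # Mantel-Haenszel numerator term
    mh_den += b * c / n         # Mantel-Haenszel weight term
    log_ors.append(np.log((a * d) / (b * c)))
    variances.append(1 / a + 1 / b + 1 / c + 1 / d)  # Woolf variance of log OR

print("fixed-effect Mantel-Haenszel OR:", round(mh_num / mh_den, 3))

# DerSimonian-Laird: estimate between-study variance tau^2 from Cochran's Q,
# then re-pool the log odds ratios with weights 1 / (v_k + tau^2).
y, v = np.array(log_ors), np.array(variances)
w = 1 / v
q = np.sum(w * (y - np.sum(w * y) / np.sum(w)) ** 2)
k = len(y)
tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_re = 1 / (v + tau2)
or_re = np.exp(np.sum(w_re * y) / np.sum(w_re))
print("random-effects OR:", round(or_re, 3), "tau^2:", round(tau2, 4))
```

With homogeneous tables such as these, tau^2 shrinks toward zero and the random-effects estimate converges on the fixed-effect result; heterogeneous tables widen the gap between the two.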

Applications

In Epidemiology

In epidemiology, pooled analysis is commonly employed to investigate rare exposures or outcomes by combining individual-level data from multiple studies, such as case-control investigations of environmental carcinogens, where single studies often lack sufficient statistical power. This approach also facilitates adjustment for key confounders, including age, sex, and lifestyle factors, across diverse cohorts to enhance the precision of risk estimates at the individual level. By standardizing data harmonization protocols, researchers can address variations in measurement across studies while preserving the granularity needed for robust epidemiological inference.

A prominent example is the Pooling Project of Prospective Studies of Diet and Cancer, initiated in the 1990s by investigators at the Harvard School of Public Health and collaborators, which harmonizes data from over 20 prospective cohorts encompassing more than 500,000 participants to examine dietary risk factors for various cancers, including colorectal and breast malignancies. Another key application is the INTERPHONE study, conducted in the 2000s across 13 countries, which pooled case-control data from approximately 5,000 cases and 7,000 controls to assess associations between mobile phone use and glioma or meningioma risk, revealing no overall increased risk but highlighting potential trends in heavy users.

Pooled analysis significantly boosts statistical power for subgroup analyses, such as stratifying by demographic characteristics or geographic region, allowing detection of heterogeneous effects that might be obscured in smaller datasets. It further enables detailed exploration of gene-environment interactions, for instance by integrating genetic variants with exposure data from multiple cohorts to uncover how environmental or lifestyle factors modify hereditary risks for diseases such as cancer. More recent applications include a 2025 pooled analysis of 3,741 stool metagenomes from 18 cohorts to identify microbial biomarkers for colorectal cancer screening and progression.

Methodological adaptations in epidemiological pooled analyses emphasize mitigating observational biases inherent to multi-study designs, particularly those arising from differing recruitment criteria or loss to follow-up across cohorts. Strategies include rigorous data standardization to align exposure definitions and outcome ascertainment, as well as sensitivity analyses to evaluate the impact of non-random selection, ensuring that pooled estimates remain representative of broader populations.

In Clinical Research

In clinical research, pooled analysis serves as a powerful method for integrating individual data from multiple randomized controlled trials (RCTs), particularly to increase statistical power for detecting rare events, such as adverse effects in drug safety evaluations. This approach enables more precise estimates of treatment effects compared to study-level summaries, as it allows direct access to patient-level data for subgroup analyses and adjustment for confounders.

A prominent application is individual patient data meta-analysis (IPD-MA), a subtype of pooled analysis that combines raw participant-level data across RCTs to assess treatment outcomes with greater granularity, including interactions and prognostic factors. For example, the Cholesterol Treatment Trialists' (CTT) Collaboration has conducted ongoing IPD-MA of statin trials since the 1990s, pooling data from over 170,000 participants in more than 25 RCTs to demonstrate that lowering LDL cholesterol by 1 mmol/L reduces major vascular events by about 21%, with consistent benefits across diverse subgroups. Similarly, pooled analyses of phase III COVID-19 vaccine trials, such as those for the Oxford-AstraZeneca (ChAdOx1 nCoV-19) vaccine, integrated individual data from four international RCTs involving over 23,000 participants, yielding an overall efficacy of 70.4% against symptomatic infection and highlighting efficacy variations by dosing interval. Recent examples include a 2025 IPD meta-analysis of RCTs evaluating P2Y12 inhibitors versus aspirin monotherapy after percutaneous coronary intervention, assessing cardiovascular outcomes.

Pooled analyses in clinical settings require adaptations to handle inter-trial heterogeneity, such as differences in dosing regimens or eligibility criteria, often through stratified analyses or multivariable models that adjust for these protocol variations as covariates. For endpoints involving patient outcomes like survival, time-to-event analyses, typically employing Cox proportional hazards models on the combined dataset, facilitate estimation of hazard ratios while accounting for censoring and follow-up differences across trials, as sketched in the example at the end of this section.

Regulatory agencies, including the U.S. Food and Drug Administration (FDA) and the European Medicines Agency (EMA), leverage pooled analyses for post-marketing surveillance to evaluate long-term safety signals and support label expansions or new indications. For instance, the FDA has conducted pooled analyses of individual patient data from multiple RCTs to assess treatment effects in specific subgroups, informing approvals and updates to labeling based on the enhanced statistical power of combined datasets.
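The following Python sketch illustrates the one-stage time-to-event approach described above, using the lifelines library to fit a Cox model on simulated IPD from four hypothetical trials, with the baseline hazard stratified by trial. All data-generating values and the censoring rule are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
frames = []
for trial in range(4):
    n = 500
    treat = rng.binomial(1, 0.5, n)
    # Exponential event times with a true hazard ratio of ~0.75 for treatment
    # and trial-specific baseline rates; administrative censoring at t = 5.
    rate = 0.2 * (1 + 0.1 * trial) * np.exp(np.log(0.75) * treat)
    t = rng.exponential(1 / rate)
    frames.append(pd.DataFrame({"time": np.minimum(t, 5.0),
                                "event": (t <= 5.0).astype(int),
                                "treat": treat, "trial": trial}))
ipd = pd.concat(frames, ignore_index=True)

# Stratifying by trial gives each study its own baseline hazard h_0i(t),
# so only the treatment effect is pooled across trials.
cph = CoxPHFitter()
cph.fit(ipd, duration_col="time", event_col="event", strata=["trial"])
print(cph.summary[["coef", "exp(coef)", "p"]])   # exp(coef) ~ pooled HR
```

Stratification avoids assuming a shared baseline hazard across trials, which is one standard way pooled clinical analyses absorb protocol differences such as varying follow-up schedules.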

Advantages and Limitations

Advantages

Pooled analysis, which involves combining individual participant data from multiple studies, offers substantial benefits over summary-level methods such as standard meta-analysis. One primary advantage is the increased statistical power achieved through the larger effective sample size, which reduces variance in effect estimates and enables the detection of small effect sizes, particularly in studies of rare diseases or uncommon exposures. For instance, this approach has demonstrated up to a sixfold increase in power for identifying differential treatment effects compared to aggregate-data analyses.

Another key benefit is enhanced adjustability, as pooled analysis provides direct access to individual-level covariates, allowing for precise control of confounding factors and interactions that minimizes ecological bias, where inferences from group-level data may not accurately reflect individual-level relationships. This individual-level adjustment surpasses the limitations of aggregate-data meta-analysis, which relies on pre-specified summary measures and cannot fully account for patient-specific variables.

Pooled analysis also affords greater flexibility in hypothesis testing, enabling the exploration of novel questions such as time-dependent effects or non-linear interactions that are infeasible with aggregate data alone. Researchers can standardize outcome definitions across studies and apply complex models, like those incorporating prognostic factors, to uncover insights beyond the original study designs.

Finally, it improves precision in estimates by better handling heterogeneity through study-level adjustments and standardized methodologies, resulting in narrower and more reliable confidence intervals. This precision is particularly valuable for informing clinical guidelines, as it reduces between-study variability and enhances the robustness of pooled results compared to traditional meta-analytic approaches.

Limitations and Challenges

One of the primary challenges in conducting pooled analyses, particularly those involving individual participant data (IPD), is the difficulty of obtaining access to the necessary data. Investigators may be reluctant to share data due to concerns over privacy, competitive advantage, or loss of control, while legal and ethical barriers such as compliance with regulations like the General Data Protection Regulation (GDPR) or the Health Insurance Portability and Accountability Act (HIPAA) impose strict requirements for consent, ethical approvals, and data security. Additionally, practical issues including inability to contact original authors, unclear data ownership, lost datasets, or outright refusals can hinder collection efforts, often resulting in availability bias where only certain studies, potentially those with favorable results, are included. These access constraints can limit the scope of the analysis and introduce selection biases that compromise the representativeness of the pooled dataset.

Beyond data acquisition, unresolvable heterogeneity among studies poses a significant hurdle, even after statistical modeling attempts. Differences in study populations, such as varying eligibility criteria, demographic compositions, or geographic factors, may prevent effective pooling and lead to biased estimates if not fully accounted for. Similarly, inconsistencies in measurement protocols, outcome definitions, or methods across studies can introduce variability that undermines the validity of combined results, particularly in high-dimensional or sensitive datasets like clinical trials or biomarker studies. When such heterogeneity cannot be resolved through harmonization, it may necessitate exclusion of otherwise eligible studies, further reducing the pooled sample size and statistical power.

Pooled analyses are also resource-intensive, demanding substantial time, computational resources, and collaborative coordination among multiple stakeholders. The process of data collection, cleaning, harmonization, and analysis is far more complex than aggregate data approaches, often requiring specialized expertise and infrastructure that may not be readily available. This intensity is exacerbated by data privacy concerns, which add layers of administrative burden, and can lead to publication or availability bias if only studies with positive or significant findings are shared, skewing the overall evidence synthesis. In practice, these demands may outweigh the benefits for some research questions, prompting researchers to consider alternatives like partial derivatives meta-analysis when full IPD sharing is infeasible.

Key potential pitfalls include selection bias in pooled datasets derived from a smaller number of studies than anticipated, due to non-participation or data unavailability, which can inflate the risk of spurious associations. Missing data, non-standardized variables, and computational challenges in advanced modeling further complicate interpretation, necessitating rigorous sensitivity analyses to assess the robustness of findings against these issues. Mitigation strategies, such as prospectively defining clear eligibility criteria, establishing data-sharing platforms like ClinicalStudyDataRequest.com, and conducting thorough quality checks, can help address these risks but require upfront planning and investment.
