
Distributed lag

A distributed lag model is an econometric framework used to analyze time series data in which the current value of a dependent variable is influenced not only by contemporaneous values of independent variables but also by their past values distributed across multiple time periods, allowing for the modeling of delayed or gradual effects in dynamic systems. These models are particularly valuable in econometrics for capturing how policy changes, investments, or shocks propagate over time rather than exerting their full impact immediately. The concept of distributed lags traces its origins to the early twentieth century, with foundational work by economists such as Irving Fisher and Jan Tinbergen, who explored lagged responses in business cycles. Significant advancements occurred in the mid-20th century, including the infinite geometric distributed lag model proposed by Leendert Koyck in 1954, which assumes exponentially declining weights on past values to simplify estimation of long-term effects. In 1965, Shirley Almon introduced the polynomial distributed lag technique, a flexible method for finite lags that approximates the lag weights with polynomial functions, enabling estimation of arbitrary lag shapes while reducing the number of parameters. Distributed lag models are classified into finite and infinite types: finite models limit the lag length to a specific number of periods, while infinite models extend the lags indefinitely, often under restrictive assumptions such as geometric decay to ensure tractability. Estimation challenges arise from multicollinearity among lagged variables and are addressed through techniques such as the Koyck transformation for infinite lags or Almon's polynomial restrictions for finite ones, which impose structure on the lag coefficients. These models have broad applications in econometrics, such as analyzing the lagged effects of monetary policy on output or of advertising expenditures on sales, and extend to fields like environmental epidemiology for modeling delayed health impacts.

Fundamentals

Definition

A distributed lag model is a framework in time series analysis where the current value of a dependent variable is influenced not only by the contemporaneous value of an independent variable but also by its past values, with the effects dispersed across multiple time periods. This approach accounts for the realistic dynamics of economic and other processes in which responses to stimuli occur gradually rather than instantaneously. In contrast to simple lag models, which typically incorporate only a single lagged term (either of the dependent variable, as in autoregressions, or of an independent variable), distributed lag models emphasize a series of weighted lags of the independent variables, capturing how impacts accumulate and fade over time. This weighting scheme allows flexible modeling of the buildup and decay of relationships, such as in consumption or investment decisions. The concept of distributed lags traces its origins to the 1920s in econometrics, pioneered by Irving Fisher in 1925 and developed further by Jan Tinbergen in the 1930s, to analyze dynamic responses in macroeconomic systems such as the propagation of business cycles. For instance, a policy change, such as an increase in government spending, might boost output immediately but continue to exert influence over subsequent quarters, with the strongest effects in the short term and tapering thereafter.

Mathematical Formulation

The distributed lag model provides a regression framework for capturing the dynamic effects of an explanatory variable on a dependent variable over multiple time periods. In its finite form, the model is expressed as y_t = \alpha + \sum_{j=0}^{q} \beta_j x_{t-j} + \epsilon_t, where y_t is the dependent variable at time t, \alpha is the intercept, x_{t-j} are lagged values of the explanatory variable up to lag q, \beta_j are the lag coefficients, and \epsilon_t is the error term. This formulation assumes that the effects of x dissipate after q periods, making it suitable for empirical estimation via ordinary least squares when q is small. For cases where effects persist indefinitely, the model extends to an infinite distributed lag: y_t = \alpha + \sum_{j=0}^{\infty} \beta_j x_{t-j} + \epsilon_t, with the condition that the infinite sum converges, typically requiring \sum_{j=0}^{\infty} |\beta_j| < \infty. This infinite form allows for perpetual but diminishing impacts, yet it necessitates restrictions for practical estimation.

Key assumptions underpin these models to ensure valid inference. The variables y_t and x_t are assumed to be stationary, meaning their statistical properties remain constant over time, which facilitates the interpretation of dynamic relationships. The error term \epsilon_t is assumed to be free of autocorrelation, exhibiting white noise properties with zero mean and constant variance. Additionally, strict exogeneity holds for the lagged independent variables, implying that the conditional mean of \epsilon_t given current and all past (and future) values of x is zero, ensuring unbiased estimates.

The lag coefficients \beta_j represent the marginal effect of a unit change in x_{t-j} on y_t, holding other factors constant; for instance, \beta_0 captures the immediate impact, while subsequent \beta_j quantify delayed responses. In the infinite case, the long-run effect is the sum \sum_{j=0}^{\infty} \beta_j, provided it converges.

These models are often compactly represented using the lag operator L, where L x_t = x_{t-1} and L^k x_t = x_{t-k}. The infinite lag structure is then denoted by the lag polynomial \beta(L) = \sum_{j=0}^{\infty} \beta_j L^j, so the model becomes y_t = \alpha + \beta(L) x_t + \epsilon_t. This operator notation highlights the distributed nature of the lags and aids in deriving restricted forms, such as geometric or rational lags.
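To make the finite formulation concrete, the following sketch (illustrative only; the parameter values and variable names are assumptions, not taken from any cited study) simulates data from a finite distributed lag process and recovers the \beta_j by ordinary least squares, with the cumulative long-run effect computed as the sum of the estimated lag coefficients.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T, q = 500, 3
alpha = 2.0
beta_true = np.array([1.0, 0.6, 0.3, 0.1])      # assumed beta_0 .. beta_q

x = rng.normal(size=T)
eps = rng.normal(scale=0.5, size=T)

# Build y_t = alpha + sum_j beta_j * x_{t-j} + eps_t for t = q, ..., T-1
y = alpha + eps
for j, b in enumerate(beta_true):
    y[q:] = y[q:] + b * x[q - j: T - j]
y = y[q:]                                       # drop observations without full lags

# Regressor matrix with columns x_t, x_{t-1}, ..., x_{t-q}
X = np.column_stack([x[q - j: T - j] for j in range(q + 1)])
fit = sm.OLS(y, sm.add_constant(X)).fit()

print(fit.params)            # estimated alpha and beta_0 .. beta_q
print(fit.params[1:].sum())  # estimated long-run (cumulative) effect
```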

Types

Finite Distributed Lags

In finite distributed lag models, the impact of an explanatory variable x_t on the dependent variable y_t is modeled as occurring only through current and past values up to a maximum lag length q, with the lag coefficients \beta_j set to zero for all j > q. The general form is given by y_t = \alpha + \sum_{j=0}^q \beta_j x_{t-j} + \epsilon_t, where \alpha is the intercept, \epsilon_t is the error term, and the \beta_j capture the dynamic effects over the finite horizon. These models offer computational simplicity due to the limited number of parameters, making them easier to estimate via ordinary least squares than unrestricted forms with longer lags. However, if the true underlying lag structure extends beyond q, the truncation introduces specification bias in the estimates of the \beta_j. Additionally, the regressors x_{t-j} for j = 0, \dots, q tend to be highly correlated, resulting in multicollinearity that inflates variances and reduces the precision of coefficient estimates.

To address multicollinearity and reduce the parameter space, common restrictions impose structure on the \beta_j, such as the polynomial distributed lag (PDL) approach introduced by Almon. In this method, the lag weights are constrained to follow a polynomial of degree p: \beta_j = \sum_{k=0}^p \gamma_k j^k, \quad j = 0, 1, \dots, q, where the \gamma_k are the parameters to be estimated, typically with p < q to ensure a genuine reduction in parameters. This approximation assumes that the effects vary smoothly over time, often with endpoint constraints such as \beta_0 = 0 or \beta_q = 0 to reflect the absence of immediate or terminal impacts.

The finite nature and parametric restrictions of these models mitigate estimation challenges by limiting multicollinearity and stabilizing inference, making them well suited to scenarios where effects are predominantly short-term and dissipate after a few periods. For instance, in analyzing the influence of quarterly advertising expenditures on sales, a finite distributed lag of order 4 might be used to quantify how promotional efforts affect sales in the current quarter and the four that follow, capturing impacts in the short run while assuming negligible longer-term carryover.
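A minimal sketch of the Almon transform follows, assuming the polynomial parameterization above; the helper names (almon_design, almon_fit) and the choices q = 8 and p = 2 are illustrative. The key step is collapsing the q + 1 lagged regressors into p + 1 constructed regressors z_{t,k} = \sum_j j^k x_{t-j}, estimating the \gamma_k by OLS, and mapping them back to the implied \beta_j.

```python
import numpy as np
import statsmodels.api as sm

def almon_design(x, q, p):
    """Constructed regressors z_{t,k} = sum_{j=0}^{q} j**k * x_{t-j}, for k = 0..p."""
    T = len(x)
    lags = np.column_stack([x[q - j: T - j] for j in range(q + 1)])   # x_t .. x_{t-q}
    powers = np.vander(np.arange(q + 1), N=p + 1, increasing=True)    # row j = (j^0, ..., j^p)
    return lags @ powers                                              # shape (T - q, p + 1)

def almon_fit(y, x, q=8, p=2):
    """OLS on the Almon-transformed regressors; returns implied lag weights and the fit."""
    Z = almon_design(x, q, p)
    fit = sm.OLS(y[q:], sm.add_constant(Z)).fit()
    gamma = fit.params[1:]                                            # gamma_0 .. gamma_p
    powers = np.vander(np.arange(q + 1), N=p + 1, increasing=True)
    beta = powers @ gamma                                             # implied beta_0 .. beta_q
    return beta, fit
```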

Infinite Distributed Lags

Infinite distributed lag models treat the impact of an explanatory variable on the dependent variable as persisting indefinitely, without a fixed maximum lag length, allowing for theoretically endless lagged effects. The general form is y_t = \alpha + \sum_{j=0}^{\infty} \beta_j x_{t-j} + \epsilon_t, where y_t is the outcome at time t, x_{t-j} are lagged values of the explanatory variable, and \epsilon_t is the error term. To address the challenge of estimating infinitely many parameters, these models often impose a geometric decay structure on the coefficients, such that \beta_j = \beta \lambda^j for j = 0, 1, 2, \dots, where 0 < \lambda < 1 ensures the weights diminish over time and the series converges. This assumption reflects an exponentially declining influence of past values, common in economic processes such as investment or advertising responses.

Such models capture long-run persistence in relationships, where early shocks continue to influence outcomes far into the future, but the infinite number of parameters necessitates restrictions such as geometric decay to prevent overparameterization and enable practical estimation. Without these constraints, direct estimation is infeasible because of the unlimited parameter count and the multicollinearity among the lags.

A key method for estimating the geometric infinite lag is the Koyck transformation, introduced by Leendert M. Koyck. This involves lagging the original equation by one period, multiplying it by \lambda, and subtracting it from the contemporaneous equation, yielding the finite-parameter autoregressive form: y_t = \alpha (1 - \lambda) + \lambda y_{t-1} + \beta x_t + u_t, where u_t = \epsilon_t - \lambda \epsilon_{t-1} is a moving-average error process of order one. This transformation reduces the model to the estimable parameters \lambda, \beta, and the intercept, while implicitly preserving the infinite lag structure. Under the geometric assumption, the long-run multiplier, which measures the total cumulative effect of a unit change in x over all periods, is the sum of the coefficients: \sum_{j=0}^{\infty} \beta_j = \frac{\beta}{1 - \lambda}. This quantity aggregates the persistent impacts, providing insight into steady-state effects.

Despite its advantages, the infinite distributed lag framework with geometric decay has limitations: it assumes a constant decay rate \lambda across all lags, which may not match empirical settings in which the pattern of influence varies nonlinearly or unevenly. This rigidity can lead to misspecification if the true lag structure deviates from geometric decline.
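The Koyck estimation step can be sketched as a single regression of y_t on a constant, y_{t-1}, and x_t, as below (an illustrative sketch; the function and variable names are assumptions). Because the transformed error u_t is an MA(1) process correlated with y_{t-1}, plain OLS on this equation is generally biased, and instrumental variable or maximum likelihood estimators are often preferred in practice.

```python
import numpy as np
import statsmodels.api as sm

def koyck_fit(y, x):
    """OLS on the Koyck-transformed equation: y_t on const, y_{t-1}, x_t.
    Caveat: the MA(1) error is correlated with y_{t-1}, so this OLS sketch is
    biased; IV or maximum likelihood is usually preferred for consistency."""
    X = sm.add_constant(np.column_stack([y[:-1], x[1:]]))   # [1, y_{t-1}, x_t]
    fit = sm.OLS(y[1:], X).fit()
    intercept, lam, beta = fit.params
    return {
        "lambda": lam,
        "beta": beta,
        "alpha": intercept / (1.0 - lam),     # intercept equals alpha * (1 - lambda)
        "long_run": beta / (1.0 - lam),       # long-run multiplier sum_j beta_j
    }
```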

Estimation Techniques

Unstructured Methods

Unstructured methods for estimating distributed lag models treat each lag coefficient as a free parameter, without imposing any parametric form on the lag structure. The primary approach uses ordinary least squares (OLS) to estimate the unrestricted finite distributed lag model, y_t = \alpha + \sum_{j=0}^{q} \beta_j x_{t-j} + \epsilon_t, where y_t is the dependent variable at time t, x_{t-j} are lagged values of the explanatory variable up to lag q, \alpha is the intercept, the \beta_j are individual lag coefficients estimated separately, and \epsilon_t is the error term assumed to be uncorrelated with the regressors under strict exogeneity. This method applies to both finite and infinite lag models, the latter truncated at a finite q for practical estimation.

A key challenge in this estimation is severe multicollinearity among the lagged regressors, particularly when the explanatory variable x_t is highly persistent, which inflates the variance of the \hat{\beta}_j estimates and produces large standard errors, making individual coefficients imprecise. In addition, including many lags reduces the effective sample size and the degrees of freedom, so a sufficiently long time series is needed for reliable estimates. To address these issues, the lag length q is typically selected using information criteria such as the Akaike information criterion (AIC) or the Bayesian information criterion (BIC), which balance model fit and parsimony by penalizing excessive parameters. If longer lags prove statistically insignificant, often tested sequentially starting from the highest lag, the model can be truncated to a shorter length without substantial loss of information. These methods are best suited to exploratory analysis where no strong prior assumptions about the lag structure exist, or when the data support strict exogeneity of the regressors with respect to the errors. However, omitting relevant longer lags can bias the estimated coefficients, since the excluded lags may be correlated with the included ones and distort the estimated overall dynamic response.
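As a sketch of this workflow (the helper names are illustrative assumptions), the unrestricted model can be fitted for each candidate q over a common estimation sample and the lag length chosen by minimizing an information criterion:

```python
import numpy as np
import statsmodels.api as sm

def fit_fdl(y, x, q):
    """Unrestricted OLS of y_t on a constant and x_t, x_{t-1}, ..., x_{t-q}."""
    T = len(x)
    X = np.column_stack([x[q - j: T - j] for j in range(q + 1)])
    return sm.OLS(y[q:], sm.add_constant(X)).fit()

def select_lag_length(y, x, max_q=12, criterion="bic"):
    """Choose q by AIC or BIC, fitting every candidate over the same sample so
    the criteria are comparable (the first max_q observations are always dropped)."""
    fits = {q: fit_fdl(y[max_q - q:], x[max_q - q:], q) for q in range(max_q + 1)}
    return min(fits, key=lambda q: getattr(fits[q], criterion))
```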

Structured Methods

Structured methods for estimating distributed lag models impose parametric restrictions on the lag coefficients \beta_j to address issues such as multicollinearity and parameter proliferation, enhancing efficiency and aiding interpretation when the true lag structure conforms to the imposed form. These approaches reduce the number of parameters to estimate while allowing recovery of the full lag profile, often drawing on economic theory for the choice of restrictions. Unlike unstructured methods, which treat each \beta_j as free, structured methods assume smooth or decaying patterns, such as polynomials or exponentials, to capture the dynamics parsimoniously.

One prominent structured approach is the Almon polynomial approximation, which parameterizes the lag coefficients as a low-degree polynomial in the lag index j: \beta_j = \gamma_0 + \gamma_1 j + \cdots + \gamma_p j^p, where p is the polynomial degree, typically small (e.g., 2–4) to balance flexibility and parsimony. The \gamma_k parameters are estimated via ordinary least squares (OLS) on transformed regressors, after which the \beta_j are recovered by evaluating the polynomial; endpoint constraints, such as \beta_0 = 0 or \beta_L = 0 for finite lag length L, are often imposed to reflect theoretical expectations such as no immediate effect or full dissipation. This method, introduced by Shirley Almon, substantially mitigates multicollinearity by estimating only p+1 lag parameters instead of L+1, making it suitable for finite distributed lags with smooth profiles.

The Koyck geometric restriction assumes an exponentially decaying lag structure, \beta_j = \beta \lambda^j for 0 < \lambda < 1, which implies an infinite distributed lag with geometric decline. To estimate it, the model undergoes an AR(1)-type transformation: lagging the original equation y_t = \alpha + \sum_{j=0}^\infty \beta_j x_{t-j} + \epsilon_t, multiplying by \lambda, and subtracting from the original yields y_t = \alpha(1-\lambda) + \beta x_t + \lambda y_{t-1} + u_t, where u_t = \epsilon_t - \lambda \epsilon_{t-1} induces autocorrelation. Because the lagged dependent variable y_{t-1} is correlated with the moving-average error u_t, OLS on this transformed equation is biased, and instrumental variable or maximum likelihood estimators are typically used to recover \lambda and \beta, from which all \beta_j are derived; the approach is particularly useful for infinite lags where effects persist indefinitely but diminish over time. This formulation, originating from Leendert Koyck's analysis of investment dynamics, ensures that the long-run multiplier \sum \beta_j = \beta / (1-\lambda) is finite and interpretable.

Other structured methods include partial adjustment models, which extend the Koyck framework by positing that agents adjust gradually toward an equilibrium because of adjustment costs, leading to a lag structure akin to geometric decay; estimation follows a similar AR(1) form, yielding short-run and long-run elasticities. Bayesian approaches incorporate priors on decay rates or smoothness (e.g., via hierarchical models or tree-based smoothing), enabling posterior inference on lag profiles while regularizing against overfitting through shrinkage; for instance, latent-variable expansions facilitate variable selection in distributed lags. These methods are applied in contexts requiring flexible yet stable lag estimates, such as environmental impact assessments.

The primary advantages of structured methods lie in reducing the multicollinearity inherent in lagged regressors, which inflates variance in unstructured estimates, and in enabling extrapolation to lags that are not estimated freely (e.g., beyond the observed horizon in infinite-lag models); the estimators remain consistent if the restrictions hold, though bias arises otherwise. For finite lags, polynomial forms preserve flexibility while cutting the number of parameters by up to 80% for moderate L, improving precision in small samples. To validate the imposed restrictions, researchers employ F-tests comparing the restricted model's fit to that of the unstructured baseline, or likelihood ratio tests in maximum likelihood frameworks; rejection indicates misspecification and suggests relaxing the structure. These tests assess whether the parsimony gain outweighs the loss of fit, guiding model specification.
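A sketch of such a test is given below, reusing the hypothetical fit_fdl and almon_fit helpers from the earlier sketches; it compares residual sums of squares from the restricted Almon fit and the unrestricted finite distributed lag via the standard F statistic.

```python
from scipy import stats

def almon_restriction_ftest(y, x, q, p):
    """F-test of the Almon polynomial restrictions against the unrestricted lag model."""
    unres = fit_fdl(y, x, q)            # q + 1 free lag coefficients (plus intercept)
    _, res = almon_fit(y, x, q, p)      # p + 1 polynomial parameters (plus intercept)
    r = (q + 1) - (p + 1)               # number of restrictions imposed
    f_stat = ((res.ssr - unres.ssr) / r) / (unres.ssr / unres.df_resid)
    p_value = stats.f.sf(f_stat, r, unres.df_resid)
    return f_stat, p_value
```

A large F statistic (small p-value) indicates that the polynomial restrictions fit markedly worse than the unrestricted model, suggesting a higher-degree polynomial or an unrestricted specification.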

Applications

In Econometrics

Distributed lag models are widely applied in econometrics to capture dynamic relationships between economic variables, particularly in analyzing how policy changes propagate through the economy over time. A prominent application involves modeling investment responses to changes in interest rates, often framed within the accelerator principle, where investment is viewed as a distributed lag function of output growth or sales changes, reflecting gradual adjustments in the capital stock. This approach highlights how interest rate hikes can dampen investment with a delay, as firms adjust slowly to higher borrowing costs. Similarly, these models estimate fiscal policy multipliers, quantifying the time-varying impact of government spending or tax changes on aggregate output; for instance, multipliers may peak after several quarters owing to lagged consumption and investment responses.

Distributed lag models relate closely to autoregressive distributed lag (ARDL) models, which extend the basic framework by incorporating lags of the dependent variable to account for serial correlation and feedback effects, as in the specification y_t = \sum_{i=1}^{p} \phi_i y_{t-i} + \sum_{j=0}^{q} \beta_j x_{t-j} + \epsilon_t, where y_t is the dependent variable, x_t the explanatory variable, and \epsilon_t the error term. ARDL models facilitate cointegration testing through the bounds test developed by Pesaran, Shin, and Smith, which assesses long-run equilibrium relationships without requiring pre-testing for unit roots, making it suitable for the mixed-order integrated variables common in economic time series.

Historically, distributed lags featured in macroeconometric systems such as the Klein-Goldberger model, a pioneering model of the U.S. economy that incorporated lagged adjustments in consumption and investment to simulate dynamic policy effects. They also underpin event studies for policy shocks, where leads and lags of treatment indicators estimate causal impacts, such as the staggered rollout of tax reforms on firm behavior. An illustrative example is estimating the distributed effects of monetary policy on output using extensions of vector autoregression (VAR) models, where policy shocks, identified via high-frequency surprises in interest rates, are traced through impulse response functions to reveal lagged output responses peaking after one to two years. However, economic applications face significant challenges, including endogeneity from reverse causality or omitted variables, which biases the lag coefficients; instrumental variables, such as external policy instruments uncorrelated with the errors but related to the endogenous regressor, are essential to address this and ensure consistent estimates. General estimation techniques like ordinary least squares can exacerbate these issues in finite samples, necessitating the robust estimation methods discussed in the prior literature.
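The ARDL specification above can be estimated directly by OLS once the lagged regressors are constructed; the sketch below (an illustrative helper, not a standard routine) shows the basic construction, with the long-run multiplier computed as \sum_j \beta_j / (1 - \sum_i \phi_i). Dedicated implementations, such as the ARDL tools in statsmodels, additionally automate lag-order selection and bounds testing.

```python
import numpy as np
import statsmodels.api as sm

def fit_ardl(y, x, p=1, q=1):
    """ARDL(p, q) by OLS: y_t regressed on const, y_{t-1..t-p}, x_{t..t-q}.
    Returns the fitted model and the implied long-run multiplier."""
    m = max(p, q)
    T = len(y)
    cols = [y[m - i: T - i] for i in range(1, p + 1)]      # y_{t-1} .. y_{t-p}
    cols += [x[m - j: T - j] for j in range(q + 1)]        # x_t .. x_{t-q}
    fit = sm.OLS(y[m:], sm.add_constant(np.column_stack(cols))).fit()
    phi = fit.params[1: 1 + p]
    beta = fit.params[1 + p:]
    long_run = beta.sum() / (1.0 - phi.sum())              # sum beta_j / (1 - sum phi_i)
    return fit, long_run
```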

In Health and Environmental Studies

Distributed lag models are widely applied in health and environmental studies to assess the cumulative impacts of exposures such as air pollution on outcomes like mortality and respiratory disease, capturing effects that manifest over days or weeks following exposure. For instance, these models evaluate how fine particulate matter (PM2.5) influences daily hospital admissions by accounting for lagged associations across multiple time periods, revealing short-term risks that extend up to 6 days post-exposure, with delayed effects most prominent for respiratory outcomes. In environmental epidemiology, such approaches help disentangle immediate and delayed biological responses to pollutants, providing insights into vulnerable populations and informing interventions.

Key techniques in this domain include distributed lag non-linear models (DLNMs), which flexibly model both non-linear exposure-response relationships and delayed effects, allowing researchers to estimate risk patterns that vary in shape and timing. DLNMs are particularly suited to time-series studies of environmental health, as they can incorporate basis splines to represent complex lag structures without assuming linearity. Complementing these, case-crossover designs integrated with distributed lag models control for time-invariant confounders and seasonal trends, strengthening causal interpretation of time-series data on pollution-health links. This combination has been used to isolate acute effects of pollutants like PM2.5 on cardiovascular hospitalizations while adjusting for meteorological covariates. A representative example involves ozone exposure, where studies demonstrate peak health risks 1-2 days after exposure, with cumulative effects persisting over 7 days and an approximately 5% increase in risk per 5 μg/m³ (~2.5 ppb) increment in ozone on the day of, or the day before, the acute event.

These models offer advantages in health research by accommodating incubation periods (the biological delays between exposure and symptom onset) and harvesting effects, where short-term mortality spikes deplete frail individuals and potentially mask longer-term impacts. For particulate matter, this enables quantification of mortality displacement, showing that while immediate PM2.5 effects elevate deaths within days, net impacts may extend over weeks because of deferred vulnerabilities. Recent developments integrate distributed lag models with spatially varying coefficient models to address geographic variation in environmental exposures, such as differing PM2.5 concentrations across urban areas, improving estimates of localized health risks. Bayesian distributed lag models further enhance this by incorporating prior information and hierarchical structures, as seen in applications to perinatal exposure effects on birth outcomes. Infinite distributed lags may be invoked for modeling persistent environmental effects, such as long-term soil contaminant accumulation, though finite lags suffice for most acute-exposure scenarios.
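As a simplified illustration of the building block that DLNMs generalize, the sketch below fits a linear distributed lag Poisson regression of daily death counts on current and lagged PM2.5, adjusting for temperature. The column names, lag length, and covariate set are assumptions for illustration only; a full DLNM would replace the linear exposure and lag terms with spline bases (as in the R dlnm package).

```python
import pandas as pd
import statsmodels.api as sm

def fit_dl_poisson(df, max_lag=6):
    """Distributed lag Poisson regression of daily deaths on lagged PM2.5.
    Assumes df has daily columns 'deaths', 'pm25', and 'temp' (illustrative names)."""
    df = df.copy()
    lag_cols = []
    for j in range(max_lag + 1):
        col = f"pm25_lag{j}"
        df[col] = df["pm25"].shift(j)       # exposure j days before the outcome
        lag_cols.append(col)
    df = df.dropna()
    X = sm.add_constant(df[lag_cols + ["temp"]])
    fit = sm.GLM(df["deaths"], X, family=sm.families.Poisson()).fit()
    cumulative = fit.params[lag_cols].sum() # cumulative log-relative-risk per unit PM2.5
    return fit, cumulative
```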
