
Latent growth modeling

Latent growth modeling (LGM), also referred to as latent growth curve modeling (LGCM), is a statistical method embedded within structural equation modeling (SEM) that analyzes longitudinal data to estimate and interpret patterns of individual and average change over time. It represents growth trajectories using latent variables to capture key parameters such as the intercept (initial status or starting level) and slope (rate of change), allowing researchers to model both linear and nonlinear developmental processes while accounting for measurement error and individual differences. This approach treats repeated measures as indicators of underlying latent constructs, enabling the separation of within-person change from between-person variability in a single unified framework. The conceptual foundations of LGM trace back to early work on the measurement and analysis of change, with Ledyard R. Tucker introducing generalized learning curves in 1958 as a way to parameterize functional relations in longitudinal data using principal components. This idea was formalized and extended by William Meredith and John Tisak in 1990 through latent curve analysis, which applied confirmatory factor analysis to model developmental trajectories with maximum likelihood estimation and asymptotic tests for parameters. Bengt Muthén and Patrick J. Curran further advanced the technique in 1997 by integrating it fully into the SEM paradigm, emphasizing its flexibility for handling missing data, multilevel structures, and experimental designs in behavioral research. LGM has become a standard tool in fields such as psychology, education, and the health sciences for investigating developmental phenomena, intervention impacts, and risk factors over time, offering advantages over traditional methods like repeated-measures ANOVA by explicitly modeling heterogeneity in change and covariates' effects on trajectories. Key extensions include multilevel LGMs for nested data (e.g., students within schools), time-varying covariates to explain fluctuations, and growth mixture modeling to identify subpopulations with distinct trajectories. 
By providing interpretable estimates of growth parameters and their covariances, LGM facilitates hypothesis testing about the form, rate, and predictors of change, making it particularly valuable for longitudinal studies with unequally spaced assessments or incomplete data.

Overview

Definition and Purpose

Latent growth modeling (LGM) is a statistical technique embedded within the structural equation modeling (SEM) framework that analyzes longitudinal data by modeling individual trajectories of change over time. It employs latent variables to capture unobserved heterogeneity in growth processes, specifically representing the initial status of a phenomenon (intercept) and its rate of change (slope). This approach treats repeated measures as indicators of these latent factors, allowing researchers to partition observed variability into systematic growth components and random error. The primary purpose of LGM is to estimate population-average patterns of growth, quantify individual differences in those trajectories, and identify predictors that explain variability in change. By modeling both intra-individual change and inter-individual differences, LGM facilitates the examination of how developmental processes unfold and how they are influenced by covariates, such as demographic factors or interventions. In the basic linear growth model, the observed outcome Y_{ti} for individual i at time t is expressed as: Y_{ti} = \alpha_i + \beta_i t + \varepsilon_{ti}, where \alpha_i denotes the random intercept (initial status), \beta_i the random slope (rate of change), t the time coding, and \varepsilon_{ti} the residual disturbance. LGM was developed to overcome key limitations of traditional methods like repeated measures ANOVA, which often treat individual differences as mere error and fail to account for measurement unreliability or latent constructs underlying observed variables. Instead, LGM explicitly models measurement error and latent growth factors, providing a more nuanced understanding of dynamic processes in fields such as psychology, education, and the health sciences.
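As an illustration of this equation, the following sketch (hypothetical parameter values, plain NumPy rather than an SEM fit) simulates trajectories from the linear growth equation and recovers the average intercept and slope by per-person least squares:

```python
import numpy as np

# Simulate Y_ti = alpha_i + beta_i * t + eps_ti with illustrative values:
# mean intercept 10 (SD 1), mean slope 0.5 (SD 0.2), residual SD 0.5.
rng = np.random.default_rng(0)
n, T = 500, 4
alpha = rng.normal(10.0, 1.0, n)        # random intercepts (initial status)
beta = rng.normal(0.5, 0.2, n)          # random slopes (rate of change)
t = np.arange(T)                        # time coded 0, 1, 2, 3
Y = alpha[:, None] + beta[:, None] * t + rng.normal(0, 0.5, (n, T))

# Per-person least-squares fits recover each individual's trajectory;
# their averages approximate the population growth parameters.
X = np.column_stack([np.ones(T), t])
coefs = np.linalg.lstsq(X, Y.T, rcond=None)[0]  # row 0: intercepts, row 1: slopes
print(coefs[0].mean(), coefs[1].mean())
```

A full LGM would estimate the same quantities simultaneously while separating slope variance from residual noise; this per-person regression is only a conceptual stand-in.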

Relation to Structural Equation Modeling

Latent growth modeling (LGM) represents a specialized application within the broader framework of structural equation modeling (SEM), wherein the core growth parameters—such as the intercept (initial status) and slope (rate of change)—are modeled as latent variables that exert direct influences on observed repeated measures across multiple time points. This positioning as a special case of SEM enables LGM to incorporate the analytical rigor of SEM while prioritizing the temporal sequencing and interdependence inherent in longitudinal data, in contrast to general SEM models that may address cross-sectional or non-temporal relationships without such emphasis. Meredith and Tisak (1990) formalized LGM by framing it as a restricted common factor model embedded in SEM, which imposes specific constraints on factor loadings to reflect developmental trajectories over time. Key shared principles between LGM and SEM include the reliance on confirmatory factor analysis (CFA) to operationalize and validate latent constructs through their observed indicators, ensuring measurement reliability and reducing bias from measurement error. Additionally, both approaches employ path diagrams to articulate model structures, with latent variables symbolized as ovals or circles connected by arrows to rectangular observed variables, facilitating clear visualization of hypothesized causal pathways and covariances. Distinct adaptations in LGM arise from its emphasis on time as a foundational predictor, where factor loadings are parameterized to encode temporal progression—fixed for forms like linear growth or estimated for flexible shapes—allowing the model to capture individual differences in change patterns. In path diagrams for LGM, this manifests as structured arrows from the latent intercept (with loadings of 1 at all time points) and slope (with loadings of 0, 1, 2, etc., for equally spaced linear occasions) to each observed indicator, often accompanied by curved double-headed arrows denoting variances and potential covariances between factors. 
A further unique feature is the capacity to include directional paths from latent slope factors to distal outcomes, enabling examination of how rates of change prospectively influence subsequent variables, such as in models predicting later outcomes from academic achievement trajectories.
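The loading structure described above can be sketched numerically. The parameter values below are illustrative; the model-implied covariance of the repeated measures follows the standard SEM form Lambda Psi Lambda' + Theta:

```python
import numpy as np

# Fixed loadings encode time: intercept column of 1s, slope column 0..T-1.
T = 4
Lambda = np.column_stack([np.ones(T), np.arange(T)])

# Hypothetical factor covariance matrix Psi and residual matrix Theta.
Psi = np.array([[1.0, 0.2],     # var(intercept), cov(intercept, slope)
                [0.2, 0.25]])   # cov(intercept, slope), var(slope)
Theta = 0.5 * np.eye(T)         # uncorrelated, homoscedastic residuals

# Model-implied covariance of the observed repeated measures.
Sigma = Lambda @ Psi @ Lambda.T + Theta
print(Sigma)
```

For instance, the implied variance at the first occasion is var(intercept) + residual variance (1.0 + 0.5 = 1.5 here), and variances fan out across time as the slope variance accumulates.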

Historical Development

Early Foundations

The foundations of latent growth modeling trace back to mid-20th-century statistical developments in growth curve analysis, which emphasized univariate trajectories and methods to account for individual differences in longitudinal data. In 1958, Ledyard R. Tucker introduced a factor analytic approach to model growth curves, particularly in the context of learning processes, by determining the parameters of functional relations through principal components analysis. This method focused on univariate growth curves, using principal components to identify the number of underlying dimensions needed to represent temporal changes, thereby capturing the shape and variability of individual trajectories without assuming a specific functional form upfront. Tucker's work laid early groundwork for handling unobserved heterogeneity by treating components as latent structures that explain differences in growth patterns across individuals. Concurrently, C. Radhakrishna Rao developed statistical methods for comparing growth curves in biostatistics, addressing challenges in both balanced and unbalanced longitudinal designs. Rao's score method enabled the estimation of growth parameters, such as rates and intercepts, by leveraging likelihood-based scores to test hypotheses about differences between groups or individuals, even when observations were missing or irregularly spaced. This approach introduced latent elements through the modeling of covariance structures, allowing unobserved sources of variation in trajectories to be represented, which proved particularly useful for biological and psychological growth data. Both Tucker's and Rao's contributions centered on univariate analyses but pioneered the use of latent structures to represent heterogeneity, predating the formalization of structural equation modeling (SEM) while influencing its later extensions to longitudinal applications.

Key Advancements in the 20th Century

In the late 1980s, significant progress in latent growth modeling (LGM) emerged through its integration with structural equation modeling (SEM), allowing for the analysis of individual growth trajectories using latent variables. McArdle and Epstein (1987) introduced latent growth curves within developmental SEM frameworks, providing a method to model both average and individual-level changes over time by estimating parameters such as intercepts and slopes as latent factors. This approach built on earlier curve-fitting techniques but formalized them within SEM to handle correlated errors and multiple outcomes more robustly. Bollen's (1989) foundational work on structural equations with latent variables further extended these ideas to longitudinal data, enabling the specification of growth models that account for time-dependent covariances and autoregressive processes in longitudinal datasets. A key milestone came with Meredith and Tisak (1990), who established LGM—also termed latent curve analysis—as a direct subset of SEM, emphasizing the estimation of growth curves through confirmatory factor analysis, where latent intercepts and slopes capture unobserved heterogeneity in trajectories. Their framework introduced unconditional models, which estimate population-level growth without covariates, and conditional models, which incorporate predictors to explain inter-individual differences in change. Muthén and Curran (1997) further advanced the technique by fully integrating it into the SEM paradigm, highlighting its flexibility for handling missing data, multilevel structures, and experimental designs in behavioral research. By the 1990s, LGM had gained substantial traction in developmental psychology for modeling developmental changes, such as cognitive or behavioral trajectories across ages, due to its ability to disentangle within- and between-person variance. Adaptations in software like LISREL and EQS during this period made these models more accessible, supporting estimation for complex longitudinal designs.

Core Concepts

Latent Variables in Growth

In latent growth modeling, the intercept and slope serve as latent variables that encapsulate the initial status and rate of change, respectively, of an unobserved true growth trajectory, thereby isolating these constructs from measurement error inherent in observed repeated measures. The intercept factor, with loadings fixed at 1 across all time points, represents the baseline level of the outcome at the starting point (often time 0), while the slope factor, with loadings set to time-specific values (such as 0, 1, 2 for equally spaced linear assessments), models the systematic progression or decline over time. This framework, rooted in common factor model principles, enables the estimation of these latent growth factors as random variables that vary across individuals, providing a more precise depiction of developmental processes than observed data alone. The variances of these latent variables quantify inter-individual differences: the intercept variance captures heterogeneity in starting levels, and the slope variance reflects variation in growth rates, both purged of error influences. The covariance between the intercept and slope elucidates the dynamic interplay between initial conditions and change, for example, indicating whether individuals with higher baseline values exhibit steeper or flatter trajectories. These parameters allow researchers to assess both average growth patterns (via factor means) and individual deviations, offering deeper insights into stability and transformation in longitudinal data. A representative application appears in cognitive development research, where the intercept denotes baseline ability—such as percent correct on verbal subscales at age 5 (estimated around 49%)—and the slope signifies the rate of change across subsequent assessments. 
In the Louisville Twin Study, slopes for cognitive subscales (e.g., 0.65 percent-correct units per interval from ages 7–10 in a latent growth model) revealed significant genetic contributions to growth independent of initial levels, with factor loadings fixed to reflect linear progression over assessment intervals. This setup underscores how latent variables facilitate the partitioning of true developmental variance from error, informing etiological models of cognitive maturation.

Time-Invariant vs. Time-Varying Factors

In latent growth modeling (LGM), covariates are incorporated to explain individual differences in growth trajectories, with a key distinction between time-invariant and time-varying factors. Time-invariant covariates are stable characteristics that do not change across measurement occasions for a given individual, such as sex, ethnicity, or baseline socioeconomic status (SES). These are typically regressed onto the latent intercept and slope factors to account for between-person differences in initial levels and rates of change. For instance, sex might predict a higher intercept in cognitive ability trajectories for one group compared to another, thereby explaining group-level variations in growth patterns. The incorporation of time-invariant covariates extends the unconditional LGM to a conditional model, where the growth factors are predicted by these stable predictors. This is represented in the equation for the intercept factor as \alpha_i = \gamma_0 + \gamma_1 X_i + \zeta_i, where \alpha_i is the individual-specific intercept, \gamma_0 is the mean intercept, \gamma_1 is the coefficient for the time-invariant covariate X_i, and \zeta_i is the residual disturbance. Similarly, the slope factor can be modeled as \beta_i = \delta_0 + \delta_1 X_i + \xi_i, allowing the covariate to influence the rate of change over time. Such modeling reveals how stable characteristics moderate growth, as seen in studies where baseline SES predicts steeper educational slopes. Centering these covariates at the grand mean or group mean is recommended to facilitate interpretation and reduce multicollinearity. In contrast, time-varying covariates fluctuate across measurement waves within individuals, such as stress levels, treatment dosage, or environmental exposures at each time point. These are modeled by including direct paths from the covariate at time t to the outcome at that time, capturing within-person deviations from the overall trajectory beyond what the intercept and slope explain. 
For example, contemporaneous stress might predict higher depressive symptoms at specific waves, adjusting the trajectory locally without altering the global growth curve. Unlike time-invariant covariates, time-varying ones require careful centering—often at the individual mean—to distinguish within- from between-person effects, preventing biased estimates of growth parameters. This approach is central to conditional LGMs and builds on the core latent factors by addressing dynamic influences.
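The person-mean centering mentioned above can be sketched as follows (hypothetical data; the variable name stress is illustrative):

```python
import numpy as np

# Hypothetical time-varying covariate: stress measured at 4 waves for 6 people.
rng = np.random.default_rng(2)
n, T = 6, 4
stress = rng.normal(50, 10, (n, T))

# Person-mean centering splits the covariate into two orthogonal pieces:
# within-person deviations and between-person differences.
person_mean = stress.mean(axis=1, keepdims=True)
within = stress - person_mean                 # enters paths to time-specific outcomes
between = person_mean - stress.mean()         # predicts the latent growth factors

# By construction, each person's within-person deviations average to zero.
print(np.allclose(within.mean(axis=1), 0.0))
```

In a conditional LGM, the within-person component would feed the wave-specific paths while the between-person component predicts the intercept and slope factors, keeping the two levels of effect distinct.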

Model Specification

Linear Growth Model

The linear growth model represents the foundational specification within latent growth modeling, positing that individual trajectories in a longitudinal outcome can be captured by two latent factors: an intercept factor (\alpha_i) denoting the starting level for individual i, and a slope factor (\beta_i) capturing the rate of linear change over time. This approach models observed repeated measures Y_{ti} at time points t = 1, \dots, T for individual i as a function of these factors plus a residual term, expressed in the measurement model as: Y_{ti} = 1 \cdot \alpha_i + \lambda_t \cdot \beta_i + \varepsilon_{ti}, where \lambda_t are fixed time scores (loadings) that define the linear trajectory, typically coded as \lambda_t = 0, 1, 2, \dots, T-1 to set the intercept at the first occasion and scale the slope in units of change per time interval. The factor loading matrix \Lambda thus has a first column of all 1s for the intercept and a second column containing the \lambda_t vector for the slope, ensuring model identification through these constraints. The latent factors follow a structural model where \alpha_i \sim N(\mu_\alpha, \sigma_\alpha^2) and \beta_i \sim N(\mu_\beta, \sigma_\beta^2), with a possible covariance \sigma_{\alpha\beta} between them; residuals \varepsilon_{ti} are assumed N(0, \theta) with diagonal variance-covariance matrix \Theta. Interpretation centers on these parameters: the intercept mean \mu_\alpha reflects the average initial status across individuals at the time origin (e.g., the first measurement occasion), while the slope mean \mu_\beta indicates the average rate of linear change per unit time. The variance \sigma_\alpha^2 quantifies heterogeneity in starting levels, \sigma_\beta^2 captures individual differences in growth rates, and \sigma_{\alpha\beta} describes the extent to which initial status and change are coupled (e.g., positive if higher starters tend to grow faster). Key assumptions underpin the model's validity. 
It presumes linear change in the outcome over the studied period, such that deviations from linearity would require alternative specifications. Additionally, the repeated measures are assumed to follow a multivariate normal distribution, facilitating maximum likelihood estimation of parameters. Residuals \varepsilon_{ti} are further assumed uncorrelated across time (no autocorrelation) and often homoscedastic (constant variance), though violations can be addressed in extensions. At least three time points are required for estimation, with more enhancing precision.
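Under these assumptions, the model-implied mean vector and covariance matrix can be checked against a large simulated sample; all parameter values below are illustrative:

```python
import numpy as np

# Illustrative linear LGM parameters: factor means, factor covariance,
# homoscedastic residual variance theta.
rng = np.random.default_rng(3)
n, T = 20000, 4
t = np.arange(T)
Lambda = np.column_stack([np.ones(T), t])   # loading matrix: [1s | 0..T-1]
mu_eta = np.array([2.0, 0.5])               # mu_alpha, mu_beta
Psi = np.array([[1.0, 0.3], [0.3, 0.2]])    # factor (co)variances
theta = 0.4                                 # residual variance

# Generate data consistent with the structural model.
eta = rng.multivariate_normal(mu_eta, Psi, n)          # (alpha_i, beta_i)
Y = eta @ Lambda.T + rng.normal(0, np.sqrt(theta), (n, T))

# Implied moments: mu = Lambda mu_eta, Sigma = Lambda Psi Lambda' + theta I.
implied_mu = Lambda @ mu_eta
implied_Sigma = Lambda @ Psi @ Lambda.T + theta * np.eye(T)
print(np.abs(Y.mean(0) - implied_mu).max(), np.abs(np.cov(Y.T) - implied_Sigma).max())
```

Both discrepancies shrink toward zero as n grows, which is exactly the moment-matching logic that maximum likelihood estimation exploits.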

Nonlinear and Polynomial Extensions

While linear latent growth models assume constant rates of change, polynomial extensions incorporate higher-order terms to represent curvilinear patterns, such as acceleration or deceleration in developmental trajectories. A common form is the quadratic model, which adds a second latent factor for the quadratic slope alongside the intercept and linear factors. The loadings for the quadratic factor are typically specified as squared time values, for example, 0, 1, and 4 for time points t=0, 1, and 2, allowing the model to capture U-shaped or inverted U-shaped growth. The structural equation for the observed outcome Y_{ti} at time t for individual i is given by: Y_{ti} = \alpha_i + \beta_{1i} t + \beta_{2i} t^2 + \epsilon_{ti}, where \alpha_i is the intercept, \beta_{1i} the linear slope, \beta_{2i} the quadratic slope, and \epsilon_{ti} the residual. This extension is particularly useful for processes where growth slows over time, such as cognitive development in children, and requires at least four time points for model identification to estimate the additional parameters reliably. Nonlinear extensions further enhance flexibility by relaxing parametric assumptions, enabling the modeling of complex, non-polynomial shapes like asymptotic or irregular trajectories that may occur in biological or psychological development. One approach uses freely estimated factor loadings on the slope factor, with the first loading fixed at 0 and the last at 1, while intermediate loadings are estimated from the data to accommodate unspecified nonlinear forms without predefined functional shapes. This latent basis model, building on the linear framework, allows data-driven discovery of growth patterns but demands more time points—typically five or more—for adequate identification and stable estimates due to the increased number of free parameters. 
Parametric nonlinear models specify predefined functional forms, such as logistic, exponential, or Gompertz curves, to fit S-shaped or asymptotic growth common in developmental plateaus where rapid initial change levels off toward a maximum. The Gompertz model, originally developed for mortality modeling, is adapted in latent growth modeling for asymmetric growth approaching a plateau, with the equation: Y_{ti} = \alpha_i + \beta_{1i} \exp\left( -\exp\left( -\gamma (t_i - \mu_i) \right) \right) + \epsilon_{ti}, where \alpha_i is the lower asymptote (approximating initial status), \beta_{1i} the range to the upper asymptote, \gamma (fixed) controls the growth rate, \mu_i (random) the timing of the inflection point (with about 37% of growth completed by then), and the double exponential terms produce an asymmetric S-shaped curve that asymptotes, ideal for modeling skill acquisition that plateaus after early gains. Similarly, the logistic model enforces symmetry around the midpoint via: Y_{ti} = \alpha_i + \beta_{1i} \frac{1}{1 + \exp(-\gamma (t_i - \mu_i))} + \epsilon_{ti}, with \gamma (fixed) as the growth rate and \mu_i (random) the midpoint, applied to balanced acceleration-deceleration patterns in longitudinal studies. These forms are estimated using nonlinear structural equation modeling procedures and are especially valuable for capturing real-world developmental processes that deviate from straight lines, though they require careful specification to ensure convergence.
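A brief sketch of the two mean curves with illustrative parameter values, confirming that the Gompertz curve has completed exp(-1), about 36.8%, of its range at t = mu, while the logistic curve sits exactly at 50% there:

```python
import numpy as np

# Mean curves from the equations above; alpha = lower asymptote, beta = range,
# gamma = rate, mu = inflection/midpoint timing (all values illustrative).
def gompertz(t, alpha=0.0, beta=1.0, gamma=1.0, mu=5.0):
    return alpha + beta * np.exp(-np.exp(-gamma * (t - mu)))

def logistic(t, alpha=0.0, beta=1.0, gamma=1.0, mu=5.0):
    return alpha + beta / (1.0 + np.exp(-gamma * (t - mu)))

# At t = mu the Gompertz curve is at exp(-exp(0)) = exp(-1) of its range,
# the source of the "about 37% of growth completed" property; the logistic
# curve is symmetric, so it is exactly halfway.
print(gompertz(5.0), logistic(5.0))
```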

Estimation Procedures

Maximum Likelihood Estimation

Maximum likelihood (ML) estimation serves as the primary method for obtaining parameter estimates in latent growth models (LGMs), treating the model as a special case of structural equation modeling (SEM). In this approach, parameters such as growth factors (e.g., intercepts and slopes) are estimated by maximizing the likelihood function, which assumes multivariate normality of the observed repeated measures. Full information maximum likelihood (FIML) is commonly employed, utilizing the complete sample information to derive estimates that minimize the discrepancy between the observed covariance matrix S and the model-implied covariance matrix \Sigma(\theta), where \theta represents the vector of free parameters. Model fit in ML-estimated LGMs is evaluated using several standard indices derived from SEM traditions. The chi-square test (\chi^2) assesses absolute fit by comparing the observed and implied covariance structures, with non-significant values (adjusted for degrees of freedom) indicating good fit, though it is sensitive to sample size. Incremental fit is gauged by indices like the Comparative Fit Index (CFI), which compares the target model to a baseline null model and favors values above 0.95, and the Root Mean Square Error of Approximation (RMSEA), which accounts for model parsimony and prefers values below 0.06. These indices provide a multifaceted evaluation, allowing researchers to assess both overall and comparative model adequacy. For comparing nested LGMs—such as a linear model against a more complex nonlinear extension—ML facilitates likelihood ratio tests (LRTs), where the difference in chi-square values follows a chi-square distribution with degrees of freedom equal to the difference in the number of free parameters. Non-significant LRT results support retaining the more restrictive model, indicating that the simpler structure suffices, while significant results favor the less restrictive model, aiding in model selection without underfitting or overfitting. 
Inference in ML-estimated LGMs relies on asymptotic standard errors, which approximate the variability of parameter estimates under large-sample conditions and enable hypothesis testing via z-statistics or confidence intervals. This framework also accommodates unequal time spacing in longitudinal data by specifying time loadings on growth factors to reflect actual measurement occasions, ensuring the model captures the intended temporal structure without assuming equidistance.
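The chi-square difference test can be sketched as follows; the fit statistics are hypothetical, and the chi-square survival function is written in closed form (valid for even degrees of freedom) to avoid external dependencies:

```python
import math

# Closed-form chi-square survival function for even df:
# P(X > x) = exp(-x/2) * sum_{k=0}^{df/2 - 1} (x/2)^k / k!
def chi2_sf_even(x, df):
    half = x / 2.0
    return math.exp(-half) * sum(half**k / math.factorial(k) for k in range(df // 2))

# Hypothetical fit statistics: restrictive linear model vs. a less
# restrictive extension with 4 additional free parameters.
chisq_restricted, df_restricted = 35.2, 8
chisq_general, df_general = 28.9, 4

lrt = chisq_restricted - chisq_general     # chi-square difference, 6.3
df_diff = df_restricted - df_general       # 4
p_value = chi2_sf_even(lrt, df_diff)
print(round(p_value, 3))                   # non-significant: retain the simpler model
```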

Handling Missing Data and Non-Normality

In latent growth modeling (LGM), handling missing data is crucial due to the longitudinal nature of the data, where attrition or intermittent nonresponse is common. Full information maximum likelihood (FIML) estimation addresses this by utilizing all available observations across individuals without resorting to listwise deletion, which can reduce statistical power and introduce bias. This approach maximizes the likelihood based on the observed data patterns, assuming the data are missing at random (MAR), meaning the probability of missingness depends only on observed variables and not on unobserved ones. Under MAR, FIML provides unbiased and efficient parameter estimates in LGM, outperforming traditional methods like mean imputation or pairwise deletion, as demonstrated in simulation studies with structural equation models. Non-normality in growth data, often arising from skewed outcomes or heteroscedastic errors, can inflate Type I error rates and distort standard errors in maximum likelihood estimation. Robust maximum likelihood (MLR), incorporating the Satorra-Bentler correction, adjusts the chi-square test statistic and standard errors to account for multivariate non-normality, yielding a scaled statistic that better approximates the chi-square distribution under violations of normality. This correction, originally developed for structural equation modeling, applies directly to LGM by rescaling the asymptotic covariance matrix, improving model fit evaluation and inference in non-normal settings. Additionally, bootstrapping techniques generate confidence intervals for growth parameters by resampling the data with replacement, providing robust coverage without relying on normality assumptions; the Bollen-Stine bootstrap, in particular, evaluates overall model fit by imposing the fitted model on bootstrap samples. Post-2000 advancements have enhanced the robustness of estimation to combined missing-data and non-normality challenges. 
For instance, Muthén and Muthén (2002) utilized simulations in Mplus to evaluate parameter recovery, standard errors, and coverage in latent variable models with non-normal data, showing that robust methods like MLR maintain accuracy even under distributional violations and missingness. These developments have made LGM more applicable to real-world longitudinal datasets, such as those in psychology and public health, where data irregularities are prevalent.
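A minimal sketch of a nonparametric bootstrap for a growth parameter, resampling individuals with replacement from simulated data (all values illustrative; a full Bollen-Stine procedure would instead resample under the fitted model):

```python
import numpy as np

# Simulated panel: true mean slope 0.6, per-person OLS stands in for the LGM fit.
rng = np.random.default_rng(4)
n, T = 200, 4
t = np.arange(T)
Y = rng.normal(3.0, 1.0, n)[:, None] + rng.normal(0.6, 0.3, n)[:, None] * t \
    + rng.normal(0, 0.5, (n, T))
X = np.column_stack([np.ones(T), t])

def mean_slope(data):
    # Average of per-person least-squares slopes.
    return np.linalg.lstsq(X, data.T, rcond=None)[0][1].mean()

# Resample whole individuals (rows) with replacement, re-estimate each time.
boots = [mean_slope(Y[rng.integers(0, n, n)]) for _ in range(1000)]
lo, hi = np.percentile(boots, [2.5, 97.5])
print(round(lo, 2), round(hi, 2))
```

Resampling at the level of individuals (not occasions) preserves each person's within-trajectory dependence, which is what makes the interval valid for longitudinal data.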

Bayesian Estimation

Bayesian estimation has emerged as a prominent alternative to ML for LGMs, particularly since the 2000s, offering flexibility for complex models and small samples. It treats parameters as random variables with prior distributions updated by the likelihood to yield posterior distributions, typically estimated via Markov chain Monte Carlo (MCMC) methods. This approach incorporates prior knowledge, provides exact finite-sample inference without asymptotic approximations, and naturally handles non-normality, missing data under MAR or MNAR, and hierarchical structures through informative priors. Simulation studies show Bayesian methods often outperform ML in terms of bias and coverage for distal outcomes or small samples (N < 200), though they require careful prior specification to avoid undue influence. Bayesian LGMs are implemented in software like Mplus, R (e.g., the brms package), and Stan, and are increasingly used for growth mixture models and time-varying effects. As of 2025, extensions include multiple-group Bayesian comparisons for testing trajectory differences across populations.
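In the simplest conjugate case, the Bayesian updating logic can be shown without MCMC: a normal prior on the mean slope combined with a normal likelihood (known sampling variance assumed) yields a normal posterior. All numbers below are hypothetical:

```python
import numpy as np

# Weakly informative normal prior on the mean slope mu_beta.
prior_mean, prior_var = 0.0, 1.0

# Hypothetical per-person slope estimates with assumed known sampling variance.
est_slopes = np.array([0.45, 0.62, 0.58, 0.71, 0.49, 0.66])
lik_var = 0.04

# Conjugate normal-normal update: precisions add, means are precision-weighted.
n = len(est_slopes)
post_var = 1.0 / (1.0 / prior_var + n / lik_var)
post_mean = post_var * (prior_mean / prior_var + est_slopes.sum() / lik_var)
print(round(post_mean, 3), round(post_var, 4))
```

The posterior mean sits close to the data average because the likelihood precision dominates the weak prior; MCMC-based Bayesian LGMs generalize this same updating to the full parameter vector.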

Applications and Examples

In Longitudinal Studies

Latent growth modeling (LGM) is extensively used in longitudinal studies within developmental psychology to examine trajectories of change in psychological and behavioral outcomes over time, particularly in social science contexts like education and mental health. Since the 1990s, following its formal introduction as a method for analyzing developmental processes, LGM has become a common tool for investigating phenomena such as achievement gaps, allowing researchers to model how disparities in academic performance evolve across school years. For example, in studies of elementary students, LGM has revealed that growth rates in reading achievement often lead to closing gaps between low- and high-performing groups, whereas mathematics trajectories tend to maintain persistent disparities, highlighting the need for targeted interventions. A key application involves modeling depression trajectories in adolescents using multi-wave panel data collected over several years. In such analyses, LGM captures the initial levels (intercepts) and rates of change (slopes) in depressive symptoms, often incorporating time-invariant or time-varying predictors to explain individual differences. For instance, family support has been shown to predict the slope of depressive symptom trajectories, with higher levels of perceived support associated with slower increases or even declines in symptoms during early to mid-adolescence. The analytical process typically begins with fitting an unconditional LGM to establish the average growth pattern and variability in intercepts and slopes across the sample, providing a baseline for understanding heterogeneity in developmental paths. This is followed by conditional models that introduce covariates, such as family support, to test their effects on growth parameters and interpret how these factors contribute to differences in trajectories. 
Through this sequential approach, researchers can identify protective influences that mitigate adverse developmental trends, informing preventive strategies in social science research.
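The unconditional-then-conditional sequence can be approximated in a small simulation (per-person OLS stands in for the SEM fit; the family-support variable and effect sizes are hypothetical):

```python
import numpy as np

# Simulate symptom trajectories whose slopes depend on a support covariate:
# higher support slows symptom growth (effect -0.3, hypothetical).
rng = np.random.default_rng(5)
n, T = 400, 4
t = np.arange(T)
support = rng.normal(0, 1, n)
slopes = 0.8 - 0.3 * support + rng.normal(0, 0.2, n)
Y = rng.normal(10, 2, n)[:, None] + slopes[:, None] * t + rng.normal(0, 1, (n, T))

# Step 1 (unconditional): average growth across the sample.
X = np.column_stack([np.ones(T), t])
est_slopes = np.linalg.lstsq(X, Y.T, rcond=None)[0][1]

# Step 2 (conditional): regress estimated slopes on the covariate.
gamma = np.polyfit(support, est_slopes, 1)[0]
print(round(est_slopes.mean(), 2), round(gamma, 2))
```

The recovered average slope and the negative covariate effect mirror the protective pattern described above; a genuine conditional LGM would estimate both steps jointly with measurement error separated out.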

In Multidisciplinary Contexts

In economics, LGM facilitates the examination of income trajectories across career spans, enabling the modeling of heterogeneous paths such as steady ascents, plateaus, or declines influenced by labor market dynamics. By specifying latent intercepts for starting salaries and slopes for wage growth, economists can incorporate time-invariant covariates like education level to predict divergence in earning profiles over decades-long panels. A key application involves panel data from national surveys, where LGM identifies subclasses of workers experiencing rapid early-career gains versus stagnation, informing policy on income inequality. Adaptations of LGM in multidisciplinary settings often involve integrating domain-specific covariates to enhance explanatory power, such as environmental exposures in health studies that moderate growth trajectories. In toxicological research, for example, air pollution levels or chemical stressors are included as time-varying predictors in LGM frameworks to assess their impact on neurodevelopmental outcomes, with latent slopes capturing cumulative effects on cognitive function over childhood. This approach reveals how initial exposure doses interact with genetic factors to alter intercept and slope variances, providing insights into preventive interventions. Post-2020, LGM has seen increased application in analyzing COVID-19 impacts on mental health, addressing discontinuities like pandemic-induced data collection disruptions through piecewise or event-based extensions. Rioux et al. (2021) outline solutions for modeling abrupt shifts in psychological trajectories, such as heightened anxiety slopes during lockdowns, by incorporating "knot points" for policy changes or infection waves in longitudinal mental health surveys. These methods have been pivotal in health economics and epidemiology, quantifying recovery rates in well-being indicators across diverse populations affected by the crisis.
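The piecewise extension mentioned above amounts to adding a second slope factor whose loadings are zero before the knot; a minimal sketch of such a time coding (knot location and number of waves illustrative):

```python
import numpy as np

# Six waves with a hypothetical event (e.g., pandemic onset) at wave 3.
t = np.arange(6)
knot = 3

# "Linear plus deflection" coding: one overall slope, plus an added slope
# that only accumulates after the knot.
pre = t.astype(float)                  # loadings 0..5 for the baseline slope
post = np.clip(t - knot, 0, None)      # loadings 0,0,0,0,1,2 for the post-event change
Lambda = np.column_stack([np.ones_like(t), pre, post])
print(Lambda)
```

With this coding, the mean of the second slope factor captures the change in growth rate after the event, which is exactly the "knot point" interpretation used in pandemic-era trajectory studies.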

Software and Implementation

Open-Source Tools

OpenMx is a comprehensive open-source R package designed for structural equation modeling (SEM), including full support for latent growth modeling (LGM) through its matrix-based or path specification approaches. It offers advanced capabilities such as nonlinear growth models, simulation studies, and handling of complex constraints, making it suitable for intricate longitudinal analyses. Since its development in the 2000s, OpenMx has been preferred for complex LGM applications due to its flexibility and computational efficiency, with ongoing active development as of 2025. For instance, a linear LGM can be specified using the RAM framework with matrices for asymmetric relationships (A), symmetric covariances (S), filter (F), and means (M), as shown in the following example code for a model with five time points:
r
require(OpenMx)

# Raw data for five repeated measures; raw-data ML handles incomplete rows
dataRaw <- mxData(observed=myLongitudinalData, type="raw")

# A matrix: fixed paths from the latent intercept (loadings of 1) and
# slope (loadings of 0-4) to the five observed occasions
matrA <- mxMatrix(type="Full", nrow=7, ncol=7, free=FALSE, 
                  values=c(0,0,0,0,0,1,0, 0,0,0,0,0,1,1, 0,0,0,0,0,1,2, 0,0,0,0,0,1,3, 0,0,0,0,0,1,4, 0,0,0,0,0,0,0, 0,0,0,0,0,0,0), 
                  byrow=TRUE, name="A")

# S matrix: symmetric (co)variances - residual variances constrained equal
# across occasions, plus free variances and a covariance for the factors
matrS <- mxMatrix(type="Symm", nrow=7, ncol=7, 
                  free=c(T,F,F,F,F,F,F, F,T,F,F,F,F,F, F,F,T,F,F,F,F, F,F,F,T,F,F,F, F,F,F,F,T,F,F, F,F,F,F,F,T,T, F,F,F,F,F,T,T), 
                  values=c(0,0,0,0,0,0,0, 0,0,0,0,0,0,0, 0,0,0,0,0,0,0, 0,0,0,0,0,0,0, 0,0,0,0,0,0,0, 0,0,0,0,0,1,.5, 0,0,0,0,0,.5,1), 
                  labels=c("residual",NA,NA,NA,NA,NA,NA, NA,"residual",NA,NA,NA,NA,NA, NA,NA,"residual",NA,NA,NA,NA, NA,NA,NA,"residual",NA,NA,NA, NA,NA,NA,NA,"residual",NA,NA, NA,NA,NA,NA,NA,"vari","cov", NA,NA,NA,NA,NA,"cov","vars"), 
                  byrow=TRUE, name="S")

matrF <- mxMatrix(type="Full", nrow=5, ncol=7, free=F, 
                  values=c(1,0,0,0,0,0,0, 0,1,0,0,0,0,0, 0,0,1,0,0,0,0, 0,0,0,1,0,0,0, 0,0,0,0,1,0,0), 
                  byrow=T, name="F")

matrM <- mxMatrix(type="Full", nrow=1, ncol=7, free=c(F,F,F,F,F,T,T), 
                  values=c(0,0,0,0,0,1,1), 
                  labels=c(NA,NA,NA,NA,NA,"meani","means"), name="M")

exp <- mxExpectationRAM("A","S","F","M", dimnames=c(names(myLongitudinalData),"intercept","slope"))
funML <- mxFitFunctionML()

growthCurveModel <- mxModel("Linear Growth Curve Model Matrix Specification", 
                            dataRaw, matrA, matrS, matrF, matrM, exp, funML)

growthCurveFit <- mxRun(growthCurveModel)
This setup estimates intercept and slope factors with linear loadings (0, 1, 2, 3, 4), supporting maximum likelihood estimation for raw data. lavaan is another prominent open-source R package for SEM, particularly user-friendly for beginners implementing LGM due to its intuitive syntax and dedicated functions like growth(). It integrates seamlessly with the tidyverse ecosystem through packages such as tidySEM and broom, enabling tidy data manipulation, model tidying, and visualization workflows. Additionally, lavaan supports multilevel extensions for clustered longitudinal data, allowing hierarchical LGM specifications. A simple linear LGM example with four time points can be fitted as follows:
```r
library(lavaan)

# intercept factor i: loadings fixed at 1; slope factor s: linear loadings 0-3
model <- ' i =~ 1*t1 + 1*t2 + 1*t3 + 1*t4
           s =~ 0*t1 + 1*t2 + 2*t3 + 3*t4 '
fit <- growth(model, data=Demo.growth)
summary(fit, fit.measures=TRUE)
```

Here, i represents the intercept factor with fixed loadings of 1, and s the slope factor with linear loadings; the growth() function automatically adds the mean structure, fixing the observed-variable intercepts at zero and freeing the latent means.
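Under the RAM specification used in the OpenMx example above, the model-implied covariance matrix is F(I - A)^-1 S (I - A)^-T F' and the implied means are F(I - A)^-1 M. The following Python/numpy sketch reproduces that algebra with the same matrix structure; the residual variance of 1 and the growth-factor values are illustrative assumptions, not fitted estimates.

```python
import numpy as np

n = 7  # 5 observed occasions + intercept and slope latents

# A: loading paths; columns 5 and 6 (0-indexed) hold intercept loadings
# (all 1) and linear slope loadings (0..4) for the five occasions.
A = np.zeros((n, n))
A[:5, 5] = 1.0
A[:5, 6] = np.arange(5)

# S: symmetric (co)variances. Occasion residual variance set to 1 here
# (an assumed value); intercept/slope variances 1 and covariance 0.5.
S = np.zeros((n, n))
S[np.arange(5), np.arange(5)] = 1.0
S[5, 5] = S[6, 6] = 1.0
S[5, 6] = S[6, 5] = 0.5

# F: filter matrix picking out the five observed variables.
F = np.hstack([np.eye(5), np.zeros((5, 2))])

# M: means; observed intercepts 0, latent intercept and slope means 1.
M = np.array([0, 0, 0, 0, 0, 1.0, 1.0])

I = np.eye(n)
B = np.linalg.inv(I - A)
implied_cov = F @ B @ S @ B.T @ F.T   # F(I-A)^-1 S (I-A)^-T F'
implied_means = F @ B @ M             # F(I-A)^-1 M
print(implied_means)  # [1. 2. 3. 4. 5.]
```

With intercept and slope means both 1, each occasion mean is 1 + t, and each covariance element equals psi_ii + (t+u)*psi_is + t*u*psi_ss plus the residual variance on the diagonal.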

Commercial Packages

Several commercial software packages facilitate the implementation of latent growth modeling (LGM), offering proprietary features such as intuitive interfaces, robust estimation options, and integration with broader statistical environments to support researchers in applied settings.

Mplus, developed by Muthén & Muthén, is a leading commercial tool specialized for LGM and advanced structural equation modeling (SEM). It supports a variety of growth models, including linear, nonlinear, and latent basis variants, with flexible syntax for model specification. Key features include Bayesian estimation via Markov chain Monte Carlo methods, which is particularly useful for complex models with non-normal data or small samples, and robust handling of missing data through full-information maximum likelihood under missing-at-random assumptions or Bayesian approaches for non-random missingness. Mplus has been widely adopted in applied LGM research since the early 2000s, owing to its comprehensive modeling capabilities and extensive output diagnostics.

IBM SPSS Amos provides a graphical user interface for SEM, enabling users to construct LGM path diagrams visually without extensive coding, which makes linear growth models especially accessible. It supports maximum likelihood estimation and bootstrapping for inference, with seamless integration into the SPSS ecosystem for data preparation and visualization. However, Amos is more limited than syntax-driven packages in handling nonlinear growth trajectories, often requiring workarounds for advanced specifications.

Advantages and Limitations

Strengths Over Traditional Methods

Latent growth modeling (LGM) provides distinct advantages over traditional statistical approaches such as repeated-measures ANOVA by explicitly accounting for measurement error through latent variables representing initial status and growth rates. This separation of true growth parameters from random error in the observed indicators yields more precise estimates of individual trajectories and reduces bias in parameter interpretation. Unlike repeated-measures ANOVA, which aggregates individual differences into residual error, LGM models inter-individual variability in intercepts and slopes as substantive latent factors, enabling researchers to examine heterogeneity in developmental processes directly.

LGM's flexibility extends to incorporating predictors of growth, including time-invariant covariates like demographics and time-varying factors like environmental influences, even in designs where predictors are correlated (non-orthogonal). This allows comprehensive tests of how external variables shape trajectories without the restrictive orthogonality assumptions often required by traditional methods. In comparison to repeated-measures ANOVA, LGM also handles unequal variances and covariances across time points by permitting residual variances to differ freely, accommodating the heteroscedasticity common in longitudinal data and improving model fit.

For analyzing simple, homogeneous growth trajectories, LGM is more parsimonious than growth mixture models, which estimate separate parameters for multiple latent classes to capture unobserved heterogeneity and thus involve greater complexity and computational demands. Simulations demonstrate LGM's superior statistical power for detecting intervention effects or changes over time, particularly in smaller samples; for example, it achieves 80% power with a sample size of 525 for a moderate effect size (0.20), compared with 725 required by ANCOVA.
LGM also extends to multilevel frameworks for handling nested data structures, further broadening its utility beyond basic longitudinal designs.
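The variance decomposition underlying these strengths can be illustrated by simulation. Assuming invented population values for a four-occasion linear LGM, the sketch below (Python/numpy, not tied to any SEM package) draws latent intercepts and slopes, adds occasion-specific measurement error, and checks that the covariance matrix of the observed scores matches the model-implied structure Lambda Psi Lambda' + Theta.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented population values for a 4-occasion linear LGM.
n_subjects = 200_000
loadings = np.column_stack([np.ones(4), np.arange(4)])  # intercept, slope
Psi = np.array([[1.0, 0.3],
                [0.3, 0.25]])  # growth-factor covariance matrix
theta = 0.5                     # residual (error) variance per occasion

# Each person's true trajectory follows their latent intercept and slope;
# observed scores add measurement error on top of the true trajectory.
growth = rng.multivariate_normal([5.0, 1.0], Psi, size=n_subjects)
true_scores = growth @ loadings.T
observed = true_scores + rng.normal(0.0, np.sqrt(theta), size=true_scores.shape)

# LGM separates these components: Sigma = Lambda Psi Lambda' + Theta.
implied = loadings @ Psi @ loadings.T + theta * np.eye(4)
empirical = np.cov(observed, rowvar=False)
print(np.round(implied, 2))
print(np.round(empirical, 2))
```

With a large simulated sample the empirical covariance matrix reproduces the implied one closely, which is exactly the structure an LGM fits to real data.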

Common Challenges and Criticisms

One major challenge in applying latent growth modeling (LGM) is the requirement for large sample sizes to achieve stable parameter estimates and adequate statistical power. Simulation studies indicate that sample sizes of at least 200 are typically needed for reliable detection of growth trajectories, particularly in mediation analyses or models with multiple time points; smaller samples (e.g., N < 100) lead to biased estimates, poor model fit, and reduced power.

LGM is also sensitive to the choice of time coding, which can alter the interpretation of growth parameters such as intercepts and slopes without changing the underlying data. For instance, recoding time from chronological age to deviations around a mean shifts the intercept from representing initial status to an average value across occasions, affecting the covariances among growth factors and the precision of predictor effects. This sensitivity can lead to misinterpretation if not considered carefully, with implications for multicollinearity and statistical power that vary across codings.

Nonlinear LGM specifications present additional estimation difficulties, including frequent non-convergence due to the need for precise starting values and the complexity of multiplicative random effects. In practice, these models often require reduced quadrature points or iterative fitting from simpler linear versions to achieve convergence, and even large samples (N > 20,000) may yield non-positive-definite matrices or unreliable likelihood ratio tests in such extensions.

A key criticism of LGM is its assumption of continuous latent growth trajectories within a single homogeneous population, which may overlook discrete subpopulations exhibiting qualitatively distinct patterns better captured by mixture models. Standard LGM does not account for unobserved heterogeneity, such as varying onset timings in developmental outcomes, potentially oversimplifying complex data and biasing average growth estimates.
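The time-coding sensitivity described above reduces to simple arithmetic. In the hypothetical example below (Python/numpy, with invented occasion means), the same four occasion means are fitted under first-occasion coding and under centered coding: the slope mean is unchanged, but the intercept mean shifts from the starting level to the average level across occasions.

```python
import numpy as np

# Occasion means from a hypothetical linear process: mu_t = 5 + 1*t
occasion_means = np.array([5.0, 6.0, 7.0, 8.0])

def growth_means(slope_loadings):
    """Solve mu_t = mu_i + lambda_t * mu_s for (mu_i, mu_s) by least squares."""
    X = np.column_stack([np.ones(4), slope_loadings])
    coef, *_ = np.linalg.lstsq(X, occasion_means, rcond=None)
    return coef

# Coding time from the first occasion: intercept = initial status.
mu_i1, mu_s1 = growth_means(np.array([0.0, 1.0, 2.0, 3.0]))

# Centered coding: same data and same slope, but the intercept now
# represents the average level across occasions, not the starting level.
mu_i2, mu_s2 = growth_means(np.array([-1.5, -0.5, 0.5, 1.5]))

print(mu_i1, mu_s1)  # 5.0 1.0
print(mu_i2, mu_s2)  # 6.5 1.0
```

Neither coding fits the data better; the models are reparameterizations of each other, which is precisely why the parameter interpretation must be stated alongside the coding.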
Furthermore, traditional LGM has been critiqued as ill-suited to the big-data era without extensions: its reliance on structural equation modeling limits scalability to high-dimensional or unstructured datasets, where neural-network-based latent variable models offer greater flexibility. As of 2025, LGM shows limited integration with machine learning techniques, hindering its application to dynamic, high-volume longitudinal data; however, emerging hybrid approaches that combine LGM with partial least squares structural equation modeling or deep latent variable frameworks have been proposed to enhance predictive analytics in panel surveys.
