
Mixed model

A mixed model, also known as a mixed-effects model, is a statistical framework that integrates both fixed effects—parameters that represent consistent, population-level relationships—and random effects—parameters that capture variability across groups, individuals, or clusters to account for data dependencies. These models extend traditional linear regression by handling hierarchical or repeated-measures data structures, such as longitudinal observations or nested designs, where observations within the same unit are correlated. By incorporating random effects, mixed models improve estimation efficiency and provide more accurate inferences in scenarios with unbalanced or missing data.

The origins of mixed models trace back to agricultural and animal breeding research in the mid-20th century, where Charles R. Henderson developed the foundational mixed model equations in the late 1940s and 1950s to estimate variance components in unbalanced datasets. Henderson's approach, formalized in his 1953 paper and subsequent works, enabled best linear unbiased predictions (BLUP) for random effects alongside fixed effect estimates, revolutionizing animal breeding. This methodology was later adapted for broader applications, notably in longitudinal data analysis by Nan M. Laird and James H. Ware in their seminal 1982 paper, which introduced random-effects models to model serial correlations in repeated measures while accommodating incomplete observations through empirical Bayes estimation and the EM algorithm.

Mixed models have since become essential across disciplines, including ecology for analyzing clustered environmental data, the social sciences for multilevel studies of behavior, and medicine for longitudinal clinical trials. Linear mixed models (LMMs) form the core for continuous outcomes, while generalized linear mixed models (GLMMs) extend to non-normal responses like binary or count data, often fitted using software such as R's lme4 package or SAS's PROC MIXED.
Key advantages include flexibility in modeling covariance structures and the ability to test hypotheses on both fixed and random components, though challenges such as convergence difficulties and computational intensity persist.

Overview

Definition

A mixed model, also known as a mixed-effects model, is a statistical framework that incorporates both fixed effects—parameters that represent constant influences across the population—and random effects—parameters that capture variability across groups or individuals, typically assumed to follow a normal distribution. This approach is particularly suited for analyzing hierarchical, clustered, or longitudinal data where observations within units are correlated. The general formulation of the linear mixed model, as introduced in the seminal work on random-effects models, can be expressed as
\mathbf{y}_i = \mathbf{X}_i \boldsymbol{\beta} + \mathbf{Z}_i \mathbf{b}_i + \boldsymbol{\epsilon}_i,
where \mathbf{y}_i is the response vector for the i-th individual or group, \mathbf{X}_i \boldsymbol{\beta} denotes the fixed effects contribution with design matrix \mathbf{X}_i and parameter vector \boldsymbol{\beta}, \mathbf{Z}_i \mathbf{b}_i represents the random effects with design matrix \mathbf{Z}_i and random effects vector \mathbf{b}_i \sim N(\mathbf{0}, \mathbf{D}), and \boldsymbol{\epsilon}_i \sim N(\mathbf{0}, \mathbf{R}_i) is the residual error vector. The covariance structure of the response \mathbf{y}_i is then \mathbf{V}_i = \mathbf{Z}_i \mathbf{D} \mathbf{Z}_i' + \mathbf{R}_i.
Unlike purely fixed-effects models, which assume all parameters are constant and focus solely on population-level inferences, or purely random-effects models that emphasize variability without fixed components, mixed models integrate both to enable inferences about overall trends while accounting for subject-specific deviations. Key assumptions underlying the model include the normality of the random effects \mathbf{b}_i and residuals \boldsymbol{\epsilon}_i, as well as their mutual independence and independence across individuals or groups. These assumptions facilitate likelihood-based inference and allow the model to handle unbalanced or incomplete data effectively.
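
As a concrete illustration of this covariance structure, the following sketch (with hypothetical dimensions and variance values) builds \mathbf{V}_i = \mathbf{Z}_i \mathbf{D} \mathbf{Z}_i' + \mathbf{R}_i for a single subject with a random intercept and a random slope over time:

```python
import numpy as np

# Hypothetical single subject with 4 repeated measurements at times 0..3
t = np.arange(4.0)
Z = np.column_stack([np.ones_like(t), t])   # random intercept + random slope

# Assumed covariance of the random effects b_i ~ N(0, D)
D = np.array([[2.0, 0.3],
              [0.3, 0.5]])
R = 1.0 * np.eye(4)                          # independent residuals, sigma^2 = 1

# Marginal covariance of y_i: V_i = Z D Z' + R
V = Z @ D @ Z.T + R

# With a positive intercept-slope covariance, the marginal variance
# on the diagonal of V grows with time
print(np.diag(V))
```

Because the random slope contributes a term quadratic in time, later measurements have larger marginal variance, which is one practical motivation for modeling random slopes explicitly.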

Qualitative Description

Mixed models provide a flexible way to analyze data where observations are not independent, such as measurements repeated over time on the same subjects or clustered by groups like schools or families. Fixed effects capture the overall, predictable relationships between variables, similar to those in standard regression, while random effects account for the unique variations or "noise" introduced by different groups or individuals, treating them as draws from a common distribution. This combination allows for more accurate estimates of effects and better handling of complex data structures compared to simpler models that ignore dependencies.

Fixed and Random Effects

Fixed Effects

In mixed models, fixed effects, denoted as the parameter vector \beta, represent invariant predictors or factors whose levels are of primary interest and are assumed to apply to the entire population under study. These effects capture systematic, non-random influences on the response variable, such as treatment conditions, covariates like age or dosage, or experimental manipulations that do not vary across repeated studies or clusters. The role of fixed effects is to estimate population-level averages, providing a stable framework for inferring how specific predictors influence the outcome across all observations, distinct from variability introduced by grouping structures.

The interpretation of fixed effect coefficients focuses on their marginal impact on the response. Each coefficient \beta_j quantifies the expected change in the response variable for a one-unit increase in the corresponding predictor, holding all other fixed effects constant; for instance, the intercept \beta_0 serves as the baseline fixed effect representing the response when all predictors are zero. These estimates reflect average effects over the population, adjusted for the model's covariance structure, enabling inferences about overall trends or differences, such as the mean difference between treatment groups.

Fixed effects are typically estimated using generalized least squares (GLS), which accounts for the covariance structure induced by random components to yield unbiased and efficient estimates of \beta. This method minimizes the weighted sum of squared residuals, where weights are derived from the inverse of the variance-covariance matrix, as formalized in Henderson's mixed model equations. Key assumptions underlying fixed effects estimation include linearity of the parameters in the model (i.e., the response is a linear function of the fixed effects) and independence from random effects, meaning fixed predictors do not systematically correlate with the random variability across clusters.
For example, in a clinical trial evaluating drug efficacy, the fixed effect for dosage level might estimate the average increase in response per unit dose across all participants, allowing researchers to assess overall benefits while controlling for patient-specific factors.
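
The GLS estimator described above can be sketched in a few lines. This toy example, on simulated data with hypothetical dimensions, also checks the textbook fact that with \mathbf{V} equal to the identity GLS reduces to ordinary least squares:

```python
import numpy as np

def gls(X, y, V):
    """Generalized least squares: minimizes (y - Xb)' V^{-1} (y - Xb)."""
    Vinv = np.linalg.inv(V)
    return np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)

rng = np.random.default_rng(1)
n = 40
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + covariate
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)      # simulated response

# With V = I, GLS coincides with ordinary least squares
b_gls = gls(X, y, np.eye(n))
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
print(b_gls, b_ols)
```

In a mixed model, V would instead be the marginal covariance implied by the random effects, so that correlated observations within a cluster are down-weighted appropriately.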

Random Effects

In mixed models, random effects, often denoted as \gamma, represent unobserved random variables that follow a specified distribution, typically multivariate normal, to account for group- or subject-specific deviations from the overall mean structure. These effects capture heterogeneity across clustering units, such as individuals, schools, or time points, allowing the model to accommodate variability that is not explained by fixed predictors. By incorporating random effects, the model induces correlations among observations within the same group, which is essential for handling clustered or longitudinal data where the independence assumptions of standard regression fail.

Random effects can take various forms, including random intercepts and random slopes. Random intercepts model varying baselines or average levels across groups, permitting each unit to have its own deviation from the fixed intercept. In contrast, random slopes allow the effects of predictors to vary across groups, reflecting differences in how relationships between variables manifest at the unit level. These types enable flexible modeling of both constant and varying influences within the data structure.

The covariance structure of random effects is characterized by a variance-covariance matrix \mathbf{G}, which quantifies the variability and correlations among the random coefficients. This matrix induces dependence in the response variables for observations within clusters, with diagonal elements representing variances of individual random effects and off-diagonal elements capturing covariances between them, such as between intercepts and slopes. Interpretation of random effects involves empirical Bayes shrinkage, where individual-level estimates are pulled toward the grand mean, balancing group-specific data with overall trends to improve precision in small samples.
Variance components from \mathbf{G} indicate the degree of heterogeneity; larger variances suggest substantial between-group differences, while near-zero values imply minimal random variation. For instance, in educational studies analyzing student test scores, random intercepts for schools model deviations in average performance across institutions, accounting for unmeasured school-level factors like resources or culture that correlate scores within schools. This contrasts with fixed effects, which provide the stable mean structure across all units.
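
The shrinkage behavior described above can be made concrete. For a balanced random intercept model, the empirical Bayes estimate of a group's level pulls the raw group mean toward the grand mean by the factor \sigma_u^2 / (\sigma_u^2 + \sigma^2 / n_j); the numbers below are hypothetical:

```python
# Empirical Bayes shrinkage for one group in a balanced random intercept model
sigma_u2 = 4.0     # assumed between-group variance
sigma2 = 9.0       # assumed residual (within-group) variance
n_j = 5            # number of observations in group j

grand_mean = 10.0  # estimated population mean
group_mean = 14.0  # raw sample mean for group j

# Shrinkage factor: the fraction of the group's deviation that is retained
lam = sigma_u2 / (sigma_u2 + sigma2 / n_j)
eb_estimate = grand_mean + lam * (group_mean - grand_mean)
print(lam, eb_estimate)
```

With few observations per group (or a small between-group variance), lam is small and the estimate is pulled strongly toward the grand mean; as n_j grows, lam approaches 1 and the raw group mean dominates.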

Historical Development

Origins and Early Contributions

The foundations of mixed models trace back to early 20th-century statistical work in genetics and agriculture. Ronald A. Fisher introduced the concept of random effects in his 1918 paper "The Correlation Between Relatives on the Supposition of Mendelian Inheritance," which modeled variability in traits among relatives as arising from numerous small genetic effects plus environmental factors.

Practical development accelerated in animal breeding during the mid-20th century. Charles R. Henderson formulated the core mixed model equations in the late 1940s and early 1950s to estimate variance components in unbalanced datasets from agricultural experiments. His 1953 paper and later works established the framework for simultaneous estimation of fixed effects and prediction of random effects via best linear unbiased predictions (BLUP), transforming livestock genetic evaluation and breeding value estimation.

In the late 20th century, mixed models gained prominence in longitudinal and repeated-measures analysis. Nan M. Laird and James H. Ware's influential 1982 paper applied random-effects models to handle correlations in serial observations, incorporating incomplete data through empirical Bayes shrinkage and the EM algorithm, thus broadening the models' utility beyond agriculture and genetics.

Modern Advancements

The 1980s and 1990s saw extensions to accommodate diverse data types and improve inference. Generalized linear mixed models (GLMMs), which combine mixed effects with generalized linear models for non-normal outcomes like binary or count data, emerged prominently following Peter McCullagh and John A. Nelder's 1989 book Generalized Linear Models. A pivotal advancement came in 1993 with Norman E. Breslow and David G. Clayton's paper on approximate inference in GLMMs, introducing penalized quasi-likelihood methods to make estimation computationally feasible for complex scenarios.

Further refinements in the 1990s and 2000s included widespread adoption of restricted maximum likelihood (REML) estimation, originally proposed by Patterson and Thompson, for unbiased variance component estimation. Bayesian implementations, leveraging Markov chain Monte Carlo techniques, addressed challenges in hierarchical modeling and uncertainty quantification, particularly from the early 2000s onward. As of 2025, ongoing developments focus on scalable methods for large datasets, such as in genomics and machine learning integrations, enhancing mixed models' role in interdisciplinary research while tackling computational demands.

Model Formulation

General Linear Mixed Model

The general linear mixed model (LMM) provides a unified framework for analyzing data with both fixed and random effects, particularly suited for clustered or hierarchical structures such as longitudinal studies. In matrix notation, the model is expressed as \mathbf{Y}_{n \times 1} = \mathbf{X}_{n \times p} \boldsymbol{\beta}_{p \times 1} + \mathbf{Z}_{n \times q} \boldsymbol{\gamma}_{q \times 1} + \boldsymbol{\epsilon}_{n \times 1}, where \mathbf{Y} is the response vector, \mathbf{X} \boldsymbol{\beta} captures the fixed effects, \mathbf{Z} \boldsymbol{\gamma} represents the random effects, and \boldsymbol{\epsilon} denotes the residual errors. The random effects \boldsymbol{\gamma} are assumed to follow a multivariate normal distribution \boldsymbol{\gamma} \sim N(\mathbf{0}, \mathbf{G}_{q \times q}), where \mathbf{G} is a positive definite covariance matrix, and the errors \boldsymbol{\epsilon} \sim N(\mathbf{0}, \mathbf{R}_{n \times n}), with \mathbf{R} typically structured to account for within-cluster dependence, such as \mathbf{R} = \sigma^2 \mathbf{I}_n for independent residuals or more general forms like autoregressive structures.

The marginal distribution of the response arises from integrating out the random effects, yielding \mathbf{Y} \sim N(\mathbf{X} \boldsymbol{\beta}, \mathbf{V}), where the variance-covariance matrix is \mathbf{V} = \mathbf{Z} \mathbf{G} \mathbf{Z}' + \mathbf{R}. This formulation yields the likelihood function for parameter estimation, based on the multivariate normal density of \mathbf{Y}. Key assumptions include linearity, meaning the response is a linear function of the fixed and random effects; normality of both the random effects and errors; and homoscedasticity within cluster levels, implying constant variance of residuals conditional on the random effects for observations within the same cluster. These assumptions ensure that the model captures the hierarchical structure without bias in the fixed effects estimates.
For predicting the random effects, the best linear unbiased predictor (BLUP) is given by \hat{\boldsymbol{\gamma}} = \mathbf{G} \mathbf{Z}' \mathbf{V}^{-1} (\mathbf{Y} - \mathbf{X} \hat{\boldsymbol{\beta}}), where \hat{\boldsymbol{\beta}} is the estimated fixed effects vector; this shrinks individual predictions toward the population mean, in line with empirical Bayes principles. The BLUP leverages the full covariance structure to provide efficient predictions, particularly useful in settings like animal breeding or longitudinal data where random effects represent subject-specific deviations.

Special cases of the general LMM include the random intercept model, where \mathbf{Z} is a column of ones within each cluster, allowing each cluster (e.g., subject) to have a unique intercept drawn from N(0, \sigma_b^2), while slopes are fixed; this is common for modeling baseline differences across groups. Another prominent case is the growth curve model, which incorporates random intercepts and slopes for a time covariate, such as \mathbf{Z}_{ij} = [1, t_{ij}]' for measurement j on subject i, enabling the estimation of individual trajectories in longitudinal data. These cases illustrate the flexibility of the LMM in handling correlated data while maintaining the core linearity and normality assumptions.
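
The BLUP formula above can be computed directly for a balanced random intercept model. This sketch (simulated data, assumed known variance components) also verifies the classical identity that, in this balanced case, the BLUP equals the raw group deviation scaled by the shrinkage factor \sigma_u^2 / (\sigma_u^2 + \sigma^2 / n):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 10, 5                      # 10 groups, 5 observations each
sigma_u2, sigma2 = 4.0, 1.0       # assumed known variance components

u = rng.normal(0, np.sqrt(sigma_u2), m)                  # true random intercepts
y = 3.0 + np.repeat(u, n) + rng.normal(0, np.sqrt(sigma2), m * n)

N = m * n
X = np.ones((N, 1))                                      # intercept only
Z = np.kron(np.eye(m), np.ones((n, 1)))                  # group membership
G = sigma_u2 * np.eye(m)
V = Z @ G @ Z.T + sigma2 * np.eye(N)                     # marginal covariance

# GLS estimate of the fixed intercept, then BLUP of the random effects
Vinv = np.linalg.inv(V)
beta_hat = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
blup = G @ Z.T @ Vinv @ (y - X @ beta_hat)

# Raw per-group deviations from the estimated mean, for comparison
raw_dev = y.reshape(m, n).mean(axis=1) - beta_hat[0]
print(blup[:3], raw_dev[:3])
```

Each predicted random effect is strictly smaller in magnitude than the corresponding raw deviation, the shrinkage behavior described above.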

Extensions to Generalized and Nonlinear Forms

Generalized linear mixed models (GLMMs) extend the framework of linear mixed models to handle response variables whose conditional distributions belong to the exponential family, enabling analysis of non-normal data such as binary outcomes, counts, or proportions. In this setup, the mean \mu of the response y is linked to the linear predictor via a monotonic link function, expressed as g(\mu_{ij}) = x_{ij}'\beta + z_{ij}'b_i, where x_{ij}'\beta captures fixed effects, z_{ij}'b_i incorporates random effects for the i-th cluster, and b_i \sim N(0, D) with D denoting the variance-covariance matrix of random effects. The conditional distribution y_{ij} \mid b_i follows an exponential family form, such as the Bernoulli distribution with logistic link for binary data, allowing the model to accommodate heteroscedasticity and non-constant variance through a variance function V(\mu). This facilitates modeling correlated non-normal data while relaxing the Gaussian assumption on the response, as introduced in foundational work on approximate inference for such models.

Nonlinear mixed models (NLMMs) provide a further generalization by permitting nonlinear relationships between the response and predictors, often specified in implicit form as f(y_{ij}, \phi_i, t_{ij}) = \epsilon_{ij}, where \phi_i are subject-specific parameters modeled linearly as \phi_i = \eta + \gamma_i with fixed effects \eta and random effects \gamma_i \sim N(0, \Sigma), and t_{ij} denotes covariates like time. This structure is particularly suited to dynamic processes where the mean response follows a nonlinear function, such as growth or decay models in pharmacokinetics, where individual trajectories vary due to random heterogeneity in parameters. The approach combines nonlinear least squares for fixed effects estimation with maximum likelihood for random effects, enabling flexible handling of repeated measures with nonlinear trends.

A primary computational challenge in both GLMMs and NLMMs arises from the marginal likelihood, which requires integration over the unobservable random effects, resulting in high-dimensional intractable integrals that cannot be evaluated analytically.
These integrals are commonly approximated using Laplace methods, which expand the integrand around its mode to yield a Gaussian approximation, or numerical quadrature techniques that discretize the random effects space, though both methods can introduce bias in small samples or complex structures. Such approximations are essential for feasible estimation, particularly when random effects follow multivariate normal distributions with unstructured covariances.

Practical applications of GLMMs include models for ecological count data, such as analyzing the number of fruits produced by plants under varying nutrient and clipping treatments, where random effects at the genotype level account for hierarchical variation and overdispersion is addressed through model adjustments. For NLMMs, dose-response experiments in pharmacology often employ sigmoidal four-parameter logistic curves with random effects on parameters like the half-maximal inhibitory concentration (IC50) to capture inter-subject variability across cell lines or individuals, enhancing power for detecting treatment differences compared to independent curve fits. These extensions maintain key assumptions of conditional independence of observations given the random effects and normality of the random effects distribution, while relaxing the marginal normality of the response to better fit real-world data structures.
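
The Laplace idea can be illustrated in one dimension with a toy marginal likelihood (this setup is illustrative, not drawn from the sources above): a single Bernoulli observation y = 1 with a logit link and a standard normal random effect, where the exact integral is compared against adaptive quadrature:

```python
import numpy as np
from scipy import integrate, optimize

def integrand(b):
    """p(y=1 | b) * p(b) for a logit link and random effect b ~ N(0, 1)."""
    lik = 1.0 / (1.0 + np.exp(-b))                     # sigmoid(b)
    prior = np.exp(-0.5 * b**2) / np.sqrt(2 * np.pi)   # N(0,1) density
    return lik * prior

# "Exact" marginal likelihood by adaptive quadrature
exact, _ = integrate.quad(integrand, -10, 10)

# Laplace approximation: expand the log-integrand around its mode b_hat
h = lambda b: -np.log(integrand(b))
b_hat = optimize.minimize_scalar(h).x
s = 1.0 / (1.0 + np.exp(-b_hat))
h2 = s * (1.0 - s) + 1.0                # second derivative of h at the mode
laplace = integrand(b_hat) * np.sqrt(2 * np.pi / h2)

print(exact, laplace)
```

For this symmetric one-observation case the exact value is 0.5, and the Laplace value lands close to it; in realistic GLMMs the same expansion is applied per cluster, with accuracy degrading as cluster sizes shrink.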

Estimation Procedures

Likelihood-Based Methods

Likelihood-based methods form the cornerstone of parameter estimation in mixed models, with maximum likelihood (ML) and restricted maximum likelihood (REML) being the most prevalent. These approaches maximize the likelihood of the observed data under the model assumptions, typically using iterative numerical optimization such as Newton-Raphson or expectation-maximization (EM) algorithms, as closed-form solutions are generally unavailable.

Maximum likelihood estimation treats both fixed effects \beta and variance components \theta (encompassing the covariance matrices for random effects and residuals) as fixed unknowns. The likelihood for the response vector y in a linear mixed model is L(\beta, \theta \mid y) = (2\pi)^{-n/2} |V|^{-1/2} \exp\left(-\frac{1}{2} (y - X\beta)^T V^{-1} (y - X\beta)\right), where V = ZDZ^T + R is the marginal covariance matrix, Z links observations to random effects, D is the random effects covariance matrix, and R is the residual covariance matrix. ML provides consistent estimates but tends to underestimate variance components, particularly in small samples, due to not accounting for the degrees of freedom lost when estimating fixed effects.

Restricted maximum likelihood addresses this bias by focusing on the likelihood of the residuals after adjusting for fixed effects. It maximizes the adjusted likelihood L_R(\theta \mid y) \propto |V|^{-1/2} |X^T V^{-1} X|^{-1/2} \exp\left(-\frac{1}{2} (y - X\hat{\beta})^T V^{-1} (y - X\hat{\beta})\right), where \hat{\beta} is the generalized least squares estimate and the proportionality constant involves the number of fixed effects p through a factor (2\pi)^{-(n-p)/2}. This "restricted" form transforms the data to remove fixed effect influences, yielding approximately unbiased variance component estimates. REML is preferred in practice for its better small-sample properties and is the default in many software implementations, though it complicates likelihood ratio testing of fixed effects, since REML likelihoods are not comparable across models with different fixed effects structures. Both methods extend to generalized linear mixed models (GLMMs) via penalized quasi-likelihood or integral approximations.
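
The ML log-likelihood above can be evaluated directly once V is formed. This sketch, on simulated random intercept data with hypothetical dimensions, cross-checks the explicit formula against scipy's multivariate normal density:

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(3)
m, n = 6, 4                                  # 6 clusters, 4 observations each
N = m * n
X = np.column_stack([np.ones(N), rng.normal(size=N)])
Z = np.kron(np.eye(m), np.ones((n, 1)))      # random intercept design
beta = np.array([1.0, 0.5])
D = 2.0 * np.eye(m)                          # random intercept variance
R = 1.0 * np.eye(N)                          # residual variance

V = Z @ D @ Z.T + R                          # marginal covariance
y = rng.multivariate_normal(X @ beta, V)     # simulated response

# Log-likelihood from the closed-form expression
r = y - X @ beta
sign, logdet = np.linalg.slogdet(V)
ll_formula = -0.5 * (N * np.log(2 * np.pi) + logdet + r @ np.linalg.solve(V, r))

# Same quantity via scipy's multivariate normal density
ll_scipy = multivariate_normal(mean=X @ beta, cov=V).logpdf(y)
print(ll_formula, ll_scipy)
```

In practice this evaluation sits inside an optimizer that searches over the variance components parameterizing D and R.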

Alternative Approaches

Bayesian estimation treats the parameters of mixed models, including fixed effects \beta and variance components \theta, as random variables with specified prior distributions, enabling inference via Markov chain Monte Carlo (MCMC) methods such as Gibbs sampling. This approach samples from the joint posterior distribution p(\beta, \theta, u \mid y), where u denotes random effects and y the observed data, naturally incorporating uncertainty in variance estimates and allowing for complex hierarchies. Priors on \beta are often diffuse normal distributions, while those on \theta (e.g., inverse-gamma for variances) facilitate conjugate updating in simple cases, though non-conjugate priors require Metropolis-Hastings steps within the MCMC chain. This framework is particularly useful for generalized linear mixed models (GLMMs), where the likelihood may not permit closed-form solutions.

The method of moments provides an alternative by estimating variance components through equating observed moments (e.g., sums of squares from ANOVA tables) to their population expectations, yielding explicit formulas for balanced designs in one-way or nested layouts. For a balanced one-way design, the between-group mean square estimates \sigma^2 + n\sigma_u^2, while the within-group mean square estimates \sigma^2, allowing direct solving for \sigma_u^2. This technique is computationally straightforward and avoids iterative optimization, making it suitable for initial approximations or large datasets where efficiency trumps precision. However, it assumes balanced designs, and it can produce biased or unstable estimates in unbalanced or complex scenarios.

Empirical Bayes methods bridge frequentist and Bayesian paradigms by estimating hyperparameters \theta from the marginal likelihood p(y \mid \theta) = \int p(y \mid \beta, u, \theta) p(\beta, u \mid \theta) \, d\beta \, du, then using these to compute posterior modes for random effects, akin to best linear unbiased predictors (BLUPs). In longitudinal settings, this shrinks individual random effects toward the grand mean, with shrinkage factors depending on estimated variances.
Unlike full Bayesian analysis, it treats \theta as fixed post-estimation, reducing computational burden while still regularizing estimates. Bayesian approaches offer advantages in handling complex, non-standard models (e.g., spatial correlations or non-conjugate priors) and providing full posterior distributions for uncertainty quantification, outperforming likelihood methods in small samples or with informative priors. In contrast, the method of moments excels in simplicity and speed for balanced data but suffers from lower statistical efficiency, potential negative variance estimates, and poor performance in unbalanced designs compared to likelihood-based benchmarks. For example, MCMC via Gibbs sampling has been applied to GLMMs for binary outcomes in multi-response settings, using latent variable representations to sample from non-conjugate posteriors such as those arising with logistic links.
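
The balanced one-way moment equations above can be solved in a few lines. This sketch simulates data with assumed true values \sigma_u^2 = 4 and \sigma^2 = 1, then recovers them from the ANOVA mean squares:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 200, 50                    # 200 groups, 50 observations per group
sigma_u2, sigma2 = 4.0, 1.0       # assumed true variance components

u = rng.normal(0, np.sqrt(sigma_u2), m)
y = u[:, None] + rng.normal(0, np.sqrt(sigma2), (m, n))

group_means = y.mean(axis=1)
grand_mean = y.mean()

# ANOVA mean squares for the balanced one-way layout
ms_between = n * np.sum((group_means - grand_mean) ** 2) / (m - 1)
ms_within = np.sum((y - group_means[:, None]) ** 2) / (m * (n - 1))

# Equate to expectations: E[MSB] = sigma^2 + n sigma_u^2, E[MSW] = sigma^2
sigma2_hat = ms_within
sigma_u2_hat = (ms_between - ms_within) / n
print(sigma2_hat, sigma_u2_hat)
```

Note that nothing prevents ms_between from falling below ms_within in small samples, which is exactly how the negative variance estimates mentioned above arise.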

Random Effects Specification

Structure Selection

Selecting the appropriate structure for random effects in mixed models is crucial for capturing the hierarchical or clustered nature of data while avoiding misspecification. The random effects structure typically includes decisions on whether to include random intercepts, random slopes for predictors, and correlations among them, guided by statistical criteria and exploratory analyses.

Likelihood ratio tests (LRT) are commonly used to compare nested models differing only in their random effects structure, where the test statistic follows a chi-squared distribution under the null hypothesis of no additional random effects. However, caution is advised when testing variance components at the boundary (e.g., zero variance), as the standard chi-squared approximation may not hold, leading to inflated Type I error rates; in such cases, a mixture of chi-squared distributions (e.g., 50:50 mix of chi-squared with 0 and 1 degrees of freedom) is recommended for p-value computation. For non-nested models, information criteria such as the Akaike Information Criterion (AIC) or Bayesian Information Criterion (BIC) provide a penalty for model complexity and have shown high accuracy in selecting the true random effects structure across various simulations, though BIC tends to favor simpler models more than AIC.

A practical strategy begins with a simple random intercept model to account for clustering, then incrementally adds random slopes or correlations based on data exploration, such as computing the intraclass correlation coefficient (ICC) to assess the proportion of variance attributable to the grouping factor; an ICC near zero may suggest omitting random effects for certain levels. This bottom-up approach helps build parsimonious structures while evaluating improvements via the aforementioned criteria.
Model diagnostics play a key role in validating the selected structure, including residual plots to check for patterns indicating omitted random effects or heteroscedasticity, and quantile-quantile (QQ) plots to assess normality of residuals and random effects. Additionally, approximate F-tests can evaluate the significance of variance components, providing an alternative to LRT when boundary issues arise, though their performance depends on the number of groups and group sizes.

Common pitfalls include over-parameterization, where excessively complex structures lead to singular fits (zero or near-zero variance estimates), often due to insufficient data per group or high correlations among random effects, resulting in unstable estimates. Under-specification, such as ignoring correlations between random effects, can bias fixed effect estimates and underestimate standard errors, particularly in clustered designs. For example, in longitudinal studies tracking outcomes over time, one might start with a random intercept for subjects and test for a random slope on time using LRT; if significant (accounting for boundary constraints), it indicates heterogeneous rates of change across individuals, improving model fit as evidenced by lower AIC compared to the intercept-only model.
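
The boundary correction described above is simple to apply: for testing a single variance component, the LRT statistic is referred to a 50:50 mixture of \chi^2_0 (a point mass at zero) and \chi^2_1, which halves the naive p-value. A minimal sketch:

```python
from scipy.stats import chi2

def boundary_lrt_pvalue(lrt_stat):
    """p-value for testing one variance component against zero.

    Uses the 50:50 mixture of chi2(0) and chi2(1) appropriate when the
    null value lies on the boundary of the parameter space.
    """
    if lrt_stat <= 0:
        return 1.0
    return 0.5 * chi2.sf(lrt_stat, df=1)

# Hypothetical LRT statistic from comparing intercept-only vs. random-slope fits
stat = 2.71
print(boundary_lrt_pvalue(stat), chi2.sf(stat, df=1))
```

A statistic of 2.71 gives a mixture p-value of about 0.05, whereas the uncorrected \chi^2_1 reference would report roughly 0.10, illustrating how ignoring the boundary makes the test conservative.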

Variance Component Estimation

Variance component estimation in linear mixed models involves determining the parameters that define the variance-covariance matrices G for random effects and R for residuals, typically parameterized by a vector θ. This process is integrated into maximum likelihood (ML) or restricted maximum likelihood (REML) estimation, where the goal is to find θ that maximizes the (restricted) likelihood through the covariance matrix V = Z G Z' + R, with Z denoting the design matrix for random effects. REML, introduced by Patterson and Thompson, adjusts for the loss of degrees of freedom due to fixed effects estimation, yielding less biased variance component estimates compared to ML, particularly in balanced designs. The optimization proceeds iteratively using algorithms like the scoring method or Newton-Raphson, which rely on the observed or expected Fisher information matrix to update θ until convergence.

Once estimated, variance components are interpreted through measures like the intraclass correlation coefficient (ICC), defined as ρ = σ_γ² / (σ_γ² + σ_ε²), where σ_γ² is the between-group (random effect) variance and σ_ε² is the within-group (residual) variance. This coefficient quantifies the strength of clustering or dependence induced by the random effects, with values near 0 indicating negligible grouping and values near 1 suggesting strong homogeneity within groups. Higher-order ICCs can extend this to multiple levels, partitioning total variance into components attributable to each random effect structure.

Confidence intervals for variance components are constructed using profile likelihood methods, which profile out nuisance parameters to obtain intervals based on the likelihood ratio statistic, or parametric bootstrap approaches that resample from the fitted model to approximate the sampling distribution.
Profile likelihood intervals are asymptotically valid and handle the positive definiteness constraint naturally, while the parametric bootstrap provides robust coverage in moderate samples by generating replicates under the estimated model parameters.

Key challenges in variance component estimation include downward bias in small samples, where ML estimates tend to underestimate true variances more severely than REML, potentially leading to inflated Type I error rates in hypothesis tests. Additionally, estimates on the boundary (e.g., zero variance for a random effect) complicate inference, as standard likelihood ratio tests require adjusted p-values due to the non-regular distribution under the null; methods like permutation tests or fiducial generalized p-values address this by accounting for the boundary constraint. For example, in a two-level multilevel dataset with students nested in schools, the total variance in student outcomes can be decomposed into between-school variance σ_γ² (capturing school-level differences) and within-school variance σ_ε² (individual-level variation), allowing researchers to assess the relative contribution of contextual factors to overall heterogeneity.
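
A parametric bootstrap interval for a variance component can be sketched end to end. This toy version uses the balanced one-way moment estimator as a stand-in for a full ML fit (assumed values: true σ_u² = 2, σ² = 1), refitting on data resampled from the fitted model:

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate(m, n, mu, sigma_u2, sigma2, rng):
    """Draw a balanced one-way dataset: m groups, n observations each."""
    u = rng.normal(0, np.sqrt(sigma_u2), m)
    return mu + u[:, None] + rng.normal(0, np.sqrt(sigma2), (m, n))

def moment_estimates(y):
    """Moment estimates (sigma_u^2, sigma^2), truncated at the boundary."""
    m, n = y.shape
    gm = y.mean(axis=1)
    msb = n * np.sum((gm - y.mean()) ** 2) / (m - 1)
    msw = np.sum((y - gm[:, None]) ** 2) / (m * (n - 1))
    return max((msb - msw) / n, 0.0), msw

m, n = 30, 10
y = simulate(m, n, 0.0, 2.0, 1.0, rng)
su2_hat, s2_hat = moment_estimates(y)

# Parametric bootstrap: resample from the fitted model, re-estimate each time
boot = np.array([
    moment_estimates(simulate(m, n, y.mean(), su2_hat, s2_hat, rng))[0]
    for _ in range(500)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(su2_hat, (lo, hi))
```

The boundary truncation inside moment_estimates mirrors the boundary issue discussed above: bootstrap replicates piling up at zero are a diagnostic that the component may not be distinguishable from zero.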

Applications and Examples

Multilevel Data Analysis

Multilevel data analysis employs mixed models to handle hierarchical or nested data structures, prevalent in fields like the social sciences and education, where lower-level units (e.g., students) are grouped within higher-level units (e.g., schools), inducing dependence among observations within the same group. Random effects in these models capture the variability attributable to the higher level (level-2), preventing underestimation of standard errors that would occur if treating data as independent. This approach is essential for accurately modeling clustered data, as ignoring the hierarchy can lead to biased inferences about relationships.

A foundational setup is the two-level random intercept model, expressed as Y_{ij} = \beta_0 + u_j + \beta_1 X_{ij} + \varepsilon_{ij}, where Y_{ij} is the outcome for individual i in group j, \beta_0 is the fixed intercept, u_j \sim N(0, \sigma_u^2) represents the group-specific random deviation in the intercept, \beta_1 is the fixed slope for predictor X_{ij}, and \varepsilon_{ij} \sim N(0, \sigma^2) is the level-1 residual. This formulation partitions the total variance into within-group and between-group components, enabling precise estimation of group-level effects.

The primary benefits of multilevel mixed models include accounting for non-independence in nested data, which adjusts standard errors for clustering and yields valid p-values and confidence intervals, unlike ordinary least squares regression. Additionally, they facilitate cross-level interactions, such as how school resources moderate the effect of student socioeconomic status on outcomes, allowing for nuanced exploration of contextual influences. These models also improve predictive accuracy by borrowing strength across groups, particularly useful when group sizes vary.
In interpretation, fixed effects describe population-level trends—e.g., \beta_0 as the expected outcome when X_{ij} = 0, and \beta_1 as the change in Y per unit increase in X—while random variances like \sigma_u^2 quantify clustering, with the intraclass correlation coefficient \rho = \sigma_u^2 / (\sigma_u^2 + \sigma^2) indicating the proportion of total variance due to groups. This dual structure supports both generalizable insights and group-specific adjustments. A representative example is predicting student achievement in education research, where a two-level model uses student-level variables (e.g., prior test scores) as fixed effects and school-level random intercepts to model between-school variability. In analyses of large-scale datasets, such models reveal that school effects often account for 10 to 20% of variance in outcomes like math proficiency, highlighting the role of institutional factors beyond individual traits.

Longitudinal and Repeated Measures Studies

Longitudinal and repeated measures studies use mixed models to analyze data collected over time on the same subjects, such as in clinical trials or developmental research, where measurements within individuals are correlated due to inherent subject-specific traits. Random effects account for this within-subject dependence, allowing models to flexibly specify trajectories over time while accommodating unbalanced designs with varying numbers of observations or data missing at random. This is crucial for capturing individual growth patterns and testing time-varying effects, which standard regression methods cannot handle without bias.

A basic random intercept model for longitudinal data is Y_{ij} = \beta_0 + \beta_1 t_{ij} + u_i + \varepsilon_{ij}, where Y_{ij} is the outcome for subject i at time t_{ij}, \beta_0 is the fixed intercept, \beta_1 is the fixed slope for time, u_i \sim N(0, \sigma_u^2) is the subject-specific random intercept, and \varepsilon_{ij} \sim N(0, \sigma^2) is the residual. Extensions can include random slopes v_i t_{ij} to allow varying rates of change across subjects. These models partition variance into between-subject and within-subject components, enabling estimation of average trends and individual deviations.

Key advantages include robust handling of missing data through likelihood-based estimation, which avoids the bias of listwise deletion under the missing-at-random assumption, and the ability to model complex covariance structures like autoregressive processes for serial correlation. Mixed models also support hypothesis testing on fixed effects (e.g., treatment impacts) and variance components (e.g., heterogeneity in change rates), providing more efficient estimates than generalized estimating equations in many cases. For interpretation, fixed effects represent overall patterns, such as \beta_1 as the average change per time unit, while random variances \sigma_u^2 and any slope variances quantify individual variability. This framework is particularly valuable in fields like psychology and medicine for studying change processes.
A common example is analyzing repeated outcome measurements in a clinical trial, where a mixed model with time since baseline as a fixed effect and random intercepts for patients captures individual baseline differences alongside average treatment effects over follow-up visits. Such analyses often reveal that patient-specific factors account for 30-50% of the total variance in outcome trajectories, underscoring the importance of modeling individual responses.
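The variance partition behind such a statement can be sketched as an intraclass correlation from a fitted random-intercept model; the simulated "trial" below and its patient-effect size are arbitrary assumptions, not data from any real study:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_pat, n_vis = 80, 5
pat = np.repeat(np.arange(n_pat), n_vis)
time = np.tile(np.arange(n_vis, dtype=float), n_pat)

u = rng.normal(0, 0.8, n_pat)  # patient-specific random intercepts
y = 10.0 - 0.3 * time + u[pat] + rng.normal(0, 1.0, pat.size)
df = pd.DataFrame({"y": y, "time": time, "pat": pat})

res = smf.mixedlm("y ~ time", df, groups=df["pat"]).fit()
var_u = res.cov_re.iloc[0, 0]  # between-patient variance
var_e = res.scale              # residual (within-patient) variance
icc = var_u / (var_u + var_e)  # share of variance due to patient effects
print(round(icc, 2))
```

With these simulation settings the patient-level share of variance lands in roughly the 30-50% range described above.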

Software Tools

R Packages

Several prominent R packages facilitate fitting mixed models, enabling researchers to handle hierarchical and clustered data structures efficiently within the R ecosystem. These packages provide interfaces for specifying fixed and random effects, support both linear and generalized linear mixed models (LMMs and GLMMs), and offer tools for estimation, diagnostics, and inference.

The lme4 package is widely used for fitting LMMs and GLMMs, leveraging efficient linear algebra via the Eigen C++ library for improved performance on large datasets. It employs maximum likelihood (ML) and restricted maximum likelihood (REML) estimation, with lmer() for linear models and glmer() for generalized cases, though it has limitations for complex GLMMs, such as those requiring intricate residual covariance structures. Basic usage follows a formula syntax, such as lmer(Y ~ X + (1|group)), where fixed effects precede random intercepts or slopes specified in parentheses; model diagnostics can be performed using plot() for residual checks or resid() for extracting residuals. While scalable to moderately large data, lme4 may encounter convergence issues or heavy computational demands on datasets exceeding millions of observations.

In contrast, the nlme package specializes in linear and nonlinear mixed-effects models, particularly excelling in longitudinal analysis through advanced correlation structures such as AR(1) or spatial correlation. It uses lme() for linear models and nlme() for nonlinear ones, supporting features such as groupedData() for data organization and lmList() for preliminary per-group fixed-effects fits, which aid exploratory analysis in repeated measures studies. nlme likewise relies on ML or REML, making it suitable for datasets with complex variance-covariance patterns, though it is generally slower than lme4 for purely linear models without nonlinear components.
For Bayesian approaches, the brms package provides an intuitive interface for fitting multilevel models with the Stan probabilistic programming language, supporting a broad range of distributions and link functions in an lme4-like syntax. It enables full Bayesian inference via Markov chain Monte Carlo (MCMC), making it ideal for incorporating prior knowledge into hierarchical models, such as random effects with user-specified priors; typical use cases include GLMMs with small samples or complex prior structures.

The glmmTMB package extends GLMM capabilities, particularly for zero-inflated or hurdle models common in ecological and count data applications, and is built on the Template Model Builder (TMB) automatic differentiation engine for rapid maximum likelihood fitting. It handles extensions such as the Conway-Maxwell-Poisson distribution and uses syntax similar to glmer(), e.g., glmmTMB(Y ~ X + (1|group), family=poisson(), ziformula=~1) for zero-inflation; it balances speed and flexibility but shares scalability limits with other packages on massive datasets. Implementations in other languages, such as SAS, Stata, Python, MATLAB, and Julia, offer complementary tools for non-R environments.

Other Implementations

Several prominent statistical software packages outside R implement mixed-effects models, offering alternatives for researchers in fields such as the social sciences, economics, and epidemiology. These tools vary in their syntax, computational efficiency, and support for advanced features such as generalized linear mixed models (GLMMs) or Bayesian estimation.

In SAS, the PROC MIXED procedure fits a broad range of mixed linear models, including those with fixed and random effects, and supports inference via maximum likelihood or restricted maximum likelihood (REML) estimation. It handles complex covariance structures and is particularly useful for analyzing unbalanced data in clinical trials and agricultural experiments. Generalized linear mixed models are accommodated by the related PROC GLIMMIX procedure.

Stata's mixed command fits multilevel mixed-effects models, treating data as hierarchical, with fixed effects analogous to standard regression coefficients and random effects capturing group-level variation. It supports interactions, crossed random effects, and post-estimation tests for model comparison, making it suitable for econometric and epidemiological applications. For nonlinear responses, Stata extends this via commands such as melogit and meprobit.

Python's statsmodels library provides the MixedLM class within its mixed linear models module, designed for regression analyses of dependent data such as longitudinal or clustered observations. It uses REML or ML estimation and allows specification of random intercepts and slopes, with built-in support for comparing models via likelihood ratio tests. Additionally, statsmodels offers Bayesian generalized linear mixed models that extend to non-normal responses such as binary or count data.

MATLAB's Statistics and Machine Learning Toolbox includes the fitlme function, which fits linear mixed-effects models using a formula-based syntax similar to R's lme4, incorporating fixed and random effects with customizable covariance types (e.g., diagonal, unstructured).
It facilitates model diagnostics, such as residual plots and random-effects predictions, and is optimized for integration with MATLAB's matrix operations in simulation-heavy workflows. For matrix-based inputs, fitlmematrix provides an alternative suited to large-scale computations.

Julia's MixedModels.jl package fits linear and generalized linear mixed-effects models with high performance, leveraging Julia's speed for large datasets. It defines models via formula interfaces, estimates parameters using numerical optimization, and outputs detailed summaries including fixed-effect estimates and variance components. This implementation is particularly advantageous for reproducible research given Julia's scripting capabilities.
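To illustrate the formula-style interfaces described above, the sketch below fits a random intercept and slope model with statsmodels' MixedLM, using its re_formula argument to add a per-group slope; the simulated data and parameter values are arbitrary:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_grp, n_obs = 60, 8
g = np.repeat(np.arange(n_grp), n_obs)
x = np.tile(np.linspace(0, 1, n_obs), n_grp)

u0 = rng.normal(0, 1.0, n_grp)  # group-specific random intercepts
u1 = rng.normal(0, 0.5, n_grp)  # group-specific random slopes
y = 1.0 + 2.0 * x + u0[g] + u1[g] * x + rng.normal(0, 0.3, g.size)
df = pd.DataFrame({"y": y, "x": x, "g": g})

# re_formula="~x" adds a random slope for x alongside the random intercept
m = smf.mixedlm("y ~ x", df, groups=df["g"], re_formula="~x").fit()
print(m.params["x"])  # fixed slope, near the true value 2.0
print(m.cov_re)       # 2x2 covariance matrix of intercepts and slopes
```

The 2x2 cov_re output partitions group-level variability into intercept variance, slope variance, and their covariance, mirroring the random-slope extension discussed in the longitudinal section.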

References

  1. [1]
    Introduction to Linear Mixed Models - OARC Stats - UCLA
    Linear mixed models are an extension of simple linear models to allow both fixed and random effects, and are particularly used when there is non independence ...
  2. [2]
    6.5 - Introduction to Mixed Models | STAT 502
    Treatment designs can be comprised of both fixed and random effects. When we have this situation, the treatment design is referred to as a mixed model.Missing: definition | Show results with:definition
  3. [3]
    Mixed-effects model: a useful statistical tool for longitudinal ... - NIH
    Mixed-effects models combine fixed and random variables. · Fixed effects do not change over time; their levels have explicit values that are not correlated.
  4. [4]
    CHARLES ROY HENDERSON | Biographical Memoirs: Volume 73
    What Henderson knew but did not publish until later was that the mixed model equations can be augmented to include equations for the animals without records.
  5. [5]
    [PDF] Charles Roy Henderson, 1911−1989: A Brief Biography1
    Sep 18, 1998 · The mixed model equations result from a simple modification of least squares equations. The simple modification results in best linear unbiased.
  6. [6]
    Random-effects models for longitudinal data - PubMed
    Models for the analysis of longitudinal data must recognize the relationship between serial observations on the same unit.
  7. [7]
    Multi-Level Modeling
    Mixed models (aka random effects models or multilevel models) are an attractive option for working with clustered data.<|control11|><|separator|>
  8. [8]
    [PDF] A Simple, Linear, Mixed-effects Model - cs.wisc.edu
    Mixed-effects models or, more simply, mixed models are statistical models that incorporate both fixed-effects parameters and random effects. Because of the way ...
  9. [9]
    [PDF] An introduction to fitting and evaluating mixed-effects models in R
    Mixed-effects modeling is a multidimensional statistical analysis capable of modeling complex relationships between predictor and outcome variables while ...
  10. [10]
    [PDF] Random-effects models for longitudinal data. - Semantic Scholar
    A unified approach to fitting these models, based on a combination of empirical Bayes and maximum likelihood estimation of model parameters and using the EM ...
  11. [11]
    Linear Mixed Models - NCBI - NIH
    Jan 14, 2022 · Linear mixed models (LMM) are flexible extensions of linear models in which fixed and random effects enter linearly into the model.
  12. [12]
    [PDF] Chapter 15 Mixed Models - Statistics & Data Science
    The term mixed model refers to the use of both fixed and random effects in the same analysis. As explained in section 14.1, fixed effects have levels that are.
  13. [13]
    [PDF] An Introduction to Linear Mixed Models in R and SAS
    Mixed-effect models (aka, “mixed models”) are like classical statistical models, but with some regression parameters (“fixed effects”) replaced by “random ...
  14. [14]
    Mixed Models Theory - SAS Help Center
    Sep 29, 2025 · Least squares is no longer the best method. Generalized least squares (GLS) is more appropriate, minimizing. left-parenthesis bold y minus ...
  15. [15]
    An alternative derivation method of mixed model equations ... - NIH
    Henderson, Kempthorne, Searle, and von Krosigk (1959) proved that the solutions of fixed effects in MME are identical to the solutions of generalized least‐ ...
  16. [16]
    [PDF] Random-Effects Models for Longitudinal Data Nan M. Laird
    Oct 7, 2007 · A generalization of the growth curve model which allows missing data. Journal of Multivariate Analysis 3, 117-124. Laird, N. M. (1982).
  17. [17]
    [PDF] Guidelines for Selecting the Covariance Structure in Mixed Model ...
    Mixed Models, i.e. models with both fixed and random effects arise in a variety of research situations. Split plots, strip plots, repeated measures, multi-site ...
  18. [18]
    6.6 - Mixed Model: School Example | STAT 502
    We can use a mixed model in which we model teacher as a random effect nested within the factorial fixed treatment combinations of Region and School type.
  19. [19]
  20. [20]
    Approximate Inference in Generalized Linear Mixed Models
    Approximate inference in GLMMs uses PQL for mean parameters and pseudo-likelihood for variances, with PQL being of practical value.
  21. [21]
    Nonlinear mixed effects models for repeated measures data - PubMed
    We propose a general, nonlinear mixed effects model for repeated measures data and define estimators for its parameters.
  22. [22]
    [PDF] Feasible estimation of generalized linear mixed models (GLMM ...
    Oct 6, 2010 · The core idea is to deal with the intractable integrals in the likelihood function by multivariate Taylor's approximation. The accuracy of ...
  23. [23]
    [PDF] Generalized linear mixed models: a practical guide for ecology and ...
    Generalized linear mixed models (GLMMs) provide a more flexible approach for analyzing nonnormal data when random effects are pre- sent. The explosion of ...
  24. [24]
    Nonlinear mixed effects dose response modeling in high throughput ...
    To overcome this limitation, we considered NLME models to analyze multiple dose response curves simultaneously to account for the variability across the cell ...
  25. [25]
    [PDF] generalized linear mixed models: theory and practice
    Structure of the GLMM. We start with the conditional distribution of y given b. We assume conditional independence of the elements of y with each distribution.
  26. [26]
    [PDF] Sampling-Based Approaches to Calculating Marginal Densities Alan ...
    Nov 17, 2007 · June 1990, Vol. 85, NO. 410, Theory and Methods. Page 3 ... Gelfand and Smith: Sampling-Based Approaches to Calculating Marginal Densities.
  27. [27]
    [PDF] MCMC Methods for Multi-Response Generalized Linear Mixed Models
    Jan 18, 2010 · MCMCglmm is an R package using Markov chain Monte Carlo for multi-response generalized linear mixed models, supporting various distributions ...
  28. [28]
    [PDF] Methods of variance component estimation
    We describe methods of variance component esti- mation for the simplest case, the one-way ANOVA, and demonstrate most of them by some data sets.
  29. [29]
    A comparison of Bayesian and frequentist methods in random ...
    Jan 18, 2020 · The RMSE was comparable between methods but larger in indirectly than in directly estimated treatment effects.
  30. [30]
    [PDF] SUGI 28: An Introduction to the Analysis of Mixed Models
    Disadvantages: (a) There is no unique way in which to form an ANOVA table when the data are not balanced. (b) The procedure can produce negative estimates of ...
  31. [31]
    [PDF] Model Selection in Linear Mixed Models - arXiv
    We arrange, implement, discuss and compare model selection methods based on four major ap- proaches: information criteria such as AIC or BIC, shrinkage methods.
  32. [32]
    [PDF] Likelihood ratio tests in linear mixed models with one variance ...
    Mar 31, 2003 · We consider the problem of testing null hypotheses that include restrictions on the variance component in a linear mixed model with one variance ...
  33. [33]
    Restricted likelihood ratio testing in linear mixed models with ...
    Abstract: We consider the problem of testing for zero variance compo- nents in linear mixed models with correlated or heteroscedastic errors. In.
  34. [34]
    Specifying the Random Effect Structure in Linear Mixed Effect ...
    Furthermore, AIC and BIC were found to select the true model in the majority of cases, although selection accuracy varied by LMEM random effect structure.
  35. [35]
    The Intraclass Correlation Coefficient in Mixed Models
    The ICC, or Intraclass Correlation Coefficient, can be very useful in many statistical situations, but especially so in Linear Mixed Models.
  36. [36]
    6: Random Effects and Introduction to Mixed Models | STAT 502
    Calculate and interpret the intraclass correlation coefficient. Combining fixed and random effects in the mixed model. Work with mixed models that include ...
  37. [37]
    Level-specific residuals and diagnostic measures, plots, and tests ...
    Mar 1, 2022 · These diagnostic plots can be used to explore missing random effects not captured by the model. For example, a scatter plot of residuals versus ...
  38. [38]
    [PDF] Residual Analysis for Linear Mixed Models - Statistics & Data Science
    Summary. Residuals are frequently used to evaluate the validity of the assumptions of statistical models and may also be employed as tools for model ...<|control11|><|separator|>
  39. [39]
    (PDF) On the performance of the exact F-test in linear mixed models ...
    Feb 8, 2022 · Testing zero variance components in linear mixed models can be performed using an exact F-test. The test statistic is derived using a ...
  40. [40]
    Testing variance components in linear mixed modeling using ...
    The goal of this paper is to introduce a practically feasible permutation method to make inferences about variance components while considering the boundary ...<|separator|>
  41. [41]
    Fixed or random? On the reliability of mixed‐effects models for ... - NIH
    Jul 24, 2022 · We analyzed the consequences of treating a grouping variable with 2–8 levels as fixed or random effect in correctly specified and alternative models.
  42. [42]
    [PDF] Selecting the Best Linear Mixed Model under REML - GitHub Pages
    order to assess the performance of the criteria in choosing the proper random effects structure, three candidate models were fit for each generated dataset ...
  43. [43]
    Accessible analysis of longitudinal data with linear mixed effects ...
    May 6, 2022 · We provide an interactive Shiny App to enable accessible and appropriate analysis of longitudinal data using LME models.
  44. [44]
    [PDF] Longitudinal Data Analysis via Linear Mixed Models
    Random Effects Models: Focus on how regression coefficients vary over individuals. C.J. Anderson (Illinois). Longitudinal Data Analysis via Linear Mixed Models.
  45. [45]
    [PDF] A Reduced Bias Method of Estimating Variance Components in ...
    Variance component estimation in generalized linear mixed models has not received the same attention as for Gaussian linear mixed models. In LMMs, maximum.Missing: papers | Show results with:papers
  46. [46]
    Likelihood Ratio Testing for Zero Variance Components in Linear ...
    In this paper, we consider the problem of testing the null hypothesis of a zero variance component in a linear mixed model (LMM).
  47. [47]
    What are multilevel models and why should I use them?
    Multilevel models recognize data hierarchies, allowing for residuals at each level, such as grouping child outcomes within schools.
  48. [48]
    [PDF] Multilevel (hierarchical) modeling: what it can and can't do∗
    Jun 1, 2005 · The multilevel model is highly effective for predictions at both levels of the model but could easily be misinterpreted for causal inference.
  49. [49]
    Advantages and pitfalls in the application of mixed model ... - NIH
    The advantages of mixed linear model association (MLMA) include preventing false-positive associations due to population or relatedness structure, and ...
  50. [50]
    [PDF] 1 Mixed-Effects Location Scale Models for Joint Modelling School ...
    The traditional model is a mixed-effects linear regression of student current achievement on student prior achievement, background characteristics, and a school ...
  51. [51]
    [PDF] lme4: Linear Mixed-Effects Models using 'Eigen' and S4
    lme4 includes generalized linear mixed model (GLMM) capabilities, via the glmer function. • lme4 does not currently implement nlme's features for modeling ...
  52. [52]
    [PDF] lme4: Mixed-effects modeling with R - ETH Zürich
    In this book I describe the use of R for examining, managing, visualizing, and modeling multilevel data. Madison, WI, USA,. Douglas Bates. October, 2005 iii ...
  53. [53]
    [PDF] nlme: Linear and Nonlinear Mixed Effects Models
    Lindstrom, M.J. and Bates, D.M. (1990) "Nonlinear Mixed Effects Models for Repeated Measures. Data", Biometrics, 46, 673-687. Pinheiro, J.C. and Bates., D.M. ...
  54. [54]
    How to choose nlme or lme4 R library for mixed effects models?
    Dec 10, 2010 · Both packages use Lattice as the backend, but nlme has some nice features like groupedData() and lmList() that are lacking in lme4 (IMO).Overview of lme4 and linear effects model? - Cross ValidatedR's lmer cheat sheet - Cross Validated - Stack ExchangeMore results from stats.stackexchange.comMissing: limitations | Show results with:limitations
  55. [55]
    Chapter 3 A tutorial for using the lme function from the nlme package.
    The `lme` function fits linear mixed-effects models, allowing for nested random effects, correlated errors, and unequal variances. It uses maximum likelihood ...
  56. [56]
    [PDF] brms: An R Package for Bayesian Multilevel Models using Stan
    The brms package implements Bayesian multilevel models in R using the probabilis- tic programming language Stan. A wide range of distributions and link ...
  57. [57]
    brms: An R Package for Bayesian Multilevel Models Using Stan
    Aug 29, 2017 · The brms package implements Bayesian multilevel models in R using the probabilistic programming language Stan. A wide range of distributions ...
  58. [58]
    [PDF] Getting started with the glmmTMB package
    glmmTMB is an R package built on the Template Model Builder automatic differentiation engine, for fitting generalized linear mixed models and exten-.
  59. [59]
    glmmTMB Balances Speed and Flexibility Among Packages for Zero ...
    Nov 30, 2017 · One unique feature of glmmTMB (among packages that fit zero-inflated mixed models) is its ability to estimate the Conway-Maxwell-Poisson ...
  60. [60]
    Generalized Linear Mixed Models using Template Model Builder
    glmmTMB is an R package for fitting generalized linear mixed models (GLMMs) and extensions, built on Template Model Builder, which is in turn built on CppAD ...
  61. [61]
    Overview: MIXED Procedure - SAS Help Center
    Sep 29, 2025 · The MIXED procedure fits a variety of mixed linear models to data and enables you to use these fitted models to make statistical inferences ...
  62. [62]
    [PDF] Linear Mixed Models in Clinical Trials using PROC MIXED
    This paper mainly illustrates how to use PROC MIXED to fit linear mixed models in clinical trials. We first introduce the statistical background of linear ...
  63. [63]
    [PDF] Multilevel mixed-effects linear regression - Stata
    Example 1: Two-level random intercept model. Consider a longitudinal dataset ... Example 3: Two-level model with correlated random effects. We generalize ...
  64. [64]
    [PDF] me — Introduction to multilevel mixed-effects models - Stata
    We let mecmd stand for any mixed-effects command, such as mixed, melogit, or meprobit, except menl. menl models the mean function nonlinearly and thus has a ...
  65. [65]
    Linear Mixed Effects Models - statsmodels 0.14.4
    Linear Mixed Effects models are used for regression analyses involving dependent data. Such data arise when working with longitudinal and other study designs.
  66. [66]
    Generalized Linear Mixed Effects Models - statsmodels 0.14.4
    Generalized Linear Mixed Effects (GLIMMIX) models are generalized linear models with random effects in the linear predictors.
  67. [67]
    fitlme - Fit linear mixed-effects model - MATLAB - MathWorks
    This MATLAB function returns a linear mixed-effects model, specified by formula, fitted to the variables in the table or dataset array tbl.Description · Examples · Input Arguments · Name-Value Arguments
  68. [68]
    fitlmematrix - Fit linear mixed-effects model - MATLAB - MathWorks
    This MATLAB function creates a linear mixed-effects model of the responses y using the fixed-effects design matrix X and random-effects design matrix or ...Description · Examples · Input Arguments · Name-Value Arguments
  69. [69]
    A Julia package for fitting (statistical) mixed-effects models - GitHub
    This package defines linear mixed models ( LinearMixedModel ) and generalized linear mixed models ( GeneralizedLinearMixedModel ).