
Main effect

In statistics, particularly within the framework of analysis of variance (ANOVA), a main effect refers to the direct influence of a single independent variable on the dependent variable, calculated by averaging across the levels of any other independent variables in an experimental design. This concept is fundamental to understanding how individual factors contribute to variability in outcomes, such as differences in response times or performance scores, without considering interactions between variables. Main effects are typically assessed in multi-factor experiments, like two-way or higher-order ANOVA, where each independent variable (factor) has multiple levels, and the analysis partitions the total variance into components attributable to each factor, interactions, and error. The significance of a main effect is tested using an F-statistic, which compares the mean square for the factor (MS_factor) to the mean square error (MS_error); a low p-value (e.g., < 0.05) provides evidence against the null hypothesis of equal population means across the factor's levels. For instance, in a study examining the impact of teacher expectations and student age on IQ scores, the main effect of teacher expectations would reveal overall differences in scores attributable to expectations alone, averaged over age groups. Interpretation of main effects must account for potential interactions, as a significant interaction between factors can qualify or alter the meaning of individual main effects; in such cases, the effect of one variable may depend on the levels of another, rendering simple main effect interpretations incomplete without further simple effects analysis. Thus, researchers often examine interaction terms first in ANOVA output to ensure accurate conclusions about main effects. This approach underscores the distinction between additive and interactive effects in experimental designs, promoting robust inference in fields like psychology, agriculture, and engineering.

Fundamentals

Definition

In statistical analysis, particularly in the context of analysis of variance (ANOVA), a main effect represents the independent influence of a single factor on the response variable in experimental designs, quantified as the average difference in response means across the levels of that factor, while marginalizing over (i.e., averaging across) the levels of all other factors in the design. This isolates the overall contribution of the factor, assuming no interactions unless separately assessed. Unlike marginal effects in regression models or observational studies—which typically denote the average change in the response for a unit increment in a predictor while holding other covariates fixed—main effects in ANOVA pertain specifically to categorical factors in controlled experiments and emphasize balanced averaging across combinations of factors rather than conditional holding. The concept of main effects emerged from Ronald A. Fisher's pioneering work on ANOVA and factorial designs for agricultural experiments at Rothamsted Experimental Station in the 1920s, where he formalized the decomposition of variance into components attributable to individual factors. In basic notation, for a factor A with a levels (i = 1 to a), the main effect at level i is expressed as \alpha_i = \bar{y}_{i..} - \bar{y}_{...}, the deviation of the marginal mean for level i from the grand mean across all observations.
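As a minimal illustration of this definition, the following Python sketch (with hypothetical data) computes main-effect estimates as deviations of marginal means from the grand mean, assuming a balanced two-factor layout summarized by its cell means:

```python
import numpy as np

# Hypothetical balanced 2x3 layout: rows = levels of factor A,
# columns = levels of factor B, entries = cell means.
cell_means = np.array([
    [4.0, 5.0, 6.0],   # A level 1 across the 3 levels of B
    [6.0, 7.0, 8.0],   # A level 2 across the 3 levels of B
])

grand_mean = cell_means.mean()
# Marginal means for A: average each row over the levels of B.
marginal_A = cell_means.mean(axis=1)
# Main effect of level i of A: deviation of its marginal mean
# from the grand mean (alpha_i = ybar_i.. - ybar_...).
alpha = marginal_A - grand_mean
print(alpha)   # [-1.  1.]; the estimates sum to zero by construction
```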

Role in Experimental Design

In the context of experimental design, main effects represent the individual contributions of treatment factors to the overall response in an analysis of variance (ANOVA) framework, as pioneered by Ronald A. Fisher in his development of factorial experiments during the early 20th century. Within additive models for ANOVA, the total variation in the response variable decomposes into main effects for each factor plus higher-order interaction terms, allowing researchers to partition the sources of variability systematically. This decomposition, expressed conceptually as the response model Y = \mu + \sum \text{main effects} + \sum \text{interactions} + \epsilon, underscores the role of main effects as the baseline components that capture the average influence of each factor across all levels of the others, independent of confounding from uncontrolled variables. Factorial designs particularly leverage main effects to evaluate the isolated impact of each independent variable on the dependent variable without confounding by other factors, enabling efficient assessment of multiple treatments within a single experiment. For instance, in a two-factor design, the main effect of one factor averages its effect across the levels of the second, providing clear insights into individual factor potency while maximizing experimental efficiency compared to one-factor-at-a-time approaches. This structure, originating from Fisher's agricultural experiments, facilitates the identification of key drivers of outcomes in fields like psychology and biology. Unlike blocking, which accounts for nuisance variables through randomization within blocks to reduce error variance, or covariates in ANCOVA that adjust for continuous predictors, main effects specifically target the direct effects of categorical treatment factors in crossed designs. Main effects thus emphasize controlled manipulations of interest, distinguishing them from strategies for handling extraneous influences. Interpreting main effects requires careful consideration of interactions; they are meaningful primarily when interactions are absent or non-significant, as significant interactions indicate that a factor's effect varies with the levels of another, rendering isolated main effect interpretations potentially misleading. In such cases, researchers must prioritize interaction analysis before drawing conclusions about individual factors, ensuring robust inference in experimental outcomes.
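For a two-factor layout, this conceptual decomposition is commonly written in the standard textbook form Y_{ijk} = \mu + \alpha_i + \beta_j + (\alpha\beta)_{ij} + \epsilon_{ijk}, where \alpha_i and \beta_j are the main effects of factors A and B, (\alpha\beta)_{ij} is their interaction, \epsilon_{ijk} is random error, and the sum-to-zero constraints \sum_i \alpha_i = \sum_j \beta_j = 0 (with analogous constraints on the interaction terms over each index) render the parameters identifiable.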

Estimation Methods

In One-Way Designs

In one-way designs, the estimation of main effects occurs within the context of a single factor, or treatment, with k fixed levels, where each level has n independent observations, assuming a balanced design for simplicity. This setup is foundational to analysis of variance (ANOVA), originally developed by Ronald A. Fisher to partition observed variability into components attributable to the factor and random error. Here, the main effect represents the only systematic effect present, as no other factors or interactions are considered. The statistical model for a one-way fixed-effects ANOVA is given by Y_{ij} = \mu + \tau_i + \epsilon_{ij}, where Y_{ij} is the j-th observation under the i-th level of the factor (i = 1, \dots, k; j = 1, \dots, n), \mu is the overall population mean, \tau_i is the fixed effect of the i-th level, and \epsilon_{ij} are independent random errors normally distributed with mean 0 and variance \sigma^2. The main effect estimate for level i is then \hat{\tau}_i = \bar{Y}_{i.} - \bar{Y}_{..}, where \bar{Y}_{i.} denotes the sample mean for level i and \bar{Y}_{..} is the grand mean across all observations. This least-squares estimator measures the deviation of each level's mean from the grand mean, providing a point estimate of the factor's influence. To quantify the overall variability due to the main effect, the sum of squares for the factor (often denoted SS_A) is calculated as SS_A = n \sum_{i=1}^k (\bar{Y}_{i.} - \bar{Y}_{..})^2. This term captures the between-group variation scaled by the sample size per level, serving as the basis for further analysis in the ANOVA table. The associated degrees of freedom for the main effect are k - 1, reflecting the number of independent comparisons among the k levels. Interpretation of the main effect estimates focuses on their sign and magnitude relative to the grand mean, which acts as a baseline. A positive \hat{\tau}_i indicates that level i elevates the response above the average, while a negative value indicates that it lowers the response below it; the absolute size quantifies the strength of this directional influence. These estimates assume the constraint \sum_{i=1}^k \tau_i = 0 for identifiability in the fixed-effects model.
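The following Python sketch (with made-up balanced data) computes these quantities directly from the formulas above:

```python
import numpy as np

# Hypothetical balanced one-way layout: k = 3 levels, n = 4
# observations per level (rows = levels).
y = np.array([
    [12.0, 11.5, 13.0, 12.5],   # level 1
    [14.0, 15.0, 14.5, 15.5],   # level 2
    [10.0,  9.5, 10.5, 11.0],   # level 3
])
k, n = y.shape

grand_mean = y.mean()
level_means = y.mean(axis=1)

# Least-squares main effect estimates: tau_i = ybar_i. - ybar_..
tau_hat = level_means - grand_mean

# Between-group sum of squares and degrees of freedom for the factor
ss_a = n * np.sum((level_means - grand_mean) ** 2)
df_a = k - 1
print(tau_hat, ss_a, df_a)
```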

In Multi-Factor Designs

In multi-factor designs, such as factorial experiments, the estimation of a main effect for a given factor involves marginalizing over the levels of all other factors to isolate its independent contribution to the response variable. This averaging process ensures that the effect attributed to the factor of interest is not confounded by the specific combinations of other factors. For instance, in a balanced two-way design with factor A having a levels and factor B having b levels, the main effect of A is computed by first obtaining the marginal means for each level of A, which average the cell responses across all levels of B. The least squares estimator for the main effect of level i of factor A is given by \hat{\alpha}_i = \frac{1}{b} \sum_{j=1}^b \bar{Y}_{ij.} - \bar{Y}_{...}, where \bar{Y}_{ij.} is the sample mean of the observations in the cell corresponding to level i of A and level j of B, and \bar{Y}_{...} is the grand mean of all observations. This formula represents the deviation of the marginal mean for level i of A from the overall mean, effectively capturing the average difference attributable to A while averaging out B's influence. The approach reduces to the one-way estimation as a special case when B has only one level. To quantify the total variation explained by the main effect of A, the sum of squares is calculated as SS_A = n \sum_{i=1}^a (\bar{Y}_{i..} - \bar{Y}_{...})^2, with degrees of freedom df_A = a - 1, where \bar{Y}_{i..} is the marginal mean for level i of A (averaged over B), and n denotes the number of observations at each level of A (the per-cell sample size times b in the balanced case). This measure partitions the total variability in a way that attributes to A the squared deviations of its marginal means from the grand mean, scaled appropriately by the design structure. This estimation procedure generalizes seamlessly to higher-order factorial designs, such as three-way or more complex layouts, where the main effect for a specific factor is estimated by averaging the response over all combinations of the remaining factors, thereby marginalizing over higher-order interactions. In cases of unequal cell sizes, or unbalanced designs, direct averaging is replaced by least squares estimation to obtain unbiased parameter estimates, or by weighted averages proportional to sample sizes, ensuring that the main effects reflect the underlying population differences without distortion from the imbalance.
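A minimal sketch of this marginalization, assuming hypothetical balanced raw data stored in an a × b × n NumPy array (here n is the per-cell sample size, so b × n matches the "observations per marginal level" used in the SS_A formula above):

```python
import numpy as np

# Hypothetical balanced two-way layout: a x b x n array of raw
# observations (a = 2 levels of A, b = 2 levels of B, n = 3 reps).
rng = np.random.default_rng(0)
y = rng.normal(loc=10.0, scale=1.0, size=(2, 2, 3))
y[1] += 3.0                       # build in a main effect of A

a, b, n = y.shape
grand_mean = y.mean()
marg_A = y.mean(axis=(1, 2))      # ybar_i.. : average over B and replicates

alpha_hat = marg_A - grand_mean   # main-effect estimates for A
ss_a = b * n * np.sum((marg_A - grand_mean) ** 2)
df_a = a - 1
print(alpha_hat, ss_a, df_a)
```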

Statistical Testing

Hypothesis Testing Procedures

In hypothesis testing for main effects within analysis of variance (ANOVA) frameworks, the null hypothesis posits that there is no effect of the factor on the response variable, meaning all associated population means are equal or, equivalently, all main effect parameters are zero. For a factor A with a levels, this is formally stated as H_0: \alpha_1 = \alpha_2 = \dots = \alpha_a = 0, where \alpha_i represents the main effect parameter for level i. The alternative hypothesis H_A asserts that at least one \alpha_i \neq 0, indicating a significant main effect. The primary inferential tool for testing this null hypothesis is the F-test, which compares the variability attributable to the main effect against the unexplained error variability. The test statistic is calculated as F = \frac{MS_A}{MS_E}, where MS_A is the mean square for factor A, given by MS_A = \frac{SS_A}{a-1} with SS_A as the sum of squares for A, and MS_E is the mean square error representing residual variability. Under the null hypothesis, this F-statistic follows an F-distribution with a-1 numerator degrees of freedom and N - a denominator degrees of freedom, where N is the total sample size. Ronald Fisher introduced this F-test in the context of ANOVA to assess variance partitions in experimental designs. In multi-factor designs, such as two-way ANOVA, separate F-tests are conducted for each main effect, with the test for a given factor analogous to the one-way case but using the appropriate sums of squares and degrees of freedom. For instance, in a two-way design with factors A and B, the main effect F-test for A uses MS_A / MS_E with degrees of freedom (a-1, N - ab), where b is the number of levels of B. If an interaction term is present, its significance is typically tested first; a significant interaction may qualify the interpretation of main effects, though main effect tests proceed independently under the fixed-effects model. The p-value from the F-test is the probability of observing an F-statistic at least as extreme as the calculated value assuming the null hypothesis is true. A common decision rule rejects H_0 if the p-value is less than a pre-specified significance level \alpha, such as 0.05, indicating sufficient evidence of a main effect. This threshold controls the Type I error rate at \alpha. Power analysis for detecting main effects relies on effect size measures to quantify the magnitude of non-null effects and inform sample size requirements. Eta-squared (\eta^2), defined as the proportion of total variance explained by the main effect (\eta^2 = SS_A / SS_{total}), serves as a key effect size metric, with guidelines classifying values of 0.01 as small, 0.06 as medium, and 0.14 as large. Partial eta-squared extends this for multi-factor designs by isolating the effect relative to other sources of variance. Higher effect sizes increase statistical power, the probability of correctly rejecting H_0 when a true main effect exists, typically targeted at 0.80 or higher in planning.
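As a concrete sketch of this procedure, the following Python snippet (with hypothetical one-way data) computes the F-test for a main effect via SciPy and eta-squared by hand:

```python
import numpy as np
from scipy import stats

# Hypothetical one-way data: a = 3 groups (factor levels).
g1 = np.array([12.0, 11.5, 13.0, 12.5])
g2 = np.array([14.0, 15.0, 14.5, 15.5])
g3 = np.array([10.0,  9.5, 10.5, 11.0])
groups = [g1, g2, g3]

# F-test of H0: all main-effect parameters are zero.
f_stat, p_value = stats.f_oneway(g1, g2, g3)

# Eta-squared as an effect size: SS_A / SS_total.
all_y = np.concatenate(groups)
grand = all_y.mean()
ss_a = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
ss_total = np.sum((all_y - grand) ** 2)
eta_sq = ss_a / ss_total
print(f_stat, p_value, eta_sq)
```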

Assumptions and Limitations

The analysis of main effects in experimental designs, particularly through analysis of variance (ANOVA), relies on several key assumptions to ensure valid inference. These include the independence of observations, which requires that data points are collected such that the value of one observation does not influence another, often achieved through random sampling or blocking in experimental setups. Additionally, the residuals (errors) should be normally distributed within each group, and the variances across groups must be homogeneous (homoscedasticity). Violations of these assumptions can compromise the reliability of hypothesis tests for main effects, as outlined in standard statistical procedures. Homogeneity of variances can be assessed using Levene's test, which evaluates whether the spread of data is similar across factor levels; a non-significant result (typically p > 0.05) supports the assumption. Non-normality of errors may lead to inflated Type I error rates, particularly in small samples or with skewed distributions, potentially resulting in false positives for main effects. Similarly, heteroscedasticity (unequal variances) can bias the F-statistic used in ANOVA, increasing error rates especially in unbalanced designs. In such cases, robust alternatives like Welch's ANOVA are recommended: it adjusts the test statistic and degrees of freedom to accommodate unequal variances and maintains control over Type I errors without requiring homogeneity of variances, making it suitable for main effect estimation when that assumption is violated. A significant limitation of main effect analysis arises when interactions between factors are present, as the average effect of a factor may obscure or mislead interpretations of group differences. For instance, qualitative interactions—where the direction of the main effect reverses across levels of another factor—can render the overall main effect uninterpretable, as it averages opposing trends. Quantitative interactions, involving differences in magnitude but not direction, may also qualify main effects, emphasizing the need to test and report interactions first. Traditional ANOVA focuses primarily on significance testing, often overlooking effect size measures such as partial eta-squared, which quantifies the proportion of variance explained by a main effect while partialling out other effects; values of 0.01, 0.06, and 0.14 indicate small, medium, and large effects, respectively, providing context beyond p-values. Main effect analysis should be avoided or deprioritized in designs exhibiting strong interactions, where interpreting the interaction term takes precedence to avoid misleading conclusions about individual factors. Overall, while ANOVA is robust to mild violations in large samples, persistent breaches necessitate transformations, non-parametric tests, or robust methods to safeguard the validity of main effect inferences.
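As a brief illustration of these diagnostics, the following Python sketch (with hypothetical heteroscedastic data) runs Levene's test via SciPy and then computes Welch's ANOVA from its standard formula; the hand-coded welch_anova helper is an illustrative implementation, not a library routine:

```python
import numpy as np
from scipy import stats

def welch_anova(groups):
    """Welch's heteroscedasticity-robust one-way ANOVA (standard formula)."""
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    means = np.array([np.mean(g) for g in groups])
    variances = np.array([np.var(g, ddof=1) for g in groups])
    w = n / variances                        # precision weights
    mw = np.sum(w * means) / np.sum(w)       # weighted grand mean
    num = np.sum(w * (means - mw) ** 2) / (k - 1)
    tmp = np.sum((1 - w / np.sum(w)) ** 2 / (n - 1))
    den = 1 + 2 * (k - 2) / (k ** 2 - 1) * tmp
    f = num / den
    df1, df2 = k - 1, (k ** 2 - 1) / (3 * tmp)
    return f, df1, df2, stats.f.sf(f, df1, df2)

# Hypothetical groups with unequal spreads
rng = np.random.default_rng(42)
groups = [rng.normal(10, 1, 12), rng.normal(11, 3, 12), rng.normal(12, 5, 12)]

# Levene's test: a non-significant p supports homogeneity of variances
lev_stat, lev_p = stats.levene(*groups)
print(lev_stat, lev_p)
print(welch_anova(groups))
```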

Applications and Examples

Illustrative Example

Consider a hypothetical experiment examining the effects of dose (factor A: low or high) and exposure time (factor B: 1 hour or 2 hours) on plant growth measured in centimeters, with three replicates per treatment combination for a total of 12 observations. This balanced two-way factorial design allows estimation and testing of the main effect of dose while controlling for time. The cell means, computed as the average growth within each dose-time combination, are as follows:
| Dose \ Time | 1 hour | 2 hours | Marginal mean (A) |
|---|---|---|---|
| Low | 6.5 | 8.5 | 7.5 |
| High | 11.5 | 13.5 | 12.5 |
| Marginal mean (B) | 9.0 | 11.0 | Grand mean: 10.0 |
The main effect estimates for factor A are obtained by subtracting the grand mean from each marginal mean for dose: \hat{\alpha}_1 = 7.5 - 10.0 = -2.5 for the low dose level and \hat{\alpha}_2 = 12.5 - 10.0 = 2.5 for the high dose level. These values represent the average deviation in plant growth attributable to dose, averaged across exposure times. To test the significance of this main effect, perform a two-way ANOVA, partitioning the total variability into components for dose, time, their interaction, and error. The sums of squares (SS) are calculated using standard formulas: for dose, SS_A = (number of observations per dose level) × (sum of squared deviations of the marginal means from the grand mean) = 6 × [(-2.5)² + (2.5)²] = 75. The degrees of freedom (df) for A is 1, so the mean square (MS_A) = 75 / 1 = 75. Assuming SS_error = 73 (df_error = 8, from total df = 11 minus 3 for the two factors and their interaction), MS_error = 73 / 8 = 9.125. The F-statistic for the main effect of dose is then MS_A / MS_error = 75 / 9.125 ≈ 8.22. The complete ANOVA table is:
| Source | df | SS | MS | F | p-value |
|---|---|---|---|---|---|
| Dose (A) | 1 | 75 | 75 | 8.22 | 0.02 |
| Time (B) | 1 | 12 | 12 | 1.3 | 0.28 |
| A × B | 1 | 0 | 0 | 0 | 1.00 |
| Error | 8 | 73 | 9.125 | | |
| Total | 11 | 160 | | | |
This table shows the F-statistic of 8.22 for dose, with a p-value of 0.02 (obtained from the F-distribution with df = 1, 8), rejecting the null hypothesis of no main effect at α = 0.05. The significant main effect indicates that plant growth differs substantially by fertilizer dose, with an average increase of 5 cm under high dose compared to low dose, averaged over exposure times (F(1,8) = 8.22, p = 0.02). No significant effects are found for time or the interaction. A plot of the marginal means for dose (7.5 cm for low, 12.5 cm for high) would visually emphasize this difference, with error bars representing the standard errors of the means to convey variability.
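The arithmetic above is straightforward to verify programmatically. The following Python sketch (using NumPy and SciPy) reproduces the main-effect estimates and the F-test for dose from the cell means and the assumed error sum of squares:

```python
import numpy as np
from scipy import stats

# Cell means from the example (rows: dose low/high; cols: 1 h / 2 h),
# three replicates per cell, so 6 observations per dose level.
cell_means = np.array([[6.5, 8.5],
                       [11.5, 13.5]])
grand = cell_means.mean()                     # 10.0
marg_dose = cell_means.mean(axis=1)           # [7.5, 12.5]
alpha_hat = marg_dose - grand                 # [-2.5, 2.5]

ss_a = 6 * np.sum((marg_dose - grand) ** 2)   # 6 * (6.25 + 6.25) = 75
ms_a = ss_a / 1                               # df_A = 1
ms_error = 73 / 8                             # assumed SS_error = 73, df = 8
f_stat = ms_a / ms_error                      # ~8.22
p_value = stats.f.sf(f_stat, 1, 8)            # ~0.02
print(alpha_hat, f_stat, p_value)
```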

Extensions and Variations

Non-parametric methods provide alternatives to traditional ANOVA for assessing main effects when data violate assumptions such as normality or homogeneity of variances. The Kruskal-Wallis test serves as a rank-based analog to the one-way ANOVA, evaluating the main effect of a single factor across multiple independent groups by ranking all observations and comparing the mean ranks between groups. For multi-factor designs, the aligned rank transform (ART) extends this approach by aligning the data for each effect of interest—such as main effects—before applying a rank transformation and conducting a standard ANOVA on the ranks, enabling nonparametric analysis of factorial structures without assuming normal distributions. In hierarchical or clustered data, mixed-effects models incorporate main effects of fixed factors while accounting for random effects from grouping variables, such as subjects or sites, to handle dependencies in repeated measures or nested designs. These models estimate main effects through fixed-effect coefficients in a linear predictor, with the lmer function in R's lme4 package facilitating fitting via maximum likelihood or restricted maximum likelihood (REML), as demonstrated in applications to longitudinal studies where random intercepts capture variability across clusters. Beyond significance testing, reporting effect sizes for main effects quantifies their practical magnitude in ANOVA contexts. Cohen's f measures the standardized spread of group means for a main effect, with benchmarks indicating small (f = 0.10), medium (f = 0.25), and large (f = 0.40) effects based on Cohen's conventions. Alternatively, omega-squared (ω²) provides a less biased estimate of the proportion of variance explained by a main effect, preferred over eta-squared for its correction of the upward bias that eta-squared exhibits in small samples. Recent advancements integrate main effect concepts with machine learning and Bayesian frameworks. In tree-based models like random forests, SHAP (SHapley Additive exPlanations) values approximate main effects by attributing the average marginal contribution of each feature to predictions, offering interpretable insights into feature importance akin to ANOVA main effects in non-linear settings. Bayesian estimation of main effects employs priors on effect sizes or variances to compute posterior distributions, enabling credible intervals and model comparisons via Bayes factors, as implemented in packages like brms for R to handle uncertainty in ANOVA-like designs.
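As a minimal illustration of the rank-based route, the sketch below (in Python, with hypothetical skewed data) applies the Kruskal-Wallis test from SciPy in place of a one-way main-effect F-test:

```python
import numpy as np
from scipy import stats

# Hypothetical skewed data where a rank-based test is safer than ANOVA.
rng = np.random.default_rng(1)
g1 = rng.exponential(scale=1.0, size=15)
g2 = rng.exponential(scale=1.5, size=15)
g3 = rng.exponential(scale=2.5, size=15)

# Kruskal-Wallis: rank-based analog of the one-way ANOVA main-effect test.
h_stat, p_value = stats.kruskal(g1, g2, g3)
print(h_stat, p_value)
```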
