
Factorial experiment

A factorial experiment is an experimental design that simultaneously investigates the effects of two or more variables, known as factors, on a dependent or response variable, by testing all possible combinations of the factor levels to evaluate both individual main effects and their interactions. This approach contrasts with one-factor-at-a-time methods by allowing researchers to efficiently study multiple variables and detect how factors influence each other, making it a cornerstone of modern experimental design in fields such as agriculture, engineering, and clinical trials.

The origins of factorial experiments trace back to the work of British statistician Ronald A. Fisher in the 1920s and 1930s at the Rothamsted Experimental Station, where he developed these designs to optimize agricultural research by accounting for multiple soil and treatment variables. Fisher's foundational book, The Design of Experiments (1935), formalized the factorial approach, emphasizing randomization, replication, and blocking to minimize bias and enhance validity. This innovation dramatically improved the efficiency of experiments compared to earlier sequential testing methods, enabling broader generalizability of results.

In a factorial experiment, each factor is varied at specified levels, typically two or more discrete values such as low and high dosages in a trial, and the design is denoted by the number of levels per factor (e.g., a 2×3 design for two factors with 2 and 3 levels, respectively, yielding 6 combinations). Main effects assess the independent impact of each factor, while interactions reveal whether the effect of one factor depends on the level of another, which is crucial for understanding complex real-world phenomena. Experiments often incorporate replication across runs and randomization of run order to control for extraneous variables and ensure statistical robustness.

Factorial designs are classified as full factorial, which include all combinations for complete resolution of effects, or fractional factorial, which use a subset of combinations to reduce experimental costs while still estimating key effects, particularly useful when many factors are involved. Analysis typically employs analysis of variance (ANOVA) to partition variance into components attributable to main effects and interactions, with significance tested via F-tests. These designs are highly efficient, often requiring fewer runs than separate one-factor experiments, and their orthogonality supports clear separation of effects for reliable conclusions.

Fundamentals

Definition

A factorial experiment is a type of experimental design in which multiple variables, known as factors, are varied simultaneously across their specified levels to evaluate their individual (main) effects and combined (interaction) effects on a dependent variable, referred to as the response. This approach allows researchers to systematically test all possible combinations of factor levels, providing a comprehensive picture of how factors influence the response in isolation and together, which is essential for understanding complex systems in fields such as agriculture, engineering, and medicine.

In a factorial experiment, each factor is an independent variable that can take on two or more levels, such as low and high settings for temperature or dosage. The unique combinations of these levels across all factors are called runs or treatment combinations, and in a full factorial design, every possible run is executed to ensure complete crossing of factors. For k factors with n_1, n_2, \dots, n_k levels respectively, the total number of runs required is the product n_1 \times n_2 \times \dots \times n_k; for instance, a two-level design with k factors yields 2^k runs.

Unlike one-factor-at-a-time (OFAT) designs, which vary only one factor while holding others constant and thus require more runs to explore interactions, factorial experiments test multiple factors concurrently, enhancing efficiency by detecting both main effects and interactions within fewer total experiments. This is particularly valuable in resource-limited settings, as it provides broader inferential power and reduces the risk of missing interactions that OFAT approaches cannot detect.

Factorial experiments are a form of controlled experimentation, in which researchers deliberately manipulate factor levels to observe causal effects on the response variable, distinguishing them from observational studies that merely record associations without intervention. The response is the measurable outcome of interest, such as crop yield in an agricultural trial or a measured clinical outcome in a medical test, and it must be quantifiable to enable statistical analysis of effects.
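As a minimal sketch of this run-counting rule, the following R snippet enumerates every treatment combination of a hypothetical 2 × 3 × 2 design; the factor names and levels are illustrative only, not drawn from any study discussed here.

# Enumerate all runs of a full factorial design (hypothetical factors).
dose  <- c("low", "high")           # factor 1: 2 levels
temp  <- c("cold", "warm", "hot")   # factor 2: 3 levels
shift <- c("day", "night")          # factor 3: 2 levels

runs <- expand.grid(dose = dose, temp = temp, shift = shift)
nrow(runs)  # 2 * 3 * 2 = 12 runs, matching n1 x n2 x ... x nk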

Notation

In factorial experiments, the design is commonly described using notation that specifies the number of levels for each factor. For symmetric designs where all factors have the same number of levels, the notation p^k indicates k factors each with p levels, resulting in p^k combinations; for example, a 2^3 design involves three factors at two levels each, yielding eight combinations. Asymmetric designs, with varying levels across factors, are denoted by products such as 2 \times 3 \times 2, representing one factor at two levels, another at three, and a third at two, for a total of 12 combinations. This notation facilitates clear communication of the experiment's structure and scale.

Factor levels are coded to simplify analysis and interpretation. For quantitative factors in two-level designs, levels are typically coded as -1 for the low setting and +1 for the high setting, centering the values around zero and scaling the range to 2 units; this coding is particularly useful for estimating effects via contrasts. Qualitative factors are often labeled with uppercase letters (e.g., A, B, C), with levels denoted by lowercase letters or descriptive terms (e.g., a_1 for the first level of A, or A_low and A_high); an alternative binary coding of 0 and 1 may be used for computational convenience in some software implementations.

Treatment combinations are represented either as tuples specifying levels (e.g., (low, high, low) for three factors) or as multiplicative codes using the numeric values (e.g., -1 \times +1 \times -1); in Yates' standard order for two-level designs, combinations are listed systematically, such as (1) for all levels low, a for high A with others low, b for high B with others low, and ab for both A and B high with others low.

The response variable, representing the measured outcome, is denoted as Y, with subscripts indicating the specific treatment combination and replication; for a three-factor design, this is often Y_{ijk\ell}, where i, j, and k index the levels of factors A, B, and C, respectively, and \ell denotes the replicate within that combination. Regarding degrees of freedom, the total degrees of freedom for the experiment equal the number of observations minus 1 (i.e., total df = N - 1, where N is the total number of observations). For a factor with l levels, the main-effect degrees of freedom are l - 1; in two-level designs, this simplifies to 1 df per main effect and per interaction.
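To make the coding and ordering concrete, here is a small illustrative sketch, assuming base R only, that builds a 2^3 design with -1/+1 coding and attaches Yates' standard-order labels.

# Illustrative sketch: a 2^3 design in Yates' standard order with -1/+1 coding.
# expand.grid varies its first argument fastest, which matches Yates order
# when the factors are supplied as A, B, C.
design <- expand.grid(A = c(-1, 1), B = c(-1, 1), C = c(-1, 1))

# Yates labels: lowercase letters name the factors at their high level,
# with "(1)" denoting the all-low run.
labels <- apply(design, 1, function(row) {
  high <- tolower(names(row))[row == 1]
  if (length(high) == 0) "(1)" else paste(high, collapse = "")
})
cbind(design, label = labels)  # rows: (1), a, b, ab, c, ac, bc, abc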

Historical Development

Origins

Early multi-factor experiments that foreshadowed the development of factorial designs can be traced to the mid-19th century, when John Bennet Lawes and Joseph Henry Gilbert established the Rothamsted Experimental Station in 1843 to systematically study the effects of fertilizers on crop yields. Their pioneering long-term field trials, including the Broadbalk wheat experiment starting in 1843 and the Hoosfield barley experiment in 1852, examined interactions between soil conditions and various nutrient applications, such as mineral salts with and without nitrogen, across multiple plots under continuous cropping and rotation systems. These efforts demonstrated the essential role of combined nutrients like nitrogen, phosphorus, and potassium in plant growth, establishing foundational evidence for multi-factor influences on agricultural productivity.

This work addressed limitations in earlier classical approaches, exemplified by Justus von Liebig's 1840 mineral theory, which focused on individual essential elements (nitrogen, phosphorus, and potassium) as limiting factors without fully accounting for their interactions. Lawes and Gilbert's multi-treatment designs revealed inefficiencies in such single-factor perspectives by showing, for instance, that nitrogen responses depended on phosphorus availability and that legumes could fix atmospheric nitrogen, countering Liebig's emphasis on soil-derived minerals.

By the early 20th century, agricultural research increasingly recognized the need for systematic multi-factor trials to capture real-world complexities beyond isolated variables, driven by expanding expertise in fields like statistics and agronomy at institutions such as Rothamsted. This transition set the stage for Ronald A. Fisher's formalization of factorial methods during his tenure there from 1919 onward. In publications from 1921 to 1926, including Statistical Methods for Research Workers (1925) and "The Arrangement of Field Experiments" (1926), Fisher championed factorial designs as superior to single-factor tests, arguing that they enabled efficient detection of interactions using fewer experimental units while controlling for variability in field conditions. The term "factorial design" first appeared in print in Fisher's seminal 1935 book, The Design of Experiments, which dedicated a chapter to its principles and application in scientific inquiry.

Key Advancements

Frank Yates made significant contributions to experimental design in the 1930s, particularly through his development of methods for confounding and fractional replication in factorial designs, which addressed the challenges of implementing large-scale experiments with limited resources. His work on partial confounding allowed researchers to estimate main effects while sacrificing some higher-order interactions, making factorial designs more feasible for practical applications in agriculture and industry. Yates' seminal 1937 publication, The Design and Analysis of Factorial Experiments, provided systematic procedures for constructing and analyzing these designs, including tables for efficient confounding patterns.

In the mid-20th century, factorial designs gained widespread adoption in industrial experimentation, notably through their integration with response surface methodology (RSM), introduced by George E. P. Box and K. B. Wilson in 1951. Their approach built on factorial structures to model quadratic responses and optimize processes, such as in chemical production, by using sequential experimentation to approximate optimal conditions. This extension enabled the exploration of curved response surfaces beyond linear main effects, facilitating improvements in manufacturing efficiency.

The computational era from the 1960s onward marked a pivotal advancement with the integration of computers into factorial analysis, exemplified by the development of the GENSTAT software at Rothamsted Experimental Station. First released in 1968, GENSTAT automated the complex calculations for unbalanced and confounded factorial designs, reducing manual errors and enabling larger datasets in agricultural and biological research. This software's capabilities for variance analysis and design generation democratized access to sophisticated factorial methods, supporting their broader application in experimental sciences.

Post-2000 developments have incorporated factorial designs into high-throughput screening and advanced optimization frameworks, enhancing their utility in biotechnology, industrial processes, and data-intensive research. In high-throughput contexts, fractional factorials efficiently identify key factors among many variables, as demonstrated in stem cell viability studies where they screened additive combinations rapidly. Concurrently, Bayesian approaches to factorial designs, emerging prominently in recent decades, have introduced probabilistic modeling to handle uncertainty and interactions more robustly, particularly in time-course experiments and industrial optimization. These methods, often leveraging Markov chain Monte Carlo sampling for posterior inference, allow adaptive designs that update based on accumulating data, improving efficiency over classical frequentist analyses.

Benefits and Limitations

Advantages

Factorial experiments provide significant efficiency in estimating effects by allowing researchers to assess multiple factors and their interactions using fewer experimental runs than conducting separate one-factor-at-a-time (OFAT) studies for each factor. This approach reduces the total number of trials required while maintaining or improving the precision of estimates for main effects and interactions. A key advantage is the ability to detect interaction effects, which reveal how the influence of one factor on the response depends on the levels of other factors, information that OFAT designs cannot capture because they examine factors in isolation. By jointly varying all factors, factorial designs uncover synergistic or antagonistic relationships that might otherwise go unnoticed, leading to more accurate models of the underlying process.

Factorial experiments promote an economy of experimentation through the reuse of data across multiple analyses; for instance, in a full 2^k design, each experimental run contributes information to the estimation of all k main effects and higher-order interactions, maximizing the informational yield per trial. This is particularly valuable in resource-constrained settings, such as industrial or clinical research, where minimizing runs without sacrificing insight is essential. The designs also enhance robustness by producing results that are generalizable across the tested levels of all factors simultaneously, providing a broader understanding of the system's behavior than isolated tests. Furthermore, factorial setups facilitate the inclusion of replicates within the design to estimate experimental error, improving the reliability of inferences about effects and interactions.

An illustrative example is an experiment at SKF, the Swedish bearing manufacturer, where statistician Christer Hellstrand designed a 2^3 experiment on factors including inner ring heat treatment, outer ring osculation, and cage design. The experiment revealed that increasing outer ring osculation and inner ring heat treatment together extended bearing life fivefold, saving tens of millions of dollars.

Disadvantages

One primary disadvantage of full factorial experiments is the exponential growth in the number of experimental runs required, which scales as s^k for k factors each at s levels, leading to substantial resource demands even for moderate numbers of factors. For instance, a design with 5 factors at 3 levels each demands 243 runs, while one with 10 binary factors requires 1,024 runs, often rendering such experiments impractical without extensive time and materials. This escalation in scale quickly overwhelms budgets and timelines, particularly in industrial or clinical settings where each run involves costly procedures or materials.

While full factorial designs can estimate all effects, practical interpretation often assumes that higher-order interactions are negligible (the sparsity-of-effects principle); significant higher-order interactions can complicate analysis, and resource constraints may nonetheless force a shift to fractional designs, which sacrifice the ability to estimate those interactions. In unreplicated designs, where only one observation per treatment combination is collected to minimize runs, estimating experimental error becomes challenging without replicates, as higher-order interactions must be pooled as the error term, potentially biasing results if those interactions are not truly negligible. Additionally, the overall cost and time demands intensify with increasing factors, as blocking may be required to control for extraneous variables, further complicating execution without adequate resources.

To mitigate these limitations, researchers often employ fractional factorial designs or screening experiments, which reduce the number of runs, such as using a 2^{k-p} fraction to estimate main effects and low-order interactions, while preserving essential information at a fraction of the cost. These approaches balance the trade-off between comprehensive interaction detection and practical constraints, allowing initial screening of vital factors before committing to fuller designs if needed; the run-count arithmetic is illustrated in the sketch below.
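The growth in run counts is easy to verify directly; this small R sketch contrasts full-factorial growth with one possible fractional alternative (the 2^{10-5} fraction shown is just an example of a standard reduction, not a recommendation for any particular study).

# Run counts: full factorials grow exponentially in the number of factors.
3^5         # 5 factors at 3 levels: 243 runs
2^10        # 10 two-level factors: 1024 runs
2^(10 - 5)  # an example 2^(10-5) fractional design needs only 32 runs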

Design and Implementation

Basic Factorial Designs

Basic factorial designs, also known as full factorial designs, involve testing all possible combinations of levels for a set of factors, providing a complete examination of main effects and interactions in experimental settings. These designs are particularly useful for two-level factors, resulting in 2^k experimental runs for k factors, and form the foundation of design of experiments (DOE) methodologies. Developed in the early 20th century by statisticians like R. A. Fisher, they emphasize systematic construction to ensure balance and efficiency in effect estimation.

The construction of basic 2^k factorial designs proceeds recursively, starting from simpler designs and building upward. For instance, a 2^1 design consists of two runs: one at the low level and one at the high level of the single factor. To create a 2^2 design, duplicate the 2^1 runs and flip the levels of the new second factor in the duplicated set, yielding four unique combinations. This process extends to higher k; for a 2^3 design, duplicate the 2^2 design and flip the levels of the third factor in the copy, producing eight runs that cover all combinations. Such recursive duplication ensures the design remains balanced and allows for the estimation of all effects without aliasing in full factorials (a code sketch of this construction follows the example below).

Replication is incorporated by repeating each treatment combination multiple times, which provides degrees of freedom for estimating experimental error and improves precision. A minimum of two replicates per condition is typically required for basic variance estimation, though more may be used depending on the desired statistical power. For example, in a 2^2 design with two replicates, the total number of runs becomes eight, allowing separation of systematic effects from random variation.

Randomization plays a crucial role in basic factorial designs by assigning the run order randomly to experimental units, thereby minimizing systematic biases from uncontrolled factors like time trends or environmental changes. This practice, integral to valid statistical inference, ensures that observed differences are attributable to the factors under study rather than extraneous influences. Blocking is employed to group runs into subsets that control for known nuisance factors, such as shifts or batches, enhancing the design's robustness without increasing the number of full factorial runs. For instance, runs might be divided into blocks corresponding to day and night operations, with randomization applied within each block to further reduce variability.

A simple example is a 2 \times 2 factorial design examining the effects of two factors, A and B, each at low and high levels, on a chemical reaction yield. The four treatment combinations, or runs, are:
Run  Factor A  Factor B
1    Low       Low
2    Low       High
3    High      Low
4    High      High
These runs would be randomized in order and potentially replicated or blocked as needed.
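The duplicate-and-flip construction described above can be scripted directly; the following sketch (the function name full2k is ours, base R only) builds a 2^k design in -1/+1 coding and then randomizes the run order, as one would before execution.

# Recursive construction of a 2^k full factorial in -1/+1 coding.
full2k <- function(k) {
  design <- matrix(c(-1, 1), ncol = 1)  # the 2^1 base design
  if (k > 1) {
    for (i in 2:k) {
      # duplicate the current runs; the new factor is -1 in one copy, +1 in the other
      design <- rbind(cbind(design, -1), cbind(design, +1))
    }
  }
  colnames(design) <- LETTERS[1:k]
  design
}

d <- full2k(2)        # the four runs of the 2 x 2 example above
d[sample(nrow(d)), ]  # randomize run order before executing the experiment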

Advanced and Fractional Designs

Fractional factorial designs extend basic two-level experiments to larger numbers of factors by selecting a fraction of the full 2^k runs, denoted as 2^{k-p}, where the design is a 1/2^p fraction of the full factorial, allowing estimation of main effects and low-order interactions with reduced experimental effort. For instance, a 2^{4-1} design requires only 8 runs to study 4 factors, a half-replicate of the full 2^4 design, where one factor is defined as the product of the others (e.g., D = ABC). The aliasing structure in these designs arises from the defining relation, which specifies aliases between effects; in the 2^{4-1} design with I = ABCD, two-factor interactions are aliased pairwise, such as AB = CD, meaning the estimate for AB includes any effect of CD.

Resolution classifies these designs by the length of the shortest word in the defining relation, with higher resolution reducing aliasing among low-order effects. Resolution IV designs, for example, ensure main effects are unaliased with two-factor interactions, while resolution V designs additionally leave two-factor interactions unaliased with one another, making them preferable for detecting interactions.

For screening many factors at two levels, Plackett-Burman designs provide efficient non-regular alternatives to the 2^{k-p} series, using N runs where N is a multiple of 4 (e.g., 12 runs for up to 11 factors) and focusing on main effects while assuming higher-order interactions are negligible. These non-regular designs are particularly useful in early-stage experimentation to identify dominant factors with minimal resources.

Modern extensions include Taguchi methods, which employ orthogonal arrays, themselves special fractional factorials, to enhance design robustness against noise, optimizing both mean response and variability for quality-engineering applications. Post-1990s advancements in computer-generated designs use algorithmic optimization criteria, such as D-optimality (maximizing the determinant of the information matrix for precise parameter estimation), to create custom fractional factorials tailored to specific models and constraints beyond classical catalogs. These advanced designs address gaps in high-dimensional settings, such as biopharmaceutical production, where fractional factorials optimize processes in mammalian cells like Chinese hamster ovary (CHO) cells to enhance recombinant protein yield amid numerous variables. In machine learning experiments during the 2020s, they facilitate hyperparameter tuning and model robustness testing in pipelines with vast factor spaces.
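As a sketch of how a generator induces aliasing, the following base-R snippet constructs the 2^{4-1} half-fraction with D = ABC and confirms two aliases implied by the defining relation I = ABCD.

# Build the 2^(4-1) half-fraction: start from a full 2^3 and set D = ABC.
base <- expand.grid(A = c(-1, 1), B = c(-1, 1), C = c(-1, 1))
frac <- transform(base, D = A * B * C)  # 8 runs covering 4 factors

# With I = ABCD, the AB and CD contrast columns are identical (aliased).
all(frac$A * frac$B == frac$C * frac$D)  # TRUE
# Each main effect is aliased with a three-factor interaction (e.g., A = BCD).
all(frac$A == frac$B * frac$C * frac$D)  # TRUE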

Effects and Interactions

Main Effects and Contrasts

In factorial experiments, a main effect represents the average difference in the response variable associated with a change in the level of a single factor, averaging over the levels of all other factors. For a two-level factor, this is typically computed as the difference between the average response at the high level and the average response at the low level. This decomposition allows researchers to isolate the independent contribution of each factor to the overall variation in the response.

Main effects are often estimated using contrasts, which are linear combinations of the response values designed to isolate specific effects. A contrast C takes the general form C = \sum c_i y_i, where the y_i are the observed responses and the c_i are coefficients that sum to zero, typically balanced with equal numbers of positive and negative values (e.g., +1 and -1 for two-level designs). These contrasts provide a framework for partitioning the variability in the data, enabling precise estimation of effects without overlap in balanced designs.

For a two-level factorial design with k factors, the main effect of factor A, denoted \text{Effect}_A, is estimated as \text{Effect}_A = \frac{1}{2^{k-1}} \left( \sum_{\text{high } A} y_i - \sum_{\text{low } A} y_i \right), where the sums are over the responses at the high and low levels of A, respectively, and each level appears in 2^{k-1} treatment combinations. This arises from the contrast coefficients, which assign +1 to high levels and -1 to low levels of A, with the scaling factor ensuring the estimate reflects the effect per unit change. In replicated designs, averages replace the sums accordingly.

Each main effect for a factor with l levels has l - 1 degrees of freedom, corresponding to the number of independent contrasts needed to describe the variation across levels. For two-level factors, this simplifies to 1 degree of freedom per main effect. In balanced factorial designs, the contrasts for different main effects (and interactions) are orthogonal, meaning their coefficient vectors satisfy \sum_i c_{ij} c_{im} = 0 for distinct effects j and m. This orthogonality ensures that estimates of different effects are uncorrelated, facilitating efficient partitioning of variance and simplifying statistical analysis.
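A short sketch with hypothetical responses shows the contrast computation for a 2^2 design; note that the contrast estimate and the difference of level means agree.

# Estimating a main effect via a contrast in a 2^2 design (hypothetical data).
A <- c(-1,  1, -1,  1)   # coded levels of factor A
B <- c(-1, -1,  1,  1)   # coded levels of factor B
y <- c(20, 30, 25, 41)   # illustrative responses, one per run

k <- 2
sum(A * y) / 2^(k - 1)              # contrast form: (30 + 41 - 20 - 25) / 2 = 13
mean(y[A == 1]) - mean(y[A == -1])  # mean-difference form: also 13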

Interaction Effects

In factorial experiments, interaction effects capture the non-additive joint influence of multiple factors on the response variable, where the magnitude or direction of one factor's effect varies depending on the level of another factor. This departs from the assumption of additivity in main effects, highlighting synergies or antagonisms among factors that cannot be predicted from individual effects alone.

For a two-way interaction AB in a 2×2 factorial design, the effect is quantified using the contrast formula AB = \frac{1}{2} \left[ (\bar{y}_{A\text{ high},B\text{ high}} + \bar{y}_{A\text{ low},B\text{ low}}) - (\bar{y}_{A\text{ high},B\text{ low}} + \bar{y}_{A\text{ low},B\text{ high}}) \right], where \bar{y} represents the mean response at the indicated factor levels. This measure indicates how the difference in responses between levels of A changes across levels of B (or vice versa).

Higher-order interactions, such as the three-way ABC interaction, extend this concept by measuring deviations from the additivity of main effects and two-way interactions; specifically, they occur when a two-way interaction (e.g., AB) differs across levels of the third factor C. In general, for any interaction involving a subset of factors in a 2^k design with coded levels \pm 1, the interaction effect is estimated from the contrast whose coefficients are the products of the coded levels of those factors across all runs. Main effects serve as the foundational building blocks upon which these interactions are assessed.

A conceptual example appears in agricultural studies of maize, where temperature and soil moisture interact such that high temperatures diminish yields more under low moisture (dry conditions) than under high moisture, due to exacerbated stress on reproductive processes like pollen viability and silk emergence. Interaction plots visualize this by plotting mean yields for temperature levels at each moisture level, revealing non-parallel lines that confirm the dependency; parallel lines would indicate no interaction.
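The 2×2 contrast formula and the interaction plot can both be reproduced in a few lines of R; the cell means below are hypothetical, chosen only so that the plotted lines are visibly non-parallel.

# AB interaction in a 2x2 design (hypothetical cell means).
A <- factor(c("low", "high", "low", "high"), levels = c("low", "high"))
B <- factor(c("low", "low", "high", "high"), levels = c("low", "high"))
y <- c(20, 30, 25, 41)

# AB = 0.5 * [(high,high + low,low) - (high,low + low,high)]
0.5 * ((41 + 20) - (30 + 25))  # = 3: A's effect is 10 at low B but 16 at high B

# Non-parallel lines in the interaction plot signal the interaction.
interaction.plot(x.factor = B, trace.factor = A, response = y)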

Analysis Methods

Statistical Procedures

The statistical analysis of factorial experiments is grounded in a linear model framework, expressed as Y_{ijk\dots} = \mu + \sum \tau_p + \sum \gamma_{pq} + \sum \delta_{pqr} + \dots + \epsilon, where Y is the response, \mu is the grand mean, \tau_p denotes main effects for factor p, \gamma_{pq} represents two-way interactions, higher-order terms like \delta_{pqr} capture further interactions, and \epsilon is the random error. Factors are typically coded as orthogonal variables, such as -1 and +1 for two-level designs, which simplifies estimation of effects and ensures interpretability in terms of deviations from the grand mean. This full model allows for the partitioning of variance attributable to each component, assuming additivity unless interactions are present.

Analysis of variance (ANOVA) serves as the cornerstone for inference in factorial designs, partitioning the total sum of squares (SST) into sums of squares for main effects (e.g., SSA, SSB), interaction effects (e.g., SSAB), and residual error (SSE), such that SST = SSA + SSB + SSAB + \dots + SSE. Degrees of freedom are similarly allocated, and significance is assessed via F-tests, F = \frac{MS_{\text{effect}}}{MS_E}, where the mean square for an effect is its sum of squares divided by its degrees of freedom, and MS_E is the mean square error. These tests evaluate whether main effects or interactions explain a significant portion of the variability beyond chance, with p-values derived from the F-distribution under the null hypothesis of no effect. ANOVA assumes balanced designs for optimal power but can accommodate imbalances through Type II or III sums of squares.

As an alternative to traditional ANOVA, multiple linear regression can fit the factorial model directly, specified as Y = \beta_0 + \sum \beta_p X_p + \sum \beta_{pq} X_p X_q + \sum \beta_{pqr} X_p X_q X_r + \dots + \epsilon, with predictors X coded as \pm 1 so that the coefficients \beta correspond to half the effect sizes for main effects and interactions. This regression approach yields identical F-tests and estimates to ANOVA for balanced designs but offers flexibility for unbalanced data, continuous covariates, or testing individual terms via t-tests on coefficients. It is particularly advantageous in software environments that emphasize regression-based workflows.

These procedures rest on key assumptions: the errors \epsilon are normally distributed with mean zero, observations are independent, and variances are homoscedastic across all factor combinations. Diagnostics typically involve plotting residuals, such as Q-Q plots to check normality, scatterplots of residuals versus fitted values for homoscedasticity, and tests like Levene's for variance equality or Durbin-Watson for autocorrelation, to identify violations, which may require data transformations (e.g., logarithmic) or non-parametric alternatives if severe.

Contemporary implementations leverage statistical software for efficient computation; R's aov() function fits full factorial ANOVA models, handling crossed factors and providing summary tables with F-statistics, while Python's statsmodels library offers anova_lm() for regression-based ANOVA on factorial data, supporting both balanced and unbalanced cases. Post-2020 developments have incorporated machine learning extensions, such as Gaussian process regression within factorial frameworks, to address non-linear effects and improve predictions in complex experimental settings.
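To illustrate the regression formulation, this sketch fits a hypothetical replicated 2^2 data set with ±1-coded predictors; each coefficient comes out as half the corresponding factorial effect, and anova() reproduces the ANOVA partition.

# Regression formulation of a replicated 2^2 factorial (hypothetical data).
d <- expand.grid(A = c(-1, 1), B = c(-1, 1), rep = 1:2)
d$y <- c(20, 30, 25, 41, 22, 28, 27, 39)  # two replicates per treatment combination

fit <- lm(y ~ A * B, data = d)  # main effects plus the A:B interaction
coef(fit)   # with +/-1 coding, each coefficient equals half the factorial effect
anova(fit)  # partitions SST into SSA, SSB, SSAB, and SSE with F-tests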

Practical Analysis Example

A practical example of analyzing a 2^3 full factorial design involves a filtration process in a chemical pilot plant, where the goal is to maximize the filtration rate (measured in gallons per hour). The three factors are A: pressure (low: 10 psig, high: 15 psig), B: concentration of formaldehyde (low: 2%, high: 4%), and C: temperature (low: 24°C, high: 35°C), with the stirring rate held constant at its low level (15 rpm) to focus on these variables. This design requires 8 experimental runs, yielding the following response data:
Run  A (Pressure)  B (Concentration)  C (Temperature)  Filtration Rate (gal/h)
1    -             -                  -                45
2    -             -                  +                71
3    +             -                  -                48
4    +             -                  +                65
5    -             +                  -                68
6    -             +                  +                60
7    +             +                  -                80
8    +             +                  +                65
The Yates algorithm provides a manual method to compute the effects by iteratively summing and differencing the responses in a tabular format. After arranging the responses in Yates standard order, the first column lists the yields; each subsequent column contains the pairwise sums of the previous column in its top half and the pairwise differences (second entry minus first) in its bottom half, and the final column is divided by 2^{k-1} = 4 (for k = 3) to obtain the effects. Applying this yields the main effects A = 3.5, B = 11, and C = 5; the two-factor interactions AB = 5, AC = -4, and BC = -16.5; and the three-factor interaction ABC = 0.5. These values indicate the change in filtration rate when moving each factor or interaction from its low to its high level, averaged over the other factors. For unreplicated designs, while ANOVA can be performed by treating the highest-order interaction as error, it is common to use normal probability plots of the effects to visually identify significant ones, since formal tests have limited power with only 1 error df. Analysis of variance (ANOVA) quantifies the significance of these effects, here taking the three-factor interaction (ABC) as the estimate of experimental error because of its small magnitude in this unreplicated design. The sum of squares (SS) for each effect is calculated as (contrast)^2 / 8, where the contrast is the difference between the sums of responses at the + and - levels of the effect column. With 7 total degrees of freedom (df), the model assigns 1 df to each of the six remaining effects, leaving 1 df for error. The ANOVA table is as follows (p-values approximated from the F(1,1) distribution):
Source       SS     df  MS     F       p-value
A            24.5   1   24.5   49.0    0.090
B            242.0  1   242.0  484.0   0.030
C            50.0   1   50.0   100.0   0.064
AB           50.0   1   50.0   100.0   0.064
AC           32.0   1   32.0   64.0    0.080
BC           544.5  1   544.5  1089.0  0.019
Error (ABC)  0.5    1   0.5    -       -
Total        943.5  7   -      -       -
Thus, factor B (concentration) and the BC interaction are significant at \alpha = 0.05, as their F-statistics exceed the critical value (approximately 161.4 for F_{1,1}). Other effects are not significant at this level but may merit consideration when pooling terms or using graphical methods. Interpreting these results, the main effect plots show increasing trends for A, B, and C, with average responses at the high levels of 64.5, 68.25, and 65.25 gal/h, respectively, versus 61, 57.25, and 60.25 gal/h at the low levels. However, the interaction plots reveal non-parallel lines, confirming the significance of BC: the BC plot shows that high concentration boosts the rate markedly at low temperature (a difference of 27.5 gal/h in cell means) but slightly reduces it at high temperature (a difference of -5.5 gal/h), indicating antagonism. A Pareto chart of absolute effect sizes orders them as |BC| = 16.5 (dominant), |B| = 11, |C| = 5, |AB| = 5, |AC| = 4, |A| = 3.5, and |ABC| = 0.5, emphasizing that the concentration-temperature interaction drives most of the variation and should be prioritized for process optimization, along with the main effect of concentration. The settings that maximize the filtration rate are high pressure (A+), high concentration (B+), and low temperature (C-), achieving 80 gal/h in run 7. In modern practice, statistical software such as R facilitates this analysis, but for the full saturated model on unreplicated data, aov() provides effect estimates without p-values because no residual degrees of freedom remain. The following snippet fits the model (effects can be extracted manually or via other packages such as DoE.base for normal plots):
# Filtration-rate data in the run order of the table above;
# level 1 = low, level 2 = high for each factor.
data <- data.frame(
  Rate = c(45, 71, 48, 65, 68, 60, 80, 65),
  Pressure = factor(c(1, 1, 2, 2, 1, 1, 2, 2)),
  Concentration = factor(c(1, 1, 1, 1, 2, 2, 2, 2)),
  Temperature = factor(c(1, 2, 1, 2, 1, 2, 1, 2))
)
# Saturated model: all main effects and interactions of the three factors
model <- aov(Rate ~ Pressure * Concentration * Temperature, data = data)
summary(model)
This aligns with manual calculations for effects but requires additional steps for significance assessment in unreplicated cases.
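The Yates algorithm described earlier can also be scripted; the sketch below (the function name yates is ours) applies it to the filtration responses after rearranging them into Yates standard order for (A, B, C), reproducing the manually computed effects.

# Yates algorithm for a 2^k design; y must be in Yates standard order.
yates <- function(y) {
  k <- log2(length(y))
  for (pass in seq_len(k)) {
    odd  <- y[seq(1, length(y), by = 2)]
    even <- y[seq(2, length(y), by = 2)]
    y <- c(even + odd, even - odd)  # pairwise sums, then pairwise differences
  }
  y / 2^(k - 1)  # scale the contrasts into effects
}

# Filtration data reordered to (1), a, b, ab, c, ac, bc, abc
y <- c(45, 48, 68, 80, 71, 65, 60, 65)
effects <- yates(y)
names(effects) <- c("(twice the mean)", "A", "B", "AB", "C", "AC", "BC", "ABC")
round(effects, 2)  # A = 3.5, B = 11, AB = 5, C = 5, AC = -4, BC = -16.5, ABC = 0.5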
