
Multiple factor analysis

Multiple factor analysis (MFA) is a multivariate statistical technique that extends principal component analysis (PCA) to datasets consisting of multiple blocks or groups of variables—typically a mix of quantitative and qualitative types—measured on the same set of observations. By normalizing each block individually (typically scaling it so that its first eigenvalue equals one) and then concatenating the blocks for a global analysis, MFA balances the influence of disparate groups, enabling the identification of common structures across blocks while assessing their individual contributions and relationships. The method provides factor scores, loadings, and visualizations that summarize complex, multitable data in a unified framework, making it particularly suited to exploratory analysis where an ordinary PCA would be biased toward the larger or more variable blocks.

Developed by statisticians Brigitte Escofier and Jérôme Pagès in the late 1980s and early 1990s, MFA emerged as a synthesis of earlier multivariate approaches, including canonical analysis, Procrustes rotation, and individual differences scaling (INDSCAL), to address the challenges of integrating heterogeneous data tables. Their seminal work, detailed in a 1994 publication, formalized MFA as a weighted factorial method capable of processing both numerical and categorical variables observed on shared individuals, with implementations available in software packages such as AFMULT. Subsequent refinements, such as extensions for incomplete data and hierarchical structures, have built on this foundation, enhancing its applicability in modern computational environments.

At its core, MFA operates in two main stages: first, a separate PCA (or multiple correspondence analysis for categorical blocks) is performed on each data table to obtain its first eigenvalue and derive partial factor scores; second, the normalized tables are aggregated into a composite table for an unweighted global PCA, which yields overall dimensions representing a consensus across blocks.
This process not only reveals how observations cluster but also quantifies the similarity between blocks through metrics such as RV coefficients, allowing researchers to evaluate whether certain variable groups align or diverge in explaining variance. Dual formulations of MFA exist for cases where the same variables are observed across different samples, further broadening its utility.

MFA finds extensive use in fields requiring the integration of diverse data sources, such as sensory science—where it analyzes panels of descriptors alongside instrumental measurements for products like wines or foods—and in market research for combining consumer surveys with demographic profiles. It has also been applied in taxonomy to diagnose species relationships from morphological and genetic blocks, in ecology for multiblock environmental data, and in the social sciences for exploring multifaceted survey responses. These applications highlight MFA's strength in providing interpretable, balanced insights into complex systems without requiring prior assumptions about variable importance.

Introduction

Definition and Objectives

Multiple factor analysis (MFA) is a principal component method designed for the simultaneous analysis of multiple groups of variables, which can be numerical and/or categorical, measured on the same set of observations. It aims to identify common underlying structures across these groups while evaluating the balance or relative contributions of each group to the overall analysis. The primary objectives of MFA include summarizing complex multi-table data in a lower-dimensional representation, detecting redundancies or complementarities between variable groups, and achieving a unified dimensionality reduction that accounts for the individual inertias of each group. By normalizing and weighting the groups appropriately, MFA facilitates the exploration of shared patterns without one group dominating the results because of scale differences. This approach builds on principal component analysis (PCA) for continuous variables and multiple correspondence analysis (MCA) for categorical ones, adapting their principles to multi-group settings.

Key benefits of MFA lie in its ability to handle mixed data types without requiring homogeneity across groups, to enable direct comparisons of the importance of different variable sets, and to support exploratory analyses in diverse fields such as sensory evaluation and multi-omics studies. In sensory evaluation, for instance, it allows integration of assessor ratings and physicochemical measurements to assess product perceptions holistically. Similarly, in multi-omics research, MFA integrates datasets from different molecular layers to uncover coordinated biological variations. The main outputs of MFA consist of a global factor map representing the compromise across all groups, partial factor maps illustrating each group's specific structure projected onto the global axes, and balance indicators that quantify the contributions and representation quality of individual groups.
These visualizations and metrics provide insights into both the consensus and the discrepancies among the data tables.

Relation to Other Factorial Methods

Multiple factor analysis (MFA) extends principal component analysis (PCA) to the analysis of multiple data tables describing the same set of observations, addressing the limitation of standard PCA—which is defined for a single block of data—by incorporating a normalization step for each group to ensure balanced contributions. In PCA, the focus is on maximizing variance within a single table through eigenvalue decomposition, whereas MFA first performs a PCA on each individual group (or block) of variables, scales the group's data by dividing by the square root of the first eigenvalue of that group's analysis to normalize its maximum variance, and then concatenates the normalized tables for a global PCA. This adaptation prevents any single group with larger variance from dominating the analysis, allowing for a joint representation that respects the structure of heterogeneous data sets.

For groups involving categorical variables, MFA integrates principles from multiple correspondence analysis (MCA) by treating such data as indicator (disjunctive) tables and adjusting for category frequencies to align the scaling with continuous variables, effectively performing an MCA within each qualitative group before normalization. Unlike standalone MCA, which analyzes categorical data using chi-squared distances to handle the redundancy inherent in multiple categories, MFA embeds this treatment within the multi-group framework, representing categories by their centers of gravity rather than by disjunctive coding alone, which facilitates integration with quantitative groups without distorting the overall factor space. This hybrid approach ensures that categorical and continuous variables contribute comparably to the global factors after each group is scaled by the square root of its first eigenvalue.

MFA also distinguishes itself from other multi-table methods such as STATIS, which seeks a compromise between tables by optimizing weights to maximize the similarity of observation factor scores across groups via RV coefficients; MFA instead employs a fixed normalization scheme that promotes balance without iterative weighting.
In contrast to multi-block PCA variants like SUM-PCA, which concatenate blocks after simple variance scaling and may allow dominant blocks to overshadow others, MFA's inertia-based normalization and its emphasis on eigenvalue ratios—comparing each group's first eigenvalue to the inertia of the global principal components—explicitly assess and enforce equilibrium across groups, making it particularly suited to mixed data types. These differences position MFA as a balanced extension of factorial methods to multi-block settings, inheriting PCA's variance maximization and MCA's chi-squared metrics.

Data Structure and Preparation

Organization of Multiple Variable Groups

Multiple factor analysis (MFA) requires a multi-table data structure in which a set of I observations is described by K distinct groups of variables, with the k-th group comprising J_k variables organized as an I × J_k matrix. These matrices are conceptually concatenated horizontally to form a global I × ∑J_k data set, though each group is analyzed separately in the initial stages to account for its internal structure. For instance, in a study of food products, one group might include physical attributes such as pH and density, while another covers sensory attributes such as sweetness and bitterness.

A fundamental requirement is that the same I observations must appear across all K groups, ensuring comparability and alignment in the analysis. Missing values can be handled in standard MFA: numerical variables are often imputed with column means, while a missing categorical value can be treated as an additional category or coded as absent in the disjunctive table. Complete data is ideal for accuracy, and more advanced imputation methods are available for complex cases. Groups must also be conceptually distinct, representing different aspects or domains of the observations (e.g., quantitative measurements versus qualitative descriptors), to facilitate the identification of shared and unique patterns.

Preprocessing begins with centering all numerical variables within each group by subtracting the group-specific means, which removes level effects and focuses the analysis on variance. Categorical variables are transformed into disjunctive tables or indicator matrices, where each category becomes a column (1 if present, 0 otherwise), enabling factorial treatment akin to multiple correspondence analysis (MCA). If variables within a group exhibit differing scales, they may be scaled to unit variance prior to analysis to ensure equitable contributions during group factorization.
When defining groups, practitioners should aim for a relatively balanced number of variables (J_k) across the K groups to prevent any single group from disproportionately influencing the global structure, although MFA's subsequent normalization steps mitigate imbalances. Each group is typically analyzed using PCA for quantitative variables or MCA for categorical ones, providing the foundation for integration.

Handling Different Variable Types

In multiple factor analysis (MFA), data are organized into groups of variables, and the treatment of variable types within each group is essential for equitable contribution to the overall analysis. Numerical variables are standardized by centering them to a mean of zero and scaling to unit variance, enabling the application of principal component analysis (PCA), which relies on Euclidean distances to summarize the group's variability. This ensures that all numerical variables have comparable scales, preventing any single variable from dominating the group's principal components.

Categorical variables require transformation into a disjunctive table, consisting of indicator columns for each category. Each indicator column is weighted by the proportion of individuals who do not possess that category (1 − f_i, where f_i is the frequency of the category), and multiple correspondence analysis (MCA) is then applied using chi-squared distances, which account for the relative frequencies and capture associations among categories. This approach balances the influence of categories with varying prevalences, aligning the categorical group's inertia with that of the numerical groups.

Groups with mixed variable types are rare in standard MFA, as the method assumes homogeneity within groups so that an appropriate metric space can be applied consistently. In such cases, variables are often separated into homogeneous sub-groups for separate PCA or MCA before integration, or extensions incorporate hybrid distances that combine Euclidean metrics for numerical components with chi-squared metrics for categorical ones; this is particularly noted in applications to multi-omics data, where diverse data modalities like continuous expression levels and discrete mutations necessitate adaptive handling.

Ordinal variables pose type-specific challenges, as their ordered categories can be treated either as categorical—via disjunctive coding and MCA, to respect the discrete levels—or as numerical if the scale is sufficiently granular to approximate a continuous measurement, allowing standardization and PCA.
The decision hinges on the number of levels and the meaningfulness of the intervals, ensuring that the treatment aligns with the variable's measurement properties for compatibility in the global MFA framework. To validate the setup, groups should contain an adequate number of variables—more than five per group is a common guideline—to promote stability in the extracted factors and reliable estimation of group-specific inertias. Smaller groups risk unstable principal components and inflated variability in balance metrics.
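As a concrete illustration of the disjunctive coding described above, the following Python sketch (using pandas, with a hypothetical tasting-note variable) expands one categorical variable into indicator columns and computes the category frequencies f_i on which MCA-style weighting is based:

```python
import pandas as pd

def disjunctive_table(series: pd.Series) -> pd.DataFrame:
    """Expand one categorical variable into 0/1 indicator columns."""
    return pd.get_dummies(series, prefix=series.name, dtype=float)

# Hypothetical tasting-note variable observed on five products.
notes = pd.Series(["fruity", "oaky", "spicy", "fruity", "oaky"], name="note")
Z = disjunctive_table(notes)

# Category frequencies f_i, the basis for the (1 - f_i) column weighting.
f = Z.mean(axis=0)
print(Z.shape)   # → (5, 3): one indicator column per category
print(f.values)  # → [0.4 0.4 0.2]
```

Each row of the disjunctive table contains exactly one 1, so the frequencies sum to one across categories of a single variable.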

Core Methodology

Group Normalization and Weighting

In multiple factor analysis (MFA), the process of group normalization and weighting begins with the separate analysis of each group of variables to ensure equitable contributions across diverse data sets. For numerical variable groups, principal component analysis (PCA) is performed on the centered and scaled data matrix X_k, while for categorical groups, multiple correspondence analysis (MCA) is applied, yielding the first eigenvalue \lambda_{1k} for each group k. This initial step captures the internal structure of each group independently, with \lambda_{1k} representing the maximum variance (or inertia) explained by the first principal component.

Normalization follows to balance the influence of groups that may differ in size or variability. Specifically, the matrix X_k for group k is divided by the square root of its first eigenvalue \lambda_{1k}, producing the normalized matrix Z_k = \frac{1}{\sqrt{\lambda_{1k}}} X_k. This adjustment equalizes the maximum variance across groups, as the first principal component of each normalized group now accounts for unit variance. The rationale for this weighting is to prevent larger groups—those with more variables or higher overall inertia—from dominating the subsequent global analysis, thereby promoting a fair comparison of typologies or structures within each group.

The output of this normalization step consists of normalized partial representations for each group, where the coordinates in Z_k rescale the original principal components to a common scale. These representations preserve the relative positions within each group while mitigating scale disparities, preparing the data for integration into a unified framework. By design, this approach ensures that no single group can unilaterally define the primary axes of variation in the overall analysis.
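The normalization step can be sketched in a few lines of NumPy; this is an illustrative implementation for a numerical group (first standardized column-wise), not a reference one:

```python
import numpy as np

def mfa_normalize(X: np.ndarray) -> np.ndarray:
    """Standardize a numerical group, then divide by the square root
    of its first PCA eigenvalue so its top direction has unit variance."""
    Xc = (X - X.mean(axis=0)) / X.std(axis=0)   # center and scale columns
    # First eigenvalue lambda_1 of the group, obtained via SVD:
    s = np.linalg.svd(Xc, compute_uv=False)
    lam1 = s[0] ** 2 / Xc.shape[0]
    return Xc / np.sqrt(lam1)

rng = np.random.default_rng(0)
Zk = mfa_normalize(rng.normal(size=(20, 4)))

# After normalization, the first eigenvalue of Z_k equals 1 by construction.
s = np.linalg.svd(Zk, compute_uv=False)
print(round(s[0] ** 2 / Zk.shape[0], 6))   # → 1.0
```

Applying the same scaling to every group guarantees that each contributes at most unit variance along its dominant direction to the global analysis.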

Global Data Set Construction

In multiple factor analysis (MFA), the global data set is assembled by horizontally concatenating the normalized matrices from each group of variables, enabling a unified principal component analysis (PCA) across all groups. Specifically, for K groups, each normalized matrix Z_k (of dimensions I \times J_k, where I is the number of observations and J_k the number of variables in group k) is bound side-by-side to form the global matrix Z = [Z_1 \ Z_2 \ \dots \ Z_K], resulting in an I \times \sum J_k matrix. This structure preserves the block-wise organization while allowing the extraction of compromise factors that balance contributions from all groups. The normalization of each Z_k—typically dividing the original group matrix by its first singular value \phi_k = \sqrt{\lambda_{1k}}, where \lambda_{1k} is the first eigenvalue from a preliminary PCA or MCA on group k—ensures that no single group dominates because of scale differences. The primary purpose of this global construction is to perform a PCA on Z, yielding factors that represent all groups equitably after normalization and that reveal shared structures across variable sets while highlighting discrepancies. Unequal group sizes are implicitly addressed through the normalization step, since scaling by the first eigenvalue equalizes the inertia of each group along its first axis; however, when groups differ vastly in variable counts (e.g., one with 5 variables versus another with 50), subtle biases toward larger groups may remain, prompting extensions such as explicit group weighting in advanced implementations.

Factor Extraction and Coordinates

Once the global data set Z is constructed by concatenating the normalized group tables Z_k, multiple factor analysis proceeds with a principal component analysis (PCA) applied to this aggregated matrix. The PCA extracts the principal factors by decomposing the structure of Z, yielding eigenvalues \lambda_j that quantify the variance explained by each successive factor j, along with the global principal coordinates F_j for the observations and the loadings for the variables. These global coordinates F_j represent the positions of observations in the compromise space, which synthesizes information across all groups while respecting their individual structures.

The partial coordinates of each group k on factor j are derived by projecting the group's normalized matrix Z_k onto the corresponding rows of the global eigenvector V_j from the PCA of Z:

F_{kj} = Z_k V_j

This projection captures how each group's variables contribute to the global factors without altering the overall compromise, and the resulting partial coordinates F_{kj} allow for group-specific interpretations within the shared factor space.

The number of factors to retain is typically determined using criteria such as the scree plot of eigenvalues or the cumulative percentage of inertia explained, often selecting 2 to 5 dimensions for practical interpretability in applications like sensory profiling. Total inertia in MFA is computed as \mathcal{I} = \frac{1}{I} \operatorname{trace}(Z^T Z), where I denotes the number of observations, providing a measure of the overall variance across the balanced groups. Mathematically, the global PCA maximizes the explained variance across all normalized groups simultaneously, and the prior balancing ensures that no single group dominates the factor structure. The eigenvalues \lambda_j thus reflect the inertia along each principal axis in this unified space, with the sum of the retained \lambda_j indicating the proportion of total inertia captured.
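The global PCA and the partial-coordinate projection F_{kj} = Z_k V_j can be sketched with NumPy as follows. This is an illustrative implementation on random data; note that some software additionally rescales the partial coordinates by the number of groups K, a convention omitted here:

```python
import numpy as np

def mfa_factors(groups):
    """Global PCA on the concatenated normalized groups; returns global
    coordinates F, eigenvalues lam, and per-group partial coordinates."""
    Z = np.hstack(groups)                         # I x sum(J_k)
    n = Z.shape[0]
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    lam = s ** 2 / n                              # global eigenvalues lambda_j
    F = U * s                                     # global coordinates F_j
    # Partial coordinates F_kj = Z_k V_j, using each group's rows of V.
    edges = np.cumsum([0] + [g.shape[1] for g in groups])
    V = Vt.T
    partial = [g @ V[edges[k]:edges[k + 1], :] for k, g in enumerate(groups)]
    return F, lam, partial

rng = np.random.default_rng(0)
groups = [rng.normal(size=(10, 3)), rng.normal(size=(10, 4))]
F, lam, partial = mfa_factors(groups)

# The partial coordinates of the groups sum to the global coordinates.
print(np.allclose(partial[0] + partial[1], F))   # → True
```

The final check reflects a defining property of MFA: the global (compromise) position of each observation is the sum of its group-specific partial positions.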

Balance Analysis

Metrics for Group Contributions

In multiple factor analysis, several metrics quantify the contributions of individual variable groups to the global factors, enabling researchers to assess relative importance, alignment, and potential imbalances after factor extraction.

The first eigenvalue ratio for group k, denoted L_{gk} = \lambda_{1k} / \sum_{m=1}^K \lambda_{1m}, measures the group's relative importance prior to normalization, where \lambda_{1k} is the first eigenvalue from the separate PCA (or MCA) of group k and the denominator sums these values across all K groups; higher ratios indicate groups with stronger inherent structure that could dominate the analysis without adjustment.

The contribution of group k to the inertia of global factor j is captured by C_{kj} = (\sum \text{variances in partial } F_{kj}) / \lambda_j, where the numerator sums the variances (or squared coordinates) from the partial factor map of group k on dimension j, and \lambda_j is the eigenvalue of the global factor; this proportion reveals how much each group supports the explanation of overall data variance along specific dimensions, with larger C_{kj} values identifying influential groups.

Representation quality for group k on factor j is evaluated using \cos^2_{kj} = (\text{inertia of partial } F_{kj}) / \lambda_{1k}, which expresses the fraction of the group's total variance (as given by its first eigenvalue \lambda_{1k}) explained by the global factor; values approaching 1 denote excellent fit, meaning the global structure effectively captures the group's variability, while lower values suggest misalignment.

For imbalance detection, the average \cos^2 across the initial factors (typically the first two or three) is computed for each group; persistently low averages, such as below 0.3, indicate inadequate representation and potential imbalance, where the group's structure deviates substantially from the global factors and may warrant further scrutiny or preprocessing.
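A rough NumPy sketch of these metrics, following the formulas above; package implementations such as FactoMineR use their own conventions and naming, so the details here are illustrative:

```python
import numpy as np

def first_eigenvalue(X):
    """First eigenvalue lambda_1 of a (column-centered) group table."""
    s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
    return s[0] ** 2 / X.shape[0]

def group_balance_metrics(groups):
    """Return L_gk ratios, contributions C_kj, and cos2_kj per group."""
    Z = np.hstack(groups)
    n = Z.shape[0]
    _, s, Vt = np.linalg.svd(Z - Z.mean(axis=0), full_matrices=False)
    lam = s ** 2 / n                              # global eigenvalues
    l1 = np.array([first_eigenvalue(g) for g in groups])
    Lg = l1 / l1.sum()                            # first-eigenvalue ratios
    edges = np.cumsum([0] + [g.shape[1] for g in groups])
    C, cos2 = [], []
    for k, g in enumerate(groups):
        Fk = (g - g.mean(axis=0)) @ Vt.T[edges[k]:edges[k + 1], :]
        inertia = (Fk ** 2).sum(axis=0) / n       # partial inertia per axis
        C.append(inertia / lam)                   # contribution C_kj
        cos2.append(inertia / l1[k])              # representation quality
    return Lg, np.array(C), np.array(cos2)

rng = np.random.default_rng(1)
groups = [rng.normal(size=(20, 3)), rng.normal(size=(20, 5))]
Lg, C, cos2 = group_balance_metrics(groups)
print(round(Lg.sum(), 6))   # → 1.0: the L_gk are relative shares
```

By construction the L_{gk} values sum to one, and each \cos^2_{kj} lies between 0 and 1, since a group's partial inertia along any global axis cannot exceed its first eigenvalue.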

Interpreting Balance Across Groups

In multiple factor analysis (MFA), balance across groups is interpreted by evaluating the uniformity of the group inertias L_{gk} and the squared correlations \cos^2 between each group's principal components and the global factors. High uniformity in L_{gk} values across groups, combined with \cos^2 > 0.5 for most groups on the primary dimensions, indicates that the groups capture similar aspects of the underlying global data structure, suggesting a harmonious integration without any single group overly influencing the analysis. Disparities in these metrics, such as widely varying L_{gk} levels, signal potential imbalances that may warrant remedial steps like removing outlier groups or adjusting weights to equalize their contributions.

Decision rules for addressing imbalances rely on thresholds for these metrics. For instance, if C_{kj} for one group exceeds 0.5 on a given dimension, that group is considered dominant and may skew the global solution, prompting separate PCA analyses for that group or its exclusion from the MFA to avoid distortion. Pairwise similarities between groups can be further assessed using the RV coefficient, which ranges from 0 (no structural similarity) to 1 (identical structure); values above 0.8 typically indicate strong alignment, while lower values suggest divergent information that could justify subgrouping or hierarchical extensions.

The implications of balanced versus imbalanced MFA outcomes provide key insights into the data's underlying patterns. A well-balanced solution reveals shared structures across groups, facilitating the interpretation of common factors that generalize across variable sets, such as consensus in sensory evaluations from multiple experts. In contrast, imbalance highlights unique aspects within specific groups, allowing researchers to isolate group-specific variances that might otherwise be masked; this is particularly useful in exploratory studies where group disparities inform targeted follow-up analyses.
To address persistent imbalances, especially in hierarchically structured data, hierarchical MFA extends the method by balancing contributions at multiple levels, offering a more nuanced remedy than standard weighting. Despite their utility, balance metrics in MFA have notable limitations that affect interpretation. They inherently assume that all groups are equally relevant to the global structure, which may not hold if some groups are conceptually peripheral, leading to over- or under-emphasis. Additionally, they are sensitive to imbalances in the number of variables per group, as larger groups can artificially inflate their first eigenvalues and thus their weights, potentially biasing the overall assessment even after normalization.
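The RV coefficient used in these pairwise comparisons can be computed directly from two centered tables; the following NumPy function is a minimal sketch:

```python
import numpy as np

def rv_coefficient(X: np.ndarray, Y: np.ndarray) -> float:
    """RV coefficient between two data tables sharing the same rows
    (0 = no shared structure, 1 = identical structure up to rotation/scale)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Sxy = X.T @ Y
    Sxx = X.T @ X
    Syy = Y.T @ Y
    num = np.trace(Sxy @ Sxy.T)
    den = np.sqrt(np.trace(Sxx @ Sxx) * np.trace(Syy @ Syy))
    return num / den

rng = np.random.default_rng(2)
A = rng.normal(size=(15, 4))
print(round(rv_coefficient(A, A), 6))        # → 1.0: identical structure
print(round(rv_coefficient(A, 2.0 * A), 6))  # → 1.0: RV is scale-invariant
```

Scale invariance is what makes RV suitable for comparing groups measured in different units: rescaling a whole table leaves its RV similarity to other tables unchanged.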

Visualization and Interpretation

Standard Factorial Graphics

In multiple factor analysis (MFA), standard factorial graphics provide visualizations of the global principal components derived from the concatenated and normalized data sets, enabling an overview of the overall structure across all variable groups. These plots adapt classical PCA and MCA graphics to the MFA framework, where observations are projected onto the global factor space to reveal patterns of similarity and variable contributions without emphasizing group-specific differences.

The global factor map, often presented as a biplot, displays observations and variable loadings simultaneously on the first two global principal components, illustrating the primary axes of variation in the combined data. Observations are positioned according to their coordinates in this global space, while arrows or points represent the loadings of variables from all groups, typically color-coded by their originating group to distinguish contributions visually. This graphic highlights clusters of similar observations and the directions in which variables pull the factors, with the length of an arrow indicating the strength of its influence. For instance, in applications involving mixed variable types, quantitative variables appear as vectors and categorical modalities as points, all scaled to the global eigenvalues.

A scree plot visualizes the eigenvalues associated with the global principal components, plotted against component number, to assess the dimensionality of the solution and the proportion of total inertia explained by each factor. In MFA, this plot often includes both the global eigenvalues and a comparison to the partial inertias from the individual group analyses, aiding the decision of how many dimensions to retain for interpretation—typically those up to the point where the eigenvalue curve begins to flatten. The cumulative variance explained is marked, with the first few components often accounting for a substantial portion, such as over 60% in balanced data sets.
Individual factor maps extend the global view by projecting observations onto a single principal component or a specific pair beyond the first two, allowing deeper inspection of variance along isolated dimensions. These maps position observations based on their global coordinates for the selected factor, often supplemented with confidence ellipses or color gradients based on squared cosines (cos²) to indicate how well individuals are represented. Such plots are useful for identifying outliers or subtle patterns not evident in the primary biplot.

Correlation circles, akin to those in PCA, depict the correlations between variables (or modalities) and the global principal components, plotted within a unit circle to show angular relationships and strengths. In MFA, variables from different groups are included and color-coded accordingly, revealing how each group's elements align with or oppose the factors—for example, variables with correlations near 1 lie close to the circle's edge along the corresponding axis. This graphic underscores the quality of variable representation, with points nearer the circle periphery indicating stronger associations with the component.

Unique MFA Visualizations

Multiple factor analysis (MFA) employs several specialized visualizations to evaluate the alignment and balance among variable groups, going beyond standard factorial plots by highlighting group-specific contributions and inter-group relationships. Partial factor maps are superimposed representations that project each group's variables or individuals onto the global principal axes, often using transparency, color coding, or distinct symbols to reveal overlaps and discrepancies in how different groups structure the data. For instance, in sensory applications, these maps allow researchers to compare the configuration of chemical attributes against sensory perceptions, identifying whether the groups capture similar patterns in the observations. This aids in assessing group balance by quantifying the proximity of partial points to the global compromise, where closer alignment indicates harmonious contributions across datasets.

Group contribution bar plots further illuminate imbalances by displaying metrics such as the per-dimension group contributions C_{kj} and the average squared cosines \cos^2_{kj} (measuring the quality of a group's representation on a dimension) across principal components. These horizontal or vertical bar charts, typically ordered by magnitude, highlight dominant groups on specific axes; for example, a group with high contribution values disproportionately influences the global solution, potentially signaling the need for reweighting. Such plots are essential for detecting redundancies or underrepresentation, as low \cos^2 values imply poor alignment with the overall factors. Recent implementations extend these to interactive formats, enhancing interpretability in complex multiblock studies.
The between-group RV matrix, visualized as a heatmap, collects pairwise RV coefficients—a generalization of the squared correlation to matrices—to quantify structural similarities between groups, with values ranging from 0 (no similarity) to 1 (identical structure). In the heatmap, rows and columns represent groups, and color intensity (e.g., from blue for low to red for high) reveals complementary or redundant datasets; for example, high RV values between sensory and instrumental groups indicate convergent information. This tool is particularly useful for identifying clusters of aligned groups, aiding decisions on data fusion. Additionally, dendrograms derived from hierarchical clustering of the RV matrix facilitate group clustering, where branches represent similarity levels, helping to organize groups into hierarchical structures of complementarity or redundancy. These advancements, integrated into modern software, enhance the analysis of group balance in diverse applications like bioinformatics.
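A minimal sketch of such group clustering, using SciPy's hierarchical clustering on a hypothetical RV matrix converted to dissimilarities:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical symmetric RV matrix for four variable groups.
RV = np.array([[1.0, 0.9, 0.2, 0.3],
               [0.9, 1.0, 0.25, 0.35],
               [0.2, 0.25, 1.0, 0.85],
               [0.3, 0.35, 0.85, 1.0]])

D = 1.0 - RV                  # dissimilarity: high RV -> small distance
np.fill_diagonal(D, 0.0)
link = linkage(squareform(D, checks=False), method="average")
labels = fcluster(link, t=2, criterion="maxclust")
print(labels)   # groups 1&2 and groups 3&4 land in the same clusters
```

The resulting linkage matrix can be passed to scipy.cluster.hierarchy.dendrogram to draw the tree described above.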

Examples and Applications

Introductory Worked Example

To illustrate the principles of multiple factor analysis (MFA), consider a hypothetical dataset on 20 wines, where each wine is described by three distinct groups of variables: chemical properties (numerical variables such as pH, alcohol content, and residual sugar), sensory attributes (numerical scores for aroma intensity, body, and aftertaste on a 1-10 scale), and tasting notes (categorical variables classifying dominant flavors as fruity, oaky, or spicy). This setup allows MFA to integrate diverse data types while assessing their balanced contributions to a global structure.

The analysis proceeds in steps, beginning with separate analyses of each group to normalize their scales. For the numerical groups (chemical and sensory), principal component analysis (PCA) is applied; for the categorical tasting notes group, multiple correspondence analysis (MCA) is used to handle the qualitative data. The first eigenvalues from these separate analyses quantify each group's internal structure: λ₁ = 5.2 for the chemical group, λ₁ = 3.1 for the sensory group, and λ₁ = 2.8 for the tasting notes group. To ensure comparability, each group's data matrix is scaled by dividing by the square root of its respective λ₁, effectively normalizing the first eigenvalue to 1 across groups and preventing any single group from dominating because of scale differences.

The normalized matrices are then concatenated column-wise to form a global dataset, on which a single PCA is performed to extract common factors. The eigenvalues from this global PCA are summarized below, showing that the first two factors account for 60% of the total variance, providing a compact representation of the wines' shared patterns.
Factor | Eigenvalue | Variance Explained (%) | Cumulative Variance (%)
1 | 4.20 | 35.0 | 35.0
2 | 2.90 | 25.0 | 60.0
3 | 1.80 | 15.0 | 75.0
To evaluate balance, the squared cosines (cos²) are computed for each group, representing the proportion of the group's total inertia captured by the first two factors. Higher cos² values indicate stronger alignment with the global structure.
Group | cos² (First Two Factors)
Chemical | 0.70
Sensory | 0.60
Tasting Notes | 0.40
For context, a sample of the data set (first five wines, abbreviated to one representative variable per group) appears below; the full data set would include all variables and the remaining 15 wines.
Wine | Chemical (pH) | Sensory (Aroma Score) | Tasting Notes
1 | 3.45 | 7.2 | Fruity
2 | 3.60 | 6.8 | Oaky
3 | 3.30 | 8.1 | Spicy
4 | 3.50 | 7.5 | Fruity
5 | 3.40 | 6.9 | Oaky
The global factors uncover meaningful patterns: factor 1 primarily separates wines with higher overall quality (greater chemical balance and sensory harmony) from the rest, while factor 2 distinguishes profiles along a fruity-spicy continuum. The tasting notes group's lower representation quality (cos² = 0.4) highlights its weaker alignment with the global structure, implying that the categorical distinctions may overlook nuances better captured by the numerical groups and could benefit from refined categories or supplementary variables.
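The group normalization in this example can be reproduced numerically; the sketch below computes the scaling weight 1/√λ₁ for each group from the eigenvalues given above:

```python
import numpy as np

# First eigenvalues from the separate analyses of the three wine groups.
lam1 = {"chemical": 5.2, "sensory": 3.1, "tasting notes": 2.8}

# Each group's table is multiplied by 1/sqrt(lambda_1) before concatenation,
# so every group enters the global PCA with a first eigenvalue of 1.
weights = {g: 1.0 / np.sqrt(v) for g, v in lam1.items()}
print({g: round(w, 3) for g, w in weights.items()})
# → {'chemical': 0.439, 'sensory': 0.568, 'tasting notes': 0.598}
```

The chemical group, having the strongest internal structure (largest λ₁), receives the smallest weight, which is exactly how MFA prevents it from dominating the global factors.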

Advanced Real-World Applications

In sensory science, multiple factor analysis (MFA) has been instrumental in integrating physicochemical measurements with sensory descriptors to evaluate product quality and consumer perceptions. Jérôme Pagès (2005) applied MFA to perceived inter-distances for 10 white wines from the Loire Valley, using sensory data obtained through projective mapping by a panel of subjects, revealing key drivers of varietal differences such as fruity notes in certain varieties. Similar approaches have extended to fruit studies, where MFA reconciles instrumental texture metrics with hedonic scores to identify freshness indicators in products like apples and berries, enabling balanced profiling across sensory and objective groups.

In multi-omics research, MFA-inspired extensions such as Multi-Omics Factor Analysis (MOFA) facilitate the unsupervised integration of diverse biological datasets—for example mutation, expression, and DNA methylation data—to uncover shared variation in complex diseases. Developed after 2010, MOFA decomposes high-throughput data into latent factors that capture both biological signals and technical noise, as demonstrated in analyses of cancer cell lines, where it balanced mutation profiles against expression patterns to identify drivers of tumor heterogeneity. This framework has been enhanced in MOFA+ for single-cell multi-modal data, improving scalability for large-scale biological integrations.

MFA supports customer segmentation in marketing by synthesizing heterogeneous tables, including demographic profiles, purchase behaviors, and psychographic attitudes, to delineate actionable groups. For instance, in studies of food-shopping preferences, MFA combined survey responses on environmental values with behavioral intent to cluster consumers into segments like "Proximity Shoppers," who prioritize local sourcing, highlighting how balanced factor contributions reveal nuanced market dynamics.
Despite its versatility, MFA faces challenges in scalability when applied to high-dimensional data, where the curse of dimensionality amplifies computational demands and increases the risk of unstable factor estimates across numerous variable groups. Recent adaptations address this through robust high-dimensional factor models that incorporate regularization. These applications yield outcomes that pinpoint key drivers across domains; for example, in cancer studies, MFA balances multi-omic data, including proteomic profiles, to quantify pathway deviations, identifying dysregulated networks like PI3K-AKT signaling as prognostic factors in breast tumors. More recently, as of 2025, MFA has been introduced as a diagnostic tool in taxonomy, for example in herpetological species delineation using morphological and genetic data blocks.

Historical Development

Origins in the 1980s

Multiple factor analysis (MFA) was developed in the late 1980s by Brigitte Escofier and Jérôme Pagès at AgroParisTech (formerly the École Nationale Supérieure des Industries Agricoles et Alimentaires) in France, as a technique to integrate and analyze multi-block datasets, particularly those arising from sensory evaluation studies where products are described by multiple groups of variables such as physicochemical measurements, sensory attributes, and consumer preferences. This approach emerged from the need to overcome the limitations of performing separate principal component analyses (PCAs) on each block, which hindered direct comparisons and the identification of common structures across blocks sharing the same set of individuals (e.g., products or samples). Escofier's prior research on factorial methods, combined with Pagès' expertise in sensory statistics, provided the foundational inspirations for MFA, enabling a unified framework to balance the contributions of heterogeneous variable groups. MFA built upon the traditions of the French school of data analysis, particularly Jean-Paul Benzécri's multiple correspondence analysis (MCA) introduced in 1973, which emphasized geometric interpretations of categorical data through factorial methods. This school, centered at institutions like the Centre de Recherches Mathématiques Appliquées (now part of AgroParisTech), prioritized exploratory techniques for revealing data structures without strong parametric assumptions, influencing MFA's design as an extension of PCA and MCA for multi-table scenarios. Prior to MFA, multi-table methods like Ledyard R. Tucker's three-mode factor analysis from the 1960s offered early approaches to handling three-way data arrays (e.g., subjects × variables × conditions), but these were more suited to tensor decompositions than to the parallel integration of multiple two-way tables with common rows that MFA targets. Key early publications formalized MFA's principles and implementations.
Escofier and Pagès detailed the method in their 1994 paper, which included the AFMULT software package for computational application, and expanded it in their book Analyses factorielles simples et multiples (first edition 1988, with significant updates in the 1990 and 1998 editions). Pagès further refined MFA in 2002, extending it to qualitative variables and mixed data types while maintaining its core balance across blocks. These works established MFA as a cornerstone of multi-block exploratory analysis, particularly in the sensory and food sciences.

Key Extensions and Contributors

One key extension of multiple factor analysis (MFA) is hierarchical multiple factor analysis (HMFA), introduced by Jérôme Pagès in collaboration with S. Le Dien, which accommodates nested structures in data where variables are organized into hierarchical groups, such as subgroups within larger variable sets. This variant builds on the original MFA framework from the 1980s by applying successive MFA steps across levels to assess contributions at different scales, enhancing its utility for complex, structured datasets like sensory profiles with panelist hierarchies. Another important development is the incorporation of weighting schemes in MFA to handle unequal group importance, as advanced by François Husson and colleagues in the 2010s, allowing users to adjust the influence of variable groups based on domain-specific priorities rather than relying solely on automatic balancing. This refinement, detailed in methodological extensions within the FactoMineR framework, improves flexibility for applications where certain data blocks warrant differential emphasis, such as in unbalanced multi-table analyses. Prominent contributors to MFA's evolution include François Husson and Julie Josse, who, starting from 2005, significantly advanced practical implementations and extensions through the FactoMineR package, enabling robust handling of mixed data types and group balancing in exploratory analyses. Their work has facilitated widespread adoption in statistical computing, with ongoing refinements emphasizing interpretability and scalability. In parallel, Ricardo Argelaguet and colleagues introduced Multi-Omics Factor Analysis (MOFA) in 2018, adapting MFA principles to a Bayesian framework for integrating high-dimensional multi-omics datasets, such as genomics and transcriptomics, by inferring latent factors that capture shared and unique variations across views.
Recent developments have integrated MFA with variable-selection techniques, exemplified by sparse MFA (sMFA), which incorporates sparsity penalties to select relevant variables or groups, improving interpretability in high-dimensional settings like sensory evaluation and omics analysis. This includes post-2010 Bayesian variants like MOFA and scalable methods that address computational challenges in large-scale data, expanding MFA beyond traditional sensory applications into bioinformatics. The impact of these contributions is evident in the field's shift from primarily sensory analysis to bioinformatics and beyond; for instance, Pagès' foundational work on MFA and its extensions has accumulated over 1,000 citations across key publications, underscoring its enduring influence. Similarly, MOFA has rapidly gained traction, with its original formulation cited more than 1,200 times by 2025, driving unsupervised integration in multi-omics research.

Software Implementations

Primary Packages and Tools

Multiple factor analysis (MFA) is supported by several primary software packages and tools, primarily in open-source environments, which enhance its accessibility for researchers and practitioners in statistics and data analysis. The most established implementation is in the R programming language through the FactoMineR package, developed by François Husson and colleagues, which has been a core resource for MFA since its initial release in 2008. This package provides dedicated functions such as MFA() for performing the analysis on datasets with multiple groups of quantitative and/or categorical variables, and plot.MFA() for generating visualizations that highlight group balances and contributions. FactoMineR integrates seamlessly with R's ecosystem, including the missMDA package by the same authors, which extends MFA capabilities to handle missing data through imputation techniques prior to analysis. Its availability on CRAN ensures straightforward installation and widespread use in academic and applied settings. In Python, the Prince library, authored by Max Halford and first released in 2016 with ongoing updates, offers an accessible implementation of MFA tailored for exploratory analysis of heterogeneous variable groups. Prince computes MFA by applying PCA to each group separately before a global integration, producing outputs like factor coordinates and group balances that facilitate interpretation. For specialized applications in multi-omics data, the MOFA2 package, developed by Ricard Argelaguet and collaborators, provides an advanced probabilistic framework (distinct from classical MFA) for unsupervised integration of diverse data modalities, such as genomics and transcriptomics, using Bayesian models. MOFA2, available via Bioconductor, supports scalable analysis of high-dimensional datasets and includes tools for downstream visualization of latent factors. Commercial options like XLSTAT from Addinsoft offer MFA functionality integrated directly into Microsoft Excel, making it accessible to users without programming expertise.
This add-in performs MFA on multiple variable tables, yielding results such as eigenvalues, contributions, and graphical representations, suitable for sensory and market research contexts. Another commercial tool is JMP from SAS Institute, which includes built-in MFA for analyzing grouped variables in a user-friendly interface. For open-source alternatives in other languages, Julia's MultivariateStats.jl package provides foundational tools such as PCA that can be combined manually to approximate MFA workflows for grouped data. These tools collectively emphasize user-friendly interfaces and robust outputs for balancing variable groups and visualizing common structures across datasets.
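Beneath all of these packages, the core two-stage computation is the same: a separate PCA per block, rescaling of each block so that its first eigenvalue equals 1, then a global PCA on the concatenated table. A minimal NumPy sketch of that procedure for quantitative blocks (illustrative names, no package API implied):

```python
import numpy as np

def mfa_sketch(blocks, n_components=2):
    """Minimal MFA on quantitative blocks: divide each centered block
    by its first singular value (so its first eigenvalue becomes 1),
    then run a global PCA on the concatenated table."""
    weighted = []
    for X in blocks:
        Xc = X - X.mean(axis=0)                      # center each block
        s1 = np.linalg.svd(Xc, compute_uv=False)[0]  # first singular value
        weighted.append(Xc / s1)                     # balance the block
    Z = np.hstack(weighted)                          # concatenated table
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    scores = (U * s)[:, :n_components]               # global factor scores
    return scores, s[:n_components] ** 2             # scores, eigenvalues

# Two illustrative blocks of different widths on the same 30 observations.
rng = np.random.default_rng(1)
n = 30
blocks = [rng.normal(size=(n, 4)), rng.normal(size=(n, 7))]
scores, eigvals = mfa_sketch(blocks)
```

Because each block's first eigenvalue is scaled to 1, the first global eigenvalue can never exceed the number of blocks, which is how MFA keeps any one group from dominating the compromise.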

Implementation Considerations

When implementing multiple factor analysis (MFA), it is recommended to begin with exploratory analyses on each group of variables separately, such as principal component analysis (PCA) for quantitative data or multiple correspondence analysis (MCA) for categorical data, to understand the structure and variability within groups before proceeding to the global analysis. This step helps identify potential issues like dominant variables or low-dimensional groups early. The number of factors retained in the final MFA should be selected using criteria such as eigenvalues greater than 1 or cumulative inertia exceeding 80-90%, supplemented by cross-validation techniques to assess predictive stability, particularly in high-dimensional settings. For handling large datasets, subsampling observations or employing sparse variants of MFA can mitigate computational demands while preserving key structures; sparse MFA applies group sparsity penalties during the decomposition to select informative variables and tables, suitable for datasets with thousands of features. Common pitfalls include over-normalization across groups, which equalizes contributions by scaling each group's first eigenvalue to 1 but can obscure group-specific patterns if scales are inherently meaningful, such as in sensory and instrumental data. Another frequent issue is ignoring strong collinearity within individual groups, which may lead to unstable factor loadings; this can be preempted by examining the eigenvalues from separate PCAs and checking each group's effective dimensionality (e.g., whether the first eigenvalue dominates the subsequent ones). Additionally, rare categories in categorical groups can disproportionately influence results if not aggregated (e.g., combining those with frequencies below 5%), distorting the overall compromise.
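The rare-category pitfall can be handled with a small preprocessing step. This pandas sketch (the 5% threshold, level names, and function name are all illustrative) pools infrequent levels into a single category before the indicator coding used by the MCA step:

```python
import pandas as pd

def pool_rare(series, threshold=0.05, other="other"):
    """Replace levels whose relative frequency is below `threshold`
    with one pooled level, so rare categories cannot dominate the
    MCA step of an MFA."""
    freq = series.value_counts(normalize=True)
    rare = freq[freq < threshold].index
    return series.where(~series.isin(rare), other)

# Illustrative categorical variable: 100 wines by color.
s = pd.Series(["red"] * 60 + ["white"] * 36 + ["rosé"] * 3 + ["orange"] * 1)
pooled = pool_rare(s)
# "rosé" (3%) and "orange" (1%) fall below 5% and are pooled together.
```

Pooling keeps the group's inertia from being driven by a handful of observations while preserving the information that an uncommon level occurred.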
Computationally, MFA is scalable to datasets with thousands of observations using singular value decomposition (SVD) on the concatenated weighted tables, but performance degrades with high dimensionality (e.g., p > 10,000 variables) due to O(p^3) complexity in dense implementations; iterative algorithms or cross-product matrices offer alternatives for efficiency. In environments like R with FactoMineR, parallelization can be enabled with packages such as parallel or foreach for bootstrap computations and simulations, accelerating analyses on multi-core systems. Recent scalability challenges in the 2020s, such as in multi-omics integration, are addressed through sparse methods that reduce effective dimensionality without full matrix storage. For validation, bootstrap resampling (e.g., 1,000 iterations) is essential to evaluate the stability of group balances and factor scores, computing bootstrap ratios (observed value divided by its bootstrap standard deviation) exceeding 2-3 to confirm reliable contributions from groups or observations. Reproducibility is enhanced by setting random seeds for resampling and simulations, alongside documenting normalization weights and variable groupings to facilitate replication in tools like FactoMineR.
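The bootstrap-ratio check can be sketched generically in NumPy. Here the statistic is a simple column mean standing in for a factor score or group contribution, and the function name and data are illustrative:

```python
import numpy as np

def bootstrap_ratio(X, statistic, n_boot=1000, seed=0):
    """Observed statistic divided by its bootstrap standard deviation;
    |ratio| above roughly 2-3 suggests a stable contribution."""
    rng = np.random.default_rng(seed)   # fixed seed for reproducibility
    n = X.shape[0]
    reps = np.array([statistic(X[rng.integers(0, n, n)])  # resample rows
                     for _ in range(n_boot)])
    return statistic(X) / reps.std(axis=0, ddof=1)

# Illustrative data with a clear signal in every column.
rng = np.random.default_rng(42)
X = rng.normal(loc=1.0, size=(50, 3))
ratios = bootstrap_ratio(X, lambda A: A.mean(axis=0))
```

In an MFA workflow the same pattern would be applied with the resampled rows fed through the full analysis, so that the ratios reflect the stability of factor scores or group contributions rather than of raw means.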
