F-score

The F-score, also known as the F-measure, is a performance metric used to evaluate the accuracy of binary classification models, information retrieval systems, and similar tasks by harmonically combining precision (the proportion of true positives among predicted positives) and recall (the proportion of true positives among actual positives). It provides a single value that balances the trade-off between these two measures, which is particularly useful in scenarios with imbalanced datasets where accuracy alone is misleading. The standard F1-score (the case β = 1) treats precision and recall equally, calculated as F1 = 2 × (precision × recall) / (precision + recall).

Introduced by C. J. van Rijsbergen in his 1979 book Information Retrieval, the F-score originated in the context of assessing ranked retrieval, where it addressed the need for a unified measure of retrieval effectiveness beyond separate evaluations of precision and recall. Van Rijsbergen defined it using measurement-theory principles, employing a weighted harmonic mean to incorporate user preferences for precision versus recall through a parameter α (where 0 ≤ α ≤ 1), expressed as F = 1 / (α / precision + (1 - α) / recall). This formulation gained prominence at the 1992 Message Understanding Conference (MUC-4) for information extraction tasks and has since become a standard in natural language processing and machine learning evaluation.

The generalized Fβ-score extends this by introducing a parameter β > 0 to adjust the relative importance of recall over precision, with the formula Fβ = (1 + β²) × (precision × recall) / (β² × precision + recall); β = 1 yields the balanced F1, while β > 1 (e.g., F2 with β = 2) prioritizes recall and β < 1 emphasizes precision. Key properties include its ignorance of true negatives, making it suitable for positive-class-focused assessments, and its non-linear response to changes in precision or recall, which can lead to equivalent scores from dissimilar precision-recall pairs. In practice, the F-score is applied in fields such as search engines (to measure query relevance), medical diagnostics (to balance false positives and negatives), and AI model benchmarking, and it is often preferred over accuracy in imbalanced scenarios. Despite its ubiquity, criticisms highlight its threshold dependence and its failure to fully capture distribution shifts, prompting alternatives such as the Matthews correlation coefficient in some contexts.

Fundamentals

Definition

The F-score, also known as the F1-score in its balanced form, is a widely used evaluation metric in binary classification and information retrieval that combines precision and recall into a single measure of model performance. Precision (P) is defined as the ratio of true positives (TP) to the sum of true positives and false positives (FP), representing the proportion of predicted positives that are actually correct:
P = \frac{TP}{TP + FP}
Recall (R), also called sensitivity, is the ratio of true positives to the sum of true positives and false negatives (FN), indicating the proportion of actual positives correctly identified:
R = \frac{TP}{TP + FN}
These definitions rely on the confusion matrix, which tabulates TP (correctly predicted positives), FP (incorrectly predicted positives), FN (missed positives), and true negatives (TN, correctly predicted negatives, though TN is not used in these metrics).
The F1-score is computed as the harmonic mean of precision and recall:
F_1 = 2 \times \frac{P \times R}{P + R}
This formulation arises from the need to balance the two metrics equally when they are of comparable importance, as introduced in the context of information retrieval evaluation. The harmonic mean is preferred over the arithmetic mean because it penalizes imbalances between precision and recall more severely; for instance, if one metric is zero, the F1-score is zero, whereas the arithmetic mean might yield a misleadingly higher value.
The F1-score ranges from 0 to 1, where a value of 1 indicates perfect precision and recall (no false positives or false negatives), and 0 signifies complete failure in identifying positives correctly. In a binary classification scenario like spam detection, where emails are classified as spam (positive class) or legitimate (negative), a high F1-score reflects a model's ability to accurately flag spam without overwhelming the user with false alarms from legitimate emails. Overall, the F1-score motivates evaluation that equally weighs the trade-off between avoiding false positives (via precision) and capturing all true positives (via recall), making it particularly valuable in scenarios with imbalanced classes.
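The definitions above translate directly into code. The following minimal Python sketch, using hypothetical spam-detection counts, computes precision, recall, and F1 from raw confusion-matrix entries.

```python
# Minimal sketch: precision, recall, and F1 from confusion-matrix counts.
# The counts below (spam-detection flavored) are illustrative, not from the article.

def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Return (precision, recall, F1), using 0.0 when a denominator is zero."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

if __name__ == "__main__":
    p, r, f1 = precision_recall_f1(tp=40, fp=10, fn=20)
    print(f"precision={p:.3f} recall={r:.3f} F1={f1:.3f}")  # 0.800, 0.667, 0.727
```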

Fβ Score

The Fβ score generalizes the F1 score by introducing a parameter β > 0 to adjust the relative importance of precision (P) and recall (R) in their harmonic mean. It is defined as F_{\beta} = (1 + \beta^2) \frac{P \times R}{\beta^2 P + R}, where β = 1 recovers the standard F1 score, β < 1 places greater emphasis on precision, and β > 1 prioritizes recall. This parameterization allows evaluators to tune the metric according to domain-specific priorities in balancing precision and recall. The formula derives from a weighted harmonic mean of P and R, where the weights reflect the desired trade-off. The harmonic mean of two values is H = \frac{2}{1/P + 1/R} = \frac{2PR}{P + R}, which weights them equally; for unequal weights it generalizes to H = \frac{1 + w}{1/P + w/R}, where w scales the importance of R relative to P. Setting w = β² yields the Fβ form: the quadratic scaling corresponds to a user who attaches β times as much importance to recall as to precision (formally, β is the ratio R/P at which the partial derivatives of Fβ with respect to precision and recall are equal), providing a non-linear adjustment that amplifies the prioritized metric. The β² term arises from van Rijsbergen's effectiveness measure E = 1 - Fβ, originally formulated to incorporate user preferences via an additive conjoint model, in which the weight for precision is α = 1/(1 + β²) and the weight for recall is β²/(1 + β²). Common variants include the F_{0.5} score, which favors precision (e.g., in recommendation systems where false positives, such as irrelevant recommendations, must be minimized to maintain user trust), and the F_2 score, which emphasizes recall (e.g., in medical screening for diseases like cancer, where detecting all potential cases outweighs some false positives to avoid missing diagnoses). To illustrate, consider a binary classifier with the following confusion matrix for 200 samples:
                    Predicted Positive    Predicted Negative
Actual Positive     TP = 80               FN = 10
Actual Negative     FP = 20               TN = 90
Here, precision = 80 / (80 + 20) = 0.8 and recall = 80 / (80 + 10) ≈ 0.889. The F_1 score is ≈ 0.842, the F_{0.5} score is ≈ 0.816 (penalizing the false positives more), and the F_2 score is ≈ 0.870 (valuing the high recall). These values demonstrate how β shifts the score toward the favored metric without altering the underlying precision and recall. The choice of β depends on the application's cost asymmetry between errors: use β = 1 for balanced evaluation in general classification tasks, β < 1 (e.g., 0.5) when precision is critical, such as in fraud detection to limit unnecessary interventions, and β > 1 (e.g., 2) when recall dominates, as in safety-critical diagnostics.
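The worked example above can be checked with a short script; the sketch below recomputes F0.5, F1, and F2 from the same confusion-matrix counts (TP = 80, FP = 20, FN = 10).

```python
# Minimal sketch: F-beta from the worked confusion matrix above (TP=80, FP=20, FN=10).

def fbeta(tp: int, fp: int, fn: int, beta: float) -> float:
    """F-beta = (1 + beta^2) * P * R / (beta^2 * P + R)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

for beta in (0.5, 1.0, 2.0):
    print(f"F_{beta}: {fbeta(80, 20, 10, beta):.3f}")
# F_0.5: 0.816, F_1.0: 0.842, F_2.0: 0.870
```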

History and Etymology

Etymology

The term "F-score," often used interchangeably with "F-measure," originated in the field of , where it denotes a family of metrics balancing through a weighted . The "F" designation lacks a definitive expansion and was not intentionally derived from statistical nomenclature like the ; instead, its adoption appears to have been serendipitous. In his seminal 1979 book , C. J. van Rijsbergen introduced the underlying formula as an "effectiveness measure" denoted by E, which measures retrieval performance with respect to a user's relative emphasis on versus via a β. The specific name "F-measure" emerged later, reportedly by accident during its formalization for evaluation tasks. According to an analysis by Yutaka Sasaki, the term was selected in 1992 at the Fourth Message Understanding Conference (MUC-4) when organizers misinterpreted and repurposed a unrelated "F" function from van Rijsbergen's book—possibly referring to a fallback function—leading to its labeling as F rather than retaining the original E. This nomenclature stuck due to the metric's structure, which provided a balanced single-value summary, and it gained traction in literature throughout the 1980s as evaluations shifted from separate reports to combined "effectiveness" scores. By the late 1980s, the F-measure had become a standardized term in the community, supplanting earlier ad hoc descriptors like "effectiveness measure." Within the F family, the balanced case where β=1—equally weighting —is commonly termed the F1-score, emphasizing its role as the default without bias toward one metric over the other. This variant's naming underscores the parametric nature of the broader F concept, but the "F1" suffix arose in contexts to distinguish it from generalized Fβ forms. The F-score should not be confused with unrelated concepts sharing the "F" label, such as the in statistics, a variance developed by in the for hypothesis testing in analysis of variance, or the in , a 0-9 scale assessing firm financial strength based on nine criteria introduced by Joseph Piotroski in 2000. These homonyms reflect independent evolutions, with no direct etymological or methodological links to the F-measure.

Historical Development

The F-measure was introduced by C. J. van Rijsbergen in his 1979 book Information Retrieval, where it appeared as an effectiveness function, denoted E, designed to evaluate performance by harmonically combining precision and recall for ranked retrieval systems. This formulation addressed the need for a single metric that balanced the trade-offs between retrieving relevant documents and avoiding irrelevant ones in information retrieval (IR) contexts. In the late 1970s and throughout the 1980s, the measure became a foundational tool in IR research, widely applied to assess the quality of document ranking algorithms amid the growing complexity of large-scale text collections. Its early adoption helped standardize evaluation practices in the field, influencing benchmarks for systems such as those developed during the Text REtrieval Conference (TREC) series starting in 1992. By the 1990s, the F-measure had transitioned into machine learning applications, particularly for classification tasks with imbalanced classes, and was prominently featured in educational resources that bridged information retrieval and broader computational methods. For instance, it received detailed exposition in the 2008 textbook Introduction to Information Retrieval by Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schütze, which popularized its use among practitioners. During the 2000s, the metric proliferated in natural language processing (NLP) for tasks such as named-entity recognition and text classification, and in computer vision for segmentation and detection, solidifying its role as a versatile performance indicator across disciplines. Key milestones in the F-measure's evolution include its incorporation into open-source libraries, beginning with scikit-learn, whose development started in 2007, which facilitated its routine use in empirical studies and model development. Subsequent integration into deep learning frameworks such as TensorFlow further embedded the metric in workflows, enabling seamless evaluation in large-scale experiments. A 2023 review by Hand et al. traced the F-measure's trajectory, emphasizing its persistent dominance in computational evaluation despite critiques regarding its sensitivity to class distribution, with no substantial innovations or replacements documented by 2025.

Properties and Interpretations

Mathematical Properties

The F-score, particularly the F₁ variant, is the harmonic mean of precision (P) and recall (R), defined as F_1 = \frac{2PR}{P + R}. Because the harmonic mean of two positive numbers never exceeds their geometric or arithmetic means and never falls below their minimum, the F₁-score satisfies \min(P, R) \leq F_1 \leq \sqrt{PR} \leq \frac{P + R}{2} \leq \max(P, R), with all inequalities becoming equalities when P = R. These inequalities reflect the defining property of the harmonic mean: it penalizes imbalances between P and R more severely than the arithmetic or geometric means, pulling the score toward the weaker of the two components. That F₁ is exactly the harmonic mean of P and R follows directly from the definition: the harmonic mean H of two positive numbers a and b is H = \frac{2ab}{a + b}, so substituting a = P and b = R yields F_1 = H(P, R). The choice of the harmonic mean over alternatives, such as the arithmetic mean, stems from its alignment with decreasing marginal effectiveness in evaluation contexts, where improvements in the lower-performing metric yield greater relative gains. For the generalized F_β-score, F_\beta = \frac{(1 + \beta^2) PR}{\beta^2 P + R} with β > 0, the score is monotonically increasing in both P and R for fixed β, since the partial derivatives \frac{\partial F_\beta}{\partial P} = \frac{(1 + \beta^2) R^2}{(\beta^2 P + R)^2} and \frac{\partial F_\beta}{\partial R} = \frac{(1 + \beta^2) \beta^2 P^2}{(\beta^2 P + R)^2} are both positive when 0 < P, R ≤ 1. The parameter β modulates sensitivity: at P = R, \frac{\partial F_\beta}{\partial R} = \beta^2 \frac{\partial F_\beta}{\partial P}, so for β > 1 the score responds more strongly to changes in recall, and for β < 1 more strongly to changes in precision. The F_β-score is bounded as 0 ≤ F_β ≤ 1, achieving 1 if and only if P = R = 1, and 0 if either P = 0 or R = 0. In precision-recall space, the isoeffectiveness contours of the F_β-score are convex toward the origin, a consequence of the harmonic mean being defined through reciprocals, which justifies its use in optimizing balanced trade-offs. Unlike the Jaccard index J = \frac{PR}{P + R - PR}, which measures set overlap directly, the F-score's harmonic form avoids overemphasizing union size and provides a tunable balance via β; the two are nonetheless monotonically related, since F_1 = \frac{2J}{1 + J}.
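These relationships are easy to verify numerically. The following sketch samples random precision-recall pairs and checks the mean inequalities and the F1-Jaccard identity stated above.

```python
# Minimal sketch: numerically spot-check the mean inequalities and the
# F1-Jaccard relation stated above, on randomly drawn (P, R) pairs.
import math
import random

random.seed(0)
eps = 1e-12
for _ in range(100_000):
    p, r = random.uniform(0.01, 1.0), random.uniform(0.01, 1.0)
    f1 = 2 * p * r / (p + r)                 # harmonic mean
    gm, am = math.sqrt(p * r), (p + r) / 2   # geometric and arithmetic means
    j = p * r / (p + r - p * r)              # Jaccard index in terms of P and R
    assert min(p, r) <= f1 + eps             # min(P, R) <= F1
    assert f1 <= gm + eps and gm <= am + eps  # F1 <= GM <= AM
    assert abs(f1 - 2 * j / (1 + j)) < 1e-9  # F1 = 2J / (1 + J)
print("all checks passed")
```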

Use in Diagnostic Testing

In diagnostic testing, the F-score serves as a key metric for evaluating binary classifiers designed to detect diseases, balancing precision—interpreted as the positive predictive value (PPV), or the proportion of true positives among all positive predictions—and recall, equivalent to sensitivity, or the proportion of true positives among all actual positives. This harmonic-mean formulation captures the inherent trade-off in diagnostic tests: high sensitivity ensures few cases are missed, while high PPV minimizes unnecessary interventions from false positives, making the F-score particularly valuable in clinical scenarios where both patient outcomes and resource allocation are critical. A notable application occurred in the evaluation of COVID-19 diagnostic models, where the F2 score was employed to prioritize high sensitivity, thereby emphasizing the detection of all potential cases to reduce false negatives amid the pandemic's urgency to identify infections. Conversely, an F0.5 score could be optimized to favor high precision, helping to limit false positives that might lead to unwarranted quarantines or resource strain in low-prevalence settings. Unlike the area under the receiver operating characteristic curve (ROC-AUC), which aggregates performance across all possible classification thresholds to assess overall discriminability, the F-score evaluates effectiveness at a specific operating threshold, highlighting the precision-recall balance relevant to real-world deployment. It thus complements ROC-AUC by providing targeted insight into threshold-dependent performance, especially in the imbalanced datasets common to diagnostics, where positive cases are rare. Threshold selection in diagnostics often involves optimizing the Fβ score to align with cost-sensitive priorities, such as weighting recall more heavily (β > 1) when false negatives carry higher consequences, like undetected infections leading to outbreaks or untreated conditions. For instance, in scenarios where missing a case outweighs over-testing, this adjustment guides the choice of operating point on the precision-recall curve to maximize clinical utility. Empirical studies from the 2020s demonstrate the F-score's integration in assessing AI-driven diagnostic tools, including those supporting regulatory approvals; for example, models for classifying gastrointestinal conditions have reported F1 scores of 0.84 to 0.87, with detection tasks reaching 0.94, underscoring its role in validating performance for gastroenterological applications.
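Threshold selection of this kind can be sketched with standard tooling. The example below assumes scikit-learn and NumPy are available, generates synthetic scores purely for illustration, and picks the decision threshold that maximizes F2 along the precision-recall curve.

```python
# Minimal sketch of F-beta-driven threshold selection; y_true and y_score are synthetic.
import numpy as np
from sklearn.metrics import precision_recall_curve

rng = np.random.default_rng(0)
y_true = rng.binomial(1, 0.1, size=2000)                              # ~10% prevalence
y_score = np.clip(0.6 * y_true + rng.normal(0.2, 0.2, 2000), 0, 1)    # noisy scores

beta = 2.0                                                            # recall-weighted (F2)
precision, recall, thresholds = precision_recall_curve(y_true, y_score)
# precision/recall have one more entry than thresholds; drop the final point.
p, r = precision[:-1], recall[:-1]
with np.errstate(divide="ignore", invalid="ignore"):
    fbeta = (1 + beta**2) * p * r / (beta**2 * p + r)
fbeta = np.nan_to_num(fbeta)
best = np.argmax(fbeta)
print(f"best threshold={thresholds[best]:.3f}, F2={fbeta[best]:.3f}")
```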

Impact of Class Imbalance

Class imbalance refers to datasets where the distribution of instances across classes is unequal, often with one majority class vastly outnumbering a minority class, as in fraud detection, where fraudulent transactions are rare. This imbalance skews evaluation because classifiers tend to bias toward the majority class to minimize overall error, resulting in high precision for the majority but poor recall for the minority, or vice versa if forced to predict more minority instances. In such scenarios, the F1-score, as the harmonic mean of precision and recall, becomes sensitive to the imbalance ratio and can favor the majority class, leading to misleading interpretations if not adjusted, particularly when using macro-averaging, where high performance on the majority class inflates the overall score despite poor minority-class detection. For instance, a trivial classifier that always predicts the majority class achieves near-perfect precision and recall on that class but zero recall on the minority, yielding a macro-F1 that appears reasonably high owing to the imbalance, while the per-class F1 for the minority is zero. Simulations demonstrate this vulnerability in imbalanced settings, where the F1-score assigns high values primarily to classifiers with very high true negative rates, making true positive rates less influential even for moderate performance on the minority class. Further, under minority-class imbalance the standard F1-score (β = 1) can appear recall-dominated, because achieving high recall on the scarce minority requires predicting many positives, which often lowers precision through increased false positives from the majority class; this balance shifts unfavorably in extreme cases, with simulated data showing F1 scores rising steeply toward 1 as imbalance worsens for suboptimal classifiers, unlike more stable metrics. Studies comparing F1 to accuracy highlight its relative robustness—accuracy remains high (e.g., >90%) for trivial majority predictors at 1:99 imbalance, while F1 drops sharply for the minority class—but F1 still underperforms in extreme imbalances compared to threshold-independent alternatives. To mitigate these effects, tuning the β parameter in the Fβ-score allows greater emphasis on recall (β > 1) when minority-class detection is critical, as the weighted harmonic mean then penalizes low recall more heavily and better balances the trade-off in imbalanced settings. Sampling techniques like SMOTE (Synthetic Minority Over-sampling Technique) address imbalance by generating synthetic minority instances, improving F1-scores in empirical evaluations across 30 datasets (e.g., from a 0.556 baseline to 0.605 with SMOTE) by enhancing recall without severely degrading precision, though results vary by imbalance severity. Compared to balanced accuracy—which averages per-class accuracies to weight minority performance equally and remains more stable across imbalances—F1 is less inherently robust, but it can be comparable when β is tuned appropriately.
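The 1:99 example above can be reproduced in a few lines. The sketch below (assuming scikit-learn) scores a trivial majority-class predictor with accuracy, minority-class F1, and macro F1 on synthetic labels.

```python
# Minimal sketch illustrating the 1:99 example above: a trivial majority-class
# predictor scores high accuracy but zero F1 on the minority class.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(0)
y_true = (rng.random(10_000) < 0.01).astype(int)   # ~1% positive (minority) class
y_pred = np.zeros_like(y_true)                     # always predict the majority class

print("accuracy    :", accuracy_score(y_true, y_pred))                                # ~0.99
print("minority F1 :", f1_score(y_true, y_pred, pos_label=1, zero_division=0))        # 0.0
print("macro F1    :", f1_score(y_true, y_pred, average="macro", zero_division=0))    # ~0.50
```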

Applications

In Information Retrieval

In information retrieval (IR), the F-score provides a balanced summary of search system performance by harmonizing precision and recall, particularly for ranked retrieval. Precision at k (P@k) quantifies the fraction of relevant documents among the top k results returned for a query, emphasizing the quality of retrieved items, while recall measures the proportion of all relevant documents in the collection that are actually retrieved, focusing on completeness. The F-score, especially the F1 variant with equal weighting, acts as their harmonic mean, offering a single metric for non-interpolated assessment that captures performance at a given cutoff without smoothing across recall levels, making it suitable when both avoiding irrelevant results and ensuring comprehensive coverage are critical. The F-score originated in IR as a formulation for measuring retrieval effectiveness, introduced by van Rijsbergen in 1979 to address the need for a unified effectiveness metric beyond separate precision and recall curves. Compared to Mean Average Precision (MAP), which aggregates precision values across recall levels for a more stable summary of ranked output, the F-score excels at providing a concise balance at specific operating points, though MAP has become more prevalent in TREC evaluations because of its sensitivity to ranking quality. The generalized Fβ score adjusts this balance, with β > 1 (such as β = 3 or 5) prioritizing recall over precision in domains like legal search, where failing to retrieve pertinent documents carries higher risk than including extras. Practical applications include tuning web search engines to enhance user satisfaction by balancing relevance and coverage in query responses. Over time, IR evaluation has shifted toward metrics like Normalized Discounted Cumulative Gain (NDCG) since the 2000s, which better accommodate graded relevance judgments in modern search tasks; nonetheless, the F-score remains a staple for binary relevance scenarios, such as initial filtering in retrieval pipelines.
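For a single query, set-based evaluation at a cutoff k reduces to a few lines of code. The sketch below uses invented document IDs and a hypothetical relevance set to compute P@k, recall at k, and their F1.

```python
# Minimal sketch of set-based evaluation at a cutoff k for one query: precision@k,
# recall@k, and their F1. The document IDs and relevance set are illustrative.

def f1_at_k(ranked: list[str], relevant: set[str], k: int) -> tuple[float, float, float]:
    top_k = ranked[:k]
    hits = sum(1 for doc in top_k if doc in relevant)
    p_at_k = hits / k
    r_at_k = hits / len(relevant) if relevant else 0.0
    f1 = 2 * p_at_k * r_at_k / (p_at_k + r_at_k) if (p_at_k + r_at_k) else 0.0
    return p_at_k, r_at_k, f1

ranked = ["d3", "d7", "d1", "d9", "d4", "d8", "d2", "d6", "d5", "d10"]
relevant = {"d1", "d3", "d5", "d9"}
print(f1_at_k(ranked, relevant, k=5))  # (0.6, 0.75, ~0.667)
```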

In Machine Learning

In machine learning, the F-score serves as a key evaluation metric for assessing classifier performance, particularly in binary and multi-class classification tasks where class imbalance is common. It balances precision and recall to provide a single score that reflects a model's ability to correctly identify positive instances without excessive false positives, making it especially valuable in applications such as spam filtering and fraud detection. For instance, in sentiment analysis, F1 scores help evaluate models on datasets where negative sentiments may dominate, ensuring robust performance across varied linguistic patterns. The F1 score, a special case of the Fβ score with β = 1, has become the default choice for imbalanced datasets in machine learning pipelines, as it penalizes models that favor the majority class. In multi-label scenarios, such as tagging multiple objects in an image, the F1 score can be computed per label and then averaged (e.g., macro or micro averaging) to account for varying label frequencies. Libraries like scikit-learn implement this through the f1_score and fbeta_score functions, which support multiple averaging methods and, in the latter case, a customizable β, facilitating integration into hyperparameter tuning processes such as grid search for optimizing classifier thresholds. During tuning, F1 often guides the selection of models that achieve strong performance on minority classes, as seen in cross-validation setups. Historical case studies highlight the F-score's prominence in benchmarks. In NLP, the Conference on Computational Natural Language Learning (CoNLL) shared tasks have used F1 as a primary metric since the late 1990s for tasks like named entity recognition, where it evaluates sequence labeling accuracy on imbalanced entity types; for example, the 2003 CoNLL task reported top F1 scores around 89% for English. Similarly, in computer vision, the PASCAL Visual Object Classes (VOC) challenges from 2005 to 2012 employed mean average precision derived from precision-recall curves, with F1 scores informing detection performance on datasets featuring rare object classes like bottles or trains. Compared to accuracy, the F-score offers superior handling of class imbalance by equally weighting false positives and false negatives, which is critical in real-world scenarios where minority errors carry high costs. In Kaggle competitions, such as the 2017 Toxic Comment Classification Challenge, F1-based evaluation has been used to compare models, with winning entries achieving a macro F1 score of approximately 0.69 on the private leaderboard by prioritizing recall for toxic labels amid heavily skewed data. This advantage has been empirically validated in studies showing F1 outperforming accuracy by up to 20% on imbalanced benchmarks like those from the UCI Machine Learning Repository. Recent trends underscore the F-score's evolution in deep learning and ethical AI. During fine-tuning of models like BERT for tasks such as question answering, F1 is optimized as a main objective, with reported improvements of 2-5% over baselines on datasets like SQuAD, where it measures partial overlap alongside exact match. In the 2020s, its role has expanded to ethical AI frameworks, where F1 variants assess fairness in classification across demographic groups, as in subgroup F1 metrics proposed for detecting biases in hiring algorithms. Class imbalance can still skew F1 toward majority classes, but thresholding adjustments mitigate this in practice.
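As a usage illustration, the snippet below applies scikit-learn's f1_score and fbeta_score to a small invented multi-class example; the labels and averaging choices are for demonstration only.

```python
# Minimal usage sketch of scikit-learn's F-score helpers on a hypothetical
# multi-class example; labels and predictions are invented for illustration.
from sklearn.metrics import f1_score, fbeta_score

y_true = [0, 0, 0, 0, 0, 0, 1, 1, 1, 2]
y_pred = [0, 0, 0, 0, 1, 0, 1, 1, 0, 2]

print(f1_score(y_true, y_pred, average="macro"))             # unweighted mean of per-class F1
print(f1_score(y_true, y_pred, average="micro"))             # global counts (= accuracy here)
print(f1_score(y_true, y_pred, average="weighted"))          # support-weighted mean
print(fbeta_score(y_true, y_pred, beta=2, average="macro"))  # recall-weighted variant
```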

In Medical and Other Domains

In medical applications, the F-score is employed to evaluate the performance of algorithms for variant calling in genomic sequencing, where accurate identification of genetic mutations is critical for diagnosing diseases like cancer. For instance, robust variant calling pipelines can achieve F1 scores exceeding 0.99 for small variants using high-quality DNA samples, as demonstrated in best practices for clinical sequencing. Deep learning-based callers, such as Clair3 and DeepVariant, have reported SNP F1 scores of up to 99.7% on benchmark datasets, highlighting their precision and recall balance in distinguishing true variants from noise in next-generation sequencing data. Beyond diagnostics, the F-score assesses predictive models in epidemiology, particularly for infectious disease forecasting and outbreak detection. In global health forecasting efforts, machine learning models use F1 scores to measure their ability to detect disease presence, with higher scores indicating reliable early warnings for interventions during epidemics like COVID-19. In bioinformatics, the F-score evaluates protein structure prediction tools, where it quantifies the accuracy of predicted interfaces between protein chains. Community-wide assessments like the Critical Assessment of Structure Prediction (CASP) employ the Interface Contact Score, equivalent to the F1 score, to rank models, with top performers achieving scores that nearly double prior benchmarks through advances in deep learning. For example, ultrafast end-to-end predictors optimize thresholds to maximize F1 scores around 0.4 for precision-recall trade-offs in structural alignments. The F-score also finds application in finance for credit risk assessment, distinct from the unrelated Piotroski F-score, which evaluates firm fundamentals. Models for predicting loan defaults, such as those using support vector machines or gradient boosting, report F1 scores around 0.80-0.83, aiding lenders in balancing approval rates with default minimization. In fraud detection within financial transactions, ensemble models achieve F1 scores up to 0.95, outperforming single classifiers by reducing erroneous predictions. Autonomous driving systems utilize the F-score to validate pedestrian detection algorithms, crucial for safety in urban environments. Attention-based approaches integrated with sensor data yield F1 scores above 0.90, enhancing detection precision under varying conditions such as low light. Score-fusion methods combining multiple sensors further improve F1 scores to 0.97 for bounding-box accuracy, outperforming standalone vision models. Domain-specific challenges influence F-score weighting: in medical diagnostics, false negatives (missed diagnoses) incur higher costs than false positives, favoring β values greater than 1 to emphasize recall; conversely, in credit and fraud screening, false positives (unnecessary rejections or alerts) lead to opportunity losses, prompting use of F0.5 to prioritize precision and minimize spurious alerts. These adaptations ensure the metric aligns with asymmetric error impacts across fields.

Extensions

To Multi-Class Classification

The F-score is extended from binary classification to multi-class settings, where each instance is assigned to exactly one of multiple classes, by treating the problem as a series of one-vs-rest binary classifications, one per class. For a K-class problem, precision and recall are computed separately for each class k using elements of the K × K confusion matrix C, where C_{ij} denotes the number of instances with true class i predicted as class j. Specifically, the true positives for class k are TP_k = C_{kk}, the false positives are FP_k = \sum_{j \neq k} C_{jk}, and the false negatives are FN_k = \sum_{j \neq k} C_{kj}. Precision for class k is then P_k = \frac{TP_k}{TP_k + FP_k} and recall is R_k = \frac{TP_k}{TP_k + FN_k}, yielding the per-class F1 score F1_k = 2 \frac{P_k R_k}{P_k + R_k}. An overall F1 score is obtained by averaging the per-class F1 scores, often using macro-averaging for equal class weighting. This adaptation addresses the multi-dimensional nature of the confusion matrix in multi-class problems but introduces considerations related to label overlap. In standard multi-class classification, labels are mutually exclusive, meaning each instance has precisely one true label, which simplifies the computation of TP, FP, and FN as diagonal entries and off-diagonal sums of the confusion matrix. This differs from multi-label scenarios, where instances can belong to multiple labels simultaneously (e.g., non-exclusive categories), requiring each label to be treated as an independent binary decision without assuming mutual exclusivity. In multi-label classification, the F1 score is computed per label by binarizing each label's predictions—treating presence of the label as positive—and then averaging the resulting per-label F1 scores, similar to the one-vs-rest strategy but without exclusivity constraints. For each label l, TP_l counts instances where both the true and predicted sets include l, FP_l counts those where l is predicted but not true, and FN_l counts those where l is true but not predicted; these yield label-specific P_l and R_l for F1_l, with the overall score taken as an average across labels. This approach is particularly useful in tasks like image tagging, where an image might simultaneously receive tags such as "animal" and "landscape," evaluating the model's ability to predict overlapping sets of labels accurately. The multi-class extension of the F-score is applied in scenarios involving more than two classes, such as text categorization, where documents are classified into one of several topics based on their features.
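The per-label procedure described here can be sketched directly. The example below computes per-label and macro-averaged F1 for hypothetical image-tagging outputs, treating each tag as an independent binary decision.

```python
# Minimal sketch of per-label F1 for multi-label outputs (e.g., image tagging),
# treating each label as an independent binary decision. The tag sets are invented.

def per_label_f1(true_sets, pred_sets, labels):
    scores = {}
    for label in labels:
        tp = sum(1 for t, p in zip(true_sets, pred_sets) if label in t and label in p)
        fp = sum(1 for t, p in zip(true_sets, pred_sets) if label not in t and label in p)
        fn = sum(1 for t, p in zip(true_sets, pred_sets) if label in t and label not in p)
        denom = 2 * tp + fp + fn
        scores[label] = 2 * tp / denom if denom else 0.0   # F1 = 2TP / (2TP + FP + FN)
    return scores

true_sets = [{"animal", "landscape"}, {"animal"}, {"landscape"}, {"person"}]
pred_sets = [{"animal"}, {"animal", "person"}, {"landscape"}, {"landscape"}]
scores = per_label_f1(true_sets, pred_sets, labels=["animal", "landscape", "person"])
print(scores)
print("macro F1:", sum(scores.values()) / len(scores))
```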

Averaging Methods

In multi-class classification, aggregating per-class F1 scores into a single metric is essential for overall evaluation, particularly when classes have varying prevalences. The primary averaging methods—macro, micro, and weighted F1—differ in how they handle class contributions, influencing their sensitivity to imbalance and suitability for different scenarios. These methods extend the binary F1 computation by considering predictions across all classes, typically using a one-vs-rest approach.

The macro F1 score computes the unweighted arithmetic mean of the per-class F1 scores, treating all classes equally regardless of their size or support. It is defined as:
\text{Macro F1} = \frac{1}{C} \sum_{c=1}^{C} \text{F1}_c
where C is the number of classes and \text{F1}_c is the F1 score for class c. This approach emphasizes balanced performance across classes, making it particularly sensitive to errors on minority classes, as poor performance on a rare class lowers the average as much as poor performance on a common one.

In contrast, the micro F1 score aggregates true positives (TP), false positives (FP), and false negatives (FN) globally across all classes before computing precision and recall, then derives the F1. Its formula is:
\text{Micro F1} = \frac{2 \sum_{c=1}^{C} \text{TP}_c}{2 \sum_{c=1}^{C} \text{TP}_c + \sum_{c=1}^{C} \text{FP}_c + \sum_{c=1}^{C} \text{FN}_c}
This method weights contributions by class prevalence, effectively favoring majority classes, and equates to accuracy in single-label multi-class settings where each instance belongs to exactly one class. As a result, it reflects overall error rates but may mask deficiencies in handling rare classes.

The weighted F1 score addresses some limitations of macro and micro averaging by taking a weighted average of per-class F1 scores, with weights proportional to each class's support (number of true instances). It is given by:
\text{Weighted F1} = \sum_{c=1}^{C} \left( \frac{n_c}{N} \times \text{F1}_c \right)
where n_c is the number of true instances of class c and N is the total number of instances. This balances the equal treatment of macro averaging with the prevalence emphasis of micro averaging, providing a compromise that accounts for class distribution without being fully dominated by majority classes.

To illustrate the differences, consider an imbalanced three-class problem with classes A (90 instances), B (9 instances), and C (1 instance). Suppose a classifier achieves an F1 on A of 0.95, on B of 0.40, and on C of 0.00. The macro F1 is (0.95 + 0.40 + 0.00)/3 ≈ 0.45, penalizing the minority-class failures equally. Assuming the overall accuracy (which equals micro F1 in this single-label setting) is dominated by performance on A and approximates 0.86, the micro F1 is 0.86. The weighted F1 is (90/100) × 0.95 + (9/100) × 0.40 + (1/100) × 0.00 = 0.891, slightly higher than the micro F1 because the strong majority-class F1 dominates the support weighting. Such disparities highlight how macro averaging suits balanced evaluations emphasizing equity across classes, while micro and weighted averaging prioritize aggregate performance in prevalence-weighted contexts.

In multi-label classification, where instances can belong to multiple classes simultaneously, these averaging methods are adapted analogously: macro F1 averages per-label F1 scores equally, micro F1 pools counts across all labels and instances, and weighted F1 uses label frequencies. Complementary metrics such as Hamming loss (the average fraction of labels incorrectly predicted per instance) and subset accuracy (the proportion of instances whose predicted label set exactly matches the true set) provide additional perspectives, with Hamming loss focusing on per-label errors and subset accuracy evaluating exact set matches. The choice of averaging method depends on whether label independence or overall set prediction is prioritized.
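The three averaging schemes can be computed directly from a confusion matrix. The sketch below uses a hypothetical 3-class matrix with a roughly 90/9/1 class split to show how macro, micro, and weighted F1 diverge.

```python
# Minimal sketch of macro, micro, and weighted F1 computed straight from a K x K
# confusion matrix C (rows = true class, columns = predicted class). The matrix
# below is hypothetical and chosen to mimic a 90/9/1 class imbalance.
import numpy as np

C = np.array([[88, 2, 0],
              [ 5, 4, 0],
              [ 1, 0, 0]])

tp = np.diag(C).astype(float)
fp = C.sum(axis=0) - tp          # predicted as k but true class differs
fn = C.sum(axis=1) - tp          # true class k but predicted otherwise
support = C.sum(axis=1)

with np.errstate(divide="ignore", invalid="ignore"):
    precision = np.nan_to_num(tp / (tp + fp))
    recall = np.nan_to_num(tp / (tp + fn))
    f1 = np.nan_to_num(2 * precision * recall / (precision + recall))

macro_f1 = f1.mean()
micro_f1 = 2 * tp.sum() / (2 * tp.sum() + fp.sum() + fn.sum())   # equals accuracy here
weighted_f1 = np.average(f1, weights=support)

print(f"per-class F1: {np.round(f1, 3)}")
print(f"macro={macro_f1:.3f}  micro={micro_f1:.3f}  weighted={weighted_f1:.3f}")
```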

Comparisons and Limitations

The Fowlkes–Mallows (FM) index, introduced in 1983, serves as a measure of similarity between two clusterings and is defined as the geometric mean of precision and recall, FM = \sqrt{P \times R}, where P is precision and R is recall. In contrast, the F1-score, originating from information retrieval in 1979, uses the harmonic mean F_1 = \frac{2PR}{P + R}. The harmonic mean penalizes imbalances between P and R more severely than the geometric mean, making the FM index relatively more forgiving when precision and recall differ substantially. For instance, consider a dataset with true positives (TP) = 10, false positives (FP) = 0, and false negatives (FN) = 30. Here, P = 1.0 and R = 0.25, yielding F_1 = 0.4 but FM = \sqrt{1.0 \times 0.25} = 0.5. This difference highlights how the F1-score underscores the cost of low recall in classification and retrieval tasks, while the FM index provides a less stringent evaluation suited to clustering comparisons, where the absence of a predefined "positive" class makes the classification-style precision-recall framing less directly applicable. The FM index is thus preferred in unsupervised clustering evaluations, such as comparing hierarchical clusterings, whereas the F1-score dominates in supervised classification. The FM index postdates the F-measure by four years, reflecting its design for clustering contexts beyond the retrieval-focused origins of the F-measure. The Jaccard index, also known as the intersection over union, is computed as J = \frac{TP}{TP + FP + FN}, focusing solely on the overlap relative to the union of predicted and actual positives. Unlike the F1-score, which balances precision and recall through their harmonic mean, the Jaccard index counts false positives and false negatives at full weight in its denominator, making it a stricter score (J ≤ F₁) while sharing the F1-score's insensitivity to true negatives. The Dice coefficient, defined as D = \frac{2 \times TP}{2 \times TP + FP + FN}, is mathematically equivalent to the F1-score in binary classification settings, as both reduce to the same expression \frac{2PR}{P + R}. This equivalence arises because the Dice coefficient, originally introduced in ecology as a measure of set similarity, aligns with the F1-score's structure when applied to contingency tables in classification. The phi coefficient (\phi), a correlation measure for binary variables based on the Pearson product-moment correlation for 2×2 contingency tables, differs fundamentally from the F1-score by assessing linear association rather than rate-based performance like precision and recall. Specifically, \phi = \frac{TP \times TN - FP \times FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}}, incorporating true negatives (TN) to capture overall contingency balance, whereas the F1-score ignores TN and focuses on positive-class efficacy. This makes \phi more robust to class imbalance in correlational analyses, such as contingency table evaluations, but less intuitive for tasks prioritizing positive predictions, like information retrieval, where the F1-score excels.
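For the example contingency above, the related metrics can be compared side by side. The sketch below adds a hypothetical TN = 60 solely so that the phi coefficient, which requires true negatives, can also be computed.

```python
# Minimal sketch computing the related metrics discussed above for the example
# contingency TP=10, FP=0, FN=30; TN=60 is a hypothetical value added only so the
# phi coefficient (which needs true negatives) can be evaluated.
import math

tp, fp, fn, tn = 10, 0, 30, 60

p = tp / (tp + fp)
r = tp / (tp + fn)
f1 = 2 * p * r / (p + r)              # harmonic mean: 0.400
fm = math.sqrt(p * r)                 # Fowlkes-Mallows (geometric mean): 0.500
jaccard = tp / (tp + fp + fn)         # intersection over union: 0.250
dice = 2 * tp / (2 * tp + fp + fn)    # equals F1: 0.400
phi = (tp * tn - fp * fn) / math.sqrt(
    (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
)

print(f"F1={f1:.3f} FM={fm:.3f} Jaccard={jaccard:.3f} Dice={dice:.3f} phi={phi:.3f}")
```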

Criticism and Alternatives

The F-score, particularly the F1 variant, has faced significant criticism for its failure to incorporate true negatives (TN) into its calculation, which can result in inflated scores for classifiers that perform poorly overall, especially on imbalanced datasets where the negative class dominates. For instance, in scenarios with a large number of TNs, two classifiers with markedly different error patterns may yield identical F1 scores despite one being substantially worse. This limitation stems from the metric's reliance solely on true positives, false positives, and false negatives, rendering it insensitive to the correct identification of negatives, a critical oversight in applications where performance on the negative class also matters. Additionally, the F-score is inherently threshold-dependent, requiring a decision threshold whose choice affects precision and recall asymmetrically; without proper calibration, it can mislead evaluations across varying operating points. The formulation assumes equal importance of precision and recall unless adjusted via the β parameter, but this adjustment lacks a strong theoretical foundation and often proves arbitrary in practice. In multi-class settings, micro-averaging of F1 favors majority classes by weighting contributions proportionally to class prevalence, exacerbating bias toward dominant labels in imbalanced data. Furthermore, the F-score is not inherently suitable for cost-sensitive scenarios, where false positives and false negatives carry unequal penalties, necessitating modifications such as cost-sensitive reformulations to optimize it effectively. As alternatives, the Matthews correlation coefficient (MCC) addresses the F-score's neglect of TN by providing a balanced measure that incorporates all four confusion-matrix elements, making it more robust for imbalanced binary and multi-class problems. Cohen's kappa offers a chance-corrected metric emphasizing agreement beyond random guessing, suitable for assessing classifier reliability in inter-annotator or multi-class contexts. For highly imbalanced data, the area under the precision-recall curve (AUPRC) serves as a threshold-independent alternative, focusing on positive-class performance without the dilution from abundant negatives. In probabilistic settings, the Brier score evaluates the calibration and accuracy of predicted probabilities, penalizing overconfident errors more directly than the F-score. Recent reviews, such as those from 2023, highlight the need for unified frameworks that mitigate the F-score's threshold dependence and averaging biases, advocating metrics such as calibrated variants of macro F1 or prevalence-invariant alternatives to standardize assessments across diverse datasets. Empirical analyses in machine learning applications, particularly in tasks with class imbalance, have demonstrated that F1 can overestimate performance by prioritizing majority-class accuracy, leading to misguided model selections. The F-score is therefore best avoided in highly imbalanced scenarios or multi-label problems with uneven error costs, where alternatives such as MCC or AUPRC provide more reliable insight into minority-class handling and overall discriminability.
