Random forest

A random forest is an ensemble learning method for classification, regression, and other supervised machine learning tasks that constructs a multitude of decision trees during training and aggregates their outputs by majority voting for classification or averaging for regression to yield a final prediction. This approach builds on the concept of bagging, where each tree is trained on a random bootstrap sample of the dataset, and introduces randomness in feature selection at each split to decorrelate the trees and enhance overall performance. Introduced by Leo Breiman in 2001, random forests combine ideas from earlier work on bootstrap aggregating (bagging) and random subspace methods, resulting in a robust algorithm that converges to a low generalization error as the number of trees increases, provided the trees are both strong predictors and lowly correlated.

The algorithm operates by growing an ensemble of decision trees, where for each tree a random sample of the training data is selected with replacement (bootstrap sampling), and at every node a random subset of features is considered for the best split, typically limited to the square root of the total number of features for classification. This randomness reduces the variance of individual decision trees, which are prone to overfitting, and allows the forest to handle noisy data and outliers more effectively than single trees or boosting methods such as AdaBoost. Key hyperparameters include the number of trees (n_estimators, often defaulting to 100), the maximum depth of trees to control complexity, and the fraction of features considered per split (max_features), which can be tuned to optimize accuracy and computational efficiency. Additionally, random forests provide built-in mechanisms for estimating generalization error (using out-of-bag samples not selected in bootstrap sampling) and computing variable importance based on mean decrease in impurity or permutation importance.

Random forests offer several advantages over traditional models, including high predictive accuracy comparable to or exceeding boosting methods in many scenarios, inherent resistance to overfitting due to ensemble averaging, and the ability to process high-dimensional data without extensive preprocessing, with modern implementations also handling missing values natively. They are computationally efficient and parallelizable, as trees can be built independently, making them suitable for large datasets. Since their introduction, random forests have seen advancements such as extensions for survival analysis, unsupervised learning, and online learning, as well as integration with genetic algorithms for feature selection, further broadening their applicability.

Widely adopted across domains, random forests are employed in bioinformatics for gene selection and protein classification, ecology for species distribution modeling, medicine for diagnostic prediction, and finance for credit scoring and fraud detection, owing to their interpretability through feature importance rankings and robustness in real-world, noisy environments. Their popularity stems from consistent performance on benchmark datasets and ease of implementation in libraries like scikit-learn, where they serve as a baseline for more complex techniques.

Introduction and Background

Definition and Overview

A random forest is a supervised ensemble learning algorithm for classification and regression tasks that constructs a multitude of decision trees during training and outputs the class that is the mode of the classes (classification) or the mean prediction (regression) of the individual trees. Each tree in the forest is built using a bootstrap sample of the data and a random subset of features at each split to introduce diversity among the trees. The core intuition behind random forests is to combine multiple weak learners (typically unpruned decision trees, which individually exhibit low bias but high variance) into a single robust model that mitigates overfitting by averaging out errors across the ensemble. This approach leverages the law of large numbers, ensuring that as more trees are added, the ensemble's performance stabilizes without further degradation from overfitting. Unlike a single decision tree, which can be highly sensitive to noise and prone to high variance due to its recursive partitioning, the random forest reduces this variance through aggregation, leading to improved generalization on unseen data. For prediction in classification, the final output is determined by majority vote across all trees; a brief pseudocode representation is:
For each tree in forest:
    prediction = tree.predict(input)
Aggregate predictions = mode of all predictions
Return aggregate
In regression, predictions are averaged instead of taking the mode. At a high level, the procedure involves generating bootstrap samples from the training data to train each tree independently, constructing the trees with randomized feature selections to decorrelate them, and then combining their outputs via voting or averaging for the final prediction. This ensemble structure makes random forests particularly effective in handling high-dimensional data and noisy environments common in practical applications.
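The aggregation step can be made concrete with a short Python sketch. This is a minimal illustration only: the helper name forest_predict is hypothetical, and forest is assumed to be any list of fitted tree objects exposing a scikit-learn-style predict method.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def forest_predict(forest, X, task="classification"):
    # Collect one prediction per tree: shape (n_trees, n_samples)
    all_preds = np.array([tree.predict(X) for tree in forest])
    if task == "classification":
        def vote(column):
            values, counts = np.unique(column, return_counts=True)
            return values[np.argmax(counts)]      # most frequent label (the mode)
        return np.apply_along_axis(vote, axis=0, arr=all_preds)
    return all_preds.mean(axis=0)                 # regression: unweighted average

# Tiny demonstration with two separately seeded trees standing in for a forest
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])
forest = [DecisionTreeClassifier(random_state=s).fit(X, y) for s in (0, 1)]
print(forest_predict(forest, np.array([[1.5]])))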

Historical Development

The concept of ensemble methods for decision trees emerged in the early 1990s, with Stephen Salzberg and colleagues exploring randomized approaches to improve tree-based classifiers. In 1993, Heath, Kasif, and Salzberg introduced methods for inducing oblique decision trees through randomized perturbations and heuristics, laying groundwork for diversifying tree structures in ensembles to enhance generalization. This work highlighted the potential of randomization to mitigate overfitting in single trees, influencing subsequent ensemble techniques.

In 1995, Tin Kam Ho developed the random decision forest approach, which constructs decision forests by building trees in randomly selected subspaces of the feature space, particularly effective for high-dimensional data. Ho's approach addressed limitations in traditional decision trees by reducing correlation among ensemble members through feature subset selection, demonstrating improved accuracy on classification tasks. A further milestone came in 1996 when Leo Breiman introduced bootstrap aggregating (bagging), a method for generating multiple versions of a predictor by training on bootstrap samples and aggregating their outputs to reduce variance. Building on similar ideas, Amit and Geman in 1997 proposed randomized trees for shape recognition, selecting splits from a random subset of a large pool of possible features to create diverse classifiers. Their work emphasized randomization at split points to handle complex, high-variety data like images.

The pivotal advancement occurred in 2001 with Leo Breiman's seminal paper "Random Forests," which formalized the algorithm by combining bootstrap aggregating (bagging) with random feature selection at each split, creating decorrelated trees for robust predictions in classification and regression. Breiman's formulation integrated prior ideas, such as Dietterich's 2000 study of randomization via perturbed splits, which compared methods like bagging and boosting and demonstrated the benefits of randomization in reducing error. By 2025, Breiman's paper had amassed over 167,000 citations, underscoring its transformative impact on machine learning.

Following Breiman's introduction, random forests saw rapid adoption in specialized fields, notably bioinformatics, where the 2002 release of the randomForest package for R facilitated gene expression analysis and classification tasks in genomics. This tool enabled researchers to handle high-dimensional data, marking an early evolution toward domain-specific applications. In the 2010s, integration into libraries like scikit-learn further broadened accessibility, solidifying random forests as a standard ensemble method across disciplines.

Core Algorithm

Decision Tree Fundamentals

A decision tree is a predictive modeling tool that represents decisions and their possible consequences as a tree-like structure, consisting of internal nodes, branches, and leaf nodes. Internal nodes represent tests or splits on input variables, branches denote the outcome of those tests, and leaf nodes provide the final predictions, such as class labels for classification or mean values for regression. This hierarchical structure allows the tree to partition the input space into regions based on feature values, enabling interpretable and non-parametric modeling of complex relationships in data.

The learning procedure for a decision tree involves recursive partitioning of the training data, starting from the root node and repeatedly splitting subsets of the data until a stopping criterion is met, such as maximum depth or minimum samples per leaf. At each internal node, an impurity measure quantifies the homogeneity of the node with respect to the target variable; common measures include the Gini index for classification, defined as G = 1 - \sum_{i=1}^{c} p_i^2, where c is the number of classes and p_i is the proportion of class i in the node, and information entropy, H = - \sum_{i=1}^{c} p_i \log_2 p_i. These measures evaluate how "pure" a split is, with lower impurity indicating better separation of classes. For regression, analogous measures such as variance reduction are used.

Splitting criteria are determined using a greedy search that selects, at each node, the feature and threshold combination which maximizes the reduction in impurity across the resulting child nodes, often computed as the weighted average decrease: \Delta I = I(parent) - \sum_{k} \frac{N_k}{N} I(child_k), where N and N_k are the sizes of the parent and child subsets. This exhaustive search over features and possible thresholds ensures locally optimal splits but does not guarantee a globally optimal tree. Binary splits are typical in methods like CART, though multi-way splits appear in algorithms such as ID3.

To mitigate overfitting, pruning techniques simplify the fully grown tree by removing branches that contribute little to predictive accuracy. Cost-complexity pruning, a post-pruning technique, balances accuracy and size by minimizing the objective R_\alpha(T) = R(T) + \alpha |T|, where R(T) is the resubstitution error (e.g., misclassification rate), |T| is the number of terminal nodes, and \alpha \geq 0 is a complexity parameter tuned via cross-validation. As \alpha increases, suboptimal subtrees are pruned, producing a sequence of nested trees; the optimal \alpha is selected to minimize error on validation data. This approach prevents the tree from capturing noise in the training set.

Despite their strengths in interpretability and handling mixed data types, single decision trees suffer from high variance and a propensity for overfitting, leading to instability where small changes in the training data can produce substantially different trees. For instance, consider a toy dataset with features representing height and weight to predict gender; a single tree might split first on height > 170 cm, achieving near-perfect training accuracy but failing to generalize to a slightly perturbed test set with added noise, resulting in high error due to memorizing outliers rather than underlying patterns. This instability arises because the greedy splitting amplifies small data variations into divergent structures, particularly in deeper trees.
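The impurity quantities above are simple to compute directly. The following minimal Python sketch, with hypothetical helper names gini and impurity_decrease, evaluates the Gini index and the weighted impurity reduction \Delta I for a toy binary split:

import numpy as np

def gini(labels):
    # Gini impurity G = 1 - sum_i p_i^2 over the class proportions in a node
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def impurity_decrease(parent, left, right):
    # Weighted reduction: Delta I = I(parent) - sum_k (N_k / N) * I(child_k)
    n = len(parent)
    return gini(parent) - (len(left) / n) * gini(left) - (len(right) / n) * gini(right)

# Toy example: splitting a mixed node into two purer children
parent = np.array([0, 0, 0, 1, 1, 1])
left, right = np.array([0, 0, 0, 1]), np.array([1, 1])
print(gini(parent))                            # 0.5
print(impurity_decrease(parent, left, right))  # 0.25, so the split reduces impurity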

Bagging Procedure

The bagging procedure, short for bootstrap aggregating, forms the foundational technique in random forests by generating multiple bootstrap samples from the original training dataset to train diverse decision trees, thereby reducing variance. Introduced by Leo Breiman in 1996, bagging involves resampling the dataset with replacement to create subsets, each used to fit an independent base learner, typically a decision tree, followed by aggregating their outputs to produce a final prediction. This method leverages the instability of decision trees (small changes in training data lead to significant variations in the fitted trees) to stabilize the overall prediction through averaging or voting.

Bootstrap sampling is performed by drawing n observations with replacement from the original dataset of size n, resulting in B such subsets (where B is the number of trees, often 100 or more). On average, approximately 63.2% of the original observations appear in each bootstrap sample, leaving about 36.8% out-of-bag (OOB), calculated as the probability that a specific observation is not selected: \left(1 - \frac{1}{n}\right)^n \approx e^{-1} \approx 0.368. These OOB samples provide an internal validation set for unbiased error estimation without separate holdout data. Each tree is then trained on its respective bootstrap sample, ensuring diversity among the trees due to the randomness in sampling.

Aggregation combines the predictions from the B trees: for regression tasks, the final prediction is the average of the individual tree outputs, which smooths out fluctuations; for classification, it is the majority vote across the trees, selecting the class with the most endorsements. This aggregation step is crucial, as it exploits the law of large numbers: as B \to \infty, the aggregated predictor converges to the expectation of a single tree's prediction over the bootstrap distribution, theoretically reducing variance while preserving the low bias of the base learners. Breiman demonstrated that for unstable procedures like decision trees, bagging can substantially improve accuracy by mitigating overfitting to the idiosyncrasies of any single training set. The algorithm proceeds in the following steps:
  1. For b = 1 to B, generate a bootstrap sample by drawing n observations with replacement from the training data.
  2. Train a decision tree on the b-th bootstrap sample, using the full set of features at each split (unlike the random subspace extension).
  3. For a new input, obtain predictions from all B trees and aggregate them via averaging (regression) or majority vote (classification).
To illustrate, consider a small regression dataset with three observations: (x1=1, y1=2), (x2=2, y2=4), (x3=3, y3=6), where the true underlying function is linear (y=2x). A single decision tree might overfit noise if the data includes minor perturbations, predicting, say, y=5.5 for x=2.5 on one draw of the data and a quite different value on another. With bagging using B=3 bootstraps, one sample might be {(1,2), (2,4), (2,4)}, yielding a tree predicting 4 for x=2.5; another {(2,4), (3,6), (3,6)} predicting 5.33; and the third {(1,2), (1,2), (3,6)} predicting 3.33. Averaging these gives approximately 4.22. Although any individual bootstrapped tree's output depends heavily on which observations it happens to see, the average fluctuates far less across repetitions of the whole procedure, demonstrating stabilization despite individual tree variability. This example highlights bagging's variance reduction in practice, as extended to random forests.
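The same stabilization can be reproduced numerically. The sketch below, a minimal illustration assuming scikit-learn is available and using a hypothetical bagged_predict helper on a noisy version of y = 2x, averages trees fit to independent bootstrap samples:

import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
# Noisy observations of the linear function y = 2x (illustrative toy data)
X = np.linspace(0, 3, 30).reshape(-1, 1)
y = 2 * X.ravel() + rng.normal(scale=0.5, size=30)

def bagged_predict(X_train, y_train, x_new, n_trees=50):
    preds = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(y_train), size=len(y_train))  # bootstrap with replacement
        tree = DecisionTreeRegressor().fit(X_train[idx], y_train[idx])
        preds.append(tree.predict(x_new))
    return np.mean(preds, axis=0)  # aggregation by averaging

print(bagged_predict(X, y, np.array([[2.5]])))  # close to the true value of 5.0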

Random Subspace Method

The random subspace method, a key component of random forests, involves selecting a random subset of features at each node during tree construction to determine potential splits, thereby introducing additional diversity among the ensemble members beyond data sampling techniques. This approach randomly chooses m features out of the total p available features, where m \ll p, and restricts the split search to this subset, with the process repeated independently at every node. Originally introduced by Tin Kam Ho in 1995 as a technique for constructing decision forests in high-dimensional problems, the method builds decision trees within randomly projected subspaces of the feature space to mitigate overfitting and enhance generalization. Ho's formulation emphasized building entire trees within fixed random subspaces using the full training set, which proved effective for datasets with thousands of features, such as handwritten digit recognition.

The primary motivation in random forests is to further decorrelate the trees by reducing the correlation in their predictions, particularly when features are highly correlated, leading to more independent error structures across the ensemble. This decorrelation helps in lowering the overall variance without substantially compromising individual tree accuracy. In implementation, the feature subset is selected without replacement at each node, ensuring that the chosen m features are distinct, after which standard splitting criteria (e.g., Gini impurity for classification) are applied solely within this subset to find the best split. This per-node randomization, as adapted in random forests, contrasts with Ho's whole-tree subspace selection but achieves similar benefits while allowing trees to adapt to different feature combinations as they grow deeper. The impact on split quality is a modest reduction in the optimality of individual splits compared to searching all p features, but this promotes broader exploration of the feature space across the forest, resulting in improved ensemble performance on unseen data.

The size of the feature subset m is a tunable hyperparameter that balances individual tree strength and inter-tree correlation. For classification tasks, a common choice is m = \lfloor \sqrt{p} \rfloor, while for regression, m = p/3 is often used to maintain higher individual accuracy. For example, with p = 100 features in a classification problem, m = \lfloor \sqrt{100} \rfloor = 10, limiting each node's split search to just 10 randomly selected features. Compared to exhaustive search over all features, this introduces a slight increase in bias due to suboptimal splits but yields a substantial decrease in variance through reduced tree correlation, as empirically demonstrated by Breiman on benchmark datasets, where minimum error occurred at m values of 4 to 8, outperforming single trees and bagging alone.
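In scikit-learn this subset size is controlled by the max_features parameter. The brief sketch below, on synthetic data generated only for illustration, shows that max_features="sqrt" with p = 100 resolves to 10 features per split:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic data with p = 100 features; only the first two carry signal
X = np.random.default_rng(0).normal(size=(200, 100))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, max_features="sqrt", random_state=0).fit(X, y)
print(clf.estimators_[0].max_features_)  # 10: the number of features considered per split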

Forest Construction and Prediction

The random forest algorithm integrates bagging and the random subspace method to construct an ensemble of decision trees, enhancing predictive performance through diversity and averaging. Specifically, it builds B trees, each trained on a bootstrap sample drawn with replacement from the original dataset of size n, ensuring approximately 63% unique samples per tree on average. At each internal node of a tree, a random subset of m features is selected from the total p features, and the best split is chosen among those m candidates using standard criteria like Gini impurity for classification or variance reduction for regression. This randomization decorrelates the trees, reducing variance compared to single trees or plain bagging. The complete procedure for forest construction can be outlined in pseudocode as follows:
Algorithm BuildRandomForest(D, B, m):
    Input: Dataset D with n samples and p features, number of trees B, features per split m
    Output: Forest ensemble F = {T_1, T_2, ..., T_B}

    F ← empty list
    for k = 1 to B do:
        D_k ← BootstrapSample(D)  // Sample n instances with replacement
        T_k ← BuildTree(D_k, m)   // Grow tree with random feature subsets at nodes
        Append T_k to F
    return F

Algorithm BuildTree(D_k, m):
    // Recursive function to grow unpruned tree
    If stopping criterion met (e.g., all samples same class or min samples):
        Return leaf node with majority class or mean target
    Else:
        Select m random features from p
        Find best split among those m
        Split D_k into child subsets
        Left child ← BuildTree(left subset, m)
        Right child ← BuildTree(right subset, m)
        Return internal node with split
Trees are grown to full depth without pruning to maximize individual strength. For prediction on a new instance x, each of the B trees independently provides an output, which is then aggregated across the forest. In classification tasks, the predicted class is the one receiving the majority vote from the trees, equivalent to the argmax over the averaged class probabilities from each tree. For regression, the prediction is the unweighted average of the individual tree outputs. This simple aggregation leverages the law of large numbers to stabilize predictions as B increases. Since each tree is constructed independently on its own bootstrap sample, the forest building process is inherently parallelizable, allowing trees to be grown concurrently on multi-core systems or distributed environments for efficient scaling to large datasets.

Key hyperparameters include the number of trees B (often denoted n_estimators) and the number of features m (or max_features) considered at each split. Common defaults are B=100 for computational efficiency, with m set to the square root of p for classification and p/3 for regression, though values like m=√p are recommended as near-optimal in many cases. Increasing B generally improves performance up to a point, with diminishing returns beyond 100-500 trees depending on the dataset size and complexity. As the number of trees B approaches infinity, the generalization error of the random forest converges to an expected limit determined by the strength and correlation of individual trees, stabilizing predictions without further reduction in variance. In classification, this convergence relates to Breiman's margin concept, defined as the difference between the average vote for the correct class and the maximum average vote for any incorrect class across trees; a larger margin indicates greater classification strength and lower error rates for the ensemble.
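A typical end-to-end usage with scikit-learn, reflecting the hyperparameters discussed above (n_estimators as B, max_features as m), might look like the following sketch on a dataset bundled with the library:

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# B = 200 trees, m = sqrt(p) features per split; trees are fit in parallel (n_jobs=-1)
forest = RandomForestClassifier(n_estimators=200, max_features="sqrt", n_jobs=-1, random_state=0)
forest.fit(X_train, y_train)

print(forest.score(X_test, y_test))      # accuracy of the aggregated majority vote
print(forest.predict_proba(X_test[:1]))  # class probabilities averaged across trees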

Theoretical Properties

Bias-Variance Tradeoff

In machine learning, the expected prediction error for a model can be decomposed into bias squared, variance, and irreducible noise: \mathbb{E}[(y - \hat{f}(x))^2] = \Bias^2(\hat{f}(x)) + \Var(\hat{f}(x)) + \sigma^2, where \Bias^2(\hat{f}(x)) = (\mathbb{E}[\hat{f}(x)] - f(x))^2 measures systematic deviation from the true function f(x), \Var(\hat{f}(x)) = \mathbb{E}[(\hat{f}(x) - \mathbb{E}[\hat{f}(x)])^2] captures sensitivity to training data fluctuations, and \sigma^2 is the inherent noise variance. Single decision trees typically exhibit low bias but high variance due to their tendency to overfit by capturing noise in the training data, leading to unstable predictions across different samples. The bagging procedure in random forests addresses this high variance by averaging predictions from B bootstrap-aggregated trees, reducing the ensemble variance approximately by a factor of 1/B when tree errors are uncorrelated: \Var(\hat{f}_{\text{ensemble}}(x)) \approx \frac{1}{B} \Var(\hat{f}_{\text{single}}(x)) + \frac{B-1}{B} \Cov(\hat{f}_i(x), \hat{f}_j(x)) for i \neq j. In practice, residual correlations among trees temper this reduction, but the averaging still substantially lowers variance without altering bias, as each tree is trained on similar bootstrap samples. The additional randomization in feature selection at each split further decorrelates the trees, slightly increasing bias by restricting the search space but yielding a net decrease in overall error, as the variance reduction outweighs the minor bias inflation. Empirically, the generalization error of random forests decreases monotonically as the number of trees B increases, reflecting diminishing variance contributions, before plateauing at a level close to the irreducible error once correlations stabilize. In the limit of an infinite number of trees, the random forest converges in probability to the true function under certain conditions on the data distribution and tree construction, with variance approaching the correlation term \rho(x) \sigma^2(x) if \rho(x) < 1.
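For B identically distributed tree predictors with common variance \sigma^2 and pairwise correlation \rho, the decomposition above can be restated (a standard identity, added here for clarity) as \Var\left(\frac{1}{B}\sum_{b=1}^{B}\hat{f}_b(x)\right) = \rho\,\sigma^2 + \frac{1-\rho}{B}\,\sigma^2. Increasing B drives only the second term to zero, leaving the floor \rho\sigma^2; lowering that floor requires decorrelating the trees, which is precisely what per-split feature subsampling accomplishes.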

Variable Importance

Variable importance in random forests quantifies the contribution of each feature to the predictive performance of the ensemble, aiding in feature selection and model interpretability. Two primary methods are employed: mean decrease in impurity (MDI), which measures the average reduction in node impurity attributed to a feature across the forest, and permutation importance, which assesses the impact of disrupting a feature's relationship with the target by randomly shuffling its values. Mean decrease in impurity (MDI), also known as Gini importance for classification tasks, computes the total impurity reduction from splits using a feature f, summed over all trees and then averaged. The formula is I(f) = \frac{1}{T} \sum_{t=1}^{T} \sum_{n \in N_t(f)} \frac{|n|}{N} \left( i(n) - \frac{|L_n| i(L_n) + |R_n| i(R_n)}{|n|} \right), where T is the number of trees, N_t(f) are the nodes in tree t split on f, i(\cdot) is the impurity measure (e.g., Gini index), L_n and R_n are left and right child nodes, | \cdot | denotes node size, and N is the total number of training samples. This method favors features that frequently appear at the top of trees, providing a fast, built-in assessment without additional computation. Permutation importance evaluates the degradation in model accuracy when a feature's values are randomly permuted in out-of-bag (OOB) samples, breaking its predictive association while preserving others. For stability, multiple permutations are averaged. The score is the relative increase in error: \Delta(f) = \frac{\text{OOB error}_{\text{permuted}}(f) - \text{OOB error}_{\text{original}}}{\text{OOB error}_{\text{original}}}, expressed as a percentage. This approach uses OOB estimates for unbiased evaluation and is applicable to both classification and regression. MDI is computationally efficient but biased toward continuous variables or high-cardinality categorical features, which offer more split opportunities and thus higher impurity reductions, potentially inflating their importance. Permutation importance avoids this bias, providing a more robust measure of true predictive power, though it is more expensive, scaling with the number of features and permutations required. Importances are often visualized via horizontal bar plots, with features ranked by score. For the Iris dataset, MDI typically ranks petal length and width highest, reflecting their strong separation of species, while sepal measurements show lower importance. To facilitate comparison, importances are standardized by normalizing them to sum to 1 across all features, converting raw scores into relative contributions.
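In scikit-learn both measures are available directly, as in the sketch below on the Iris data. Note that permutation_importance here shuffles features on the supplied data rather than on OOB samples, which scikit-learn does not expose for this purpose, so the numbers are illustrative rather than the OOB-based quantity described above:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = load_iris(return_X_y=True)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Mean decrease in impurity (MDI), normalized to sum to 1 across features
print(dict(zip(load_iris().feature_names, forest.feature_importances_)))

# Permutation importance: mean score drop over repeated shuffles of each feature
result = permutation_importance(forest, X, y, n_repeats=10, random_state=0)
print(result.importances_mean)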

Out-of-Bag Error Estimation

In random forests, out-of-bag (OOB) samples arise during the bootstrap aggregation process, where approximately 37% of the training data instances are excluded from the bootstrap sample used to grow each individual decision tree, providing an independent evaluation set for that tree. This exclusion fraction, derived from bootstrap sampling with replacement, is expected to be \left(1 - \frac{1}{n}\right)^n \approx e^{-1} \approx 0.368, ensuring that each tree is trained on a subset that does not include these OOB instances. The OOB error estimation leverages these samples to assess the ensemble's performance without requiring a separate validation or test set. For each data instance, predictions are obtained by aggregating the outputs from all trees in the forest for which that instance was OOB, treating it as an internal cross-validation mechanism. The overall OOB error is then computed as the average loss across all instances, formalized as: \text{OOB error} = \frac{1}{n} \sum_{i=1}^{n} L(y_i, \hat{y}_{\text{OOB},i}), where n is the number of instances, L is the loss function (e.g., mean squared error for regression or misclassification rate for classification), y_i is the true label, and \hat{y}_{\text{OOB},i} is the prediction aggregated from the relevant OOB trees. This approach offers several advantages as an unbiased alternative to traditional cross-validation, delivering reliable error estimates that closely correlate with out-of-sample test error, as demonstrated in empirical evaluations on benchmark datasets where OOB estimates tracked test errors with minimal bias. In Breiman's original analysis, OOB estimates were consistently slightly higher than test set errors but exhibited strong agreement in trends across varying forest sizes and parameters. OOB error estimation is particularly useful for practical applications such as hyperparameter tuning, where it guides the selection of parameters like the number of trees or feature subsets by monitoring performance improvements, and for early stopping during forest construction to prevent overfitting without excessive computational cost.
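A minimal scikit-learn sketch of OOB error estimation, using the breast cancer dataset bundled with the library purely as an example:

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)

# oob_score=True evaluates each sample using only the trees that did not see it
forest = RandomForestClassifier(n_estimators=300, oob_score=True, random_state=0).fit(X, y)
print(1.0 - forest.oob_score_)  # OOB misclassification rate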

Extensions and Variants

ExtraTrees

Extremely Randomized Trees (ExtraTrees) is a variant of random forests that introduces additional randomization in the tree construction process to enhance efficiency and generalization. Unlike standard decision trees or random forests, which optimize split points by exhaustively searching for the best threshold within each feature's range, ExtraTrees selects split thresholds randomly from the empirical range of the selected feature values at each node. This method builds an ensemble of such trees without using bootstrap sampling, instead employing the entire training dataset for every tree, further emphasizing randomization to decorrelate the trees.

The core algorithmic difference lies in the split selection procedure: at each internal node, a random subset of features is chosen, typically K attributes, where K plays a role similar to max_features in random forests but is often set larger (e.g., K = n for regression tasks, with n being the number of features), and for each candidate feature a split value is drawn uniformly at random from the observed values in the node's data subset, rather than optimized for criteria like Gini impurity or mean squared error. This eliminates the computational overhead of exhaustive threshold search, making tree growth significantly faster; the time complexity per tree is O(N \log N) for N samples in balanced trees, with a reduced cost compared to the O(N \log N \cdot m) typical of optimized splits in random forests, where m is the number of features evaluated per node. By drawing on the random subspace method for feature selection while extending randomization to split points, ExtraTrees produces trees that are less prone to overfitting due to the increased stochasticity.

This approach yields several benefits, including reduced variance and thus lower overfitting, particularly in noisy environments, as the random splits average out irregularities in the data. Computationally, the lack of split optimization enables faster training, scaling well to large datasets. Empirically, ExtraTrees has demonstrated superior performance over random forests and bagging on regression tasks across multiple benchmarks, outperforming these baselines on 22 out of 24 datasets tested. It is particularly advantageous for high-dimensional or noisy data, where the extra randomization helps mitigate the curse of dimensionality and noise sensitivity. Hyperparameters like the number of trees (T), minimum samples per leaf (n_min, often 5 for regression), and K control the trade-off between bias and variance, with larger K values promoting more thorough exploration at the cost of some added computation.
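A brief scikit-learn comparison sketch, using ExtraTreesRegressor with its default bootstrap=False (each tree sees the full training set) and the K = n, n_min = 5 settings suggested for regression; the synthetic dataset and the resulting scores are purely illustrative:

from sklearn.datasets import make_regression
from sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=500, n_features=20, noise=10.0, random_state=0)

# max_features=1.0 uses all features as split candidates; thresholds are drawn at random
et = ExtraTreesRegressor(n_estimators=200, max_features=1.0, min_samples_leaf=5, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0)

print(cross_val_score(et, X, y, cv=5).mean())  # ExtraTrees R^2
print(cross_val_score(rf, X, y, cv=5).mean())  # random forest R^2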

Unsupervised Random Forests

Unsupervised random forests adapt the ensemble structure of standard random forests to tasks lacking labeled data, such as clustering and anomaly detection, by leveraging proximity measures derived from tree structures. In a common construction, a synthetic second class is generated (for example, by sampling each feature independently from its marginal distribution), and a forest is trained to distinguish the observed data from the synthetic data, turning the problem into a supervised classification task. The forest is then constructed using bootstrap sampling and random feature selection at each split, as in the supervised case. The resulting proximity matrix captures the similarity between pairs of observations based on the fraction of trees in which they co-occur in the same terminal node; this proximity can be transformed into a dissimilarity measure, such as \sqrt{1 - \text{proximity}}, for downstream unsupervised analyses. For clustering, the dissimilarity matrix serves as input to hierarchical clustering or partitioning algorithms like PAM (Partitioning Around Medoids).

One common variant is the random partitioning forest (RPF), where trees are built by randomly partitioning the feature space without relying on class labels, and the similarity measure S(i,j) is defined as the proportion of trees in which observations i and j end in the same leaf. This method has been applied in genomics to cluster tumor samples based on expression data from DNA microarrays, identifying clinically meaningful subgroups with significant survival differences (e.g., a p-value of 4e-9 for RF clusters versus p=0.019 for Euclidean distance clusters). To mitigate sensitivity to the particular synthetic sample drawn, multiple forests are often averaged, with recommendations for at least five forests each containing 2000 trees.

In anomaly detection, points with low average proximity to the rest of the dataset are flagged as outliers, as they rarely co-occur in terminal nodes with others, indicating isolation in the feature space. Path length, the average depth reached in the trees, can also serve as an anomaly score, with shorter paths suggesting anomalies due to easier isolation by random splits; this principle underpins related methods like Isolation Forests but applies directly to standard random forest proximities. Despite these strengths, unsupervised random forests have limitations, including lack of rotation invariance in the proximity measure and potential for spurious clusters depending on the synthetic data distribution. They are generally less accurate than dedicated unsupervised methods like k-means or spectral clustering for pure clustering tasks.
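The synthetic-contrast construction can be sketched as follows with scikit-learn and SciPy. The permutation-based synthetic class, the proximity computation via the apply method, and the cluster count of 3 are illustrative assumptions rather than fixed choices:

import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from scipy.cluster.hierarchy import linkage, fcluster

X, _ = load_iris(return_X_y=True)
rng = np.random.default_rng(0)

# Synthetic contrast data: permute each column independently to destroy joint structure,
# then train a forest to separate real (label 1) from synthetic (label 0) observations.
X_synth = np.column_stack([rng.permutation(col) for col in X.T])
X_all = np.vstack([X, X_synth])
y_all = np.r_[np.ones(len(X)), np.zeros(len(X_synth))]
forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_all, y_all)

# Proximity: fraction of trees in which two real observations share a terminal node
leaves = forest.apply(X)                                   # shape (n_samples, n_trees)
prox = np.mean(leaves[:, None, :] == leaves[None, :, :], axis=2)
dissim = np.sqrt(1.0 - prox)

# Hierarchical clustering on the forest-derived dissimilarity (condensed upper triangle)
Z = linkage(dissim[np.triu_indices(len(X), k=1)], method="average")
clusters = fcluster(Z, t=3, criterion="maxclust")
print(np.bincount(clusters))                               # sizes of the 3 clusters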

Kernel Random Forests

Kernel random forests (KeRF) represent a theoretical extension of standard random forests, reformulating them as kernel methods to enable non-linear embeddings in reproducing kernel Hilbert spaces (RKHS). Introduced by Scornet in 2016, KeRF leverages the partitioning nature of decision trees to define a kernel that captures similarities based on leaf co-occupancy, providing a bridge between ensemble tree methods and kernel-based machine learning. This approach allows random forests to be analyzed through the lens of kernel theory, enhancing interpretability and facilitating connections to functional analysis. Subsequent developments by Scornet and others between 2016 and 2019 explored variants such as centered and uniform KeRF, which differ in split selection mechanisms to improve theoretical guarantees.

In formal notation, a single decision tree grown on a sample of size n with randomness \Theta acts as a partition function, assigning each point x \in [0,1]^d to a leaf A_n(x, \Theta) \subseteq [0,1]^d. The kernel for a finite forest of M trees is defined as the average indicator of co-occupancy: K_{M,n}(x, z) = \frac{1}{M} \sum_{j=1}^M \mathbb{1}_{\{z \in A_n(x, \Theta_j)\}}, where \mathbb{1} is the indicator function. In the limit of an infinite forest (M \to \infty), this converges to the expected kernel K_n(x, z) = \mathbb{P}_\Theta[z \in A_n(x, \Theta)], yielding the infinite forest regression estimate \tilde{m}_{\infty,n}(x) = \sum_{i=1}^n y_i K_n(x, X_i) / \sum_{i=1}^n K_n(x, X_i). The centered variant employs midpoint splits at each node, leading to an explicit kernel form: K_{\text{cc}}^k(x, z) = \sum_{k_1 + \cdots + k_d = k} \frac{k!}{k_1! \cdots k_d!} \left( \frac{1}{d} \right)^k \prod_{j=1}^d \mathbb{1}_{\{\lfloor 2^{k_j} x_j \rfloor = \lfloor 2^{k_j} z_j \rfloor\}}, for trees of depth k, which centers the partitions at the midpoints of the cells. The uniform variant, in contrast, selects splits uniformly at random within the node's range, resulting in a kernel that subtracts a diagonal term for centering: K_{\text{uf}}^k(x, z) = K^k(x, z) - \frac{1}{n} \sum_{i=1}^n K^k(x, X_i) \delta_{z, X_i}, though explicit forms are more complex due to the randomization.

KeRF kernels are positive semi-definite under mild conditions on the tree construction, since each tree's co-occupancy kernel is the kernel of a partition of the input space and averages of positive semi-definite kernels remain positive semi-definite, ensuring they induce valid RKHS norms. Furthermore, the infinite KeRF regression function \tilde{m}_{\infty,n} approximates the standard random forest predictor under assumptions like Lipschitz continuity of the target function m and uniform distribution of inputs on [0,1]^d, with the bias bounded by the forest's approximation error. For the centered KeRF, this relation holds with the RKHS consisting of functions constant on dyadic intervals, linking directly to the piecewise constant nature of tree predictions.

Consistency results for KeRF demonstrate convergence to the true function m. For the centered variant, Scornet (2016) derived a rate of \mathbb{E}[(\tilde{m}_{\text{cc},\infty,n}(x) - m(x))^2] \leq C n^{-1/(3 + d \log 2)} (\log n)^2 under Lipschitz assumptions. The uniform variant achieves \mathbb{E}[(\tilde{m}_{\text{uf},\infty,n}(x) - m(x))^2] \leq C n^{-2/(6 + 3 d \log 2)} (\log n)^2. Recent improvements by Iakovidis and Arcozzi (2023) refine these to \tilde{C} n^{-1/(1 + d / \log 2)} (\log n) for centered and \tilde{C} n^{-1/(1 + 3d / (2 \log 2))} (\log n) for uniform KeRF, still under basic regularity conditions.
Under stronger margin assumptions—such as a margin parameter \alpha > 0 controlling the separation between regression levels—both variants attain fast rates of O(1/\sqrt{n}), matching kernel ridge performance in low-noise regimes.
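The finite-forest kernel K_{M,n} can be estimated empirically for a fitted Breiman-style forest by counting leaf co-occupancy across trees, as in the sketch below. The kerf_kernel helper is hypothetical, and this is the kernel of the standard forest rather than of the centered or uniform variants:

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(300, 2))
y = np.sin(4 * X[:, 0]) + rng.normal(scale=0.1, size=300)

forest = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)

def kerf_kernel(forest, x, z):
    # Empirical K_{M,n}(x, z): fraction of trees whose leaf containing x also contains z
    lx = forest.apply(np.atleast_2d(x))[0]   # leaf index of x in each tree
    lz = forest.apply(np.atleast_2d(z))[0]
    return np.mean(lx == lz)

print(kerf_kernel(forest, [0.5, 0.5], [0.52, 0.48]))  # nearby points: high co-occupancy
print(kerf_kernel(forest, [0.5, 0.5], [0.05, 0.95]))  # distant points: near zero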

Modern Extensions

Mondrian forests represent an adaptive extension of random forests designed for online and streaming learning scenarios. Introduced by Lakshminarayanan et al. in 2014, these forests employ Mondrian processes to dynamically partition the feature space, allowing trees to grow incrementally without full retraining, which enhances efficiency for large-scale or evolving datasets. Subsequent extensions in 2016 adapted Mondrian forests for large-scale regression, providing well-calibrated uncertainty estimates that outperform approximate Gaussian processes on large regression tasks while maintaining online trainability.

To address interpretability challenges in random forests, recent integrations with post-hoc explanation methods like SHAP and LIME have enabled local feature attribution for individual predictions. For instance, a 2023 study applied these techniques to a random forest classifier on a diabetes symptoms dataset, revealing how specific symptoms contribute to risk scores and improving interpretability in healthcare applications through visualizations of model decisions. Such approaches facilitate explainable AI by decomposing ensemble predictions into additive feature importance scores, particularly useful in regulated domains like healthcare and finance.

Evolutions of isolation forests, originally developed for anomaly detection, have focused on scalability and non-linear structure in high-dimensional data. The 2025 introduction of kernel signature forests incorporates kernel methods to better isolate anomalies in non-linear spaces, demonstrating improved detection accuracy and reduced computational overhead on large datasets compared to standard isolation forests. These advancements enable efficient processing of millions of points, making them suitable for fraud detection and cybersecurity.

Honest random forests, developed for causal inference, mitigate bias in treatment effect estimation by splitting data into training and estimation subsets during tree construction. Proposed by Wager and Athey in 2017, this "honest" approach ensures unbiased heterogeneous treatment effect estimates with valid confidence intervals, outperforming conventional methods in observational studies like policy evaluations. Recent benchmarks highlight random forests' continued relevance, often surpassing deep learning models on specific tabular tasks. In one 2025 benchmark analysis of prediction tasks, random forests achieved the highest accuracy among ensemble methods, attributed to their robustness to noisy or discontinuous data patterns.

Applications and Implementations

Supervised Learning Applications

Random forests have been widely applied in supervised learning tasks, particularly for classification and regression problems involving high-dimensional or noisy data. In classification, they excel at predicting categorical outcomes by aggregating predictions from multiple decision trees, reducing overfitting through bagging and random feature selection. For regression, random forests estimate continuous values by averaging tree outputs, making them suitable for scenarios where interpretability and robustness are key. These applications leverage the ensemble's ability to handle non-linear relationships and interactions without assuming a particular data distribution.

In bioinformatics, random forests have been instrumental since the early 2000s for gene selection and classification, especially in cancer studies using microarray expression data. A seminal approach uses random forests to rank genes by importance measures like mean decrease in accuracy, enabling the identification of a small set of predictive biomarkers from thousands of features while achieving high classification accuracy in multi-class tumor discrimination tasks. For instance, this was applied to colon cancer datasets, selecting around 16-32 genes for models with error rates around 17-24%, outperforming single decision trees. More recent balanced iterative variants further enhance performance on imbalanced genomic data by iteratively refining gene subsets for prediction.

In the financial domain, random forests support credit scoring and fraud detection, demonstrating robustness to imbalanced datasets common in fraud or default detection. For credit risk assessment, models combining random forests with resampling techniques like SMOTE have shown improved performance on skewed lending data, with one study reporting scores up to 0.965 by prioritizing key financial and macroeconomic features. In stock market forecasting, random forests have been used to regress closing prices or classify price trends using historical trading volumes and technical indicators, demonstrating effectiveness in capturing non-linear dynamics. Variable importance scores from these models aid interpretability, highlighting influential factors like volatility indices in trading decisions.

Remote sensing applications utilize random forests for land cover classification from satellite imagery, effectively processing multispectral bands to map vegetation, urban areas, and water bodies. On Sentinel-2 data, random forest classifiers achieve overall accuracies of 85-95% in delineating land covers, outperforming single classifiers by integrating spatial and temporal features like NDVI indices. A 2023 study on diverse ecosystems reported kappa coefficients above 0.80, attributing success to the algorithm's handling of mixed pixels and class imbalances in imagery from regions like forests and croplands.

Illustrative examples highlight random forests' efficacy on benchmark datasets. On the Iris dataset, a classic multi-class problem, random forest models routinely attain near-perfect accuracy by distinguishing species based on sepal and petal measurements, leveraging feature importance to emphasize petal length as the top predictor. In spam detection, random forests classify emails with accuracies exceeding 97%, using importance rankings to prioritize textual features like word frequencies and sender domains, which reveal patterns in spam campaigns. Performance benchmarks confirm random forests' strengths on tabular data, often rivaling or surpassing other models in supervised tasks, particularly on small-to-medium datasets (n < 10,000), though neural networks may edge them out on very large datasets.

Unsupervised and Other Applications

Random forests extend beyond supervised classification and regression to unsupervised settings, notably through variants like isolation forests for anomaly detection. In this approach, anomalies are identified by measuring the average path length required to isolate data points in randomized decision trees; shorter paths signify outliers, as anomalous instances are separated from the majority with fewer splits. This method, introduced in the isolation forest algorithm, efficiently handles high-dimensional data without assuming normality and has been applied in fraud detection, where it isolates unusual transactions like those indicative of credit card scams, outperforming traditional distance-based techniques in scalability and speed.

In survival analysis, random survival forests adapt the ensemble framework to handle right-censored data, enabling predictions of time-to-event outcomes. Developed by Ishwaran et al. in 2008, this method employs survival-specific splitting rules, such as the log-rank test, to grow trees and aggregates predictions to estimate the cumulative hazard function, which quantifies the instantaneous risk of the event occurring over time. The resulting ensemble provides non-parametric estimates of survival probabilities and hazard rates, useful in medical research for prognostic modeling, such as predicting patient survival in clinical trials.

For ranking tasks, random forests support learning-to-rank algorithms by optimizing objectives tailored to ordinal predictions, such as pairwise comparisons or listwise approximations of metrics like normalized discounted cumulative gain. Extremely randomized trees, a random forest variant, have demonstrated strong performance in re-ranking documents for information retrieval, as seen in search engine applications where they prioritize query-relevant results by aggregating tree-based scores. This adaptation maintains the robustness of random forests while addressing the partial order inherent in ranking problems.

In multi-label classification, random forests handle scenarios with multiple simultaneous labels by treating each label as an independent binary classification problem, aggregating votes from individual trees via majority voting per label to produce a set of predictions. This binary relevance decomposition, combined with the ensemble's feature selection, has been effectively used in domains like non-intrusive load monitoring, where appliances are identified by multiple electrical signatures simultaneously. The approach scales well to high-label spaces without requiring problem transformation beyond per-label ensembling.

Emerging applications include detecting AI-generated text, where random forests classify content based on linguistic features such as vocabulary richness, sentence complexity, and stylometric patterns. Studies from 2023 to 2025 have shown random forests achieving accuracies above 90% in distinguishing human-authored from AI-produced text, leveraging features like perplexity and burstiness to capture subtle stylistic differences; for instance, one framework combines these features in a hybrid detection pipeline for educational and journalistic contexts.
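A minimal isolation forest sketch with scikit-learn, flagging points injected far from a Gaussian bulk; the synthetic data and the contamination level of 2% are illustrative assumptions:

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X_normal = rng.normal(loc=0.0, scale=1.0, size=(500, 2))
X_outliers = rng.uniform(low=-6, high=6, size=(10, 2))
X = np.vstack([X_normal, X_outliers])

# Shorter average isolation path -> more anomalous -> lower anomaly score
iso = IsolationForest(n_estimators=200, contamination=0.02, random_state=0).fit(X)
labels = iso.predict(X)           # -1 for predicted anomalies, +1 for inliers
print(np.where(labels == -1)[0])  # indices flagged as outliers (mostly the injected points)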

Software Implementations

The randomForest package in R, developed by Liaw and Wiener, provides a foundational implementation of Breiman's random forest algorithm for both classification and regression tasks. It includes core functions such as randomForest(), which builds an ensemble of decision trees using bootstrap sampling and random feature selection, and varImpPlot() for visualizing variable importance based on mean decrease in impurity or permutation accuracy. This package, first published in 2002, remains widely used for its simplicity and integration with R's statistical ecosystem, supporting out-of-bag error estimation and proximity-based analyses.

In Python, the scikit-learn library offers robust random forest implementations through the RandomForestClassifier and RandomForestRegressor classes, which construct ensembles of decision trees via bagging and feature subsampling. Key parameters include n_estimators, which controls the number of trees in the forest (defaulting to 100 for balanced performance), and max_features, which limits the number of features considered for splits at each node (defaulting to 'sqrt' for classification and to all features for regression in recent versions). These estimators support seamless integration with scikit-learn's preprocessing and evaluation pipelines, making them suitable for rapid prototyping in machine learning workflows.

Other notable implementations include H2O's Distributed Random Forest (DRF), which scales random forests across clusters for big data environments using in-memory processing. In contrast, XGBoost, while primarily a gradient boosting framework, includes tree-based ensembles that can mimic random forest behavior through parallel tree construction but often outperforms traditional random forests in predictive accuracy on structured data due to sequential error correction. For Java users, the Weka toolkit provides the RandomForest classifier, which builds forests of unpruned trees with configurable bag size and attribute selection, emphasizing ease of use in graphical and programmatic data mining.

Implementation notes highlight efficiency enhancements like parallel processing in scikit-learn, achieved via the joblib backend, which distributes tree fitting across CPU cores to reduce training time on multicore systems. GPU acceleration has advanced in the 2020s through RAPIDS cuML, a drop-in replacement for scikit-learn that ports random forest operations to NVIDIA GPUs, enabling 10-50x speedups on large datasets by leveraging CUDA for tree building and inference. Regarding benchmarks, scikit-learn's random forest scales effectively to datasets with millions of instances on standard hardware, handling up to 10 million rows in under an hour for moderate tree counts, though performance plateaus beyond tens of millions without distributed extensions. H2O and XGBoost further improve scalability for enterprise-scale data, with RAPIDS demonstrating superior throughput on GPU clusters for high-dimensional problems.

Limitations and Considerations

Key Advantages

Random forests exhibit remarkable robustness to overfitting, a common issue in single decision trees, by aggregating predictions from numerous independently trained trees, which averages out individual errors and stabilizes the model as more trees are added. This ensemble approach leverages the law of large numbers to ensure that the generalization error converges without further degradation, making it particularly effective for complex datasets where high-variance models might otherwise fail. Additionally, random forests are resilient to outliers and noisy data, as the randomization in bootstrapping and feature selection at each node prevents any single anomalous point from disproportionately influencing the overall prediction, unlike more sensitive algorithms such as boosting.

In terms of feature handling, random forests require no prior scaling or normalization of input variables, as the tree-based splitting mechanism relies on threshold comparisons rather than distance metrics, allowing direct use of raw data including mixed numerical and categorical types. This eliminates preprocessing steps often needed for other models and enables seamless integration of heterogeneous features. Furthermore, the algorithm provides built-in measures of feature importance, computed via permutation of out-of-bag samples or Gini impurity reductions, offering interpretable rankings that highlight influential variables without additional computation.

The versatility of random forests extends to their applicability across diverse data types and problem domains, naturally accommodating both classification and regression tasks on datasets with categorical predictors by randomly subsetting features at splits, avoiding the computational burden of exhaustive searches. Training is highly parallelizable, as individual trees can be built independently on subsets of data, facilitating efficient scaling to large datasets on multi-core systems or distributed environments. This combination of flexibility and computational efficiency has made random forests a staple in practical machine learning pipelines.

Empirically, random forests have shown superior performance in numerous benchmarks, often achieving lower error rates than single trees or boosting methods like AdaBoost on datasets such as breast cancer diagnosis, where they maintain accuracy even under noise. In competitive settings like Kaggle tabular data challenges up to 2025, random forest ensembles, frequently combined with gradient boosting methods, have powered many winning solutions due to their low bias and high predictive accuracy on structured data. As a non-parametric method, random forests impose no distributional assumptions on the underlying data, unlike parametric models such as linear regression that require linearity or normality, enabling them to capture complex, non-linear interactions and handle skewed or multimodal distributions effectively. This assumption-free nature contributes to their broad applicability in real-world scenarios where data rarely conform to idealized forms.

Primary Disadvantages

Random forests, as ensemble methods comprising numerous decision trees, inherently sacrifice the interpretability of individual trees for improved predictive performance. While a single decision tree can be visualized and traced through its branching logic to understand decision pathways, the aggregation of hundreds or thousands of trees in a random forest creates a complex black-box model where individual contributions are obscured. This lack of transparency often necessitates reliance on proxy measures, such as variable importance scores derived from Gini impurity reductions or permutation importance, which provide aggregate insights but do not fully elucidate the model's reasoning process.

Computationally, random forests demand substantial resources, particularly in terms of memory and training time, due to the need to store and process multiple trees. For instance, constructing an ensemble with a large number of trees, such as 500, requires memory proportional to the dataset size multiplied by the ensemble size, leading to high storage overhead for large-scale applications. Moreover, the recursive partitioning and bootstrap sampling across trees result in training times that scale poorly compared to simpler linear models like logistic regression, making random forests less suitable for real-time or resource-constrained environments without optimized implementations.

Random forests also exhibit biases that limit their effectiveness in certain scenarios, particularly with respect to function approximation and generalization beyond training data. As piecewise constant approximators, ensembles of decision trees struggle to capture very smooth underlying functions, such as those governed by linear separability or low-degree polynomials, often requiring excessive tree depth or ensemble size to achieve adequate fidelity, which in turn exacerbates overfitting risks. Additionally, their poor extrapolation capabilities stem from the localized nature of tree splits, which confine predictions to the range observed in the training data, leading to unreliable performance on out-of-distribution inputs.

Handling categorical variables with high cardinality poses further challenges for random forests, as standard implementations inefficiently treat each level as a separate binary split, resulting in fragmented trees and increased computational overhead without prior encoding. This inefficiency is pronounced when categories outnumber samples, amplifying variance and reducing model stability unless techniques like target encoding or hashing are applied beforehand. In contemporary benchmarks as of 2025, random forests are increasingly outperformed by transformer-based models on non-tabular data, such as sequential or multimodal inputs, where the latter's attention mechanisms enable superior pattern capture and scalability. While random forests remain competitive on structured tabular datasets, their rigid tree structure limits adaptability to the diverse, high-dimensional representations prevalent in modern non-tabular tasks like natural language processing or computer vision.

Hyperparameter Tuning

Hyperparameter tuning in random forests involves optimizing key parameters to balance bias, variance, and computational efficiency, often yielding modest but consistent improvements in predictive performance. The primary parameters include the number of trees (n_estimators), the number of features considered for splitting (max_features), tree depth (max_depth), and minimum samples per split or leaf (min_samples_split or min_samples_leaf). These adjustments help mitigate overfitting while enhancing generalization, with tuning typically performed via validation techniques that leverage the ensemble's inherent out-of-bag (OOB) estimates or cross-validation.

The number of trees, n_estimators, controls the ensemble size and directly impacts variance reduction; values between 100 and 1000 are commonly recommended, as additional trees beyond this point offer diminishing returns in accuracy but linearly increase runtime. For instance, Breiman demonstrated that using 100 trees suffices for many datasets, while larger values up to 2500 benefit high-dimensional cases with noisy features. The max_features parameter, often set to 'sqrt' (square root of total features for classification) or 'log2', introduces randomness in splits to decorrelate trees; tuning it can improve area under the curve (AUC) by an average of 0.006 across diverse datasets, with lower values preferred when many relevant features exist.

Parameters like max_depth and min_samples_split regulate tree complexity to prevent overfitting; max_depth is typically tuned from 1 to 10, with deeper trees risking higher variance, while min_samples_split (e.g., 0.2n to 0.9n, where n is sample size) enforces splits only on sufficiently large nodes, modestly boosting AUC by 0.001 to 0.004. Grid search or random search over these, combined with k-fold cross-validation (k=5-10), provides precise evaluation, though OOB error offers a faster, unbiased alternative approximating full cross-validation for most datasets. Bayesian optimization, implemented in libraries like tuneRanger, further refines these by modeling the hyperparameter space sequentially, efficiently handling interactions such as between max_features and sample size.

Best practices begin with default settings (such as n_estimators=500, max_features='sqrt', and no depth limit), then iteratively tune using OOB curves to monitor convergence, where error stabilizes as trees increase. In high-dimensional settings (p \gg n), advanced preprocessing via feature selection reduces the feature space before fitting the forest, enhancing tuning and variable importance stability by focusing on the most predictive subsets.
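A typical tuning sketch with scikit-learn's RandomizedSearchCV over the parameters discussed above; the grid values and the n_iter budget are illustrative assumptions rather than recommended settings:

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_breast_cancer(return_X_y=True)

param_distributions = {
    "n_estimators": [100, 300, 500, 1000],
    "max_features": ["sqrt", "log2", 0.3, 0.5],
    "max_depth": [None, 4, 8, 16],
    "min_samples_leaf": [1, 2, 5, 10],
}
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions,
    n_iter=25,          # number of sampled configurations
    cv=5,               # 5-fold cross-validation per configuration
    n_jobs=-1,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)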

  22. [22]
  23. [23]
    Estimation and Inference of Heterogeneous Treatment Effects using ...
    Oct 14, 2015 · In this paper, we develop a non-parametric causal forest for estimating heterogeneous treatment effects that extends Breiman's widely used random forest ...
  24. [24]
    Gene selection and classification of microarray data using random ...
    Jan 6, 2006 · We investigate the use of random forest for classification of microarray data (including multi-class problems) and propose a new method of gene selection.Missing: GeneRF | Show results with:GeneRF
  25. [25]
    A balanced iterative random forest for gene selection from ...
    Aug 27, 2013 · A balanced iterative random forest algorithm is proposed to select the most relevant genes for the disease and can be used in the classification ...
  26. [26]
    A two-stage credit scoring model based on random forest
    Firstly, we use SMOTE to deal with the imbalanced data and secondly, we employ random forest to build predictive credit features. Dominance analysis shows that, ...Missing: stock | Show results with:stock
  27. [27]
    [PDF] Credit risk prediction in an imbalanced social lending environment
    The results reveal that combining random forest and random under-sampling may be an effective strategy for calculating the credit risk associated with loan ...Missing: stock | Show results with:stock
  28. [28]
    [PDF] The Random Forest Model for Analyzing and Forecasting the ... - arXiv
    Feb 27, 2024 · This paper analyses the application of the random forest algorithm combined with artificial intelligence in predicting stock price trends in the ...
  29. [29]
    Optimal parameters of random forest for land cover classification ...
    Oct 22, 2023 · This study suggests that the suitable data types for LC classification were Sentinel-2 data with auxiliary data.
  30. [30]
    Remote sensing based forest cover classification using machine ...
    Jan 2, 2024 · Remote sensing techniques leveraging Sentinel-2 satellite images were employed. Both single-layer stacked images and temporal layer stacked ...
  31. [31]
    Analysis of e-Mail Spam Detection Using a Novel Machine Learning ...
    Random forest is a higher-level variation of CART that uses the bootstrap bagging approach and random feature selection. In this approach, a forest is created ...
  32. [32]
    A comprehensive benchmark of machine and deep learning models ...
    Sep 3, 2025 · Previous comparative benchmarks have shown that DL performance is frequently equivalent or even inferior to models such as Gradient Boosting ...
  33. [33]
    Random survival forests - Project Euclid
    We introduce random survival forests, a random forests method for the analysis of right-censored survival data. New survival splitting rules for growing ...
  34. [34]
    [PDF] Learning to rank with extremely randomized trees
    Another popular tree-based randomization method is the Random Forests algorithm proposed by Breiman (2001), which combines the bootstrap sampling idea of.
  35. [35]
    Multi-Label Classification Based on Random Forest Algorithm for ...
    This paper proposes a multi-label classification method using Random Forest (RF) as a learning algorithm for non-intrusive load identification.Missing: aggregation | Show results with:aggregation
  36. [36]
    [PDF] AI-Generated Text Detection and Source Identification
    The study analyzed texts for classification based on the source by utilizing Machine Learning (ML) such as Random Forest,. Support Vector Machine, and Logistic ...
  37. [37]
    RandomForestRegressor — scikit-learn 1.7.2 documentation
    A random forest is a meta estimator that fits a number of decision tree regressors on various sub-samples of the dataset and uses averaging to improve the ...
  38. [38]
    Distributed Random Forest (DRF) - H2O.ai Documentation
    Jul 24, 2023 · DRF is a powerful classification and regression tool. When given a set of data, DRF generates a forest of classification or regression trees.
  39. [39]
    RandomForest
    Class for constructing a forest of random trees. For more information see: Leo Breiman (2001). Random Forests. Machine Learning. 45(1):5-32.
  40. [40]
    9.3. Parallelism, resource management, and configuration
    Joblib is able to support both multi-processing and multi-threading. Whether joblib chooses to spawn a thread or a process depends on the backend that it's ...
  41. [41]
    szilard/benchm-ml: A minimal benchmark for scalability ... - GitHub
    This project aims at a minimal benchmark for scalability, speed and accuracy of commonly used implementations of a few machine learning algorithms.Szilard/benchm-Ml · Results · Boosting (gradient Boosted...
  42. [42]
    Benchmarks — cuml 25.10.00 documentation - RAPIDS Docs
    cuML offers accelerated inference and training for classical ML models using NVIDIA GPUs. ... Simpler algorithms like KMeans and Random Forest can also see ...Missing: 2020s | Show results with:2020s
  43. [43]
    Advantages and Disadvantages of Random Forest Models for ...
    Jul 3, 2023 · Random forests are by their nature nonparametric. Thus, they are extremely flexible and make few assumptions about the relationships between ...
  44. [44]
    A survey and taxonomy of methods interpreting random forest models
    Jul 17, 2024 · This paper reviews and classifies methods for interpreting random forest models, which are considered "black boxes" due to their complexity.
  45. [45]
    [2004.14841] Interpretable Random Forests via Rule Extraction - arXiv
    Apr 29, 2020 · Despite their powerful predictivity, this lack of interpretability may be highly restrictive for applications with critical decisions at stake.
  46. [46]
    [1407.7502] Understanding Random Forests: From Theory to Practice
    Jul 28, 2014 · The goal of this thesis is to provide an in-depth analysis of random forests, consistently calling into question each and every part of the algorithm.<|control11|><|separator|>
  47. [47]
    [2104.00629] Regularized target encoding outperforms traditional ...
    Apr 1, 2021 · A common problem are high cardinality features, i.e. unordered categorical predictor variables with a high number of levels.
  48. [48]
    Hyperparameters and Tuning Strategies for Random Forest - arXiv
    Apr 10, 2018 · In this paper, we first provide a literature review on the parameters' influence on the prediction performance and on variable importance measures.
  49. [49]
    Unbiased Feature Selection in Learning Random Forests for High ...
    We propose a new RF algorithm, called xRF, to select good features in learning RFs for high-dimensional data.