
Variational Bayesian methods

Variational Bayesian methods, also known as variational inference, constitute a family of optimization-based approximation techniques in Bayesian statistics designed to estimate intractable posterior distributions by selecting a simpler variational distribution that minimizes the Kullback-Leibler divergence to the true posterior, typically achieved by maximizing the evidence lower bound (ELBO). These methods transform the inference problem into an optimization task, providing a computationally efficient alternative to sampling-based approaches like Markov chain Monte Carlo (MCMC), particularly for large-scale datasets and complex models. Developed prominently in the late 1990s for graphical models, variational Bayesian methods gained traction through foundational work on mean-field approximations and coordinate ascent algorithms, enabling scalable inference in probabilistic graphical models such as Bayesian networks and Markov random fields. The core objective, the ELBO, decomposes into an expected log-likelihood term and an entropy regularization term, balancing model fit with distribution simplicity to yield a tractable lower bound on the model evidence. Common variational families include fully factorized mean-field distributions for high-dimensional parameters, though advanced variants like structured approximations address limitations in capturing posterior correlations. In practice, these methods have revolutionized applications, powering latent variable models such as topic models (e.g., latent Dirichlet allocation), Gaussian processes, and Bayesian neural networks by facilitating fast posterior approximation and uncertainty quantification. Stochastic variational inference extends their scalability to massive data via mini-batch gradients, while amortized inference shares computations across data points using neural networks, as seen in variational autoencoders. Despite their speed—often orders of magnitude faster than MCMC—variational methods can underestimate posterior variance due to mode-seeking behavior, prompting ongoing research into more accurate divergences and hybrid approaches.

Overview

Definition and Purpose

Variational Bayesian methods, also known as variational inference, constitute a family of deterministic approximation techniques designed to address the challenges of exact posterior inference in probabilistic models. These methods posit a simpler variational distribution q(\theta) over the model parameters \theta, which is optimized to approximate the intractable true posterior p(\theta \mid \text{data}). The optimization proceeds by minimizing the Kullback-Leibler (KL) divergence \mathrm{KL}(q(\theta) \parallel p(\theta \mid \text{data})), which measures the information loss when using q in place of the posterior, thereby providing a tractable surrogate for complex Bayesian computations. The primary purpose of variational Bayesian methods is to overcome the computational intractability inherent in exact Bayesian inference, where the posterior often involves high-dimensional integrals that cannot be evaluated analytically or sampled efficiently, particularly for large datasets or intricate models such as graphical models. By transforming the inference problem into an optimization task over the parameters of q(\theta), these methods enable scalable approximations that facilitate parameter estimation and model selection in scenarios where traditional approaches like Markov chain Monte Carlo (MCMC) become prohibitively slow. This optimization is typically framed around maximizing the evidence lower bound (ELBO), a quantity that serves as the objective function and provides a lower bound on the model evidence while being equivalent to minimizing the KL divergence. Key benefits of variational Bayesian methods include their computational efficiency compared to sampling-based alternatives like MCMC, as they rely on deterministic optimization routines that converge faster and scale better to massive data regimes. Additionally, the optimized q(\theta) yields not only point estimates for parameters but also a full approximate posterior, allowing for uncertainty quantification in predictions and decisions. These advantages have made variational methods particularly valuable in machine learning applications, where rapid inference is essential for model training and deployment.

Historical Context

The origins of variational Bayesian methods can be traced to early efforts in approximating intractable posteriors within probabilistic graphical models. A precursor to these developments appeared in work on Bayesian methods for mixtures of experts, where variational free energy minimization was used for inference and parameter estimation in hierarchical models. This laid groundwork for handling intractability in complex structures. In 1999, Michael I. Jordan and colleagues introduced mean-field variational methods as a systematic approach for inference and learning in graphical models, including Bayesian networks and Markov random fields, by optimizing a lower bound on the log-likelihood to approximate posterior distributions. Building directly on this, Hagai Attias formalized a comprehensive variational Bayesian framework in 2000, enabling model averaging and selection in graphical models through variational approximations that integrate prior and likelihood information. Subsequent extensions in the early 2000s refined these techniques for broader applicability. Zoubin Ghahramani and Matthew J. Beal developed propagation algorithms in 2000 that adapted belief propagation and junction tree methods within the variational Bayesian learning paradigm, facilitating efficient inference in factor analysis and mixture models. By 2008, Michael J. Wainwright and Michael I. Jordan provided a rigorous textbook formalization, linking graphical models to exponential families and deriving general variational representations for likelihood computation, marginal probabilities, and most probable configurations, which solidified the theoretical underpinnings for scalable implementations. Modern advances from the 2010s onward addressed scalability and integration with emerging computational paradigms. Matthew D. Hoffman and colleagues introduced stochastic variational inference (SVI) in 2013, a scalable optimization algorithm that uses noisy natural-gradient estimates to approximate posteriors in large datasets, particularly for topic models like latent Dirichlet allocation. This was complemented by Diederik P. Kingma and Max Welling's 2014 work on variational auto-encoders (VAEs), which incorporated amortized variational inference via neural networks to enable efficient generative modeling of high-dimensional data. Rajesh Ranganath, Sean Gerrish, and David M. Blei further advanced generality with black-box variational inference in 2014, providing a model-agnostic framework that leverages stochastic optimization and Monte Carlo gradient estimates of the variational objective without requiring custom derivations. By the 2020s, variational Bayesian methods evolved toward even greater scalability in federated and distributed settings. Comprehensive reviews, such as that by David M. Blei and colleagues in 2017, highlighted variational inference's role in approximating complex posteriors across statistics and machine learning, emphasizing its advantages over sampling methods in speed and differentiability. Recent developments up to 2025 have extended these to federated learning contexts, where privacy-preserving inference is critical; for instance, federated variational inference for Bayesian mixture models enables distributed clustering of large-scale categorical data without centralizing sensitive information. Similarly, integrations of Bayesian neural networks with federated frameworks have produced robust, uncertainty-aware distributed variational inference, enhancing model performance in heterogeneous environments.

Core Concepts

Bayesian Inference Challenges

In Bayesian inference, the goal is to compute the posterior distribution over model parameters \theta given observed data x, which is proportional to the likelihood times the prior: p(\theta | x) \propto p(x | \theta) p(\theta). However, the normalizing constant, known as the evidence or marginal likelihood p(x) = \int p(x | \theta) p(\theta) \, d\theta, typically requires evaluating a high-dimensional integral that lacks a closed-form solution in most realistic models. This computational intractability stems from the complexity of integrating over parameter spaces that can involve hundreds or thousands of dimensions, especially in hierarchical or latent variable models where exact evaluation demands exponential resources relative to the model size. Further challenges arise with non-conjugate priors, where the prior and likelihood do not form a conjugate pair, preventing analytical updates to the posterior as in conjugate cases. Conjugate priors are rare in practice, particularly for modern applications like deep probabilistic models or models with non-linear interactions, rendering exact inference algorithms—such as those based on junction trees or variable elimination—impractical due to their exponential time and space complexity in the size of the largest clique or cutset. For instance, the QMR-DT diagnostic network, with approximately 600 diseases and 4,000 findings, has a median maximal clique size of 151.5 nodes and a median cutset size of 106.5 nodes, making exact inference by junction tree or cutset conditioning infeasible even on modern hardware. Additionally, large datasets exacerbate these issues, as full posterior computations often scale linearly with data volume N in non-conjugate settings, leading to prohibitive costs for regimes involving millions or billions of observations. Exact methods, including numerical quadrature or exhaustive enumeration without approximation, fail for complex models because they either diverge in high dimensions (the curse of dimensionality) or require assumptions that do not hold for non-Gaussian or multimodal posteriors. These limitations motivate the development of approximate techniques that provide tractable alternatives, allowing scalable computation of posterior summaries like means, variances, or predictive distributions without fully marginalizing the intractable evidence. Such approximations are essential for enabling Bayesian methods in fields like machine learning and statistics, where model complexity and data scale continue to grow.

Kullback-Leibler Divergence

The Kullback-Leibler (KL) divergence, introduced as a measure of information loss when using one probability distribution to approximate another, quantifies the difference between two distributions q and p defined over the same event space. For continuous distributions with densities q(\theta) and p(\theta), it is defined as \mathrm{KL}(q \| p) = \int q(\theta) \log \frac{q(\theta)}{p(\theta)} \, d\theta, while for discrete distributions, it takes the form \mathrm{KL}(q \| p) = \sum_{\theta} q(\theta) \log \frac{q(\theta)}{p(\theta)}. This can equivalently be expressed using expectation notation as \mathrm{KL}(q \| p) = \mathbb{E}_{q(\theta)} \left[ \log q(\theta) - \log p(\theta) \right], highlighting its interpretation as the expected value under q of the log-ratio of the densities. The divergence is asymmetric, meaning \mathrm{KL}(q \| p) \neq \mathrm{KL}(p \| q) in general, which distinguishes it from symmetric distance measures like the Jensen-Shannon divergence. Key properties of the KL divergence make it particularly suitable for optimization in statistical contexts. It is always non-negative, \mathrm{KL}(q \| p) \geq 0, with equality holding if and only if q = p; this non-negativity follows from Jensen's inequality, which applies to the convex function -\log. Additionally, the KL divergence is not a true metric because it does not satisfy the triangle inequality and lacks symmetry, but its non-negativity ensures it serves as a valid objective for measuring distributional discrepancy. These properties, including the tight bound at equality, underpin its use in bounding log-likelihoods and deriving variational objectives. In variational Bayesian methods, the KL divergence forms the core objective for approximating intractable posterior distributions. Specifically, variational inference seeks to minimize \mathrm{KL}(q(\theta) \| p(\theta \mid x)) over a flexible family of distributions q(\theta) parameterized to tractably represent the posterior p(\theta \mid x) given observed data x, thereby finding the member of the family closest to the true posterior in the KL sense. This minimization is equivalent to maximizing the evidence lower bound (ELBO), as the identity \log p(x) = \mathrm{ELBO}(q) + \mathrm{KL}(q(\theta) \| p(\theta \mid x)) holds, where \log p(x) is constant with respect to q; thus, reducing the KL term directly tightens the bound on the evidence. In expectation form, the relevant KL expression expands to \mathbb{E}_{q(\theta)} \left[ \log q(\theta) - \log p(\theta, x) \right] + \log p(x), emphasizing the trade-off between the entropy of q and its fit to the joint model p(\theta, x).
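As a concrete illustration (a minimal sketch with assumed example distributions, not drawn from any specific source), the discrete form of the divergence can be computed directly, making its non-negativity and asymmetry easy to verify numerically:

import numpy as np

def kl_divergence(q, p):
    # KL(q || p) = sum_i q_i * log(q_i / p_i) for discrete distributions;
    # terms with q_i = 0 contribute 0 by the usual convention.
    q, p = np.asarray(q, float), np.asarray(p, float)
    mask = q > 0
    return np.sum(q[mask] * np.log(q[mask] / p[mask]))

q = np.array([0.7, 0.2, 0.1])
p = np.array([0.4, 0.4, 0.2])
print(kl_divergence(q, p))   # non-negative
print(kl_divergence(p, q))   # generally different: the divergence is asymmetric
print(kl_divergence(q, q))   # exactly 0 when the distributions coincide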

Evidence Lower Bound

The evidence lower bound (ELBO) is a core objective function in variational inference, providing a tractable lower bound on the log marginal likelihood, or evidence, \log p(x), where x denotes observed data and \theta represents latent variables or parameters. For a variational distribution q(\theta) chosen to approximate the intractable posterior p(\theta|x), the ELBO is defined as L(q) = \mathbb{E}_{q(\theta)} \left[ \log p(x, \theta) \right] - \mathbb{E}_{q(\theta)} \left[ \log q(\theta) \right], which equals \log p(x) - \mathrm{KL}\left( q(\theta) \| p(\theta|x) \right), where \mathrm{KL} denotes the Kullback-Leibler divergence. Because this KL gap is non-negative, L(q) \leq \log p(x), with equality holding if and only if q(\theta) = p(\theta|x). An equivalent integral formulation emphasizes the joint density ratio: L(q) = \int q(\theta) \log \frac{p(x|\theta) p(\theta)}{q(\theta)} \, d\theta. This decomposes the ELBO into the expected complete-data log likelihood \mathbb{E}_{q(\theta)} \left[ \log p(x, \theta) \right] and the entropy of the variational distribution, H(q) = -\mathbb{E}_{q(\theta)} \left[ \log q(\theta) \right], highlighting its role as a balance between model fit and distributional simplicity. Alternatively, it can be viewed as the expected log likelihood \mathbb{E}_{q(\theta)} \left[ \log p(x|\theta) \right] minus the KL divergence \mathrm{KL}\left( q(\theta) \| p(\theta) \right) between q and the prior p(\theta). Maximizing L(q) with respect to the parameters of q minimizes the KL divergence to the true posterior, thereby tightening the bound on the evidence and yielding a useful approximation q(\theta) \approx p(\theta|x). This optimization simultaneously provides an estimate of the log model evidence \log p(x), which is otherwise computationally prohibitive, and facilitates posterior inference in complex Bayesian models. As q improves, the ELBO increases monotonically toward \log p(x), offering a reliable signal of approximation quality without direct evaluation of the intractable posterior.
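The bound can be checked numerically on a toy conjugate model. The following sketch assumes a single observation x ~ N(\theta, 1) with prior \theta ~ N(0, 1), chosen purely for illustration; it estimates the ELBO by Monte Carlo and compares it with the exact log evidence \log \mathcal{N}(x; 0, 2). When q equals the true posterior the two coincide, while a mismatched q leaves a positive KL gap:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = 1.3   # observed data point (arbitrary)

def elbo_estimate(q_mean, q_std, num_samples=100_000):
    theta = rng.normal(q_mean, q_std, size=num_samples)    # theta ~ q
    log_joint = norm.logpdf(x, theta, 1.0) + norm.logpdf(theta, 0.0, 1.0)
    log_q = norm.logpdf(theta, q_mean, q_std)
    return np.mean(log_joint - log_q)

log_evidence = norm.logpdf(x, 0.0, np.sqrt(2.0))   # exact log p(x)
elbo_exact = elbo_estimate(x / 2, np.sqrt(0.5))    # q = true posterior N(x/2, 1/2)
elbo_prior = elbo_estimate(0.0, 1.0)               # q = prior (mismatched)

print(log_evidence, elbo_exact)   # approximately equal
print(elbo_prior)                 # strictly smaller: the KL gap is positive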

Theoretical Foundations

Problem Formulation

In Bayesian inference, the goal is to compute the posterior p(\theta \mid x) over latent variables or parameters \theta given observed data x, defined as p(\theta \mid x) = \frac{p(x, \theta)}{p(x)} = \frac{p(x \mid \theta) p(\theta)}{p(x)}, where p(x, \theta) is the joint model density, p(x \mid \theta) is the likelihood, and p(\theta) is the prior. The marginal likelihood p(x) = \int p(x \mid \theta) p(\theta) \, d\theta is often intractable due to the high-dimensional integration required, rendering exact posterior computation infeasible for complex models. Variational Bayesian methods address this intractability by formulating the inference problem as an optimization task over a variational distribution q(\theta) from a specified family \mathcal{Q}. The objective is to find q^*(\theta) = \arg\max_{q \in \mathcal{Q}} \mathcal{L}(q), where \mathcal{L}(q) is the evidence lower bound (ELBO), a tractable surrogate for the log marginal likelihood \log p(x). This optimization minimizes the Kullback-Leibler (KL) divergence between q(\theta) and the true posterior p(\theta \mid x), equivalently maximizing the ELBO given by \mathcal{L}(q) = \mathbb{E}_{q(\theta)} \left[ \log p(x \mid \theta) \right] - \mathrm{KL} \left( q(\theta) \Vert p(\theta) \right). The first term encourages q(\theta) to concentrate on values of \theta that explain the data well (reconstruction fidelity), while the second term regularizes q(\theta) toward the prior p(\theta) to prevent overfitting. The family \mathcal{Q} is typically parameterized for tractability, allowing q(\theta) to be specified via free parameters (e.g., means and variances in Gaussian forms) or structured forms that impose conditional independencies, such as mean-field approximations where q(\theta) = \prod_i q_i(\theta_i) assumes independence across factors \theta_i. This parameterization enables the ELBO to be computed efficiently, provided that the expectations under q(\theta) (e.g., \mathbb{E}_{q} [\log p(x \mid \theta)]) are analytically or numerically tractable, often leveraging conjugate priors or model-specific simplifications. By solving this optimization, variational methods yield a deterministic approximation to the posterior that scales to large datasets and complex graphical models.

Derivation of ELBO

In variational Bayesian methods, the evidence lower bound (ELBO) arises as a tractable objective for approximating the intractable posterior distribution. The derivation begins with the log marginal likelihood, or evidence, of the observed data \mathbf{x}, given by \log p(\mathbf{x}) = \log \int p(\mathbf{x}, \theta) \, d\theta, where \theta denotes the latent variables or parameters, and p(\mathbf{x}, \theta) is the joint distribution. This integral is typically intractable due to the high dimensionality of \theta or the complexity of the model. To derive a lower bound, introduce an approximate posterior distribution q(\theta) from a chosen family of distributions. The log evidence can then be expressed using the expectation with respect to q(\theta) as \log p(\mathbf{x}) = \mathbb{E}_{q(\theta)} \left[ \log \frac{p(\mathbf{x}, \theta)}{q(\theta)} \right] + D_{\text{KL}} \bigl( q(\theta) \parallel p(\theta \mid \mathbf{x}) \bigr), where D_{\text{KL}}(\cdot \parallel \cdot) is the Kullback-Leibler (KL) divergence. The first term is the ELBO, denoted \mathcal{L}(q) = \mathbb{E}_{q(\theta)} \left[ \log \frac{p(\mathbf{x}, \theta)}{q(\theta)} \right]. Since the KL divergence is nonnegative for any probability distributions q(\theta) and p(\theta \mid \mathbf{x}), it follows that \log p(\mathbf{x}) \geq \mathcal{L}(q), with equality if and only if q(\theta) = p(\theta \mid \mathbf{x}). Expanding the ELBO expression yields \mathcal{L}(q) = \mathbb{E}_{q(\theta)} \bigl[ \log p(\mathbf{x} \mid \theta) \bigr] + \mathbb{E}_{q(\theta)} \bigl[ \log p(\theta) \bigr] - \mathbb{E}_{q(\theta)} \bigl[ \log q(\theta) \bigr]. The first term represents the expected log-likelihood under q, the second is the expected log prior, and the third is the entropy of q, H(q) = -\mathbb{E}_{q(\theta)} \bigl[ \log q(\theta) \bigr]. Thus, \mathcal{L}(q) can be rewritten as the expected complete-data log-likelihood \mathbb{E}_{q(\theta)} \bigl[ \log p(\mathbf{x}, \theta) \bigr] plus the entropy of the approximating distribution. Maximizing \mathcal{L}(q) with respect to the parameters of q tightens the bound and improves the posterior approximation. This derivation relies only on measure-theoretic validity of the expectations and the properties of the KL divergence, without requiring conjugate priors or specific model assumptions. It applies generally to any probabilistic model where the evidence is intractable.
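Equivalently, the same bound follows in a single step from Jensen's inequality applied to the concave logarithm, a standard alternative route worth recording alongside the decomposition above: \log p(\mathbf{x}) = \log \int q(\theta) \, \frac{p(\mathbf{x}, \theta)}{q(\theta)} \, d\theta = \log \mathbb{E}_{q(\theta)} \left[ \frac{p(\mathbf{x}, \theta)}{q(\theta)} \right] \geq \mathbb{E}_{q(\theta)} \left[ \log \frac{p(\mathbf{x}, \theta)}{q(\theta)} \right] = \mathcal{L}(q), with the slack in the inequality equal to the KL divergence D_{\text{KL}} \bigl( q(\theta) \parallel p(\theta \mid \mathbf{x}) \bigr).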

Proofs of Optimality

In variational Bayesian methods, the optimization objective is to find the variational distribution q^*(\theta) that maximizes the evidence lower bound (ELBO), denoted as \mathcal{L}(q) = \mathbb{E}_q[\log p(x, \theta)] - \mathbb{E}_q[\log q(\theta)]. This maximization is globally optimal within the chosen family of distributions \mathcal{Q} because it is equivalent to minimizing the Kullback-Leibler (KL) divergence D_{\text{KL}}(q(\theta) \| p(\theta | x)), where p(\theta | x) is the true posterior. Specifically, the identity \log p(x) = \mathcal{L}(q) + D_{\text{KL}}(q(\theta) \| p(\theta | x)) holds, with \log p(x) constant with respect to q; thus, \arg\max_q \mathcal{L}(q) = \arg\min_q D_{\text{KL}}(q \| p(\theta | x)). Equality is achieved when q(\theta) = p(\theta | x), meaning the approximation is exact if the variational family \mathcal{Q} is sufficiently expressive to include the true posterior. At stationary points of the ELBO, the optimal q satisfies the mean-field fixed-point equations derived from setting the functional derivative of the ELBO to zero, where each factor of q is updated to match the expected log-joint under the current approximation. Under convexity assumptions—such as when the model and priors belong to conjugate exponential families—the ELBO is concave in the natural parameters of each factor of q, ensuring that local optima of the coordinate updates are global and that coordinate ascent algorithms converge to the unique maximum. For models parameterized by exponential families with full support on the parameter space, the optimal variational distribution is unique provided that the variational family is chosen to match the form of the true conditional posteriors, leveraging the conjugate duality between the log-partition function and the negative entropy to guarantee a single solution. This uniqueness arises because the moment-matching conditions in exponential families yield a one-to-one correspondence between the variational parameters and the expected sufficient statistics of the posterior. Despite these theoretical guarantees, the ELBO landscape is generally non-convex in high dimensions, leading to potential multiple local modes that depend on initialization and can result in suboptimal approximations far from the true posterior. Such limitations highlight the importance of structured variational families to mitigate mode collapse, though exact uniqueness requires the variational family to fully capture posterior dependencies, which is often infeasible in complex models.

Approximation Techniques

Mean-Field Approximation

The mean-field approximation is a fundamental technique in variational Bayesian methods, where the variational posterior distribution q(\theta) is assumed to factorize fully across the components of the latent variables \theta = (\theta_1, \dots, \theta_n), such that q(\theta) = \prod_{i=1}^n q_i(\theta_i). This assumption imposes mutual independence among the latent variables given the observed data, simplifying the approximation while targeting the true posterior p(\theta | x). Introduced in the context of graphical models, this approach enables scalable inference by restricting the variational family to product distributions. A key advantage of the mean-field approximation lies in its computational tractability, as the factorization allows for closed-form or efficient iterative updates via cyclic coordinate ascent optimization, where each q_i(\theta_i) is optimized while holding the others fixed. This makes it particularly well-suited for latent variable models, such as topic models and mixture models, where exact posterior computation is intractable due to high dimensionality. The method's simplicity also facilitates integration with stochastic optimization techniques for large-scale datasets. The update rule for each factor in the mean-field family follows from maximizing the evidence lower bound (ELBO) and takes the form q_i(\theta_i) \propto \exp\left\{ \mathbb{E}_{q_{-i}(\theta_{-i})} \left[ \log p(x, \theta) \right] \right\}, where q_{-i}(\theta_{-i}) = \prod_{j \neq i} q_j(\theta_j) and the expectation is taken with respect to all factors except the i-th. These updates are performed iteratively until convergence, often yielding quick approximations to the marginal posteriors despite the independence assumption. Despite its efficiencies, the mean-field approximation has notable limitations, primarily its tendency to underestimate posterior variances and ignore correlations between latent variables, which can lead to overly confident inferences especially in low-data regimes. This underestimation arises because the product form cannot capture dependencies present in the true posterior, potentially biasing downstream predictions in complex models.
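Both the update rule and the variance underestimation can be seen in a minimal numerical sketch (with an assumed correlated bivariate Gaussian target, following the standard textbook illustration rather than any model discussed above). The coordinate updates below are the closed-form instance of q_i(\theta_i) \propto \exp\{\mathbb{E}_{q_{-i}}[\log p(\theta)]\} for a Gaussian target: the factor means converge to the true means, while the factorized variances 1/\Lambda_{ii} are visibly smaller than the true marginal variances:

import numpy as np

mu = np.array([0.0, 0.0])
Lambda = np.array([[1.0, 0.9],      # precision matrix with strong correlation
                   [0.9, 1.0]])

m = np.array([1.0, -1.0])           # initial variational means
for _ in range(50):                 # cyclic coordinate ascent updates
    # q_1(z1) = N(m1, 1/Lambda_11), m1 = mu1 - Lambda_12/Lambda_11 * (E[z2] - mu2)
    m[0] = mu[0] - Lambda[0, 1] / Lambda[0, 0] * (m[1] - mu[1])
    m[1] = mu[1] - Lambda[1, 0] / Lambda[1, 1] * (m[0] - mu[0])

print("variational means:", m)                                # converge to mu
print("variational variances:", 1 / np.diag(Lambda))          # 1 / Lambda_ii
print("true marginal variances:", np.diag(np.linalg.inv(Lambda)))  # larger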

Structured Approximations

Structured approximations in variational Bayesian methods extend beyond the independence assumptions of mean-field approaches by incorporating dependencies and correlations into the variational posterior q(\mathbf{z}), enabling more accurate inferences in complex models. These methods define q over structured families that respect the underlying model structure, such as graphical dependencies or hierarchical priors, to better approximate the true posterior while still optimizing the evidence lower bound (ELBO). One prominent type is variational message passing (VMP), which applies to Bayesian networks with conjugate-exponential family priors, using local message passing to compute structured updates that propagate information across the graph. Introduced by Winn and Bishop in 2005, VMP automates variational inference for hierarchical and factorized models by deriving variational messages that account for dependencies, making it suitable for dynamic Bayesian networks and state-space models. For instance, in hierarchical models, VMP can impose a tree-structured variational distribution q that mirrors the model's latent structure, preserving parent-child dependencies and improving ELBO values compared to fully factorized alternatives. Spike-and-slab priors offer another structured approach, particularly for sparse Bayesian models, where the variational posterior combines a "spike" (a component concentrated near zero) and a "slab" (a broader distribution) to model variable selection and shrinkage. Titsias and Lázaro-Gredilla developed a variational Bayesian algorithm using this prior in 2011 for multi-task and multiple kernel learning, enabling the posterior to capture sparsity patterns and inter-task correlations without assuming independence across parameters. This method enhances interpretability in high-dimensional settings by concentrating probability mass on relevant features while allowing dependencies through shared slab components. Flow-based approximations, exemplified by normalizing flows, transform a simple base distribution (e.g., Gaussian) into a more expressive q via invertible mappings, thereby capturing multimodal and non-Gaussian dependencies in the posterior. Rezende and Mohamed's seminal 2015 work integrated normalizing flows into variational inference, demonstrating substantial ELBO improvements on tasks such as posterior approximation in deep latent variable models and variational autoencoders, where the flow's invertibility and tractable Jacobian determinant ensure tractable density evaluation. In hierarchical models, flows can model correlated latents across levels, outperforming mean-field with improvements of several nats in the ELBO on benchmarks like binarized MNIST. Nested variational methods further generalize this by learning hierarchical proposals within importance sampling frameworks, allowing nested distributions to approximate intractable posteriors more flexibly. As proposed by Zimmermann et al. in 2021, nested variational inference minimizes forward or reverse KL divergences at multiple levels, benefiting applications in nested sampling scenarios like evidence estimation in astronomy, where it reduces variance in ELBO estimates compared to standard variational inference. These structured approximations provide key benefits, including superior handling of dependencies in models with latent hierarchies or sparsity, leading to tighter ELBO bounds and more reliable uncertainty quantification. For example, tree-structured q in VMP has been shown to capture inter-layer correlations in hierarchical models, yielding improved ELBO values compared to mean-field baselines. However, they introduce challenges such as higher computational costs due to the need for propagating messages or inverting flows, often scaling quadratically with model depth or dimensionality.
Model selection among structured families typically relies on ELBO comparisons, with cross-validation used to balance expressiveness against overfitting. More recent structured approximations incorporate skew-normal decomposable graphical models to handle posterior skewness (Salomone et al., 2023).
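To make the flow-based idea concrete, the sketch below implements a single planar flow of the form f(z) = z + u \tanh(w^\top z + b) from Rezende and Mohamed's construction, with arbitrary assumed parameter values; it transforms samples from a standard Gaussian base distribution and applies the change-of-variables correction \log q_1(f(z)) = \log q_0(z) - \log \lvert 1 + u^\top \psi(z) \rvert, so the density of the transformed samples remains tractable:

import numpy as np

rng = np.random.default_rng(1)
D = 2
u = np.array([1.5, 0.0])            # flow parameters (assumed for illustration)
w = np.array([1.0, 1.0])
b = 0.0

def planar_flow(z):
    # f(z) = z + u * tanh(w.z + b), with log|det Jacobian| for each sample
    a = z @ w + b
    f = z + np.outer(np.tanh(a), u)
    psi = (1 - np.tanh(a) ** 2)[:, None] * w        # psi(z) = h'(w.z + b) * w
    log_det = np.log(np.abs(1 + psi @ u))
    return f, log_det

z0 = rng.standard_normal((5, D))                    # base samples z0 ~ N(0, I)
log_q0 = -0.5 * np.sum(z0 ** 2, axis=1) - 0.5 * D * np.log(2 * np.pi)
z1, log_det = planar_flow(z0)
log_q1 = log_q0 - log_det                           # density under the flowed q
print(z1)
print(log_q1)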

Algorithms and Optimization

Coordinate Ascent VI

Coordinate Ascent Variational Inference (CAVI) is a deterministic optimization algorithm designed to maximize the evidence lower bound (ELBO) by iteratively updating the factors of a factorized variational distribution q(\theta) = \prod_i q_i(\theta_i), where each q_i is optimized while holding the others fixed. This coordinate ascent approach exploits the concavity of the ELBO with respect to each individual factor, ensuring a monotonic increase in the bound at each step. Introduced as a standard method for approximate inference in probabilistic models, CAVI is particularly suited for batch processing of complete datasets and is widely applied in scenarios where exact posterior computation is intractable. The algorithm proceeds by initializing the variational parameters defining each q_i(\theta_i), often using non-informative priors or moment matching from the model. In each iteration, the factors are updated sequentially or cyclically: for every i, the optimal q_i^* is found by maximizing the ELBO with respect to q_i alone, q_i^* = \arg\max_{q_i} \int q_i(\theta_i) \left( \mathbb{E}_{q_{-i}} \left[ \log p(x, \theta) \right] - \log q_i(\theta_i) \right) d\theta_i, where \mathbb{E}_{q_{-i}} \left[ \log p(x, \theta) \right] = \int \log p(x, \theta) \prod_{j \neq i} q_j(\theta_j) \, d\theta_{-i} averages over all latent variables \theta_{-i} excluding \theta_i, and p(x, \theta) is the joint distribution of observed data x and latents \theta. This maximization yields the closed-form update q_i^*(\theta_i) \propto \exp\left( \mathbb{E}_{q_{-i}} \left[ \log p(x, \theta) \right] \right), with the expectation taken over the current values of all other factors q_{-i}(\theta_{-i}) = \prod_{j \neq i} q_j(\theta_j); normalization ensures q_i integrates to 1. These updates can often be computed analytically when the model belongs to the conjugate exponential family, leveraging digamma functions or other expected sufficient statistics for efficiency. The following pseudocode outlines the core procedure:
Input: Joint model p(x, θ), observed data x
Output: Optimized variational distribution q(θ) = ∏_i q_i(θ_i)
Initialize factors q_i(θ_i) for i = 1 to m (e.g., via priors or heuristics)
While not converged:
    For each i = 1 to m:
        Compute q_i^*(θ_i) ∝ exp{ E_{q_{-i}} [log p(x, θ)] }
        Normalize q_i^* to integrate to 1
        Set q_i ← q_i^*
    Evaluate ELBO with current q(θ)
Return q(θ)
Convergence is typically assessed by monitoring the change in the ELBO between iterations; the process halts when |\Delta \mathcal{L}| < \epsilon for a small tolerance \epsilon (e.g., 10^{-4}), or after a fixed number of iterations to avoid excessive computation. Due to the coordinate-wise optimization, the ELBO is non-decreasing at each full cycle and converges to a local maximum, though the final solution may depend on initialization owing to the non-convexity of the overall objective. In practice, CAVI often converges within a modest number of iterations for many models, following the standard recipe for variational Bayes.
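As a concrete instance of these updates, the following sketch assumes the standard textbook model of a univariate Gaussian with unknown mean \mu and precision \tau under a conjugate Normal-Gamma prior (synthetic data and hyperparameter values are chosen purely for illustration), and alternates the two closed-form factor updates for q(\mu) and q(\tau):

import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(5.0, 2.0, size=200)          # synthetic data (true mean 5, std 2)
N, xbar = len(x), x.mean()

mu0, lam0, a0, b0 = 0.0, 1.0, 1.0, 1.0      # prior hyperparameters (assumed)

E_tau = a0 / b0                              # initial guess for E_q[tau]
for _ in range(20):                          # coordinate ascent over the two factors
    # Update q(mu) = N(mu_N, 1/lam_N)
    mu_N = (lam0 * mu0 + N * xbar) / (lam0 + N)
    lam_N = (lam0 + N) * E_tau
    # Update q(tau) = Gamma(a_N, b_N), using expectations under the current q(mu)
    a_N = a0 + (N + 1) / 2
    E_sq = np.sum((x - mu_N) ** 2) + N / lam_N
    b_N = b0 + 0.5 * (E_sq + lam0 * ((mu_N - mu0) ** 2 + 1 / lam_N))
    E_tau = a_N / b_N

print("E_q[mu]  =", mu_N)                    # close to the sample mean
print("E_q[tau] =", E_tau, "(true precision is 1/4)")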

Stochastic Variational Inference

Stochastic variational inference (SVI) addresses the computational challenges of variational inference on large datasets by leveraging stochastic optimization techniques. Traditional coordinate ascent variational inference (CAVI) requires a full pass over the entire dataset of size N in each iteration, leading to O(N) complexity per update, which becomes prohibitive for massive data regimes common in modern machine learning applications. In contrast, SVI approximates the evidence lower bound (ELBO) objective using mini-batches of size B \ll N, reducing the per-iteration cost to O(B) while maintaining scalability through noisy but unbiased gradient estimates. This approach enables efficient posterior approximation in high-dimensional models, such as topic models and deep generative models, where exact full-data computations are infeasible. The core of SVI, as introduced by Hoffman et al. (2013), lies in stochastic natural gradient ascent on the ELBO for models in the conjugate exponential family, using mini-batches to compute noisy estimates of the natural gradient via subsampling of data and latent variables. For a mini-batch drawn from the dataset, the update scales the local gradient estimate by N/B to unbiasedly approximate the full-data gradient, with parameters updated via decaying step sizes \rho_t satisfying the Robbins-Monro conditions (e.g., \rho_t \propto 1/t^\gamma for 0.5 < \gamma \leq 1): \phi_{t+1} = \phi_t + \rho_t \hat{\nabla}_\phi \mathcal{L}(\phi_t). Subsequent developments, such as the reparameterization trick introduced by Kingma and Welling (2014), enable low-variance stochastic gradients for broader classes of models by reparameterizing latents as \theta = g(\epsilon, \phi) with \epsilon \sim p(\epsilon), allowing direct differentiation through the sampling process. This framework allows SVI to handle conjugate exponential family models efficiently, with natural gradients further accelerating convergence in structured variational families. SVI was introduced by Hoffman et al. in their seminal 2013 paper, where it was applied to latent Dirichlet allocation (LDA) topic models, demonstrating orders-of-magnitude speedups over full-batch methods on large corpora. Subsequent extensions, such as black-box variational inference by Ranganath et al. (2014), generalize SVI to non-conjugate and non-reparameterizable models using score-function estimators (e.g., REINFORCE) for broader applicability across diverse probabilistic models without model-specific derivations. These advancements have made SVI a cornerstone for scalable Bayesian inference in machine learning pipelines as of 2025.
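A minimal sketch of the natural-gradient updates follows, assuming a toy conjugate model (observations x_i \sim \mathcal{N}(\theta, 1), prior \theta \sim \mathcal{N}(0, 1), Gaussian q(\theta)) chosen for illustration only. Each step forms an intermediate estimate of the natural parameters from a mini-batch rescaled by N/B and blends it with the current estimate using a Robbins-Monro step size; for conjugate exponential families this blending is exactly the stochastic natural-gradient update:

import numpy as np

rng = np.random.default_rng(3)
N, B = 10_000, 100
x = rng.normal(2.0, 1.0, size=N)             # synthetic data (true mean 2)

# Natural parameters of q(theta) = N(mean, var):
#   eta1 = mean / var, eta2 = -1 / (2 * var). Initialize at the prior N(0, 1).
eta1, eta2 = 0.0, -0.5

for t in range(1, 501):
    rho = (t + 10.0) ** -0.7                 # Robbins-Monro step size
    batch = rng.choice(x, size=B, replace=False)
    # Intermediate natural parameters: prior plus mini-batch rescaled by N/B,
    # as if the mini-batch were replicated to the full dataset size.
    hat_eta1 = 0.0 + (N / B) * np.sum(batch)  # prior contributes 0 (zero mean)
    hat_eta2 = -0.5 - 0.5 * N                 # prior precision 1 plus N points
    eta1 = (1 - rho) * eta1 + rho * hat_eta1
    eta2 = (1 - rho) * eta2 + rho * hat_eta2

var = -1.0 / (2.0 * eta2)
mean = eta1 * var
print(mean, var)                              # approximately the exact posterior
print(np.sum(x) / (N + 1), 1.0 / (N + 1))     # exact: N(sum(x)/(N+1), 1/(N+1))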

Examples

Simple Bernoulli Model

The simple Bernoulli model provides an accessible illustration of variational inference (VI) in a fully conjugate setting, where the approximation recovers the exact posterior distribution in a single update step. Consider a dataset of N independent observations \mathbf{x} = \{x_1, \dots, x_N\}, where each x_i \in \{0, 1\} represents the outcome of a coin toss and follows a Bernoulli distribution parameterized by the unknown success probability \theta \in [0, 1], such that x_i \sim \text{Bernoulli}(\theta). A Beta prior is placed on \theta, \theta \sim \text{Beta}(\alpha, \beta), with hyperparameters \alpha > 0 and \beta > 0. The joint distribution over the observations and parameter is then p(\mathbf{x}, \theta) = \left[ \prod_{i=1}^N \theta^{x_i} (1 - \theta)^{1 - x_i} \right] \frac{\theta^{\alpha - 1} (1 - \theta)^{\beta - 1}}{B(\alpha, \beta)}, where B(\alpha, \beta) is the Beta function. In VI, a variational distribution q(\theta) from the same Beta family is chosen to approximate the true posterior p(\theta \mid \mathbf{x}), parameterized as q(\theta) = \text{Beta}(\mu_\alpha, \mu_\beta). The goal is to maximize the evidence lower bound (ELBO), \mathcal{L}(q) = \mathbb{E}_q[\log p(\mathbf{x} \mid \theta)] + \mathbb{E}_q[\log p(\theta)] - \mathbb{E}_q[\log q(\theta)], with respect to \mu_\alpha and \mu_\beta. Due to conjugacy, the optimal variational parameters coincide exactly with the posterior parameters: \mu_\alpha = \alpha + \sum_{i=1}^N x_i and \mu_\beta = \beta + N - \sum_{i=1}^N x_i. This yields q(\theta) = p(\theta \mid \mathbf{x}), demonstrating that VI achieves the exact result without approximation error in this case. The algorithm thus requires only a single parameter update, highlighting how VI leverages conjugacy for efficient exact inference while providing a template for more complex non-conjugate models.
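A short sketch (with synthetic coin tosses and an assumed Beta(2, 2) prior) makes the conjugate result concrete: it forms the optimal Beta parameters in a single update and checks by Monte Carlo that the ELBO under this q equals the exact log evidence \log \left[ B(\mu_\alpha, \mu_\beta) / B(\alpha, \beta) \right]:

import numpy as np
from scipy.special import betaln
from scipy.stats import beta as beta_dist, bernoulli

rng = np.random.default_rng(4)
x = rng.binomial(1, 0.7, size=50)               # synthetic coin tosses
alpha, beta = 2.0, 2.0                           # prior Beta(2, 2) (assumed)

# Single conjugate update: optimal variational = exact posterior parameters
mu_alpha = alpha + x.sum()
mu_beta = beta + len(x) - x.sum()

# Exact log evidence of the Beta-Bernoulli model
log_evidence = betaln(mu_alpha, mu_beta) - betaln(alpha, beta)

# Monte Carlo check that the ELBO under q = Beta(mu_alpha, mu_beta) matches it
theta = rng.beta(mu_alpha, mu_beta, size=100_000)
log_joint = (bernoulli.logpmf(x[:, None], theta).sum(axis=0)
             + beta_dist.logpdf(theta, alpha, beta))
elbo = np.mean(log_joint - beta_dist.logpdf(theta, mu_alpha, mu_beta))

print(log_evidence, elbo)                        # approximately equal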

Gaussian Mixture Model

The Gaussian mixture model (GMM) is a fundamental example for demonstrating variational Bayesian (VB) inference in models with latent variables, particularly in the multivariate setting where data points are clustered into multiple Gaussian components. In this framework, observed data \{ \mathbf{x}_n \}_{n=1}^N in \mathbb{R}^D are generated from a mixture of K Gaussians, with latent cluster assignments \mathbf{z}_n \in \{1, \dots, K\} for each data point. The generative model specifies \mathbf{z}_n \sim \mathrm{Multinomial}(\pi), where \pi = (\pi_1, \dots, \pi_K) are the mixing proportions drawn from a Dirichlet prior \pi \sim \mathrm{Dir}(\alpha) with symmetric concentration parameter \alpha > 0. Conditional on \mathbf{z}_n = k, the observation follows \mathbf{x}_n \sim \mathcal{N}(\boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k), where the component means \boldsymbol{\mu}_k have a normal prior \boldsymbol{\mu}_k \sim \mathcal{N}(\mathbf{m}_0, (\beta_0 \boldsymbol{\Lambda}_k)^{-1}) and the precisions \boldsymbol{\Lambda}_k = \boldsymbol{\Sigma}_k^{-1} follow a Wishart prior \boldsymbol{\Lambda}_k \sim \mathrm{Wishart}(\nu_0, \mathbf{W}_0), with hyperparameters \beta_0 > 0, \nu_0 > D-1, and positive definite \mathbf{W}_0. This conjugate prior setup facilitates tractable expectations in the VB framework. To approximate the intractable posterior p(\mathbf{z}, \pi, \{\boldsymbol{\mu}_k, \boldsymbol{\Lambda}_k\}_{k=1}^K \mid \{\mathbf{x}_n\}), a mean-field variational distribution is assumed: q(\mathbf{z}, \pi, \{\boldsymbol{\mu}_k, \boldsymbol{\Lambda}_k\}) = \prod_{n=1}^N q(\mathbf{z}_n) \cdot q(\pi) \cdot \prod_{k=1}^K q(\boldsymbol{\mu}_k \mid \boldsymbol{\Lambda}_k) q(\boldsymbol{\Lambda}_k), fully factorized across latent variables and parameters. Specifically, q(\mathbf{z}_n) = \mathrm{Multinomial}(\mathbf{r}_n) with responsibilities \mathbf{r}_n = (r_{n1}, \dots, r_{nK}) where \sum_k r_{nk} = 1; q(\pi) = \mathrm{Dir}(\boldsymbol{\gamma}) with \boldsymbol{\gamma} = (\gamma_1, \dots, \gamma_K); each q(\boldsymbol{\mu}_k \mid \boldsymbol{\Lambda}_k) = \mathcal{N}(\mathbf{m}_k, (\beta_k \boldsymbol{\Lambda}_k)^{-1}); and q(\boldsymbol{\Lambda}_k) = \mathrm{Wishart}(\nu_k, \mathbf{W}_k). This factorization simplifies optimization of the evidence lower bound (ELBO), \mathcal{L}(q) = \mathbb{E}_q[\log p(\{\mathbf{x}_n\}, \mathbf{z}, \pi, \{\boldsymbol{\mu}_k, \boldsymbol{\Lambda}_k\})] - \mathbb{E}_q[\log q], which lower-bounds the marginal likelihood and is maximized via coordinate ascent. The variational parameters are updated iteratively in a variational Bayesian EM (VBEM) scheme. The responsibilities are given by r_{nk} \propto \exp\left( \mathbb{E}_{q(\pi)}[\log \pi_k] + \mathbb{E}_{q(\boldsymbol{\mu}_k, \boldsymbol{\Lambda}_k)}[\log \mathcal{N}(\mathbf{x}_n \mid \boldsymbol{\mu}_k, \boldsymbol{\Lambda}_k^{-1})] \right), with \sum_k r_{nk} = 1. The expectation over the Dirichlet is \mathbb{E}[\log \pi_k] = \psi(\gamma_k) - \psi(\sum_{j=1}^K \gamma_j), where \psi is the digamma function, and \gamma_k = \alpha + \sum_{n=1}^N r_{nk}. The expected Gaussian log-likelihood involves quantities such as \mathbb{E}[\log |\boldsymbol{\Lambda}_k|] = \sum_{i=1}^D \psi\left( \frac{\nu_k + 1 - i}{2} \right) + D \log 2 + \log |\mathbf{W}_k| from the Wishart, along with quadratic terms built from \mathbb{E}[\boldsymbol{\mu}_k] and \mathbb{E}[\boldsymbol{\mu}_k \boldsymbol{\mu}_k^\top].
Updates for the normal-Wishart parameters follow similarly: \beta_k = \beta_0 + \sum_n r_{nk}, \nu_k = \nu_0 + \sum_n r_{nk}, and adjusted \mathbf{m}_k, \mathbf{W}_k incorporating data moments weighted by responsibilities. These closed-form updates leverage conjugacy, enabling efficient computation. In practice, this VB approach for the GMM illustrates key limitations of mean-field approximations, such as underestimation of posterior correlations between latent variables (e.g., cluster assignments \mathbf{z}_n and parameters \boldsymbol{\mu}_k) due to the independence assumptions in q. For instance, the factorized q ignores dependencies that MCMC methods capture, leading to tighter but biased posterior variances. The ELBO is computed at each iteration to monitor convergence and model fit, often showing effective automatic model selection by shrinking ineffective components when \alpha is small. Empirical evaluations on synthetic multivariate data demonstrate that VB-GMM recovers true structures faster than sampling-based alternatives, though with slightly overestimated component overlaps.
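In practice these VBEM updates are rarely hand-coded; scikit-learn's BayesianGaussianMixture implements a closely related variational scheme with Dirichlet-type priors on the weights and Gaussian-Wishart priors on the component parameters. The sketch below (with assumed synthetic data and hyperparameter values chosen for illustration) deliberately over-specifies the number of components and uses a small weight concentration so that redundant components are pruned by driving their mixing weights toward zero:

import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(5)
X = np.vstack([rng.normal([0.0, 0.0], 0.5, size=(200, 2)),   # two true clusters
               rng.normal([3.0, 3.0], 0.5, size=(200, 2))])

vbgmm = BayesianGaussianMixture(
    n_components=6,                                   # deliberately too many
    weight_concentration_prior_type="dirichlet_distribution",
    weight_concentration_prior=0.01,                  # small alpha prunes components
    covariance_type="full",
    max_iter=500,
    random_state=0,
).fit(X)

print(np.round(vbgmm.weights_, 3))    # most mixing weights shrink toward zero
print(vbgmm.predict(X[:5]))           # hard assignments from the responsibilities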

Advanced Topics

Exponential Family Distributions

In variational Bayesian methods, the exponential family plays a central role due to its conjugacy properties, which enable tractable approximations for posterior inference. The exponential family encompasses a broad class of probability distributions, including the Gaussian, Bernoulli, gamma, and Dirichlet distributions, whose probability density functions (or mass functions) can be expressed in the canonical form p(x \mid \eta) = h(x) \exp \left\{ \eta^\top T(x) - A(\eta) \right\}, where h(x) is the base measure, \eta is the natural parameter vector, T(x) is the vector of sufficient statistics, and A(\eta) = \log \int h(x) \exp \left\{ \eta^\top T(x) \right\} dx is the log-partition function that ensures normalization. This parameterization highlights the role of sufficient statistics T(x), which capture all relevant information about the parameter \eta from the data x. When both the likelihood p(x \mid \theta) and the prior p(\theta) belong to conjugate exponential families, the exact posterior p(\theta \mid x) also lies within the same family, simplifying inference. In variational inference (VI), the approximating distribution q(\theta) is typically chosen from the same conjugate exponential family form, leading to closed-form updates for its natural parameters. Specifically, the update rule for the natural parameter \eta_q of q becomes \eta_q = \eta_{\text{prior}} + \mathbb{E}_{q(\text{other vars})} [T(x, \text{other vars})], where the expectation is taken with respect to the variational distributions over latent variables or other parameters. This additivity in the natural parameter space arises directly from the exponential family structure and mirrors the exact Bayesian update in conjugate settings. The primary advantages of this setup in VI include the availability of closed-form expressions for key expectations, such as \mathbb{E}_q [T(x)] = \nabla_\eta A(\eta_q), which avoids numerical integration and accelerates optimization of the evidence lower bound (ELBO). For instance, in mean-field approximations where q is factorized across parameters, the additivity of natural parameters allows for efficient coordinate ascent updates, as each factor's parameter can be optimized independently while leveraging expectations from the others. This property makes conjugate-exponential models particularly prevalent in VI applications, as many common probabilistic models admit conjugate priors within this family. For non-conjugate models, where the prior or likelihood does not fit the exponential family form, VI updates can still be approximated using techniques such as lower bound optimizations or quadrature methods to estimate the required expectations \mathbb{E}_q [T(x)]. These generalizations extend the applicability of VI beyond strictly conjugate settings, though they may introduce additional computational overhead compared to the closed-form conjugate case.
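A small sketch (using the Bernoulli distribution as an assumed running example) illustrates two of these properties: the gradient of the log-partition function A(\eta) = \log(1 + e^{\eta}) recovers the expected sufficient statistic, and the conjugate Beta-Bernoulli update is purely additive in the pseudo-count parameters:

import numpy as np

def A(eta):
    # Log-partition function of the Bernoulli in natural-parameter form
    return np.log1p(np.exp(eta))

eta = 0.8
h = 1e-6
grad_A = (A(eta + h) - A(eta - h)) / (2 * h)    # numerical dA/deta
mean_param = 1.0 / (1.0 + np.exp(-eta))          # E[T(x)] = E[x] = sigmoid(eta)
print(grad_A, mean_param)                        # agree: grad A(eta) = E_q[T(x)]

# Conjugate update: the Beta prior contributes pseudo-counts that simply add
# to the observed sufficient statistics.
x = np.array([1, 1, 0, 1, 0, 1, 1])
alpha0, beta0 = 2.0, 2.0
alpha_post, beta_post = alpha0 + x.sum(), beta0 + len(x) - x.sum()
print(alpha_post, beta_post)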

Duality Formulas

Variational inference (VI) can be formulated as a primal optimization problem that maximizes the evidence lower bound (ELBO) over a variational distribution q(\theta) to approximate the posterior p(\theta \mid x), expressed as \max_q \mathcal{L}(q) = \mathbb{E}_q[\log p(x, \theta)] - \mathbb{E}_q[\log q(\theta)], which provides a lower bound on the log evidence \log p(x). This primal perspective leverages the structure of exponential families, where the ELBO maximization corresponds to optimizing the cumulant generating function A(\theta) with respect to canonical parameters \theta. The dual formulation arises from conjugate duality between the cumulant function A(\theta) and its convex conjugate A^*(\mu), transforming the inference problem into a convex optimization over mean parameters \mu = \mathbb{E}[\phi(X)], where \phi are sufficient statistics. Specifically, the dual problem is \min_{\mu \in \mathcal{M}} A^*(\mu) = \min_{\mu} \sup_{\theta} \{\langle \theta, \mu \rangle - A(\theta)\}, where \mathcal{M} is the marginal polytope, and for \mu in the interior \mathcal{M}^\circ, A^*(\mu) = -H(p_{\theta(\mu)}), with H denoting entropy. A key duality formula links the primal and dual via A(\theta) = \sup_{\mu \in \mathcal{M}} \{\langle \theta, \mu \rangle - A^*(\mu)\}, enabling variational representations for computing log-likelihoods and marginals. A simple proof sketch of this conjugate duality follows from the definition of the convex conjugate and the Fenchel-Moreau theorem: since A(\theta) is convex and lower semicontinuous, its biconjugate equals itself, yielding A(\theta) = \sup_{\mu} \{\langle \theta, \mu \rangle - A^*(\mu)\}. For the reverse direction, A^*(\mu) = \sup_{\theta} \{\langle \mu, \theta \rangle - A(\theta)\} holds by direct computation using the exponential family form p(x; \theta) = \exp(\langle \theta, \phi(x) \rangle - A(\theta)). In extensions to upper bounds on the log partition function, Wainwright et al. (2005) introduce a family of dual formulations using tree-reweighted distributions, where the primal problem minimizes a convex combination \min_{\rho} \sum_T \rho(T) \log Z(T) over spanning trees T with weights \rho, and the dual problem maximizes \sum_i H_i(\mu_i) - \sum_{i,j} \rho_{i,j} I(\mu_{i,j}) over pseudomarginals \mu, providing tight upper bounds via the dual gap. The dual gap measures the difference between primal and dual objectives, with tightness achieved when pseudomarginals match true marginals, as in tree-structured models. These duality formulas facilitate theoretical analysis of approximation quality in variational inference by quantifying the gap between the ELBO and the true log evidence, often through convexity properties that ensure global optimality in the dual. They also aid non-convex optimization in practice by reformulating problems as convex duals, enabling efficient algorithms like tree-reweighted message passing that converge to fixed points under stationarity conditions of the dual objective. Such tools are particularly valuable for assessing bound tightness in structured approximations, though they remain optional for practical implementations focused on scalable inference.
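A worked instance of the conjugate pair, stated here for the single-variable Bernoulli case purely as an illustration of the general formulas: with natural parameter \theta and sufficient statistic \phi(x) = x, the log-partition function is A(\theta) = \log(1 + e^{\theta}), its convex conjugate is A^*(\mu) = \mu \log \mu + (1 - \mu) \log(1 - \mu) for \mu \in (0, 1), which is precisely the negative entropy -H(p_{\theta(\mu)}), and the duality A(\theta) = \sup_{\mu \in (0,1)} \{\theta \mu - A^*(\mu)\} is attained at the mean parameter \mu = \sigma(\theta) = (1 + e^{-\theta})^{-1}.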

Comparisons and Applications

Vs. MCMC and EM

Variational inference (VI) differs from the expectation-maximization (EM) algorithm in its treatment of uncertainty and objectives. While EM iteratively maximizes the likelihood or obtains maximum a posteriori (MAP) estimates by treating parameters as fixed point values, VI approximates the full posterior distribution over parameters by optimizing a variational family, providing a more comprehensive Bayesian treatment that quantifies uncertainty. EM does not provide a bound on the model evidence, whereas VI maximizes the evidence lower bound (ELBO), enabling model selection and comparison through evidence estimates. This makes VI particularly suitable for scenarios requiring posterior predictive distributions, unlike EM's focus on point estimates. In contrast to Markov chain Monte Carlo (MCMC) methods, VI offers deterministic optimization for faster computation and scalability to massive datasets, such as those with millions of parameters in topic modeling, but it introduces approximation bias by restricting the posterior approximation to a chosen variational family, potentially underestimating posterior variance. MCMC, by sampling from the posterior, yields asymptotically unbiased and exact results with proper mixing, though it suffers from high computational cost, variance in estimates, and challenges in high dimensions due to slow convergence. Hybrid approaches that incorporate MCMC steps into variational approximations mitigate these biases while retaining speed advantages. VI is preferred for large-scale applications demanding rapid inference, such as analyzing billions of documents, where EM may suffice for simpler maximum likelihood tasks but lacks Bayesian rigor, and MCMC is reserved for low-dimensional problems prioritizing accuracy over speed, like small experimental datasets.

Modern Applications

Variational Bayesian methods have found widespread application in machine learning, particularly in generative modeling through variational autoencoders (VAEs). Introduced in the seminal work on auto-encoding variational Bayes, VAEs combine neural networks with variational inference to learn latent representations of data, enabling efficient generation of new samples and representation learning in high-dimensional spaces such as images and text. This approach scales to large datasets by optimizing the evidence lower bound (ELBO) via stochastic gradients, making it a cornerstone for modern generative systems. In topic modeling, variational inference underpins scalable inference for latent Dirichlet allocation (LDA) models, originally proposed for discovering abstract topics in document collections. Subsequent advancements in stochastic variational inference (SVI) have extended this to massive corpora, such as online news archives, by processing data in minibatches and achieving near-real-time updates without full posterior recomputation. Similarly, variational methods facilitate Bayesian neural networks, where approximate posteriors over weights improve robustness and uncertainty quantification in tasks like classification and regression; for instance, the variational Bayesian dropout technique approximates posteriors efficiently, outperforming point estimates in overparameterized models. In applied statistics, variational inference supports hierarchical modeling, allowing flexible incorporation of group-level variability in complex datasets like longitudinal studies or multi-level surveys. This is particularly useful for A/B testing, where Bayesian frameworks quantify decision uncertainty by modeling conversion rates hierarchically, enabling more informed business choices than frequentist p-values; implementations in probabilistic programming languages like PyMC demonstrate this scalability for real-time experimentation. In epidemiology, variational approximations have accelerated inference in COVID-19 phylodynamic models, fitting transmission dynamics to genomic data from millions of sequences to estimate key parameters like reproduction numbers during the 2020-2022 outbreaks. Recent developments highlight variational inference's role in emerging domains. In federated learning, it enables privacy-preserving posterior approximations across distributed devices, as formulated in scalable expectation propagation frameworks that treat local updates as variational contributions to a global model. For quantum computing, variational Bayes inference leverages quantum circuits to approximate intractable posteriors, offering computational advantages over classical methods in high-dimensional optimization problems as demonstrated in 2023 analyses. As of 2024, variational inference has been applied to physically structured models for Bayesian full waveform inversion in geophysics, enabling efficient uncertainty assessment in subsurface imaging. In 2025, fast variational Bayesian methods have advanced estimation in panel spatial autoregressive models for spatial econometrics. Practical implementations in tools like Stan's automatic differentiation variational inference (ADVI) and PyMC further democratize these applications, supporting scalable Bayesian inference in production environments by handling millions of parameters with minimal computational overhead. Overall, these methods enable approximate Bayesian inference at unprecedented scales, bridging theoretical rigor with real-world deployment in data-intensive fields.
