
Factor graph

A factor graph is a bipartite probabilistic graphical model that represents the factorization of a global function, often a joint probability distribution, into a product of local functions, enabling efficient computation of marginals or other inferences through message-passing algorithms. It consists of two types of nodes: variable nodes, each corresponding to a random variable, and factor nodes, each representing a local function (or factor) that depends on a subset of those variables; edges connect a variable node to a factor node if the variable is an argument of that factor. This structure generalizes other graphical models such as Bayesian networks and Markov random fields, providing a unified framework for modeling complex dependencies in probabilistic systems.

Introduced in the context of coding theory and signal processing, factor graphs were formalized to visualize and exploit function factorizations for algorithmic efficiency, with the seminal work of Kschischang, Frey, and Loeliger demonstrating their equivalence to Tanner graphs and their role in decoding algorithms such as those for low-density parity-check (LDPC) codes. The associated sum-product algorithm, a key inference method, performs exact marginalization in tree-structured (cycle-free) graphs by propagating messages (summaries of local computations) between nodes, unifying diverse techniques such as the forward-backward algorithm for hidden Markov models, the Viterbi algorithm for maximum-likelihood decoding, and belief propagation in Bayesian networks. In graphs with cycles, the algorithm can be applied iteratively to yield approximations, though convergence is not guaranteed.

Factor graphs have since found broad applications across fields, including artificial intelligence for probabilistic reasoning, communications for error-correcting codes, and signal processing for tasks like Kalman filtering. In robotics and computer vision, they model large-scale inference problems such as simultaneous localization and mapping (SLAM), where variables represent robot poses and landmarks, and factors encode sensor measurements or motion constraints, solved via nonlinear least-squares optimization on the graph's sparse structure. This sparsity enables scalable solutions for real-time systems, often using libraries like GTSAM to handle millions of variables through techniques like incremental smoothing and Bayes trees.

Fundamentals

Definition

A factor graph is an undirected bipartite graph used to represent the factorization of a multivariate function into a product of local functions over subsets of variables. It consists of two distinct types of nodes: variable nodes, which represent the random variables involved in the model, and factor nodes, which represent the local functions (or potentials) that depend on specific subsets of those variables. Edges in the graph connect a variable node to a factor node only if the variable is an argument of that factor, ensuring a clear separation between variables and the functions that influence them.

Formally, a factor graph is defined as G = (V, F, E), where V is the set of variable nodes, F is the set of factor nodes, and E is the set of edges linking variables to the factors they participate in. This bipartite structure distinguishes factor graphs from general graphical models by explicitly encoding high-order interactions, where a single factor can depend on multiple variables, without introducing cycles between variable nodes themselves, thereby facilitating efficient representation and computation.

Factor graphs are particularly useful in probabilistic modeling for representing joint probability distributions through factorization, expressed as P(X) = \frac{1}{Z} \prod_{a \in F} f_a(X_a), where X denotes the set of all variables, each f_a is a local function over the subset X_a \subseteq X, and Z is the normalizing constant (partition function). This representation allows complex probabilistic dependencies to be decomposed into manageable local components, aiding in tasks such as inference in large-scale models.

Formal Representation

A factor graph is a bipartite graph G = (V, F, E), where V = \{x_i\}_{i=1}^n denotes the set of variable nodes, each corresponding to a variable x_i taking values in a domain (typically a finite alphabet), F = \{f_a\}_{a=1}^m denotes the set of factor nodes, each representing a local function f_a, and E \subseteq V \times F is the set of edges connecting a variable node x_i to a factor node f_a if and only if x_i is in the scope X_a \subseteq X of f_a, where X = \{x_1, \dots, x_n\} is the set of all variables.

This structure encodes the factorization of a global function \psi: \mathcal{X} \to \mathbb{R} (with \mathcal{X} the Cartesian product of the variable domains) as \psi(X) = \prod_{a=1}^m f_a(X_a), where each f_a: \mathcal{X}_a \to \mathbb{R} is a local function defined over the subset X_a, and the product runs over all factors without redundancy, since the edges precisely delineate the dependencies. The bipartite nature ensures that the graph represents the scopes X_a compactly, both visually and computationally, avoiding explicit enumeration of variables in each factor.

In the special case of factor graphs modeling joint probability mass functions, the graph represents a distribution P(X) via potential functions \psi_a \geq 0 as P(X) = \frac{1}{Z} \prod_{a=1}^m \psi_a(X_a), where Z = \sum_X \prod_{a=1}^m \psi_a(X_a) is the partition function ensuring normalization. Here the factor nodes correspond to the potentials \psi_a, which may include conditional probabilities or joint factors derived from models like Markov random fields. Deterministic constraints, such as equality or validity conditions on subsets of variables, are incorporated as special factors using indicator (or characteristic) functions \delta_a(X_a), which equal 1 if the constraint holds and 0 otherwise; these multiply into the product as \prod_a \delta_a(X_a), enforcing the constraints without altering the probabilistic interpretation when combined with other potentials.
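The formal definition translates directly into a small data structure. The following Python sketch is illustrative rather than taken from any library (the class name FactorGraph and its methods are assumptions); it stores variable domains and factors with their scopes, evaluates the global product \psi(X) for a full assignment, and computes Z by brute force:

```python
import itertools
from typing import Callable, Dict, List, Tuple

class FactorGraph:
    """Minimal bipartite factor-graph container (illustrative sketch)."""

    def __init__(self):
        self.domains: Dict[str, List[int]] = {}  # variable node -> domain
        self.factors: List[Tuple[Tuple[str, ...], Callable]] = []  # (scope, function)

    def add_variable(self, name: str, domain: List[int]) -> None:
        self.domains[name] = domain

    def add_factor(self, scope: Tuple[str, ...], fn: Callable) -> None:
        # Adding a factor implicitly adds one edge per variable in its scope.
        self.factors.append((scope, fn))

    def evaluate(self, assignment: Dict[str, int]) -> float:
        """Global function psi(X) = product of the local factors f_a(X_a)."""
        value = 1.0
        for scope, fn in self.factors:
            value *= fn(*(assignment[v] for v in scope))
        return value

    def partition_function(self) -> float:
        """Z = sum of psi(X) over all joint assignments (brute force)."""
        names = list(self.domains)
        return sum(
            self.evaluate(dict(zip(names, values)))
            for values in itertools.product(*(self.domains[n] for n in names))
        )
```

Brute-force summation is exponential in the number of variables; it is shown here only to make the definitions concrete, whereas the message-passing algorithms described later exploit the graph structure to avoid it.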

Relations to Other Models

Comparison with Bayesian Networks

Bayesian networks are directed acyclic graphs (DAGs) that represent joint probability distributions through conditional independencies, factorizing the distribution as P(X) = \prod_{i=1}^n P(x_i \mid \mathrm{pa}(x_i)), where \mathrm{pa}(x_i) denotes the parents of x_i in the DAG. This structure encodes directed dependencies, making Bayesian networks particularly suitable for modeling causal relationships and performing interventions.

Factor graphs can represent any Bayesian network by treating each conditional P(x_i \mid \mathrm{pa}(x_i)) as a factor node connected to the variable node x_i and its parent nodes, preserving the exact factorization of the joint distribution. However, factor graphs are more general, as they subsume Bayesian networks and can directly express undirected or loopy models without requiring conversion, allowing representation of arbitrary factorizations beyond conditional independencies.

Factor graphs offer representational advantages over Bayesian networks, particularly for high-order factors involving multiple variables, which can be defined directly without introducing auxiliary nodes, and for efficient message-passing algorithms in non-tree (loopy) structures. Unlike inference in Bayesian networks, which often requires moralization (adding undirected edges between co-parents and dropping directions to form a moral graph), factor graphs avoid this step, preserving sparsity and simplifying the transition to undirected inference methods. A key limitation of factor graphs is their lack of inherent directionality, which prevents them from naturally capturing causal semantics; Bayesian networks excel in causal reasoning tasks, such as modeling interventions, due to their directed edges that align with causal mechanisms.

For example, consider a simple chain with variables A \to B \to C, factorizing as P(A, B, C) = P(A) \cdot P(B \mid A) \cdot P(C \mid B). The equivalent factor graph includes variable nodes for A, B, and C, with factor nodes connected respectively to A alone, to A and B, and to B and C, directly mirroring the conditionals without additional structure; a sketch of this conversion follows below.
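Building on the illustrative FactorGraph class from the previous section, the chain A \to B \to C can be encoded by treating each conditional probability table as a factor. The CPT values here are arbitrary placeholders chosen only to be valid probabilities:

```python
# Encode the chain A -> B -> C with binary variables (CPT values are placeholders).
g = FactorGraph()
for name in ("A", "B", "C"):
    g.add_variable(name, [0, 1])

p_a = {0: 0.6, 1: 0.4}                                               # P(A)
p_b_given_a = {(0, 0): 0.7, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.8}   # P(B|A)
p_c_given_b = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.5, (1, 1): 0.5}   # P(C|B)

g.add_factor(("A",), lambda a: p_a[a])
g.add_factor(("A", "B"), lambda a, b: p_b_given_a[(a, b)])
g.add_factor(("B", "C"), lambda b, c: p_c_given_b[(b, c)])

# Because each factor is a proper conditional, psi is already normalized: Z = 1.
assert abs(g.partition_function() - 1.0) < 1e-12
```

The assertion reflects a general fact: when every factor is a conditional distribution inherited from a Bayesian network, the partition function equals one and no extra normalization is needed.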

Comparison with Markov Random Fields

Markov random fields (MRFs) are undirected graphical models in which the joint distribution over a set of random variables factors according to the maximal cliques of the graph, expressed as P(X) = \frac{1}{Z} \prod_{c} \psi_c(X_c), where Z is the partition function, \psi_c are potential functions defined over the variables X_c in clique c, and the product is taken over all maximal cliques c. This factorization arises from the Hammersley-Clifford theorem, which links the local Markov properties of the undirected graph to a global Gibbs distribution.

Factor graphs and MRFs are equivalent in expressive power, as any MRF can be represented as a factor graph and vice versa, with both capable of encoding the same joint distributions through factorization into local functions. In an MRF, the clique potentials \psi_c are implicitly associated with subsets of variables connected by edges, requiring identification of maximal cliques to specify the factorization. In contrast, factor graphs make these potentials explicit by introducing dedicated factor nodes connected only to the relevant variable nodes, thereby avoiding the need to infer cliques from the structure. Both models share an undirected nature, with factor graphs inheriting the pairwise or higher-order interactions of MRFs without imposing directionality.

Factor graphs offer advantages over MRFs in visualizing high-order interactions, as the bipartite structure clearly delineates factors from variables, making complex dependencies, such as those involving more than two variables, more intuitive to depict without relying on clique enumeration. This explicit representation also facilitates message-passing algorithms like sum-product, which operate directly on the factor graph without requiring transformations into auxiliary structures, such as junction trees, that might be needed for MRFs. MRFs may be preferable in spatial modeling applications, such as image analysis, where the emphasis on local neighborhoods and pairwise potentials captures locality without the overhead of explicit factor nodes for every interaction.

To convert an MRF to a factor graph, introduce a factor node for each potential \psi_c and connect it to the variable nodes comprising X_c, preserving the original factorization while making the local functions explicit. The number of edges in the resulting factor graph equals the sum of the sizes of the maximal cliques; this can yield a sparser graph than the MRF when cliques are large (e.g., a fully connected graph with one clique of size N gives N edges in the factor graph versus O(N^2) edges in the MRF), but more edges in purely pairwise models (e.g., twice as many, since each pairwise potential becomes a factor node with two edges).
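A quick sanity check of the edge-count claim, under the convention that each maximal-clique potential becomes one factor node (hypothetical helper, not library code):

```python
def edge_counts(n: int):
    """Edge counts for a fully connected MRF on n variables versus its
    factor-graph conversion (one factor node for the single maximal clique)."""
    mrf_edges = n * (n - 1) // 2  # one undirected edge per variable pair
    fg_edges = n                  # one factor node linked to all n variables
    return mrf_edges, fg_edges

print(edge_counts(10))  # (45, 10): the factor graph is sparser here
```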

Examples and Applications

Introductory Examples

A basic introductory example of a factor graph involves two random variables, x_1 and x_2, connected to a single factor node. The factor function is defined as f(x_1, x_2) = \exp(-|x_1 - x_2|), which captures a pairwise dependency that decreases exponentially with the absolute difference between the variables. In this graph, the variable nodes are represented as circles labeled x_1 and x_2, while the factor node is a square labeled f, with undirected edges linking each variable to the factor. This simple bipartite structure represents the joint function as the product of this single local factor, proportional to the probability distribution over the two variables.

For a slightly more complex example, consider three variables x_1, x_2, and x_3 arranged in a chain with two pairwise factors. The first factor f_1(x_1, x_2) connects x_1 and x_2, while the second factor f_2(x_2, x_3) connects x_2 and x_3, forming a linear bipartite layout: x_1 (circle) connected to f_1 (square), which connects to x_2 (circle), which in turn connects to f_2 (square), and finally to x_3 (circle). In a hand-drawn diagram, this appears as a horizontal sequence of alternating circles and squares, emphasizing the shared variable x_2 as a bridge between the factors. This configuration highlights the graph's ability to encode multi-variable interactions through local constraints.

The chain example illustrates how factor graphs encode the factorization of a joint function \psi(x_1, x_2, x_3) = f_1(x_1, x_2) \cdot f_2(x_2, x_3), where the product of the factors defines the global function and the connections explicitly show which variables influence each local term. This representation makes the conditional dependencies intuitive, with x_2 participating in both factors to propagate information across the chain.

A common pitfall when constructing such tree-structured factor graphs is creating cycles in the variable-factor connections, such as linking the factors into a loop (e.g., adding a factor between x_3 and x_1), which transforms the graph into a loopy structure unsuitable for straightforward exact inference in introductory contexts. The examples here avoid this by maintaining acyclic, tree-like topologies, ensuring the bipartite nature remains clear and interpretable.
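The chain example can be checked numerically by brute force. The sketch below is plain Python with an arbitrary small discrete domain standing in for the variables (an assumption; the example above does not fix the domains); it evaluates \psi(x_1, x_2, x_3) = f_1(x_1, x_2) f_2(x_2, x_3) and normalizes the marginal of x_2:

```python
import itertools
import math

# A small discrete domain standing in for the chain variables.
domain = [0, 1, 2]

f1 = lambda x1, x2: math.exp(-abs(x1 - x2))  # pairwise factor on (x1, x2)
f2 = lambda x2, x3: math.exp(-abs(x2 - x3))  # pairwise factor on (x2, x3)

def psi(x1, x2, x3):
    """Global function: product of the two local factors."""
    return f1(x1, x2) * f2(x2, x3)

# Partition function Z and the normalized marginal of x2 by exhaustive summation.
Z = sum(psi(a, b, c) for a, b, c in itertools.product(domain, repeat=3))
marginal_x2 = {
    b: sum(psi(a, b, c) for a, c in itertools.product(domain, repeat=2)) / Z
    for b in domain
}
print(marginal_x2)  # probabilities summing to 1
```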

Real-World Applications

Factor graphs were first introduced in the late 1990s for applications in coding theory, providing a graphical representation that facilitates efficient decoding algorithms for complex probabilistic models. In error-correcting codes, factor graphs represent low-density parity-check (LDPC) codes by modeling parity checks as factor nodes connected to variable nodes representing code bits, enabling iterative decoding via belief propagation that approaches Shannon limits in performance. In turbo coding, factor graphs underpin turbo decoding by depicting the iterative exchange of information between constituent decoders as messages on the graph, which has been instrumental in achieving near-capacity performance for convolutional codes. Similarly, factor graph-based equalization addresses intersymbol interference in communication channels by representing channel models and priors as factors, allowing linear minimum mean-square error (LMMSE) turbo equalizers to iteratively refine estimates with reduced complexity compared to tree-search methods.

In machine learning, factor graphs model hidden Markov models (HMMs) by factorizing the joint distribution into transition and emission factors, supporting scalable inference for sequence prediction tasks such as speech recognition through sum-product message passing. In robotics, factor graphs are employed in simultaneous localization and mapping (SLAM) by incorporating factors for odometry, landmark observations, and loop closures, enabling robust pose estimation in large-scale environments such as autonomous driving. For stereo matching, factor graphs optimize disparity estimation by defining factors for photometric consistency and smoothness priors across image pairs, improving accuracy in textured and occluded regions over traditional graph-cut methods.

These applications highlight the benefits of factor graphs in handling large-scale, sparse models, where the bipartite structure exploits sparsity to enable efficient inference on problems with thousands of variables, such as real-time SLAM or high-throughput decoding. A small sketch of a parity-check factor follows below.
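To make the LDPC connection concrete: a single parity check over a subset of code bits is just an indicator factor, equal to 1 when the bits XOR to zero and 0 otherwise. A minimal sketch (illustrative toy matrix and codeword, not a real decoder):

```python
# A parity-check factor: an indicator function that is 1 iff the
# participating bits sum to zero modulo 2.
def parity_check_factor(*bits: int) -> int:
    return 1 if sum(bits) % 2 == 0 else 0

# Each row of a parity-check matrix H defines one such factor node,
# connected to the variable nodes (code bits) where the row has a 1.
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]
codeword = [0, 1, 1, 1, 0, 1]

valid = all(
    parity_check_factor(*(codeword[j] for j, h in enumerate(row) if h))
    for row in H
)
print(valid)  # True iff every parity check is satisfied
```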

Inference Methods

Message Passing Algorithms

Message-passing algorithms provide a general framework for performing inference in factor graphs by iteratively exchanging messages between variable nodes and factor nodes to compute marginal probabilities or approximate posterior distributions. In this approach, messages represent local information propagated along the edges of the graph, enabling the decomposition of global inference into local computations. The process typically aims to estimate the marginal distribution of each variable, which is obtained as the product of all incoming messages to that variable node, normalized appropriately. This leverages the factor graph's factorized representation to handle complex joint distributions efficiently, particularly in models where direct computation of marginals would be intractable due to high dimensionality.

The core of message passing involves two types of update rules: messages from variable nodes to factor nodes and vice versa. A variable-to-factor message from variable i to factor a, denoted \mu_{i \to a}(x_i), is computed as the product of all incoming messages to i from its neighboring factors other than a:

\mu_{i \to a}(x_i) = \prod_{b \in N(i) \setminus \{a\}} \mu_{b \to i}(x_i)

Conversely, a factor-to-variable message from a to i, denoted \mu_{a \to i}(x_i), sums over all variables in the argument set of a except x_i, weighted by the factor function and the incoming messages from neighbors other than i:

\mu_{a \to i}(x_i) = \sum_{X_a \setminus \{x_i\}} f_a(X_a) \prod_{j \in N(a) \setminus \{i\}} \mu_{j \to a}(x_j)

These rules, derived from the sum-product paradigm, ensure that messages encapsulate sufficient statistics for marginal computation while respecting the local factorization of the joint distribution.

For exact inference, message passing assumes a tree-structured factor graph without cycles, where messages propagate from the leaves to a root and back, yielding precise marginals in a single forward-backward sweep. In contrast, graphs with cycles, known as loopy factor graphs, require iterative message updates, leading to approximations of the posteriors that may not correspond to exact Bayesian inference but often perform well empirically. The algorithm is typically initialized with uniform or unit messages on all edges, and iterations continue until convergence or a fixed number of steps. To enhance stability in loopy cases, damping techniques are employed, where new messages are blended with previous ones using a weighted average, e.g., \mu^{\text{new}} = (1 - \alpha) \mu^{\text{old}} + \alpha \mu^{\text{computed}} with 0 < \alpha < 1, mitigating oscillations and promoting convergence to a fixed point. While convergence is guaranteed on trees, loopy iterations may oscillate or diverge, though damping and careful scheduling often improve reliability.

The bipartite nature of factor graphs decouples computations between variable and factor nodes, allowing messages on non-adjacent edges to be updated in parallel, which facilitates efficient inference on distributed systems or GPUs for large-scale tasks. This decoupling not only reduces complexity from exponential to linear in the graph size for tree cases but also enables scalable approximations in loopy settings by processing independent subgraphs simultaneously.
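The two update rules translate almost line by line into code. The following sketch uses illustrative helper functions of my own naming (messages are dictionaries mapping variable values to nonnegative reals) to implement the variable-to-factor update, the factor-to-variable update for a pairwise factor, and the damped blend described above:

```python
from typing import Callable, Dict, List

Message = Dict[int, float]  # maps a variable's value to a nonnegative real

def variable_to_factor(incoming: List[Message], domain: List[int]) -> Message:
    """mu_{i->a}(x_i): product of messages from all other neighboring factors."""
    out = {x: 1.0 for x in domain}
    for msg in incoming:  # the messages mu_{b->i} for b != a
        for x in domain:
            out[x] *= msg[x]
    return out

def factor_to_variable_pairwise(f: Callable[[int, int], float],
                                msg_from_other: Message,
                                domain_i: List[int],
                                domain_j: List[int]) -> Message:
    """mu_{a->i}(x_i) for a pairwise factor f(x_i, x_j): sum out x_j."""
    return {
        xi: sum(f(xi, xj) * msg_from_other[xj] for xj in domain_j)
        for xi in domain_i
    }

def damp(old: Message, new: Message, alpha: float = 0.5) -> Message:
    """Damped update (1 - alpha) * old + alpha * new, used on loopy graphs."""
    return {x: (1 - alpha) * old[x] + alpha * new[x] for x in old}
```

For factors of higher arity, the factor-to-variable update sums over the joint configurations of all other scope variables; the pairwise case shown here keeps the sketch short.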

Belief Propagation

Belief propagation, specifically the sum-product variant, is a message-passing algorithm used for performing marginal inference in factor graphs by iteratively exchanging messages between variable and factor nodes to compute approximate or exact beliefs at each variable node. The belief b_i(x_i) at a variable node i represents the estimated marginal distribution over x_i and is obtained as the product of all incoming messages from neighboring factor nodes: b_i(x_i) = \prod_{a \in N(i)} \mu_{a \to i}(x_i), where N(i) denotes the set of factor nodes adjacent to i, and messages are typically normalized for numerical stability.

The algorithm proceeds in two alternating steps: variable-to-factor messages and factor-to-variable messages. A message from a variable node i to a factor node a is the product of all incoming messages to i from its other neighboring factors b \in N(i) \setminus \{a\}: \mu_{i \to a}(x_i) = \prod_{b \in N(i) \setminus \{a\}} \mu_{b \to i}(x_i). Conversely, a message from a factor node a to a variable node i marginalizes the factor potential over all variables in the factor except x_i, weighted by the incoming messages: \mu_{a \to i}(x_i) = \sum_{X_a \setminus \{x_i\}} \left[ f_a(X_a) \prod_{j \in N(a) \setminus \{i\}} \mu_{j \to a}(x_j) \right], where X_a is the set of variables associated with a, and the sum runs over all configurations of those variables excluding x_i. These updates are iterated until convergence, with beliefs recomputed after each full pass.

On tree-structured factor graphs, which contain no cycles, the sum-product algorithm yields exact marginals and converges in a finite number of iterations proportional to the diameter of the tree, equivalent to a single forward-backward pass in dynamic programming. This exactness arises because messages propagate without interference from loops, directly computing beliefs proportional to the joint distribution marginalized over the other variables. For factor graphs with cycles, known as loopy belief propagation, the algorithm is applied iteratively as a fixed-point iteration, providing an approximation to the marginals that often performs well empirically despite lacking general guarantees of convergence or accuracy. In practice, damping techniques, such as linearly interpolating new and old messages, can stabilize the iterations and improve results on loopy graphs.

The computational cost of each sum-product iteration is O(|E| \cdot d^k), where |E| is the number of edges in the factor graph, d is the size of the variable domains, and k bounds the factor arity; for bounded-arity factors this scales to large graphs in modern applications like error-correcting codes and computer vision, far surpassing exhaustive enumeration over the joint space.
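Putting the pieces together, the sketch below runs one forward-backward sweep of sum-product on the three-variable chain from the introductory examples and checks the resulting belief for x_2 against the brute-force marginal. It is self-contained plain Python; all names are illustrative:

```python
import itertools
import math

domain = [0, 1, 2]
f1 = lambda a, b: math.exp(-abs(a - b))  # factor on (x1, x2)
f2 = lambda b, c: math.exp(-abs(b - c))  # factor on (x2, x3)

def normalize(m):
    s = sum(m.values())
    return {x: v / s for x, v in m.items()}

# Forward-backward sweep on the chain x1 - f1 - x2 - f2 - x3 (a tree).
# Leaf variables x1 and x3 send unit messages, so the factor messages are:
mu_f1_to_x2 = {b: sum(f1(a, b) for a in domain) for b in domain}  # sum out x1
mu_f2_to_x2 = {b: sum(f2(b, c) for c in domain) for b in domain}  # sum out x3

# Belief at x2: product of incoming factor messages, normalized.
belief_x2 = normalize({b: mu_f1_to_x2[b] * mu_f2_to_x2[b] for b in domain})

# Brute-force marginal for comparison: exact agreement on a tree.
brute = normalize({
    b: sum(f1(a, b) * f2(b, c) for a, c in itertools.product(domain, repeat=2))
    for b in domain
})
assert all(abs(belief_x2[b] - brute[b]) < 1e-12 for b in domain)
print(belief_x2)
```

The assertion passes because the chain is a tree: the sum over (x_1, x_3) factorizes into the product of the two factor messages, which is exactly what sum-product computes.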

Properties and Extensions

Key Properties

Factor graphs exhibit sparsity in their structure, as edges connect variables only to the factors that directly depend on them, which facilitates efficient storage and computation in high-dimensional probabilistic models. This sparsity arises from the local nature of the factors, allowing representations where most potential interactions are absent, thus enabling scaling to problems with thousands of variables, such as in simultaneous localization and mapping (SLAM) applications. For instance, in robotic perception tasks, the sparse matrices derived from factor graphs reduce the number of non-zero entries dramatically, from millions to hundreds of thousands, supporting real-time optimization on large graphs.

The modularity of factor graphs permits straightforward addition or removal of factors without necessitating a complete restructuring of the model, making them adaptable for iterative model updates in dynamic environments. This property stems from the bipartite design, where factors and variables are distinct nodes, allowing independent specification and integration of local constraints or measurements. In practice, this modularity is leveraged in estimation frameworks for robotics, where new sensor data can be incorporated as additional factors to refine the posterior distribution incrementally.

Inference complexity in factor graphs is closely tied to the treewidth of the underlying graph, defined as the minimum over all chordal completions of the maximum clique size minus one; graphs with low treewidth admit efficient exact inference via algorithms like variable elimination, with computational cost polynomial in the number of variables but exponential only in the treewidth. This connection parallels the junction tree algorithm used in Bayesian networks, where factor graphs can be transformed into junction trees for loopy graphs, though cycles generally increase complexity unless approximated. Low-treewidth structures, such as trees or near-trees, ensure tractable exact marginalization, while higher treewidth demands heuristics to manage fill-in during elimination.

Normalization in factor graphs is achieved through the partition function Z, which scales the product of factors to form a valid probability distribution: p(\mathbf{x}) = \frac{1}{Z} \prod_{a \in \mathcal{A}} f_a(\mathbf{x}_a), where Z = \sum_{\mathbf{x}} \prod_{a \in \mathcal{A}} f_a(\mathbf{x}_a) ensures the total probability sums to one, maintaining probabilistic consistency across the model. Factors themselves need not be normalized, providing flexibility in specification, but the global Z is crucial for exact inference in acyclic graphs and is approximated otherwise. This mechanism underpins the validity of marginal probabilities computed via message passing.

Any valid factorization of a joint function corresponds to a unique factor graph, capturing the precise dependencies without redundancy in the graphical structure, though not all such graphs are minimal; some factors may be decomposable into products of simpler ones. This uniqueness distinguishes factor graphs from other models like Markov random fields, where the graph may correspond to multiple equivalent factorizations, ensuring a faithful depiction of the model's structure. Minimal representations optimize sparsity by merging or splitting factors as needed.
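The treewidth connection can be seen in miniature with variable elimination on a three-variable chain: eliminating variables one at a time keeps every intermediate table over a single variable, because the chain has treewidth 1. A sketch with arbitrary placeholder potentials (plain Python, illustrative names):

```python
import itertools

domain = [0, 1]
psi_a = {(a, b): 1.0 + a * b for a, b in itertools.product(domain, repeat=2)}  # on (x1, x2)
psi_b = {(b, c): 1.0 + b + c for b, c in itertools.product(domain, repeat=2)}  # on (x2, x3)

# Variable elimination: sum out x1, then x3. Each intermediate table tau
# depends on only one variable, so the cost stays linear in the chain length.
tau1 = {b: sum(psi_a[(a, b)] for a in domain) for b in domain}  # eliminate x1
tau2 = {b: sum(psi_b[(b, c)] for c in domain) for b in domain}  # eliminate x3

# Unnormalized marginal of x2 and the partition function Z.
unnorm = {b: tau1[b] * tau2[b] for b in domain}
Z = sum(unnorm.values())
marginal_x2 = {b: v / Z for b, v in unnorm.items()}
print(marginal_x2, Z)
```

On a graph of treewidth w, the analogous intermediate tables can involve up to w variables, which is the source of the d^w factor in the cost of exact inference.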

Advanced Variants

Dynamic factor graphs extend the standard model to handle time-series data by incorporating temporal dependencies through sequential factors that model transitions over time. These variants are particularly useful in applications like tracking and navigation, where they approximate nonlinear dynamic systems via methods such as the extended Kalman filter (EKF). For instance, in simultaneous localization and mapping (SLAM), dynamic factor graphs represent robot poses and landmarks as evolving variables connected by motion and measurement factors, enabling efficient inference over trajectories. This structure facilitates the use of incremental smoothing algorithms, which update beliefs as new temporal data arrives, outperforming traditional batch methods in real-time scenarios.

Continuous-domain factor graphs address scenarios with non-discrete variables by employing Gaussian factors to represent probabilistic relationships, allowing for closed-form inference in linear-Gaussian systems. In these models, factors are defined as multivariate Gaussians encoding means and covariances, which propagate through message passing to yield marginal posteriors without sampling. For more complex nonlinear cases, approximation techniques, such as linearization or particle methods, handle the continuous integrals during inference. This approach is foundational in robotic state estimation tasks, where measurements and kinematic constraints are modeled continuously to estimate poses and velocities. Gaussian message passing in such graphs ensures computational tractability while maintaining accuracy for high-dimensional continuous spaces.

Deterministic factor graphs adapt the framework for constraint satisfaction problems by using delta functions, either Kronecker deltas for discrete variables or Dirac deltas for continuous ones, as factors to enforce hard constraints between variables. This representation transforms logical or algebraic constraints into a graphical form amenable to message passing, where inference seeks feasible solutions rather than probabilistic distributions. In constraint propagation, factors eliminate inconsistent variable assignments efficiently, making these graphs suitable for combinatorial tasks such as scheduling. The use of deltas ensures exact satisfaction of constraints, distinguishing this variant from probabilistic inference in standard factor graphs.

Hybrid models integrate factor graphs with neural networks to enhance expressiveness in deep probabilistic frameworks, particularly since the 2010s. In these setups, neural networks parameterize nonlinear factors within the graph, enabling amortized inference in complex generative models. For example, variational autoencoders (VAEs) can be structured as factor graphs where encoder networks approximate posterior factors and decoder networks define likelihood factors, facilitating scalable training via variational inference. This combination leverages the graphical structure for interpretability and modularity while harnessing deep learning for high-capacity function approximation in tasks like image generation. Such hybrids have demonstrated improved performance in latent-variable modeling by unifying probabilistic graphical models with neural architectures. Recent extensions include factor graph neural networks (FGNNs), which use graph neural networks to model higher-order relations for more effective inference and learning.

Software libraries support the implementation and optimization of these advanced variants. GTSAM, a BSD-licensed C++ library developed at Georgia Tech, provides tools for constructing and solving factor graphs in continuous and dynamic settings, with built-in support for Gaussian factors and incremental solvers used in robotics applications.
Similarly, libDAI is an open-source C++ library focused on approximate inference in discrete factor graphs, including implementations of belief propagation and junction tree methods, suitable for research on discrete graphical models. These libraries enable researchers to prototype and deploy advanced factor graph models efficiently.
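As a flavor of how such libraries are used, the following sketch is based on GTSAM's Python bindings and mirrors its standard two-pose odometry tutorial; exact class names and signatures may vary between GTSAM versions, so treat this as an approximation of the real API rather than a definitive reference:

```python
import numpy as np
import gtsam

# A nonlinear factor graph over two 2D robot poses.
graph = gtsam.NonlinearFactorGraph()

# Prior factor anchoring the first pose at the origin.
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.3, 0.3, 0.1]))
graph.add(gtsam.PriorFactorPose2(1, gtsam.Pose2(0.0, 0.0, 0.0), prior_noise))

# Odometry (between) factor: the robot is believed to have moved 2 m forward.
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.1]))
graph.add(gtsam.BetweenFactorPose2(1, 2, gtsam.Pose2(2.0, 0.0, 0.0), odom_noise))

# Initial estimates, deliberately perturbed away from the truth.
initial = gtsam.Values()
initial.insert(1, gtsam.Pose2(0.5, 0.0, 0.2))
initial.insert(2, gtsam.Pose2(2.3, 0.1, -0.2))

# Nonlinear least-squares optimization over the factor graph.
result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result)
```

The factor graph here plays the role described throughout this article: each measurement is a local factor over a small subset of variables, and the optimizer exploits the resulting sparsity.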
