Dynamic Bayesian network

A Dynamic Bayesian network (DBN) is a probabilistic graphical model that extends static Bayesian networks to represent temporal processes, modeling sequences of random variables over discrete time steps through a directed acyclic graph that captures conditional dependencies within and across time slices. Introduced by Thomas Dean and Keiji Kanazawa in 1989 as a framework for reasoning about persistence and causation in probabilistic temporal domains, DBNs generalize hidden Markov models (HMMs) and linear state-space models by allowing multiple interacting hidden and observed variables, which can be discrete or continuous. At their core, DBNs are defined by a two-time-slice Bayesian network (2TBN), consisting of an intra-slice structure for dependencies within a single time step and an inter-slice structure for transitions between consecutive time steps, enabling compact representation of the joint distribution over a sequence of length T by "unrolling" the model without explicit cycles. This structure facilitates efficient inference techniques, such as forward-backward algorithms for filtering and smoothing, and parameter learning via methods like expectation-maximization (EM), making DBNs suitable for handling uncertainty in dynamic systems. Structure learning, which infers the DAG topology from data, often employs score-based approaches like the Bayesian information criterion (BIC) or constraint-based methods adapted for temporal data. DBNs have been widely applied in fields requiring temporal modeling, including speech recognition for sequence prediction, bioinformatics for gene regulatory network inference from time-series expression data, robotics for state estimation in uncertain environments, and medical diagnostics for tracking patient outcomes over time. Their flexibility in representing non-linear dynamics and multi-modal interactions has led to influential developments, such as Kevin Murphy's 2002 thesis on representation, inference, and learning, which established foundational algorithms still in use today. Despite computational challenges in exact inference for large networks, approximations like particle filtering and variational methods have expanded their practicality in real-world, high-dimensional scenarios.

Background Concepts

Static Bayesian Networks

A Bayesian network, also known as a belief network, is a probabilistic graphical model that represents a set of random variables and their conditional dependencies through a directed acyclic graph (DAG). In this representation, the nodes of the DAG correspond to the random variables, while the directed edges signify direct probabilistic influences or dependencies from parent to child variables. The acyclic property ensures no directed cycles exist, enabling a factorization for efficient representation and computation of joint probabilities. Each node in the DAG is associated with a random variable that can be either discrete or continuous. For discrete variables, the conditional dependencies are typically quantified using conditional probability tables (CPTs), which enumerate the probability of each possible state of the child variable given every combination of states of its parent variables. For continuous variables, conditional probability functions, such as Gaussian distributions, are employed to describe these dependencies. This structure compactly encodes the full joint probability distribution over the variables via the chain rule of probability, factoring it as the product of each variable's conditional probability given its parents. A classic example is a simple diagnostic network for medical symptoms and diseases, such as one modeling the relationship between a disease like pneumonia and symptoms like cough and fever. Here, the disease node would point to symptom nodes, with CPTs specifying probabilities like P(cough | pneumonia) = 0.8 and P(fever | pneumonia) = 0.6, allowing inference about the likelihood of the disease given observed symptoms. Such networks facilitate diagnostic reasoning by propagating evidence through the graph to update beliefs about unobserved variables. The DAG's structure also captures conditional independencies among variables, which can be read off the graph using the d-separation criterion. D-separation determines whether two sets of nodes X and Y are conditionally independent given a set Z by checking if Z blocks all paths between X and Y in the graph, considering rules for head-to-head, serial, and diverging connections. This criterion ensures that the graph faithfully represents the independencies in the underlying distribution, enabling sound and complete inference. Static Bayesian networks provide the foundational framework for extending to models that handle time-varying variables in dynamic settings.
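
The diagnostic reasoning just described can be made concrete with a short calculation. The following sketch is a minimal illustration rather than a reference implementation: the prior P(pneumonia) = 0.05 and the symptom probabilities for the no-pneumonia case are assumed values, while P(cough | pneumonia) = 0.8 and P(fever | pneumonia) = 0.6 follow the text.

```python
# Minimal diagnostic Bayesian network: Pneumonia -> {Cough, Fever}.
# Prior and the "no pneumonia" symptom probabilities are illustrative assumptions.

p_disease = 0.05                      # assumed prior P(pneumonia)
p_cough = {True: 0.8, False: 0.3}     # P(cough | pneumonia = key); False case assumed
p_fever = {True: 0.6, False: 0.1}     # P(fever | pneumonia = key); False case assumed

def posterior_given_symptoms(cough=True, fever=True):
    """P(pneumonia | cough, fever) via Bayes' rule, using the
    factorization P(D, C, F) = P(D) P(C | D) P(F | D)."""
    unnorm = {}
    for disease, prior in ((True, p_disease), (False, 1 - p_disease)):
        pc = p_cough[disease] if cough else 1 - p_cough[disease]
        pf = p_fever[disease] if fever else 1 - p_fever[disease]
        unnorm[disease] = prior * pc * pf
    z = sum(unnorm.values())
    return unnorm[True] / z

print(posterior_given_symptoms())  # belief in pneumonia after observing cough and fever
```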

Temporal Probabilistic Models

Temporal probabilistic models provide foundational frameworks for modeling sequential data by capturing dependencies over time through probabilistic mechanisms. These models address the evolution of systems where observations are influenced by underlying processes that unfold dynamically, often with uncertainty. Key examples include hidden Markov models (HMMs), linear Gaussian state-space models such as the Kalman filter, and statistical time series approaches like autoregressive moving average (ARMA) models, each offering distinct ways to represent temporal structures but with inherent constraints on complexity. Hidden Markov models represent time series data using a hidden Markov process, where the system state is a discrete, unobserved variable that evolves according to first-order Markov dynamics, and observations are emitted probabilistically from the current state. The model parameters consist of transition probabilities between states, initial state probabilities, and emission probabilities that map states to observable outputs, typically assuming conditional independence of observations given the state. A core inference task in HMMs is decoding the most likely sequence of hidden states given observations, solved efficiently using the Viterbi algorithm, a dynamic programming method that maximizes the joint probability by recursively computing the highest-probability path through the state trellis. However, HMMs are limited by their assumption of a single, scalar state variable, which restricts their ability to model multifaceted dependencies in high-dimensional or structured state spaces, and they require discrete states, precluding direct application to continuous variables. Linear Gaussian state-space models, exemplified by the Kalman filter, extend temporal modeling to continuous variables by defining linear dynamics for the state evolution and observation process. The state transition equation is \mathbf{x}_t = \mathbf{A} \mathbf{x}_{t-1} + \mathbf{w}_t, where \mathbf{x}_t is the state vector at time t, \mathbf{A} is the transition matrix, and \mathbf{w}_t is Gaussian process noise with zero mean and covariance \mathbf{Q}; the observation equation is \mathbf{z}_t = \mathbf{H} \mathbf{x}_t + \mathbf{v}_t, with \mathbf{z}_t the observation, \mathbf{H} the observation matrix, and \mathbf{v}_t Gaussian measurement noise with zero mean and covariance \mathbf{R}. The Kalman filter recursively estimates the state by predicting forward and updating with new observations, minimizing mean squared error under Gaussian assumptions. Despite their optimality in linear Gaussian settings, these models falter with nonlinear dynamics or non-Gaussian noise, leading to biased estimates or divergence in real-world applications. Autoregressive moving average (ARMA) models serve as simpler statistical baselines for univariate forecasting, combining autoregressive components that depend on past values with moving average terms that account for past errors. An ARMA(p, q) process is defined as y_t = \sum_{i=1}^p \phi_i y_{t-i} + \sum_{j=1}^q \theta_j e_{t-j} + e_t, where y_t is the series value, \phi_i are autoregressive coefficients, \theta_j are moving average coefficients, and e_t is white noise. These models assume stationarity and linear relationships, making them effective for short-term predictions but inadequate for capturing hidden states or complex multivariate interactions. Dynamic Bayesian networks address these limitations by unifying HMMs, Kalman filters, and similar models through factored state representations that accommodate arbitrary dependencies.
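
A compact sketch of the Kalman filter recursion described above is shown below; the matrices A, H, Q, and R correspond to a simple one-dimensional constant-velocity tracking model chosen for illustration, not values from the text.

```python
import numpy as np

# One predict/update cycle of the Kalman filter for the model
#   x_t = A x_{t-1} + w_t,  w_t ~ N(0, Q)
#   z_t = H x_t     + v_t,  v_t ~ N(0, R)
# The matrices below are illustrative (position/velocity tracking in 1-D).

A = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (position, velocity)
H = np.array([[1.0, 0.0]])               # observe position only
Q = 0.01 * np.eye(2)                     # assumed process noise covariance
R = np.array([[0.25]])                   # assumed measurement noise covariance

def kalman_step(x, P, z):
    """Return the filtered state mean and covariance after observing z."""
    # Predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.zeros(2), np.eye(2)
for z in [np.array([1.1]), np.array([2.0]), np.array([2.9])]:
    x, P = kalman_step(x, P, z)
print(x)  # estimated position and velocity after three observations
```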

Model Representation

Two-Timeslice Bayesian Network

A two-timeslice Bayesian network (2TBN) forms the core graphical template for dynamic Bayesian networks (DBNs), compactly specifying the dependencies between variables across two consecutive time slices, denoted as t-1 and t. In this representation, each slice contains a set of nodes representing random variables, with directed arcs divided into two types: intra-slice arcs that connect nodes within slice t to model contemporaneous influences, and inter-slice arcs that link nodes from slice t-1 to slice t to capture temporal evolution. This allows DBNs to represent sequences of states without explicitly drawing the full unrolled network, reducing redundancy while preserving the underlying probabilistic relationships. The intra-slice arcs within slice t typically mirror the topology of a static Bayesian network, encoding conditional dependencies among variables at a single time point, such as how multiple factors jointly affect an outcome in the current moment. Inter-slice arcs, by contrast, propagate information forward, with parents in t-1 influencing children in t, ensuring the model remains a directed acyclic graph (DAG) when expanded. For instance, hidden state variables in t-1 might connect to both hidden and observed variables in t, enabling the modeling of evolving systems like sequences of sensor readings or processes with latent dynamics. To extend the 2TBN to a finite horizon of T time steps, the template is unrolled by instantiating slice t as a new slice for each subsequent time point (t=2 to T), replicating the intra-slice structure identically and appending the inter-slice arcs between consecutive slices. The resulting expanded graph is a large temporal DAG that jointly represents the variables across all T slices, with the first slice often defined by a separate initial static network to set the prior distribution. This unrolling process facilitates analysis over extended periods while leveraging the compact 2TBN for efficient specification and computation. DBNs based on the 2TBN incorporate two foundational assumptions: stationarity, which requires that the slice structures, parameters, and distributions remain unchanged across time steps, promoting parameter sharing and model simplicity; and the first-order Markov assumption, stipulating that the distribution of variables at time t conditions solely on the variables at t-1, ignoring earlier history to bound model complexity. These assumptions enable tractable modeling of time-series data under the premise of consistent temporal dynamics. A representative example illustrates the 2TBN's application to everyday temporal reasoning, such as garden maintenance influenced by weather. Consider nodes for Rain (weather condition), Sprinkler (irrigation decision), and GrassWet (observed wetness) in slices t-1 and t. Inter-slice arcs might include Rain_{t-1} → Rain_t (persistent rainy conditions) and Sprinkler_{t-1} → Sprinkler_t (consistent watering policy), while intra-slice arcs in t connect Rain_t → GrassWet_t and Sprinkler_t → GrassWet_t, showing how current rain and sprinkler activation determine grass wetness, with past states informing the present through the inter-slice links. Unrolling this over multiple slices would track how a rainy period affects ongoing sprinkler use and cumulative grass conditions.
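
The unrolling procedure can be demonstrated directly on the rain/sprinkler/grass-wet template. The sketch below is a minimal plain-Python illustration, representing each unrolled variable as a (name, slice) pair and assuming only the arcs named in the example.

```python
# Unrolling a 2TBN template (rain / sprinkler / grass-wet example) into a
# T-slice edge list. The (name, slice) tuple encoding is an illustrative choice.

intra_edges = [("Rain", "GrassWet"), ("Sprinkler", "GrassWet")]   # within slice t
inter_edges = [("Rain", "Rain"), ("Sprinkler", "Sprinkler")]      # slice t-1 -> slice t

def unroll(intra, inter, T):
    """Return the arcs of the unrolled DAG over slices 0..T-1."""
    edges = []
    for t in range(T):
        edges += [((u, t), (v, t)) for u, v in intra]          # contemporaneous arcs
        if t > 0:
            edges += [((u, t - 1), (v, t)) for u, v in inter]  # temporal arcs
    return edges

for edge in unroll(intra_edges, inter_edges, T=3):
    print(edge)
```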

Slice Structure

In a dynamic Bayesian network (DBN), the slice structure refers to the graphical representation of variables and their dependencies within and across discrete time slices, forming a repeating template that captures temporal dynamics. Each slice corresponds to a specific time step t, containing nodes that model the system's state at that instant. This structure ensures the overall graph remains acyclic by directing all connections forward in time, allowing efficient representation of sequential data processes. Nodes in a DBN slice are divided into two primary types: hidden states and observations. Hidden state nodes, often denoted as Q_t or X_t, represent latent variables that evolve over time and can be either discrete (e.g., categorical states like weather conditions) or continuous (e.g., real-valued parameters like position or velocity). Observation nodes, denoted as Y_t, capture measurable data influenced by the hidden states and are similarly discrete or continuous, such as sensor readings. These node types enable DBNs to model both unobserved processes and observable measurements, with the choice of discreteness depending on the domain's requirements for probabilistic modeling. Intra-slice arcs connect nodes within the same time slice, encoding simultaneous or instantaneous dependencies. For instance, an arc from a hidden state node to an observation node, such as X_t \to Y_t, indicates that the current state directly influences the current observation, reflecting correlations that occur without temporal lag. These arcs form a directed acyclic graph (DAG) within the slice, allowing for complex intra-temporal relationships like causal influences between multiple states at time t. Inter-slice arcs link nodes between consecutive slices, modeling temporal transitions and ensuring the network's acyclicity by prohibiting backward connections from future to past slices. Typically, these arcs go from hidden states in slice t-1 to hidden states in slice t (e.g., X_{t-1} \to X_t), representing how past states propagate to the present, while observation nodes in slice t may depend only on intra-slice arcs from X_t. Self-loops in the template, drawn as arcs from X_t to X_{t+1}, can model persistence or autoregressive effects, where a variable's value carries over to the next time step. Longer-range dependencies, beyond immediate predecessors, are handled by chaining multiple inter-slice paths or incorporating higher-order structures, such as arcs from t-2 to t in extended models, without violating the forward-directed topology. A representative example of slice structure in robotics involves tracking a mobile robot's motion using nodes for position, velocity, and sensor readings. In each slice t, hidden nodes represent the robot's position Pos_t (continuous) and velocity Vel_t (continuous), connected by an intra-slice arc Vel_t \to Pos_t to model how velocity affects position within the same time step. Inter-slice arcs link Pos_{t-1} \to Pos_t and Vel_{t-1} \to Vel_t, capturing motion persistence, while observation nodes Sensor_t (e.g., range finder data) connect via intra-slice arcs from both position and velocity, such as Pos_t \to Sensor_t and Vel_t \to Sensor_t. This setup allows the DBN to infer hidden trajectories from noisy sensor inputs over multiple slices.
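
A small simulation makes the robot-tracking slice structure concrete by sampling each variable from its parents, slice by slice. The noise levels and the linear form of the sensor model below are illustrative assumptions, not values given in the text.

```python
import random

# Simulate T slices of the robot DBN sketched above:
#   Vel_t depends on Vel_{t-1} (inter-slice), Pos_t depends on Pos_{t-1} and Vel_t
#   (inter- and intra-slice), and Sensor_t depends on Pos_t and Vel_t (intra-slice).
# Noise standard deviations are assumed for illustration.

def simulate(T=5, dt=1.0, vel_noise=0.1, sensor_noise=0.5, seed=0):
    random.seed(seed)
    pos, vel = 0.0, 1.0
    trajectory = []
    for t in range(T):
        vel = vel + random.gauss(0.0, vel_noise)                      # Vel_{t-1} -> Vel_t
        pos = pos + dt * vel                                          # Pos_{t-1}, Vel_t -> Pos_t
        sensor = pos + 0.1 * vel + random.gauss(0.0, sensor_noise)    # Pos_t, Vel_t -> Sensor_t
        trajectory.append((t, pos, vel, sensor))
    return trajectory

for t, pos, vel, sensor in simulate():
    print(f"t={t}  pos={pos:.2f}  vel={vel:.2f}  sensor={sensor:.2f}")
```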

Formal Definition

Joint Probability Factorization

The joint probability distribution in a dynamic Bayesian network (DBN) captures the temporal evolution of a system by factorizing the probability over a sequence of random variables across multiple time steps. For a DBN with variables X_{1:T} over T time steps, the full joint distribution is expressed as P(X_{1:T}) = \prod_{t=1}^T P(X_t \mid \mathrm{Pa}(X_t)), where \mathrm{Pa}(X_t) represents the parents of X_t in the graph, including both intra-slice parents within time t and inter-slice parents from time t-1. This factorization exploits the conditional independencies encoded in the DBN's directed acyclic graph structure, allowing efficient representation of complex temporal dependencies without explicitly modeling the entire joint over all variables. The approach stems from unrolling a compact two-timeslice representation into a longer sequence. For the initial time slice at t=1, the absence of a prior slice simplifies the conditioning, so P(X_1 \mid \mathrm{Pa}(X_1)) depends solely on intra-slice parents, typically modeled as a static Bayesian network. Subsequent slices for t \geq 2 incorporate transitions from the previous slice, with P(X_t \mid \mathrm{Pa}(X_t)) capturing both within-slice and between-slice dependencies. In stationary DBNs, the conditional probability distributions (or tables) for these transitions remain identical across all slices beyond the initial one, enabling the model to parameterize unbounded temporal processes using a fixed, finite set of probabilities. This stationarity assumption is central to applications in time-series modeling, where data sequences can vary in length. When partial observations, or evidence E_{1:t}, are available up to time t, the factorization supports the computation of the filtering distribution P(X_t \mid E_{1:t}) by conditioning on the evidence and marginalizing over unobserved past states. This conditional form integrates seamlessly into the product structure, facilitating tasks like state estimation. For DBNs with continuous variables, the conditional probabilities in the factorization are commonly specified using parametric forms such as multivariate Gaussians; for instance, in linear Gaussian models, transitions follow P(X_t \mid X_{t-1}) = \mathcal{N}(X_t; A X_{t-1}, Q) and observations P(Y_t \mid X_t) = \mathcal{N}(Y_t; C X_t, R), where A, C, Q, and R are parameter matrices defining the dynamics. These forms preserve conjugacy for efficient computation while extending the discrete-case factorization to real-valued domains.
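
For a small discrete DBN, the product-of-conditionals factorization can be evaluated directly. The sketch below assumes a single binary hidden variable X_t with a binary observation Y_t per slice; the probability tables are illustrative.

```python
# Joint probability of an unrolled DBN with binary hidden state X_t and
# binary observation Y_t:
#   P(X_{1:T}, Y_{1:T}) = P(X_1) P(Y_1|X_1) * prod_{t>=2} P(X_t|X_{t-1}) P(Y_t|X_t).
# All tables are illustrative values.

p_x1 = {0: 0.6, 1: 0.4}                               # initial slice P(X_1)
p_trans = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}  # P(X_t | X_{t-1})
p_obs = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}    # P(Y_t | X_t)

def joint(xs, ys):
    """Probability of a full assignment to hidden states xs and observations ys."""
    p = p_x1[xs[0]] * p_obs[xs[0]][ys[0]]
    for t in range(1, len(xs)):
        p *= p_trans[xs[t - 1]][xs[t]] * p_obs[xs[t]][ys[t]]
    return p

print(joint(xs=[0, 1, 1], ys=[0, 1, 1]))
```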

Markov Properties

Dynamic Bayesian networks (DBNs) rely on specific Markov properties to model temporal dependencies efficiently while capturing conditional independencies in sequential data. The foundational first-order Markov assumption posits that the state at time t+1 is conditionally independent of all prior states given the state at time t, formally expressed as P(X_{t+1} \mid X_{1:t}) = P(X_{t+1} \mid X_t). This assumption simplifies the representation of time-series processes by limiting dependencies to adjacent time slices, enabling the unrolling of the network over arbitrary lengths without exponential growth in complexity. Within each time slice, intra-slice d-separation enforces conditional independencies analogous to those in static Bayesian networks, where nodes are independent of non-descendants given their parents or Markov blankets based on the directed acyclic graph (DAG) structure. Across slices, temporal d-separation extends this by blocking paths from distant past states to the future unless mediated through recent slices; for instance, variables at time t+1 are d-separated from those at t-1 given evidence at t. These separation criteria collectively ensure that influences from the past do not propagate indefinitely without passing through intervening slices, preserving the model's focus on local temporal dynamics. The Markov properties contribute to the compactness of DBNs by avoiding the need to store or compute the full history of states, reducing parameter requirements from potentially O(K^{2D}) in flat models to O(K^{I+1}), where K is the number of states per variable and I the size of the interface between slices. Time-invariant conditional probability distributions (CPDs) and parameter tying across slices further enhance this efficiency, allowing DBNs to represent unbounded sequences with distributed state spaces rather than monolithic ones. A practical illustration appears in speech recognition, where the current phoneme is modeled as independent of earlier phonemes given the recent acoustic context, leveraging the Markov property to capture sequential dependencies in audio signals without retaining the entire history. This enables scalable inference in hierarchical structures, such as modeling phones within words.

Inference Methods

Exact Inference Techniques

Exact inference in dynamic Bayesian networks (DBNs) computes precise posterior distributions over hidden states given observations, leveraging the temporal structure to maintain tractability for certain models. These methods guarantee optimal results but are computationally intensive, often scaling exponentially with model complexity. By exploiting the Markov properties that limit dependencies across time slices, exact inference avoids full recomputation over long sequences through interface-based message passing. One primary approach involves unrolling the DBN into a static Bayesian network (BN) over T time slices. The unrolled DBN represents the joint distribution P(Z_{1:N,1:T}, y_{1:T}), which factorizes as P(Z_1, y_1) \prod_{t=2}^T P(Z_t \mid Z_{t-1}) P(y_t \mid Z_t) (or more generally according to the DBN structure). Standard exact inference algorithms for static BNs, such as the junction tree method, can then be applied to this unrolled graph to compute queries like filtering (P(Z_t \mid y_{1:t})) or smoothing (P(Z_t \mid y_{1:T})). However, while the graph size grows linearly with T, constructing and applying a junction tree to the full unrolled graph can become memory-intensive for very long sequences due to the overall size, though the per-slice complexity remains bounded. Specialized temporal inference methods, such as the interface algorithm, avoid full unrolling to maintain efficiency. To address this, belief propagation via the forward-backward algorithm is commonly used, treating the DBN as a chain of time slices. The forward pass computes filtering distributions recursively: \alpha_t(Z_t) = P(Z_t \mid y_{1:t}) \propto \sum_{Z_{t-1}} \alpha_{t-1}(Z_{t-1}) P(y_t \mid Z_t) P(Z_t \mid Z_{t-1}), using the outgoing interface I^\rightarrow_t as a sufficient statistic to propagate beliefs efficiently without storing the full history. The backward pass then refines these for smoothing: \beta_t(Z_t) = P(y_{t+1:T} \mid Z_t) = \sum_{Z_{t+1}} P(y_{t+1} \mid Z_{t+1}) P(Z_{t+1} \mid Z_t) \beta_{t+1}(Z_{t+1}), combining forward and backward messages to yield P(Z_t \mid y_{1:T}) \propto \alpha_t(Z_t) \beta_t(Z_t). This interface algorithm, a specialization of the junction tree for linear chain structures, enables exact smoothing by reusing computations across slices. The computational complexity of these methods is O(|\mathcal{S}|^{|I^\rightarrow|}) per time slice for discrete states, where |\mathcal{S}| is the number of states per variable and |I^\rightarrow| is the size of the outgoing interface, which bounds the treewidth d of the intra-slice structure (d \approx |I^\rightarrow|). For tree-structured DBNs with low treewidth (e.g., chain-like dependencies), this remains efficient even for moderate sequence lengths. In practice, exact inference has been applied in bioinformatics to model gene expression sequences, where forward-backward propagation infers regulatory states from time-series data. Despite these advances, exact inference becomes intractable for loopy (high-treewidth) intra-slice structures or long sequences, where the interface size explodes, rendering full enumeration infeasible beyond small models. For instance, even modest increases in the number of interacting variables per slice can lead to state spaces exceeding practical limits, necessitating approximations in complex real-world applications.
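
When the interface reduces to a single discrete variable X_t, the forward-backward recursion above can be written in a few lines. The following sketch uses illustrative transition and emission tables and normalizes the smoothed marginals at the end.

```python
# Forward-backward smoothing for a DBN whose interface is a single discrete
# variable X_t with K states. Tables are illustrative; obs lists observed
# symbol indices y_1..y_T.

def forward_backward(prior, trans, emit, obs):
    """Return smoothed marginals P(X_t | y_{1:T}) for each t.
    prior[i] = P(X_1=i); trans[i][j] = P(X_t=j | X_{t-1}=i);
    emit[i][y] = P(Y_t=y | X_t=i)."""
    K, T = len(prior), len(obs)
    # Forward pass: alpha_t(i) proportional to P(X_t=i, y_{1:t})
    alpha = [[0.0] * K for _ in range(T)]
    for i in range(K):
        alpha[0][i] = prior[i] * emit[i][obs[0]]
    for t in range(1, T):
        for j in range(K):
            alpha[t][j] = emit[j][obs[t]] * sum(alpha[t-1][i] * trans[i][j] for i in range(K))
    # Backward pass: beta_t(i) = P(y_{t+1:T} | X_t=i)
    beta = [[1.0] * K for _ in range(T)]
    for t in range(T - 2, -1, -1):
        for i in range(K):
            beta[t][i] = sum(trans[i][j] * emit[j][obs[t+1]] * beta[t+1][j] for j in range(K))
    # Combine forward and backward messages, then normalize per slice
    marginals = []
    for t in range(T):
        unnorm = [alpha[t][i] * beta[t][i] for i in range(K)]
        z = sum(unnorm)
        marginals.append([u / z for u in unnorm])
    return marginals

prior = [0.6, 0.4]
trans = [[0.7, 0.3], [0.2, 0.8]]
emit  = [[0.9, 0.1], [0.2, 0.8]]
print(forward_backward(prior, trans, emit, obs=[0, 1, 1]))
```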

Approximate Inference Algorithms

Approximate inference algorithms are essential for dynamic Bayesian networks (DBNs) when exact methods become computationally intractable due to high dimensionality, long time horizons, or nonlinear dynamics, providing scalable approximations to posterior distributions over state trajectories. These methods trade off precision for efficiency, often benchmarking their error against exact techniques like the forward-backward algorithm on smaller models. Particle filtering, also known as sequential Monte Carlo, approximates the filtering distribution in DBNs by maintaining a set of weighted samples (particles) that represent possible state trajectories, propagating them forward in time and updating weights based on observed evidence. Each particle i at time t carries a weight w_t^{(i)} updated via importance weighting, typically as w_t^{(i)} = w_{t-1}^{(i)} p(E_t | X_t^{(i)}), where E_t denotes the evidence at time t and X_t^{(i)} is the state sample, assuming a bootstrap proposal that matches the prior transition. To mitigate particle degeneracy, where most weights concentrate on a few particles, resampling is performed periodically, drawing new particles with replacement proportional to their weights and resetting them to uniform weight. Enhancements like Rao-Blackwellisation condition some variables analytically within the DBN structure, reducing variance and improving efficiency for hybrid linear-Gaussian components. In robotics, particle filters enable robust localization under noisy sensor data, such as fusing odometry and range observations in a DBN to track a mobile robot's pose over time, outperforming Kalman filters in non-Gaussian environments. Variational inference approximates the intractable posterior in DBNs by optimizing a simpler distribution that minimizes the Kullback-Leibler (KL) divergence to the true posterior, often using the evidence lower bound (ELBO) as a tractable objective. The mean-field approximation assumes independence among latent variables, factorizing the variational distribution q(\mathbf{X}) as a product over individual states or slices, which simplifies coordinate ascent updates but can underestimate posterior variance. For DBNs, structured variational methods respect temporal dependencies by parameterizing q with recurrent forms or low-rank approximations across time slices, enabling scalable inference in infinite-horizon models via fixed-point iterations. These approaches are particularly effective for learning hidden dynamics in time series, such as inferring switching regimes in nonlinear systems. Markov chain Monte Carlo (MCMC) methods generate samples from the joint posterior over unrolled DBN trajectories, addressing intractability by exploring the high-dimensional space through reversible Markov chains that converge to the target distribution. Gibbs sampling sequentially samples each variable conditioned on all others in the unrolled graph, leveraging the DBN's conditional independencies to compute full conditionals efficiently, though it can mix slowly for long sequences due to temporal correlations. Blocked Gibbs variants group variables into temporal blocks (e.g., entire time slices) and sample them jointly using tailored proposals, accelerating convergence in DBNs with compact intra-slice structure while preserving the correct stationary distribution. This is useful for smoothing inference, where samples from the full trajectory posterior enable marginalization over past and future states.
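
A minimal bootstrap particle filter for a one-dimensional nonlinear state-space model illustrates the weight update w_t^{(i)} ∝ w_{t-1}^{(i)} p(E_t | X_t^{(i)}) described above; the transition function, the noise levels, and the choice to resample at every step are illustrative assumptions.

```python
import math
import random

# Bootstrap particle filter for a 1-D nonlinear model (illustrative):
#   X_t = 0.5 * X_{t-1} + sin(X_{t-1}) + process noise
#   E_t = X_t + measurement noise
# Particles are propagated through the transition prior, weighted by the
# observation likelihood, and resampled at every step for simplicity.

def gaussian_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def particle_filter(observations, n_particles=500, proc_sigma=0.3, obs_sigma=0.5, seed=0):
    random.seed(seed)
    particles = [random.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for e in observations:
        # Propagate each particle through the transition model (bootstrap proposal)
        particles = [0.5 * x + math.sin(x) + random.gauss(0.0, proc_sigma) for x in particles]
        # Weight by the observation likelihood p(E_t | X_t)
        weights = [gaussian_pdf(e, x, obs_sigma) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Filtered mean estimate, then resample with replacement
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        particles = random.choices(particles, weights=weights, k=n_particles)
    return estimates

print(particle_filter([0.9, 1.4, 1.1, 0.7]))
```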

Learning Procedures

Parameter Learning

Parameter learning in dynamic Bayesian networks (DBNs) involves estimating the conditional probability tables (CPTs) or distributions for each node given its parents, assuming the network structure is fixed. This process uses time-series data consisting of observations over multiple time slices, where parameters are typically shared across slices to capture dynamics. The goal is to find parameters that maximize the likelihood of the observed data or incorporate prior knowledge for robust estimation. For complete data without hidden variables, maximum likelihood estimation (MLE) provides straightforward parameter estimates. In discrete DBNs, MLE computes CPT entries as normalized frequencies of each child-parent configuration across all time slices, leveraging sufficient statistics like counts summed over the sequence length. For continuous variables modeled with linear-Gaussian CPTs, MLE uses moment matching to estimate means and covariances from sample statistics, such as averaging observed values conditional on parent states. These local computations exploit the DBN's factorization, allowing independent estimation per node. When data includes hidden variables or missing observations, the expectation-maximization (EM) algorithm iteratively estimates parameters by addressing incompleteness. In the E-step, expectations over hidden states are computed using methods like the forward-backward algorithm adapted to the DBN's unrolled structure. The M-step then maximizes the expected complete-data log-likelihood, updating parameters as in MLE but with expected sufficient statistics, such as fractional counts for discrete cases. EM is guaranteed to converge to a local maximum of the observed-data likelihood. Bayesian parameter learning incorporates priors to regularize estimates, especially with limited data, yielding posterior distributions over parameters. For discrete nodes, Dirichlet priors are conjugate to multinomial CPTs, with hyperparameters representing pseudo-counts that update via observed frequencies across time slices. For Gaussian nodes, Wishart priors on precision matrices (inverses of covariances) conjugate with the likelihood, enabling closed-form posterior updates using sufficient statistics like summed outer products. Predictions use the posterior predictive distribution, integrating over parameter uncertainty for more robust forecasts in sequential settings. In time-series settings, sufficient statistics for all methods aggregate evidence across slices, accounting for intra- and inter-slice dependencies without re-estimating per time step due to parameter tying. For instance, transition counts in discrete DBNs sum occurrences of parent-child pairs over the entire sequence. This efficiency scales with sequence length while maintaining the model's temporal assumptions. A representative application is EM learning in HMM-like DBNs for speech recognition, where hidden phonetic states and missing acoustic observations complicate training. Using expected sufficient statistics from the E-step, EM estimates parameters for models with up to 500,000 entries, incorporating long-term articulatory context and short-term stream correlations; this yields 12-29% error rate reductions over standard HMMs on isolated-word tasks by handling incomplete data through expected posteriors over states.
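
With complete data, the maximum-likelihood step reduces to normalized counts pooled across all slices because of parameter tying. The sketch below estimates a transition table P(X_t | X_{t-1}) from fully observed sequences; with hidden states, the same counts would be replaced by expected counts from the E-step. The uniform fallback for unseen parent states is an assumed convention.

```python
from collections import defaultdict

# Maximum-likelihood estimation of a discrete transition CPT from fully
# observed state sequences. Parameter tying across slices means every
# consecutive pair (x_{t-1}, x_t) in every sequence contributes to the
# same sufficient statistics.

def estimate_transition_cpt(sequences, states):
    counts = defaultdict(lambda: defaultdict(float))
    for seq in sequences:
        for prev, curr in zip(seq, seq[1:]):
            counts[prev][curr] += 1.0
    cpt = {}
    for s in states:
        total = sum(counts[s].values())
        # Uniform fallback if a state never appears as a parent (assumed choice)
        cpt[s] = {s2: (counts[s][s2] / total if total else 1.0 / len(states))
                  for s2 in states}
    return cpt

sequences = [["rain", "rain", "dry"], ["dry", "dry", "rain", "rain"]]
print(estimate_transition_cpt(sequences, states=["rain", "dry"]))
```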

Structure Learning

Structure learning in dynamic Bayesian networks (DBNs) involves discovering the intra-slice and inter-slice dependencies from temporal data, adapting methods from static Bayesian networks while respecting the temporal ordering that prohibits backward arcs across time slices. This process typically leverages either score-based or constraint-based approaches to identify the repeating structure of the network over time, assuming stationarity unless specified otherwise. Score-based methods evaluate candidate structures using decomposable scoring functions that balance model fit and complexity, such as the Bayesian information criterion (BIC) or the Bayesian Dirichlet equivalent uniform (BDeu) score. The BIC penalizes the negative log-likelihood by a term proportional to the number of parameters, adapted for DBNs by separately scoring the initial slice prior network and the transition network across slices, with penalties scaled by sequence length and transition counts. The BDeu score incorporates Dirichlet priors with uniform hyperparameters to compute posterior probabilities, using virtual sample counts for both prior and transition components to handle sparse data effectively. These scores enable efficient search over the constrained space of acyclic graphs, where intra-slice arcs point forward within a slice and inter-slice arcs only connect from previous to current slices. Search procedures for score-based learning often employ greedy algorithms adapted from static cases, such as the K2 algorithm, which starts with an empty graph and iteratively adds, deletes, or reverses arcs to maximize the score locally, caching sufficient statistics for speed. For Bayesian approaches, Markov chain Monte Carlo (MCMC) sampling explores the structure space by proposing arc changes and accepting them based on score differences, useful for escaping local optima in complex temporal dependencies. These methods exploit the repeating nature of DBNs, learning the two-slice template and unrolling it across time, though they require handling incomplete data via expected counts from algorithms like the structural EM algorithm. Constraint-based methods infer the structure by testing conditional independencies in the data, using statistical tests like chi-squared on lagged observations to identify d-separations in the temporal graph. An adaptation of the PC algorithm, known as the temporal PC, sequentially prunes edges based on unconditional and conditional independence tests, incorporating time lags to respect the Markov order and avoid testing impossible backward dependencies. This approach maps DBN learning to an augmented static Bayesian network, applying standard constraint tests while enforcing temporal constraints to ensure acyclicity. Recent advances address scalability and data distribution challenges in DBN structure learning. For instance, improved bacterial foraging optimization enhances global search by adapting step sizes and communication among agents, outperforming traditional methods on synthetic time-series benchmarks with up to 20 nodes. Federated learning frameworks enable structure discovery from distributed time-series data without sharing raw observations, using local computations and consensus algorithms like ADMM to aggregate local scores across sites. In 2025, score-based learning of DBNs was advanced using extended mixed graphical models that augment the two-slice template with additional time-lagged structures for improved representation of temporal dependencies.
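
The decomposable BIC score used by score-based search can be sketched for a single child variable and a candidate parent set drawn from the previous slice. The data layout below, a list of parent-configuration/child-value pairs extracted from the time series, and the example values are illustrative assumptions.

```python
import math
from collections import defaultdict

# BIC score of one child variable X_t given a candidate set of parents from
# slice t-1, computed from (parent_config, child_value) transition pairs.
# Data layout and example values are illustrative.

def bic_score(pairs, n_child_states):
    """pairs: list of (parent_config, child_value) tuples from the time series;
    parent_config is a hashable tuple of parent values."""
    counts = defaultdict(lambda: defaultdict(int))
    for config, value in pairs:
        counts[config][value] += 1
    n = len(pairs)
    loglik = 0.0
    for config, child_counts in counts.items():
        total = sum(child_counts.values())
        for value, c in child_counts.items():
            loglik += c * math.log(c / total)
    # Penalty: (independent parameters) * log(N) / 2
    n_params = len(counts) * (n_child_states - 1)
    return loglik - 0.5 * n_params * math.log(n)

# Compare two candidate parent sets for GrassWet_t (made-up transition data):
pairs_rain_only = [(("rain",), "wet"), (("rain",), "wet"), (("dry",), "dry"), (("dry",), "wet")]
pairs_empty = [((), v) for _, v in pairs_rain_only]
print(bic_score(pairs_rain_only, 2), bic_score(pairs_empty, 2))
```
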
Key challenges include enforcing strict temporal ordering, which limits arc directions and increases the risk of suboptimal local maxima, and handling non-stationarity where network parameters evolve over time, requiring extensions like regime-switching models or piecewise stationary assumptions. In practice, these methods have been applied to learn gene regulatory networks from time-series expression data, where score-based approaches with BDeu scores identify causal interactions among genes, as demonstrated in colorectal and breast cancer time-series gene expression datasets revealing key regulators like MCM proteins in cell cycle pathways.

Applications

Traditional Domains

Dynamic Bayesian networks (DBNs) have found early and enduring applications in domains requiring the modeling of temporal dependencies in sequential data, unifying diverse probabilistic models such as hidden Markov models and Kalman filters under a graphical framework. Pre-2010 work demonstrated DBNs' versatility in handling hidden states and observations over time, enabling robust inference in noisy environments through exact and approximate methods like the forward-backward algorithm and particle filtering. In speech recognition, DBNs model phoneme sequences by representing acoustic features as observations of hidden states capturing linguistic context and temporal dynamics, outperforming traditional hidden Markov models in capturing long-range dependencies. A seminal implementation used DBNs to factor the state space for continuous speech recognition, achieving improved word error rates on the TIMIT corpus by explicitly modeling durations and transitions. This approach unified segmental models with acoustic processing, facilitating scalable inference for large-vocabulary recognition. Bioinformatics applications of DBNs predate 2010, particularly in protein secondary structure prediction, where they model the sequential dependencies in amino acid chains to infer helix, sheet, or coil configurations from primary sequence data. One method employed a DBN to capture contextual probabilities across residues, achieving 75.9% Q3 accuracy on datasets like CB513 by integrating emission probabilities for local patterns with transition dynamics for global folding. In gene expression analysis, DBNs reconstruct regulatory networks from time-series data, inferring causal interactions between genes. For example, DBNs have been used to infer gene regulatory networks from time-series expression data, identifying key interactions among cell cycle regulators. These models unify static Bayesian networks with temporal extensions, aiding in the discovery of dynamic pathways. In robotics, DBNs facilitate sensor fusion for state estimation, integrating heterogeneous data streams like GPS positions and IMU accelerations to track pose amid uncertainty. Early applications modeled vehicle localization by fusing satellite, inertial, and odometry observations in a DBN, reducing localization error to under 1 meter in urban environments through recursive Bayesian filtering. This graphical unification of linear and nonlinear dynamics supported robust navigation in autonomous vehicles. A specific extension of Kalman filters using DBNs addressed nonlinear tracking by representing transitions as factored graphs, accommodating non-Gaussian noise and switching dynamics for improved trajectory estimation in cluttered spaces, as demonstrated in experiments with error reductions of 20-30% over extended Kalman variants. For cybersecurity, DBNs analyze sequences of events in intrusion detection, modeling attack progressions as hidden state chains inferred from audit logs and network traffic. One approach predicted intruder goals using a DBN with a sliding observation window on audit data, achieving over 85% accuracy in classifying multi-stage attacks like buffer overflows by capturing temporal correlations in system calls. This enabled proactive forensics by unifying anomaly detection with temporal reasoning, highlighting attack timelines without exhaustive rule-based systems.

Modern and Emerging Uses

In recent years, dynamic Bayesian networks (DBNs) have been applied in healthcare for dynamic risk modeling of gastrointestinal cancers, enabling the integration of temporal data from multi-omics sources to predict disease progression and survival outcomes. For instance, a comprehensive review highlights how DBNs facilitate dynamic risk prediction by modeling time-series data on genetic, proteomic, and clinical variables, improving prognostic accuracy over static models in colorectal and gastric cancers. Similarly, DBNs have advanced the analysis of functional magnetic resonance imaging (fMRI) data to infer time-varying brain connectivity, capturing evolving neural interactions during cognitive tasks, with applications in understanding neurological and psychiatric disorders. In cryptocurrency markets, DBNs support price trend prediction by uncovering causal relationships among temporal indicators such as trading volume, sentiment scores, and related market signals. Research from 2025 shows that DBNs achieve an average accuracy of 70.9% across major cryptocurrencies in forecasting price directions, surpassing baseline models by leveraging intra- and inter-slice dependencies in time-series data. For AI-driven engagement modeling, DBNs enable learner modeling in educational systems by tracking evolving knowledge states and affective responses over time. A 2024 framework using memory-based DBNs predicts learner performance in tasks with 86% accuracy, integrating interaction logs to adapt instructional interventions dynamically. In behavioral health, DBNs model habit formation processes, such as exercise adherence, by representing transitions between motivational states and behavioral outcomes. A 2025 analysis of micro-randomized trial data reveals that DBNs quantify engagement trajectories in walking interventions. DBNs enhance battlefield awareness through uncertainty quantification in dynamic scenarios, fusing sensor data to estimate threat evolution under partial observability. A 2024 model employs DBNs for situational awareness in military decision-making, achieving 84.6% accuracy in predicting enemy intentions to recommend courses of action in simulated environments. Emerging federated DBN learning addresses privacy-preserving structure discovery across distributed devices, allowing collaborative inference without data centralization. A 2024 federated approach for temporal causal discovery in DBNs demonstrates scalable structure learning on heterogeneous time-series data. As an example of interdisciplinary use, DBNs assess the plausibility of nature-economy scenarios in environmental planning, evaluating probabilistic consistency between ecological dynamics and socioeconomic narratives. Recent reviews describe DBN models of time-dependent interactions between ecological and economic factors, aiding policy simulations.

Historical Development

Early Foundations

The foundations of dynamic Bayesian networks (DBNs) trace back to the late 1980s, building directly on earlier probabilistic graphical models. Judea Pearl's development of static Bayesian networks in the 1980s provided the core representational framework for encoding conditional dependencies among variables, enabling efficient probabilistic inference in uncertain domains. Concurrently, hidden Markov models (HMMs), pioneered by Leonard E. Baum and colleagues in the 1960s and 1970s, offered a mechanism for modeling temporal sequences through hidden states and observable emissions, influencing the extension of graphical models to dynamic settings. A foundational contribution came from Thomas Dean and Keiji Kanazawa in 1989, who introduced DBNs, initially as time-sliced Bayesian networks, to reason about persistence, causation, and sequential decisions under uncertainty in temporal domains. Their work developed a framework for integrating reasoning under uncertainty with probabilistic temporal planning, generalizing static networks to handle evolving systems with both intra- and inter-time-slice dependencies. Building on this, Paul Dagum's work at Stanford University's Section on Medical Informatics from 1990 to 1993 was pivotal in extending and applying these ideas, particularly through dynamic network models (DNMs), a specific formulation of DBNs for forecasting. In collaboration with Adam Galper and Eric Horvitz, Dagum unified diverse temporal models, including HMMs for discrete states, Kalman filters for linear Gaussian dynamics, and autoregressive moving average (ARMA) models for time-series forecasting, under a single probabilistic graphical framework. This unification allowed representation of both contemporaneous dependencies within time slices and time-lagged influences across slices. Key early publications on these extensions emerged from Dagum's efforts, focusing on applications in forecasting and medical informatics to model dynamic dependencies in time-series data. Dagum, Galper, and Horvitz's 1992 paper introduced DNMs for tasks such as predicting U.S. car sales influenced by economic indicators, demonstrating the model's ability to integrate multiple time-series sources. This was followed in 1993 by Dagum's application to sleep apnea events, where DNMs captured nonlinear physiological interactions from sensor data to predict respiratory obstructions. The primary motivation for these developments was to address the limitations of classical time-series methods in forecasting, particularly their assumptions of linearity and Gaussianity, which failed to model complex, nonlinear, and non-Gaussian processes in real-world scenarios like medical monitoring or economic projection.

Key Advancements

Kevin Murphy's 2002 tutorial provided a foundational formalization of inference and learning algorithms for dynamic Bayesian networks (DBNs), unifying representation, exact methods like the forward-backward algorithm, and approximate techniques such as expectation-maximization for parameter estimation. His accompanying PhD thesis expanded this framework, detailing scalable implementations for both discrete and continuous-state DBNs, which became a cornerstone for subsequent theoretical developments. In 2005, the introduction of relational dynamic Bayesian networks (RDBNs) extended traditional DBNs with object-oriented representations to handle relational data, enabling modeling of complex interactions among entities with varying numbers and types over time. Developed by Sanghai, Domingos, and Weld, RDBNs incorporate probabilistic relational models to capture intra-slice and inter-slice dependencies in structured data, improving applicability to dynamic domains with evolving sets of related objects. During the 2010s, advances in DBN learning emphasized score-based structure search methods, which evaluate candidate network topologies using decomposable metrics such as BIC to balance model fit and complexity in high-dimensional time-series data. These approaches, often integrated with heuristic searches, addressed the combinatorial explosion in structure learning by prioritizing globally optimal graphs through hill-climbing or genetic algorithms. More recently, in 2024, an improved bacterial foraging optimization algorithm was proposed for DBN structure learning, mimicking bacterial foraging behavior to explore the search space more efficiently and achieve global optima by enhancing population diversity and convergence speed over traditional local search methods. Inference in DBNs saw significant improvements in the 2020s through refined variational and particle methods tailored for large-scale applications, as reviewed in a 2021 IEEE survey, which highlighted mean-field variational approximations for faster posterior estimation and sequential particle filters for handling nonlinear dynamics in state estimation tasks. These techniques reduced computational demands while maintaining accuracy, with particle methods excelling in non-Gaussian settings by resampling trajectories to approximate filtering distributions. Emerging trends as of 2025 include time-varying DBNs designed for non-stationary data, where network structures evolve over time to model regime shifts, as demonstrated in applications like functional MRI analysis that segment temporal dependencies into adaptive slices for improved inference in evolving systems. Concurrently, hybrid integrations of DBNs with deep learning, such as combining recurrent neural networks for feature extraction with Bayesian layers for uncertainty quantification, have gained traction for robust predictions in domains like remaining useful life estimation, blending probabilistic reasoning with neural function approximation.

Software and Tools

Open-Source Libraries

Several open-source libraries facilitate the implementation, learning, and inference of dynamic Bayesian networks (DBNs), particularly in Python and R environments, enabling researchers to model temporal dependencies in probabilistic graphical models. These tools range from specialized packages for structure and parameter learning to general-purpose frameworks that support flexible DBN constructions. The pgmpy library in Python provides core support for DBNs as time-variant extensions of static Bayesian networks, allowing users to define intra-slice and inter-slice edges across time steps. It enables exact inference through junction tree-based algorithms on DBN structures, suitable for discrete and continuous variables, and includes parameter learning via maximum likelihood estimation on time-series data. While pgmpy offers general structure learning algorithms using Bayesian information criterion (BIC) scores for static networks, users can adapt these for DBNs by learning slice structures separately. In R, the bnlearn package extends capabilities to dynamic settings for time-series data, supporting structure learning of DBNs through constraint-based and score-based algorithms adapted for temporal dependencies. It includes dynamic extensions that model evolving networks over time slices, with parameter estimation using the expectation-maximization (EM) algorithm, particularly effective for handling missing values in sequential observations. The dbnlearn package in R is tailored for univariate time-series analysis with DBNs, focusing on structure learning from single-variable sequences and parameter estimation to capture autoregressive patterns. It provides forecasting tools that propagate probabilities forward in time, making it suitable for predictive modeling of univariate time series. For lower-level implementations, the libDAI C++ library offers approximate inference methods for graphical models, including support for DBNs via factor graph representations. It implements Markov chain Monte Carlo (MCMC) techniques, such as Gibbs sampling, to approximate posterior distributions in loopy or complex DBN structures where exact methods are intractable. Probabilistic programming libraries like Pyro and PyMC in Python enable flexible DBN modeling through declarative code, treating time slices as sequential dependencies within generative processes. Pyro supports variational inference for scalable posterior approximation in DBNs, such as hidden Markov models generalized to factored state spaces, while PyMC provides similar variational methods alongside MCMC for time-series models akin to DBNs. Recent developments in these frameworks enhance DBN modeling capabilities. aGrUM is an open-source C++ framework from LIP6, with pyAgrum Python bindings that enable DBN structure learning via score-based algorithms such as greedy hill climbing on temporal data. The bindings facilitate end-to-end workflows, from data to structure discovery and parameter estimation, with efficient C++ backends for large datasets. It offers optional commercial support through integration services. Historical tools, such as Kevin Murphy's Bayes Net Toolbox (BNT) for MATLAB, provided early open-source foundations for DBN inference and learning in academic research.
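
As an illustration of the pgmpy workflow mentioned above, the sketch below defines a minimal two-node DBN and runs an exact query. Class and method names follow pgmpy's documented DBN interface, but exact signatures and behavior should be checked against the installed version; the CPD values are arbitrary.

```python
from pgmpy.models import DynamicBayesianNetwork as DBN
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import DBNInference

# Hidden variable X with observation Y in each slice, plus a temporal arc X_0 -> X_1.
# pgmpy identifies a variable in slice t by the tuple (name, t).
dbn = DBN()
dbn.add_edges_from([(('X', 0), ('Y', 0)),
                    (('X', 0), ('X', 1)),
                    (('X', 1), ('Y', 1))])

# CPDs for the first slice and the transition; values are illustrative.
cpd_x0 = TabularCPD(('X', 0), 2, [[0.6], [0.4]])
cpd_y0 = TabularCPD(('Y', 0), 2, [[0.9, 0.2], [0.1, 0.8]],
                    evidence=[('X', 0)], evidence_card=[2])
cpd_x1 = TabularCPD(('X', 1), 2, [[0.7, 0.3], [0.3, 0.7]],
                    evidence=[('X', 0)], evidence_card=[2])
dbn.add_cpds(cpd_x0, cpd_y0, cpd_x1)
dbn.initialize_initial_state()   # copies slice-0 CPDs forward where none are given

# Query P(X_1 | Y_0 = 0, Y_1 = 1)
inference = DBNInference(dbn)
result = inference.query([('X', 1)], evidence={('Y', 0): 0, ('Y', 1): 1})
print(result[('X', 1)].values)
```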

Commercial Packages

Several commercial software packages provide robust support for Dynamic Bayesian Networks (DBNs), emphasizing reliability, professional support, and integration for industrial-scale applications such as decision support systems and predictive modeling. These tools often include proprietary libraries and graphical interfaces that facilitate DBN construction, inference, and learning while ensuring compliance with industry standards. Unlike open-source alternatives used primarily for prototyping, commercial options prioritize scalability, long-term support, and specialized features for domains requiring high-stakes decision-making. BayesFusion's SMILE is a C++ library serving as a core engine for graphical probabilistic models, including full support for DBNs through temporal arcs that model influences across time slices. It enables both exact inference via junction tree algorithms and approximate methods like stochastic sampling for handling complex, time-evolving systems. SMILE has been applied in forensics for Bayesian analysis of evidence, supporting probabilistic reasoning in investigative scenarios. Complementing SMILE, BayesFusion's GeNIe offers a graphical user interface for interactive DBN modeling, allowing users to build, edit, and learn dynamic models with temporal arcs of arbitrary order. It supports parameter learning from data and visualization of time-slice expansions, making it suitable for exploratory and production environments. GeNIe integrates seamlessly with SMILE for backend computation, enabling hybrid discrete-continuous DBNs. HUGIN, developed by HUGIN EXPERT A/S, is a commercial tool for Bayesian networks with extensions for dynamic models, where time slices represent system evolution and support unrolling for inference over multiple periods. It incorporates EM learning for parameter estimation from incomplete datasets and employs junction tree algorithms for exact probabilistic inference in singly and multiply connected DBNs. These features make HUGIN effective for applications in risk analysis and diagnostics. Netica, from Norsys Software Corp., supports temporal Bayesian networks, also known as DBNs, through time-delayed arcs and persistence nodes that model evolving states, with automatic expansion into static networks for a specified horizon. It has been utilized in bioinformatics for modeling gene regulatory networks and protein interactions over time, aiding in predictive simulations of biological processes. Netica's intuitive interface and learning capabilities, including EM-based parameter estimation, suit applied research in life sciences. BayesFusion tools also feature integrations with cloud services for scalable deployment.

    Aug 1, 2025 · On the other hand, the Expectation-Maximization (EM) algorithm is explicitly designed to incorporate incomplete data into parameter estimates.
  46. [46]
    [PDF] dbnlearn: Dynamic Bayesian Network Structure Learning ...
    Jul 30, 2020 · The package allows learning the structure of univariate time series, learning parameters and forecasting. Details. Package: dbnlearn-package.Missing: EM | Show results with:EM
  47. [47]
    [PDF] libDAI: A Free and Open Source C++ Library for Discrete ...
    Abstract. This paper describes the software package libDAI, a free & open source C++ library that provides implementations of various exact and approximate ...Missing: MCMC | Show results with:MCMC
  48. [48]
    BayesFusion
    BayesFusion provides artificial intelligence modeling and machine learning software based on Bayesian networks. Our software runs on desktops, mobile devices, ...
  49. [49]
    A guide to bayesian networks software for structure and parameter ...
    Throughout this paper, we include structure learning algorithms developed for both probabilistic modeling and causal discovery. For a detailed discussion of the ...
  50. [50]
    SMILE Engine – BayesFusion
    SMILE is a reasoning and learning/causal discovery engine for graphical models, such as Bayesian networks, influence diagrams, and structural equation models.
  51. [51]
    Dynamic Bayesian Networks – BayesFusion
    ​A dynamic Bayesian network (DBN) is a Bayesian network extended with additional mechanisms that are capable of modeling influences over time (Murphy, 2002).
  52. [52]
    [PDF] SMILE: Structural Modeling, Inference, and Learning Engine
    Apr 27, 2024 · SMILE (Structural Modeling, Inference, and Learning Engine) is a software library for performing Bayesian inference, written in C++, ...Missing: forensics | Show results with:forensics
  53. [53]
    Publications based on our software - BayesFusion
    Evidence-based evidence: a practical method for Bayesian analysis of forensic evidence. ... SMILE: Structural Modeling, Inference, and Learning Engine and GeNIe: ...
  54. [54]
    GeNIe Modeler – BayesFusion
    GeNIe Modeler is a graphical user interface (GUI) to SMILE Engine and allows for interactive model building and learning.Missing: DBN | Show results with:DBN
  55. [55]
    Using GeNIe > Dynamic Bayesian networks > Creating a discrete DBN
    Please note that a DBN, as implemented in GeNIe, can have temporal arcs of any order, which means that DBNs in GeNIe can model dynamic processes of any order.
  56. [56]
    Using GeNIe > Dynamic Bayesian networks > Creating a hybrid DBN
    Similarly to hybrid static Bayesian networks, it is possible to create hybrid DBNs, i.e. DBNs that contain both discrete and continuous nodes.
  57. [57]
    Hugin Expert | Hugin Expert
    **Summary of Hugin Support:**
  58. [58]
    Dynamic Bayesian Network — HUGIN GUI 8.8 documentation
    Dynamic Bayesian (or Belief) Networks (DBNs) are used to model systems that evolve over time. The state of the system at a single time instant is modeled using ...
  59. [59]
    Junction Tree — HUGIN GUI 8.9 documentation
    The junction tree is obtained through transformation that involves the processes of moralization and triangulation. It can be shown that exact inference in a ...Missing: EM | Show results with:EM
  60. [60]
    aGrUM/pyAgrum
    - **Commercial Support**: Yes, available. Contact info@agrum.org for integration with a different license.
  61. [61]
    Learning — pyagrum 2.3.0 documentation
    pyAgrum provides a complete framework for learning Bayesian networks from data. It includes various algorithms for structure learning, parameter learning, and ...
  62. [62]
    Introduction to pyAgrum — pyagrum 2.3.0 documentation
    pyAgrum is a scientific C++ and Python library dedicated to Bayesian networks (BN) and other Probabilistic Graphical Models. Based on the C++ aGrUM library ...
  63. [63]
    Norsys - Netica Application
    Netica is a powerful, easy-to-use, complete program for working with belief networks and influence diagrams. It has an intuitive and smooth user interface for ...
  64. [64]
    Dynamic Bayes Nets
    Steps for working with a Dynamic Bayes Net: 1. Create DBN. 2. Generate time expansion. 3. Compile and use. If you have any requests or suggestions for Netica's ...
  65. [65]
    Modelling complex systems with Bayesian Networks using Netica
    Dec 2, 2019 · This is an Introduction Presentation for Learning Bayesian Networks by hands-on practice with examples which may lead to a 3-day workshop. - The ...
  66. [66]
    Bayesian Networks in the Cloud - BayesFusion
    Nov 24, 2021 · How to use AWS Lambda with SMILE and more – see our new video: November 24, 2021 /by BayesFusion. Share this entry.
  67. [67]
    SMILE Anywhere - Bayesian Networks in the Cloud - YouTube
    Nov 23, 2021 · bayesfusion #smile #genie #bayesiannetwork #ai #machinelearning #cloud #aws #lambda #iot SMILE Anywhere - Bayesian Networks in the Cloud ...<|separator|>