
Dynamic causal modeling

Dynamic causal modeling (DCM) is a Bayesian framework for inferring the causal architecture of coupled dynamical systems from observed time-series data, used particularly in neuroscience to estimate effective connectivity between brain regions. It employs generative models based on differential equations to describe how neuronal states evolve and interact under experimental inputs, linking biophysical mechanisms to measured signals like blood-oxygen-level-dependent (BOLD) responses in functional magnetic resonance imaging (fMRI). Originally formulated as a bilinear approximation to nonlinear neuronal dynamics, DCM allows for the quantification of context-dependent modulations in connectivity, such as those induced by cognitive tasks or pharmacological interventions.

Introduced by Karl Friston and colleagues in 2003, DCM was initially developed for evoked responses in fMRI data, building on earlier work in hemodynamic and dynamical-systems modeling. The approach uses variational Bayes to estimate posterior distributions of model parameters, including intrinsic connectivity (baseline coupling between regions) and exogenous influences from stimuli. A key strength lies in its emphasis on model comparison via Bayesian evidence, enabling researchers to select among competing hypotheses about network structures. This probabilistic formulation distinguishes DCM from correlational methods like functional connectivity analysis, as it explicitly models directed influences and their perturbations.

Since its inception, DCM has been extended to other neuroimaging modalities, including electroencephalography (EEG) and magnetoencephalography (MEG), where it accounts for spatiotemporal dynamics of evoked and induced responses. For EEG/MEG, the framework incorporates electromagnetic forward models to map neuronal sources to sensor data, facilitating inferences about oscillatory dynamics and interactions. Applications span cognitive domains such as attention, language processing, and memory, and DCM has been used in numerous studies to test theories of brain function.
Recent advancements as of 2025 include nonlinear extensions for dense graphs, integrations with hierarchical (parametric empirical Bayes) schemes for group-level analyses, and implementations in probabilistic programming languages for modeling complex neural networks.

Introduction

Definition and Principles

Dynamic causal modeling (DCM) is a Bayesian framework designed to infer effective connectivity among brain regions from neuroimaging data, such as functional magnetic resonance imaging (fMRI) or electroencephalography (EEG), by employing generative models that simulate directed influences between neuronal populations. This approach treats the brain as a nonlinear dynamic system perturbed by experimental inputs, generating observable responses through a forward model that links hidden neural states to measured signals. Unlike purely correlational methods, DCM explicitly models causal interactions, enabling the estimation of how activity in one region influences another under specific conditions.

The core principles of DCM revolve around forward modeling, Bayesian inversion, and a clear demarcation from other forms of connectivity analysis. In forward modeling, neural dynamics are first specified as differential equations describing how hidden states evolve in response to inputs, which are then transformed into predicted observations via a biophysical observation model, such as a hemodynamic response function for fMRI. Bayesian inversion follows, where observed data are used to update prior beliefs about model parameters, yielding posterior distributions that quantify uncertainty in connectivity estimates. This distinguishes DCM from functional connectivity, which relies on undirected statistical dependencies without directionality, and from structural connectivity, which maps anatomical pathways but ignores dynamic interactions. Effective connectivity in DCM captures context-dependent coupling between brain regions, where the strength of directed influences can be modulated by experimental or endogenous inputs, allowing for the investigation of task-specific or state-dependent network changes.
At its foundation, the generative model posits that observed data y arise from hidden neural states x according to the equation y = g(x, \theta) + \epsilon, where g is the observation function, \theta represents the parameters governing the dynamics (e.g., intrinsic coupling matrices), and \epsilon is additive measurement noise. The evolution of hidden states x is driven by a state equation incorporating experimental inputs, enabling DCM to model bilinear modulations that reflect how experimental factors alter inter-regional influences.

History and Evolution

Dynamic causal modeling (DCM) was introduced in 2003 by Karl Friston and colleagues as a Bayesian framework for inferring effective connectivity from functional magnetic resonance imaging (fMRI) data, treating the brain as a nonlinear dynamical system perturbed by external inputs. This seminal work extended prior hemodynamic modeling approaches by incorporating bilinear approximations to capture context-dependent interactions among brain regions.

Initial extensions to electroencephalography (EEG) and magnetoencephalography (MEG) occurred in 2006, with David et al. developing DCM for evoked responses using neural mass models to simulate cortical dynamics and forward models for electromagnetic fields. Further refinements in 2006 included parametric empirical Bayes for lead-field parameterization, enabling more robust inferences on hierarchical networks. Nonlinear DCM emerged in 2008, allowing second-order interactions at the neuronal level to model modulatory effects such as attentional gating of connections. DCM subsequently evolved to address steady-state responses, with Moran et al. (2009) proposing spectral formulations based on Fokker-Planck equations for frequency-domain analyses of ongoing activity. Resting-state DCM was formalized in 2014 by Friston et al., adapting the framework to infer intrinsic connectivity fluctuations without external tasks, using stochastic inputs to model endogenous dynamics.

As of 2025, recent advancements include integration with probabilistic programming languages, as detailed by Baldy et al., enabling scalable inference for complex neural models. Multi-scale parcellation schemes have been proposed by Zarghami et al., facilitating hierarchical region definitions in DCM to bridge meso- and macro-scale brain organization. These developments underscore DCM's enduring influence, highlighted in the 2025 commemoration of the Statistical Parametric Mapping (SPM) software's 30-year milestone, where DCM remains a cornerstone for connectivity analyses.

Theoretical Foundations

Bayesian Framework

Dynamic causal modeling (DCM) employs a Bayesian framework to infer the parameters of generative models from observed data, treating model parameters θ as random variables. The posterior distribution over these parameters, given the data y and model m, is computed according to Bayes' theorem as p(θ|y, m) ∝ p(y|θ, m) p(θ|m), where p(y|θ, m) is the likelihood and p(θ|m) is the prior distribution. This approach enables the estimation of effective connectivity by integrating prior beliefs with the evidence provided by the data, facilitating robust inference even with the noisy measurements typical of fMRI or EEG.

Priors in DCM play a crucial role in regularizing the inference process, particularly through hierarchical structures that encode anatomical and physiological knowledge about brain connectivity. For connectivity parameters, such as intrinsic coupling matrices, Gaussian priors are often specified with means centered at zero and variances tuned to ensure system stability, while hierarchical extensions allow subject-specific parameters to be drawn from group-level hyperpriors informed by diffusion tensor imaging tractography or known neuroanatomy. These priors prevent overfitting and incorporate domain-specific constraints, such as sparsity in long-range connections, thereby improving the biological plausibility of the estimated directed influences.

To approximate the intractable posterior, DCM utilizes the variational free-energy principle, which provides a lower bound on the log model evidence ln p(y|m). The free energy F is defined as
F = \ln p(y|m) - D_{\text{KL}}[q(\theta) \| p(\theta|y,m)],
where D_{\text{KL}} is the Kullback-Leibler divergence between an approximate variational density q(θ) and the true posterior; maximizing F with respect to q(θ) minimizes this divergence and tightens the bound. This principle underpins model inversion by approximating the log evidence while balancing model fit and complexity. For posterior covariance estimation, the Laplace approximation assumes a Gaussian form around the maximum a posteriori (MAP) estimate, yielding the covariance
\Sigma = \left( -\frac{\partial^2 \ln p(\theta|y)}{\partial \theta^2} \right)^{-1},
computed as the inverse of the negative Hessian of the log-posterior at the mode, enabling efficient characterization of parameter uncertainty.
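The Laplace covariance above can be checked numerically on a toy conjugate model. The following sketch (purely illustrative, not the SPM implementation) estimates the posterior variance from a finite-difference Hessian of the log-posterior at the mode and compares it with the known analytic value; all variable names and settings are invented for the example.

```python
import numpy as np

# Toy conjugate model: y_i = theta + noise, prior theta ~ N(0, s_prior^2).
# The exact posterior variance is known, so the inverse negative Hessian
# of the log-posterior at the mode can be compared against it.
rng = np.random.default_rng(0)
y = rng.normal(1.0, 0.5, size=50)
s_obs, s_prior = 0.5, 2.0

def log_post(theta):
    # log-likelihood plus log-prior, up to an additive constant
    return (-0.5 * np.sum((y - theta) ** 2) / s_obs**2
            - 0.5 * theta**2 / s_prior**2)

# MAP estimate (closed form in this conjugate case)
prec = len(y) / s_obs**2 + 1.0 / s_prior**2
theta_map = np.sum(y) / s_obs**2 / prec

# Finite-difference curvature of the log-posterior at the mode
h = 1e-4
hess = (log_post(theta_map + h) - 2.0 * log_post(theta_map)
        + log_post(theta_map - h)) / h**2
sigma_laplace = -1.0 / hess        # Sigma = (negative Hessian)^(-1)
```

For nonlinear DCMs the same recipe applies with a vector-valued θ and a Hessian matrix; here the scalar case suffices to show that the curvature at the mode recovers the posterior variance.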

Generative Models of Brain Dynamics

Dynamic causal modeling (DCM) employs generative models to simulate the evolution of hidden neural states and their mapping to observed data, enabling inferences about effective connectivity in the brain. These models treat the brain as a nonlinear dynamical system perturbed by external inputs, where the forward process generates predicted responses that can be compared to empirical measurements. The core of the generative framework consists of neural state equations describing the temporal dynamics of neuronal activity and observation equations linking these states to measurable signals.

The neural dynamics are formalized through ordinary differential equations of the form \dot{x} = f(x, u, \theta), where x represents the vector of hidden neural states (such as membrane potentials or population activities across brain regions), u denotes exogenous inputs (e.g., sensory stimuli or task demands), and \theta encompasses model parameters governing the system's behavior. In DCM, \theta includes connectivity matrices: the intrinsic connectivity matrix A, which captures baseline coupling between regions; modulatory matrices B_j, which encode context- or input-dependent changes in connectivity; and the driving matrix C, which specifies direct influences of inputs on states. For fMRI applications, these dynamics are often approximated using a bilinear form derived from a first-order Taylor expansion around a steady-state point, \dot{x} = (A + \sum_j u_j B_j) x + C u, which linearizes nonlinear interactions while preserving essential modulatory effects. This approximation facilitates tractable simulations of regional interactions under experimental perturbations.

For EEG and MEG, DCM generative models incorporate canonical microcircuit architectures to represent local cortical dynamics, typically comprising interconnected excitatory and inhibitory neuronal populations.
These models, often based on neural mass formulations, simulate four key subpopulations per source region: spiny stellate cells (excitatory input layer), inhibitory interneurons, superficial pyramidal cells (excitatory feedback), and deep pyramidal cells (excitatory output). Connectivity within and between these populations is parameterized to reflect hierarchical cortical organization, with parameters in \theta modulating synaptic gains and delays to generate spatiotemporal patterns of electrical activity. This structure allows the model to capture oscillatory phenomena and directed influences at the circuit level.

The observation component of the generative model bridges hidden states to data via y = g(x, \theta, \lambda) + z, where y is the observed signal, g is a nonlinear mapping function parameterized by \theta and observation-specific parameters \lambda (e.g., lead fields or hemodynamic responses), and z represents measurement noise. This equation encapsulates the forward process from neural activity to sensor measurements, ensuring the overall model predicts empirical time series under specified priors on parameters.
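The bilinear state equation can be illustrated with a minimal two-region simulation. The coupling values below are made up for the sketch (they are not estimates from data), and Euler integration stands in for the more careful integrators used in practice.

```python
import numpy as np

# Illustrative integration of the bilinear state equation
# dx/dt = (A + u * B) x + C u for two regions.
A = np.array([[-0.5, 0.0],
              [0.2, -0.5]])      # intrinsic coupling; negative self-connections
B = np.array([[0.0, 0.0],
              [0.3, 0.0]])       # input-dependent gain on the 1 -> 2 connection
C = np.array([1.0, 0.0])         # driving input enters region 1 only

dt, n_steps = 0.01, 2000         # Euler step (s); 20 s of simulated time
x = np.zeros(2)
trace = np.empty((n_steps, 2))
for t in range(n_steps):
    u = 1.0 if (t * dt) % 2.0 < 1.0 else 0.0   # 1 s on / 1 s off boxcar input
    dx = (A + u * B) @ x + C * u               # bilinear state equation
    x = x + dt * dx
    trace[t] = x
```

Because the self-connections are negative, activity stays bounded; when the input is on, it both drives region 1 (via C) and increases the gain of the region 1 to region 2 connection (via B).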

Experimental Design

Task-Based Paradigms

Task-based paradigms in dynamic causal modeling (DCM) utilize controlled experimental manipulations to probe how specific stimuli and cognitive factors influence effective connectivity among brain regions. These paradigms typically involve presenting exogenous sensory inputs or task demands that drive neural activity, allowing researchers to test hypotheses about context-dependent interactions, such as how attention alters signal flow in sensory hierarchies. By design, task-based approaches enable the isolation of driving effects from modulatory influences, providing a framework to infer causal mechanisms underlying observed signals.

Factorial designs are particularly suited for DCM in task-based settings, as they facilitate the examination of main effects and interactions on connectivity parameters. For instance, a 2x2 setup might cross sensory stimuli (e.g., visual vs. auditory) with cognitive modulators (e.g., attention vs. no attention), enabling the assessment of how attentional load changes coupling strengths between sensory and prefrontal areas. This design structure supports Bayesian model comparison to evaluate competing hypotheses, such as whether modulation occurs via top-down enhancement or bottom-up gating. Such approaches have been shown to enhance the sensitivity of DCM to detect subtle connectivity changes induced by experimental factors.

In DCM, driving inputs, represented by the C matrix, capture the direct influence of exogenous stimuli on specific regions, modeling how task onsets perturb neural states from baseline. Conversely, modulatory inputs, encoded in the B matrices, reflect experimental factors that alter intrinsic connections, such as increased gain on forward connections during selective attention. These distinctions allow DCM to disentangle baseline coupling (the A matrix) from task-induced perturbations, ensuring that inferred effective connectivity reflects experimentally controlled variations rather than endogenous fluctuations. The bilinear approximation underlying these matrices provides a computationally tractable way to estimate how stimuli propagate through networks.
A representative application involves face-processing tasks, where DCM models differential responses to faces versus houses to infer top-down versus bottom-up influences in the ventral visual stream. In such paradigms, stimuli like faces drive activity in early visual areas, while attentional instructions modulate connectivity from higher regions such as the fusiform face area, allowing inference on hierarchical processing. Bayesian inversion of these models reveals, for example, strengthened backward connections under top-down conditions, supporting theories of predictive coding in perception. This task-based setup has demonstrated robust estimation of parameters when contrasts are optimized to maximize signal variance across conditions.

Design efficiency in task-based DCM paradigms is achieved by optimizing experimental contrasts to improve parameter identifiability and reduce estimation uncertainty. Efficient designs prioritize high-variance inputs that uniquely perturb targeted connections, such as rapid event-related sequences that deconvolve overlapping responses. Simulations indicate that balanced layouts, with sufficient trials per condition, yield posterior estimates with narrow credible intervals, particularly for modulatory effects. This optimization ensures that DCM inferences are reliable, minimizing posterior correlations among parameters and enhancing the generalizability of findings across subjects.
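The separation of driving and modulatory inputs in a balanced factorial design can be sketched as two input regressors. All timings below are invented for the illustration; in a real study they would come from the stimulus protocol.

```python
import numpy as np

# Sketch of inputs for a hypothetical 2x2 factorial DCM design:
# u1 is a driving stimulus input (entering via the C matrix) and
# u2 an attention input (entering via the B matrices).
dt = 0.1                                   # seconds per time bin
t = np.arange(int(240 / dt)) * dt          # 4-minute session

u1 = ((t % 30) < 15).astype(float)         # stimulus: 15 s on / 15 s off
u2 = ((t % 60) < 30).astype(float)         # attention: 30 s on / 30 s off

# Occupancy of the four factorial cells; a balanced design spends
# roughly equal time in each, which aids parameter identifiability.
cells = {
    "stim+attn": np.mean((u1 == 1) & (u2 == 1)),
    "stim_only": np.mean((u1 == 1) & (u2 == 0)),
    "attn_only": np.mean((u1 == 0) & (u2 == 1)),
    "neither":   np.mean((u1 == 0) & (u2 == 0)),
}
```

With the block lengths chosen here, each cell occupies a quarter of the session, so the modulatory effect of u2 on connections driven by u1 is estimable with minimal correlation between regressors.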

Resting-State Approaches

Resting-state dynamic causal modeling (rsDCM) extends the standard DCM framework to analyze intrinsic brain fluctuations in the absence of external tasks or stimuli, treating these fluctuations as stochastic inputs that drive neuronal dynamics. Introduced in 2014, rsDCM models the endogenous variability observed in resting-state fMRI data by incorporating noise terms into the state equations, enabling inferences about effective connectivity from cross-spectral densities rather than evoked responses. This approach shifts the focus from task-induced changes to the baseline architecture of brain networks, capturing the slow fluctuations (typically 0.01–0.1 Hz) characteristic of resting states.

At its core, rsDCM augments the deterministic state equations of classical DCM with stochastic terms representing endogenous perturbations. The neuronal dynamics are described by the stochastic differential equation \frac{dx}{dt} = f(x, \theta) + w, where x denotes the state variables (e.g., neuronal activity), f(x, \theta) captures the deterministic evolution governed by parameters \theta, and w is zero-mean noise with a specified spectral density, often assumed to have a power-law form to match the 1/f-like characteristics of resting-state signals. This formulation allows rsDCM to generate predicted cross-spectral densities from the propagation of these fluctuations across coupled regions, with model inversion performed using variational Bayes to estimate parameters and their uncertainties.

A prominent application of rsDCM involves inferring effective connectivity within the default mode network (DMN), a set of regions including the medial prefrontal cortex (mPFC), posterior cingulate cortex (PCC), and inferior parietal lobules that exhibit coordinated activity during rest. For instance, analyses of resting-state fMRI data have revealed directed influences such as from mPFC to PCC and from the right inferior parietal lobule to both mPFC and PCC, highlighting asymmetric right-hemisphere dominance in DMN interactions.
These findings demonstrate rsDCM's utility in elucidating the causal structure underlying intrinsic network coherence, with Bayesian model selection used to compare competing network topologies. Unlike task-based DCM, which emphasizes modulatory effects on connectivity induced by experimental inputs, rsDCM primarily estimates the baseline coupling matrix (A) that governs unconditional interactions between regions, without bilinear modulations from external drivers. This distinction allows rsDCM to probe the intrinsic repertoire of brain states, providing a complementary perspective to the context-dependent inferences from task paradigms.
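The core idea of spectral resting-state modeling can be sketched with a noise-driven linear system: endogenous fluctuations propagate through directed coupling, and the cross-spectral density between regions is the data feature the model is fitted to. The coupling values below are illustrative, not fitted to data, and simple Euler-Maruyama integration is used.

```python
import numpy as np
from scipy import signal

# Minimal resting-state-style sketch: a stable two-region linear
# system driven only by endogenous noise (no task input).
rng = np.random.default_rng(1)
A = np.array([[-1.0, 0.0],
              [0.4, -1.0]])          # directed influence: region 1 -> region 2
dt, n_steps = 0.05, 20000
x = np.zeros(2)
xs = np.empty((n_steps, 2))
for t in range(n_steps):
    w = rng.normal(0.0, 1.0, size=2) * np.sqrt(dt)   # endogenous fluctuations
    x = x + dt * (A @ x) + w                          # Euler-Maruyama step
    xs[t] = x

# Cross-spectral density between the two regions, estimated by
# Welch's method; spectral DCM fits this kind of summary statistic.
f, cxy = signal.csd(xs[:, 0], xs[:, 1], fs=1.0 / dt, nperseg=1024)
peak_freq = f[np.argmax(np.abs(cxy))]   # coherence concentrates at low frequencies
```

Because the system is low-pass (negative self-connections), the coupled fluctuations are coherent mainly at low frequencies, mirroring the slow dynamics that rsDCM targets.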

Model Specification

Neural Models

In dynamic causal modeling (DCM), neural models begin with the specification of regions of interest (ROIs) that serve as nodes in the network, representing key areas whose interactions are hypothesized to underlie observed responses. These ROIs are typically selected based on anatomical or functional criteria, often derived from standardized atlases such as the Automated Anatomical Labeling (AAL) atlas or probabilistic functional parcellations from meta-analytic databases like BrainMap or Neurosynth. For instance, in visual processing studies, ROIs might include primary visual cortex (V1) and motion-sensitive area V5/MT, defined by spherical volumes centered on peak coordinates from group-level activations or atlas labels to ensure reproducibility across subjects. This selection constrains the model to a manageable number of nodes, usually 4–8, to balance biological plausibility with computational feasibility.

The core of the neural model is captured by connectivity matrices that parameterize the directed influences among ROIs, governed by a system of differential equations describing the evolution of hidden neural states. The matrix A encodes baseline or intrinsic connectivity, representing the fixed coupling strengths between regions in the absence of experimental perturbations, such as the default forward and backward connections in hierarchical sensory systems. The matrix B specifies modulatory effects, where experimental conditions (e.g., attentional tasks) alter the connection strengths in A, allowing context-dependent changes in effective connectivity. The matrix C defines direct input effects, modeling how exogenous stimuli (e.g., sensory inputs) drive specific ROIs without intermediary connections. To promote neurobiologically realistic sparsity—reflecting that not all regions are densely interconnected—prior distributions on these matrices impose shrinkage, often using zero-mean Gaussian priors for absent connections or hierarchical priors informed by anatomical atlases like the CoCoMac database.
This sparsity regularization prevents overfitting and favors parsimonious models aligned with known cortical hierarchies.

Nonlinear extensions to these linear formulations enable the modeling of context-sensitive interactions that go beyond the bilinear approximation. In nonlinear DCM, activity in one region acts as a gating influence on connections between others, parameterized by an additional matrix D that captures multiplicative interactions (e.g., prefrontal activity modulating sensory gain in V1-to-V5 pathways during attention). This approach, introduced to handle phenomena like top-down modulation in perceptual inference, expands the state equations to second-order terms while maintaining tractability for Bayesian inversion, and has been validated in applications where nonlinear gating outperforms linear models in explaining evoked responses. Such extensions are particularly useful for capturing emergent behaviors in distributed systems without resorting to computationally intensive full nonlinear simulations.

To explore uncertainty in neural architecture, DCM employs a hierarchical model space where families of models are defined by varying structural assumptions, such as the presence or absence of specific connections or input regimes. Each family partitions the model space—for example, one family assuming only forward connectivity versus another including bidirectional links—and Bayesian model selection or averaging is applied across families to identify the most plausible architecture at individual or group levels. This hierarchical approach, often using priors such as a Dirichlet distribution over model probabilities, facilitates inference on overarching hypotheses by pooling evidence from multiple competing specifications, enhancing robustness in group studies.
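The idea of a model space partitioned into families can be made concrete by enumerating candidate structure matrices. The three-region hierarchy and the 0.5 coupling value below are assumptions of the sketch, not a prescribed DCM convention.

```python
import numpy as np
from itertools import product

# Illustrative model-space construction for three regions: each model
# is a binary choice over off-diagonal connections, and families group
# models by a shared feature (any backward connection present or not).
regions = 3
offdiag = [(i, j) for i in range(regions) for j in range(regions) if i != j]
forward = {(1, 0), (2, 1)}             # A[i, j]: influence of region j on i
backward = {(0, 1), (1, 2)}            # reciprocal (backward) connections

models, families = [], []
for bits in product([0, 1], repeat=len(offdiag)):
    present = {c for c, b in zip(offdiag, bits) if b}
    if not forward <= present:         # structural prior: keep the full
        continue                       # forward chain 0 -> 1 -> 2 in every model
    A = -np.eye(regions)               # stable (negative) self-connections
    for (i, j) in present:
        A[i, j] = 0.5
    models.append(A)
    families.append("with_backward" if present & backward else "forward_only")
```

With two forward connections fixed present and four free connections, the space contains 16 models, of which 4 fall into the forward-only family; family-level inference then pools evidence over each group rather than committing to a single structure.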

Observation Models for fMRI

In dynamic causal modeling (DCM) for functional magnetic resonance imaging (fMRI), the observation model specifies the generative process that transforms hidden neural states into observable blood oxygenation level-dependent (BOLD) signals. This forward mapping is essential for inferring effective connectivity, as it accounts for the convoluted and nonlinear relationship between neuronal activity and measured hemodynamic responses. The model assumes that regional neural activity, derived from the underlying state equations, drives localized changes in cerebral blood flow, which in turn modulate vascular volume and deoxyhemoglobin concentration to produce the BOLD contrast.

The core of this observation model is the balloon model of hemodynamics, which conceptualizes the cerebral vasculature—particularly postcapillary venules—as compliant balloons that expand in response to neurally induced increases in blood flow. Introduced by Buxton et al. and extended for nonlinear fMRI analyses, the model links neural input to blood flow dynamics, blood volume changes, and deoxyhemoglobin dissipation, with the BOLD signal emerging as a monotonic, nonlinear function of the intravascular and extravascular signal contributions. This framework captures key physiological features, such as flow-volume coupling via Grubb's law (where blood volume scales as flow raised to an exponent α ≈ 0.38) and oxygen extraction fraction adjustments during activation.

The dynamics of the balloon model are described by a system of ordinary differential equations for the hemodynamic state variables: normalized blood flow f, normalized blood volume v, and normalized deoxyhemoglobin content q. Neural activity n (typically the activity of excitatory neuronal populations) serves as the driving input.
A key pair of equations governs the rate of change in blood flow: neural activity induces a vasodilatory signal s, which is subject to decay and autoregulatory feedback and in turn drives flow:

\frac{ds}{dt} = n - \kappa s - \gamma (f - 1), \qquad \frac{df}{dt} = s

Here, \kappa is the decay rate of the vasodilatory signal (~0.4 s^{-1}), and \gamma is the autoregulation rate (~0.2 s^{-1}), which stabilizes flow around its resting value (normalized to 1). Complementary equations model blood volume and deoxyhemoglobin evolution, incorporating Grubb's law:

\frac{dv}{dt} = \frac{f - v^{1/\alpha}}{\tau}, \qquad \frac{dq}{dt} = \frac{ f\, E(f, E_0)/E_0 - v^{1/\alpha}\, q/v }{\tau}

where \tau is the hemodynamic transit time (typically ~1 s), E_0 is the baseline oxygen extraction fraction (~0.34), E(f, E_0) = 1 - (1 - E_0)^{1/f} is the flow-dependent oxygen extraction fraction, and \alpha ≈ 0.38 from Grubb's law. The observed BOLD signal y for each region is then given by a weighted sum of hemodynamic states:

y = k_1 (1 - q) + k_2 \left(1 - \frac{q}{v}\right) + k_3 (1 - v) + \epsilon

with parameters k_1, k_2, k_3 reflecting magnetic field strength and tissue properties (e.g., k_1 ≈ 7 E_0 at 1.5 T), and \epsilon as measurement noise. These equations ensure the model reproduces canonical hemodynamic response functions (HRFs) peaking ~5–6 s post-stimulus.

To handle nonlinear interactions, such as supralinear flow-volume coupling or history-dependent responses, the balloon model admits a Volterra-kernel expansion of the mapping from neural activity to the BOLD response. The first-order kernel approximates the standard linear HRF, while higher-order (second- and third-order) kernels capture interactions between successive neural inputs, enabling DCM to model phenomena like post-stimulus undershoot or refractory effects without region-specific tuning. These kernels are derived analytically from the differential equations, providing a kernel expansion of the input-output mapping up to second order in most implementations.
Bayesian estimation in DCM imposes priors on hemodynamic parameters to ensure physiological plausibility and stationarity across brain regions, reflecting the assumption that neurovascular coupling is regionally invariant. Specifically, parameters like \kappa (signal decay rate, ~0.4 s^{-1}), \gamma (autoregulation rate, ~0.2 s^{-1}), \tau (transit time, ~1 s), \alpha (prior 0.32), E_0 (0.34), and the ratio of intravascular to extravascular signal contributions receive Gaussian or log-normal priors centered on empirical values from invasive and fMRI validations, with variances allowing modest deviation (~10–20%). This stationarity simplifies model inversion while enabling group-level inferences on connectivity parameters. Neural states from the core DCM model are convolved with these hemodynamic dynamics to generate predicted BOLD time series for each region.
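The hemodynamic cascade can be integrated directly to produce a canonical HRF. The sketch below uses the parameter values quoted above, a 1-second neural burst, and plain Euler integration; it is an illustration of the balloon model's behavior, not the SPM implementation.

```python
import numpy as np

# Euler integration of the balloon model driven by a brief neural burst.
kappa, gamma, tau, alpha, E0 = 0.4, 0.2, 0.9, 0.32, 0.34
k1, k2, k3 = 7.0 * E0, 2.0, 2.0 * E0 - 0.2    # illustrative 1.5 T constants

dt, n_steps = 0.01, 3000                      # 30 s at 10 ms resolution
s, f, v, q = 0.0, 1.0, 1.0, 1.0               # resting values
bold = np.empty(n_steps)
for i in range(n_steps):
    n = 1.0 if i * dt < 1.0 else 0.0          # 1 s neural burst
    E = 1.0 - (1.0 - E0) ** (1.0 / f)         # flow-dependent O2 extraction
    ds = n - kappa * s - gamma * (f - 1.0)    # vasodilatory signal
    df = s                                    # signal drives flow
    dv = (f - v ** (1.0 / alpha)) / tau       # balloon volume
    dq = (f * E / E0 - v ** (1.0 / alpha) * q / v) / tau
    s, f, v, q = s + dt * ds, f + dt * df, v + dt * dv, q + dt * dq
    bold[i] = k1 * (1.0 - q) + k2 * (1.0 - q / v) + k3 * (1.0 - v)

t_peak = np.argmax(bold) * dt                 # HRF peak latency (s)
```

Running this produces a positive BOLD transient peaking a few seconds after the burst, followed by an undershoot as flow returns to baseline while the balloon is still distended.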

Observation Models for EEG/MEG

Observation models in dynamic causal modeling (DCM) for electroencephalography (EEG) and magnetoencephalography (MEG) describe the biophysical processes linking underlying neural activity to measured signals, exploiting the high temporal resolution of electromagnetic recordings compared to the slower vascular responses in fMRI. These models integrate neural mass approximations of cortical dynamics with electromagnetic forward solutions to generate observable data, enabling inference on effective connectivity at millisecond timescales. The core structure posits that scalp potentials or magnetic fields arise from synchronized postsynaptic currents in pyramidal cell populations, modeled as equivalent current dipoles.

Central to these observation models are neural mass models (NMMs), which approximate the collective behavior of neuronal ensembles without resolving single-neuron spiking. The canonical NMM in DCM for EEG/MEG is the Jansen-Rit model, originally developed to simulate alpha rhythms and later adapted for evoked responses. This model represents each cortical source as three interconnected subpopulations—superficial pyramidal cells (excitatory output), spiny stellate cells (excitatory input), and inhibitory interneurons—with dynamics governed by mean membrane potentials and firing rates coupled via sigmoid functions. The excitatory-inhibitory balance captures local amplification and suppression, producing oscillatory patterns observed in EEG/MEG, such as event-related potentials or induced rhythms.

The link between these dipole sources and sensor measurements is provided by lead-field matrices, which encode volume conduction and magnetic induction effects. The observation equation is given by:

\mathbf{y}(t) = \mathbf{L} \mathbf{J}(t) + \boldsymbol{\epsilon}(t)

where \mathbf{y}(t) denotes the vector of EEG/MEG channel data at time t, \mathbf{L} is the lead-field matrix (derived from head geometry), \mathbf{J}(t) represents the dipole moments from neural sources, and \boldsymbol{\epsilon}(t) is additive sensor noise.
The lead-field matrix \mathbf{L} is computed using boundary element methods or realistic head models from structural MRI, projecting source activity onto sensor space while accounting for tissue conductivities. This forward model assumes quasi-static approximations, suitable for the frequencies (up to ~100 Hz) relevant to EEG/MEG.

An advancement in these models is the canonical microcircuit (CMC), which incorporates layer-specific cortical dynamics for more biologically plausible simulations of laminar EEG/MEG signals. The CMC extends the Jansen-Rit framework to four populations—spiny stellate cells, superficial pyramidal cells, deep pyramidal cells, and inhibitory interneurons—reflecting the canonical cortical column with intrinsic and extrinsic connections across layers. This structure allows DCM to estimate layer-resolved effective connectivity, such as excitatory inputs to granular layers and inhibitory dynamics in supragranular layers, enhancing interpretations of source-specific contributions to scalp data.

Source locations in these models are informed by priors derived from structural MRI, ensuring anatomical plausibility. Priors typically constrain dipoles to cortical gray-matter surfaces segmented from individual MRIs, with initial positions guided by functional localizations from task data or atlases. This Bayesian approach incorporates uncertainty in location estimates, often tightening variances based on co-registered fMRI peaks or prior source reconstructions to improve model identifiability. Such priors mitigate ill-posedness in the inverse problem, facilitating robust inference on neural causes of observed electromagnetic fields.
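The forward projection y = L J + ε can be demonstrated with a toy lead field. A real L would come from a boundary-element head model; here it is a fixed random matrix, and the source waveforms are invented for the sketch.

```python
import numpy as np

# Toy forward projection: three dipole sources mapped to eight sensors
# through a stand-in lead-field matrix.
rng = np.random.default_rng(2)
n_sensors, n_sources, n_times = 8, 3, 200
L = rng.normal(0.0, 1.0, size=(n_sensors, n_sources))   # stand-in lead field

t = np.linspace(0.0, 0.5, n_times)                       # 500 ms epoch
J = np.vstack([np.sin(2 * np.pi * 10 * t),               # 10 Hz source
               np.sin(2 * np.pi * 10 * t + 0.5),         # phase-lagged copy
               np.zeros(n_times)])                       # silent source

noise = 0.05 * rng.normal(size=(n_sensors, n_times))
Y = L @ J + noise                                        # sensor-space data

# A silent source contributes nothing: zeroing its lead-field column
# leaves the noiseless prediction unchanged.
L0 = L.copy()
L0[:, 2] = 0.0
```

The linearity of the projection is what makes the inverse problem ill-posed (many source configurations map to similar sensor data), motivating the anatomical priors described above.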

Model Estimation

Variational Bayes Approximation

In dynamic causal modeling (DCM), the posterior distribution over model parameters given observed data and the model structure, p(\theta | y, m), is approximated using variational Bayes (VB) under a mean-field assumption. This approach factorizes the approximate posterior q(\theta) into independent marginals over parameter subsets, q(\theta) = \prod_i q_i(\theta_i), to render inference tractable for complex nonlinear generative models. By minimizing the divergence between q(\theta) and the true posterior, VB provides a deterministic scheme for approximate inference that balances model fit and complexity.

The objective function in this framework is the variational free energy, defined as F = \left\langle \ln p(y, \theta | m) \right\rangle_q - \left\langle \ln q(\theta) \right\rangle_q, where the expectation \langle \cdot \rangle_q is taken with respect to q(\theta). This serves as a lower bound on the log model evidence \ln p(y | m), and its maximization equates to minimizing the KL divergence \mathrm{KL}[q(\theta) \| p(\theta | y, m)]. The first term captures the expected log joint density of data and parameters under the approximate posterior, while the subtracted term contributes the entropy of the approximate posterior, discouraging overconfident approximations.

Optimization proceeds iteratively through gradient-based updates on the free energy with respect to the sufficient statistics of q(\theta), typically the mean \mu and precision \Pi (inverse covariance). These updates employ a Gauss-Newton scheme, akin to an expectation-maximization algorithm, adjusting \mu and \Pi until convergence:

\mu \leftarrow \mu + \Pi^{-1} \nabla_\mu F, \quad \Pi = -\nabla_\mu^2 F.

This process inverts the generative model, yielding point estimates and uncertainties for parameters, with convergence monitored via the free energy itself. The Laplace assumption underpins the Gaussian form of the approximate posteriors, approximating q_i(\theta_i) \sim \mathcal{N}(\mu_i, \Sigma_i) via a second-order Taylor expansion of the log joint around its mode.
This local approximation handles the nonlinearities in DCM's generative models efficiently, providing conditional covariances that encode parameter uncertainties without requiring Monte Carlo sampling.
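A toy version of this Gauss-Newton ascent can be written for a one-parameter nonlinear model. The model, noise levels, and prior below are invented for the illustration; the point is the update pattern (ascend the objective using a positive curvature approximation, then read off the Laplace variance), not SPM's actual scheme.

```python
import numpy as np

# Toy variational-Laplace sketch: y = exp(theta) + noise, with a
# Gaussian prior theta ~ N(0, p2). Gauss-Newton ascent on the log
# joint yields the mode mu and curvature Pi.
rng = np.random.default_rng(3)
theta_true = 0.5
y = np.exp(theta_true) + rng.normal(0.0, 0.1, size=100)
s2, p2 = 0.1**2, 1.0                # noise variance, prior variance

mu = 0.0                            # start at the prior mean
for _ in range(50):
    e = y - np.exp(mu)              # prediction errors
    grad = np.sum(e) * np.exp(mu) / s2 - mu / p2
    # Gauss-Newton curvature: drops the second-derivative-of-model term,
    # guaranteeing a positive precision
    Pi = len(y) * np.exp(2.0 * mu) / s2 + 1.0 / p2
    mu = mu + grad / Pi             # ascend the log joint

sigma_post = 1.0 / Pi               # Laplace posterior variance at the mode
```

After a handful of iterations the mode converges near the generating value, and the inverse curvature gives a small posterior variance, reflecting the 100 observations constraining a single parameter.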

Parameter and Uncertainty Estimation

In dynamic causal modeling (DCM), the variational Bayes (VB) approximation provides posterior estimates of the model parameters, which characterize the effective connectivity among brain regions. These parameters include the intrinsic connectivity matrix \mathbf{A}, which encodes baseline coupling; the modulatory input matrix \mathbf{B}, which captures context-dependent changes; and the direct input matrix \mathbf{C}, which specifies exogenous influences on regional activity. The posterior distribution is approximated as a multivariate Gaussian, with the mean providing point estimates of these matrices and the covariance matrix quantifying parameter uncertainties.

The conditional uncertainties arise from the posterior covariance matrix, reflecting the confidence in the estimates given the observed data and prior beliefs. Diagonal elements of this matrix yield variances for individual parameters, such as the strength of a specific connection, while off-diagonal elements indicate correlations between parameters. To assess statistical significance, 95% credible intervals are computed from this Gaussian approximation, typically spanning \mu \pm 1.96 \sqrt{\Sigma_{ii}}, where \mu is the posterior mean and \Sigma_{ii} is the variance for parameter i. These intervals allow researchers to determine whether a connection strength differs reliably from zero or from a prior expectation.

DCM addresses inherent non-identifiability in connectivity estimation—where multiple parameter sets can produce similar observations—through informative priors and structural model constraints. Shrinkage priors on \mathbf{A}, typically zero-mean Gaussians that promote sparse or physiologically plausible connectivity, regularize the posterior and mitigate overfitting.
Model constraints, like fixing certain connections to zero based on anatomical knowledge, further reduce ambiguity, ensuring interpretable estimates. For instance, in attention-to-motion tasks, DCM estimates the modulatory strength in \mathbf{B} for connections such as V1 to V5, where posterior means might indicate a positive attentional modulation (e.g., a 0.2 Hz increase in coupling), with 95% credible intervals excluding zero to confirm task-specific enhancement. Uncertainties here are often smaller for well-constrained modulatory parameters due to strong experimental designs that isolate attentional effects.
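
Given a posterior mean and covariance, the credible-interval computation above reduces to a few lines. The numerical values below are invented for illustration:

```python
import numpy as np

# Hypothetical posterior summaries for two connectivity parameters
mu = np.array([0.20, -0.05])          # posterior means (e.g., modulatory strengths, Hz)
Sigma = np.array([[0.0025, 0.0004],
                  [0.0004, 0.0100]])  # posterior covariance

sd = np.sqrt(np.diag(Sigma))                      # per-parameter posterior SD
lower, upper = mu - 1.96 * sd, mu + 1.96 * sd     # 95% credible intervals
excludes_zero = (lower > 0) | (upper < 0)         # "reliably nonzero" check
```

Here the first parameter's interval excludes zero (a reliable effect) while the second's does not, matching the decision rule described above.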

Model Comparison

Bayesian Model Selection

In dynamic causal modeling (DCM), Bayesian model selection (BMS) enables the comparison of competing models to infer the most plausible causal architecture underlying neuroimaging data at the single-subject level. The core quantity for this comparison is the model evidence, denoted as p(y \mid m), which represents the probability of observing the data y given a specific model m. This evidence quantifies how well the model predicts the data while accounting for model complexity, allowing researchers to select among alternative hypotheses, such as different connectivity structures or input regimes. The model evidence is computationally approximated using the variational free energy F, such that p(y \mid m) \approx \exp(F), where F provides a lower bound on the log-evidence derived from variational Bayes inference. This facilitates efficient model comparison by balancing the accuracy of the model's fit to the data against its complexity, as expressed in the decomposition \log p(y \mid m) = \text{Accuracy}(m) - \text{Complexity}(m). For comparing families of models—groups sharing common features, such as models with versus without a particular connection—the evidences are aggregated to yield family-level posteriors, enabling robust inference even when individual model selection is sensitive to priors or noise. Bayes factors provide a direct metric for pairwise model comparisons, defined as the ratio of evidences B_{ij} = p(y \mid m_i) / p(y \mid m_j), where values greater than 3 indicate substantial evidence favoring model i over j. For non-nested models or families, this ratio extends naturally to assess relative support. Posterior probabilities over models are then obtained by applying a softmax function to the log-evidences, assuming equal prior probabilities: p(m_k \mid y) = \frac{\exp(\log p(y \mid m_k))}{\sum_j \exp(\log p(y \mid m_j))}, yielding probabilities that sum to 1 and reflect the updated belief in each model after observing the data. A key advantage of this Bayesian approach is its embodiment of Occam's razor, which automatically penalizes overly complex models through the complexity term in the free energy. 
This term incorporates the volume of the prior over parameters, such as \frac{1}{2} \log |C_p| where C_p is the prior covariance, effectively favoring parsimonious models that explain the data without unnecessary parameters. In applications, this prevents overfitting in scenarios with sparse data or many competing hypotheses, ensuring selected models generalize beyond the observed data.
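
Turning log-evidences into Bayes factors and posterior model probabilities is a short computation. The free-energy values below are illustrative, and flat model priors are assumed:

```python
import numpy as np

log_evidence = np.array([-210.0, -213.0, -220.0])   # free energies F for 3 models

# Log Bayes factor comparing models 1 and 2; exp(3) ≈ 20 > 3 counts as substantial
log_bf_12 = log_evidence[0] - log_evidence[1]

# Posterior model probabilities under equal priors, via a numerically stable softmax
z = log_evidence - log_evidence.max()                # log-sum-exp trick
posterior = np.exp(z) / np.exp(z).sum()
```

Subtracting the maximum before exponentiating avoids underflow, which matters because real log-evidences are large negative numbers.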

Group-Level Inference

Group-level inference in dynamic causal modeling (DCM) addresses the need to generalize findings from individual subjects to populations, accounting for inter-subject variability through hierarchical Bayesian frameworks. Unlike single-subject analyses, which compute log-model evidences for each participant, group-level methods pool these evidences or posterior parameters to infer population-level effects, such as the prevalence of specific models or differences in parameters between clinical and control groups. This approach is essential for studying heterogeneous populations, enabling robust conclusions about neural mechanisms at the cohort level. Random-effects Bayesian model selection (RE-BMS) is a primary method for group-level model comparison in DCM, treating log-model evidences from individual subjects as random samples from a group distribution rather than assuming uniformity (as in fixed-effects BMS). In RE-BMS, a hierarchical model is fitted to the subjects' log-evidences using variational Bayes, estimating the posterior probability that a particular model is the most frequent in the population. Exceedance probabilities, derived from this posterior, quantify the likelihood that one model exceeds all others in prevalence across the group, providing a protected measure against multiple comparisons when evaluating families of models. For instance, RE-BMS has been applied to compare connectivity models in healthy versus clinical cohorts, revealing group-specific patterns without requiring identical model fits for every subject. Parametric empirical Bayes (PEB) complements RE-BMS by enabling hierarchical inference on model parameters rather than entire models, treating individual posterior means as observations in a second-level general linear model with group priors. PEB estimates group mean parameters and between-subject variances using empirical Bayes, allowing tests for differences in effective connectivity or modulatory strengths across populations via posterior probabilities or t-contrasts. 
This method is particularly suited for quantifying subtle group effects, such as altered synaptic gains, while incorporating uncertainty from first-level estimations. In clinical applications, such as schizophrenia research, group-level DCM inference has identified disrupted effective connectivity. For example, DCM analyses have revealed attenuated fronto-thalamic coupling in patients with delusions, with posterior probabilities exceeding 0.99 indicating significant group differences. Recent advancements as of 2025 include transformer-aided approaches for scalable group-level estimation in large-scale networks. These findings underscore DCM's utility in delineating disorder-specific network alterations at the population level.
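
Exceedance probabilities can be computed by Monte Carlo from a Dirichlet posterior over model frequencies. The Dirichlet counts below are invented; in real RE-BMS they are estimated from the subjects' log-evidences:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = np.array([12.0, 5.0, 3.0])          # posterior Dirichlet counts for 3 models
r = rng.dirichlet(alpha, size=100_000)       # sampled population model frequencies
winners = r.argmax(axis=1)                   # most prevalent model in each sample
xp = np.bincount(winners, minlength=3) / len(r)   # exceedance probabilities
```

Each exceedance probability is the fraction of sampled frequency vectors in which that model is the most prevalent, so the three values sum to one.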

Validation and Applications

Validation Techniques

Face validity in dynamic causal modeling (DCM) is established through simulations where known neuronal architectures and parameters are used to generate synthetic data, allowing assessment of whether DCM can recover the imposed connection strengths and modulation patterns. In early validation efforts, Bayesian estimation procedures successfully identified predefined bilinear modulations and intrinsic connections in three-region models, demonstrating robustness to added noise levels up to 2 units and temporal misalignments of ±1 second. These simulations confirm that DCM's estimation mechanism accurately detects what it is designed to estimate, providing a foundational check on the method's internal validity. Construct validity assesses DCM's alignment with independent anatomical and experimental evidence, such as diffusion-weighted imaging or targeted neural perturbations. Tractography-derived priors have been integrated into DCM to inform anatomically plausible connectivity graphs, enhancing model realism by constraining effective connectivity estimates to respect white-matter pathways observed in probabilistic tractography. For instance, comparisons with Volterra kernel analyses in attentional tasks have shown DCM to reliably capture backward connection modulations, consistent with established neurophysiological principles of hierarchical processing. Such convergences validate DCM's theoretical framework against complementary techniques, ensuring inferences reflect biologically plausible mechanisms. Predictive validity evaluates DCM's ability to forecast unobserved data, such as held-out responses in neuroimaging time series. Applied to fMRI data from repeated single-word reading tasks, DCM yielded stable estimates of forward connectivity hierarchies (e.g., from auditory to word-form areas), with predicted responses matching empirical signals at noise levels of 0.8–1%. Further support comes from cross-validation with invasive measures, where DCM predictions of synaptic gain changes aligned with microdialysis and electrophysiological recordings in animal models. 
This capacity to anticipate brain responses beyond fitted data underscores DCM's utility for hypothesis testing in effective connectivity. Post-2017 studies have emphasized sensitivity analyses to evaluate the influence of priors on inferences, particularly in Bayesian parameter estimation. In analyses of alpha power modulation using EEG, priors for intrinsic connectivity (e.g., log-normal distributions with mean 0 and variance 0.25) were perturbed numerically, revealing that local inhibitory parameters exerted the strongest effects on spectral features around 10 Hz. These assessments, averaging impacts across subjects by incrementing parameters (e.g., by e^{-6}), demonstrated the robustness of extrinsic versus intrinsic modulations, informing prior selection for reliable group-level inferences. Such techniques highlight how prior specifications shape posterior inferences without altering core model predictions.
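
A face-validity check of the kind described above can be sketched with a known two-region linear system: simulate data under known coupling, then test whether the coupling is recovered. This is a deliberately simplified stand-in for full DCM inversion, using least squares on numerical derivatives rather than variational Bayes:

```python
import numpy as np

rng = np.random.default_rng(2)
A_true = np.array([[-0.5, 0.0],
                   [ 0.4, -0.5]])           # known coupling: region 1 drives region 2
dt, T = 0.01, 20_000
x = np.zeros((T, 2))
x[0] = [1.0, 0.0]
for t in range(T - 1):                       # Euler-Maruyama integration with noise
    x[t + 1] = x[t] + dt * (A_true @ x[t]) + np.sqrt(dt) * 0.05 * rng.standard_normal(2)

dxdt = np.diff(x, axis=0) / dt               # numerical derivatives of the states
A_hat = np.linalg.lstsq(x[:-1], dxdt, rcond=None)[0].T   # least-squares recovery
```

If the recovered matrix reproduces the imposed structure (a positive 1→2 coupling and negative self-connections), the estimation scheme passes this minimal face-validity test.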

Key Applications in Neuroscience

Dynamic causal modeling (DCM) has been extensively applied to investigate effective connectivity in the visual system, particularly to infer the directionality of forward and backward connections during perceptual tasks. In studies of visual perception, DCM analyses of fMRI data have revealed that forward connections from primary visual cortex (V1) to higher areas like V4 are strengthened during stimulus-driven processing, while feedback from lateral occipital cortex to V1/V2 modulates early sensory processing. For instance, during silent reading tasks, DCM demonstrated top-down predictive signals from higher cortical areas suppressing sensory responses in lower sensory areas, supporting hierarchical models of predictive coding. Similarly, in electrocorticographic recordings, asymmetries in forward and backward connections were quantified, showing faster forward propagation for exogenous stimuli and slower backward propagation for endogenous attention. In cognitive neuroscience, DCM has elucidated effective connectivity modulation within prefrontal networks during cognitive control, highlighting dynamic interactions that underpin adaptive behavior. Analyses of lateral prefrontal cortex (LPFC) activity during cognitive control tasks using continuous theta-burst stimulation (cTBS) combined with fMRI showed that transient disruptions in LPFC impair downstream signals to motor areas, providing causal evidence for its role in suppressing impulsive responses. In value-based decision-making, DCM revealed context-dependent strengthening of frontostriatal connections, where prefrontal regions exert top-down influence on striatal valuation processes to resolve conflicts between options. Recent extensions in 2025 have further demonstrated frontostriatal dynamics in cognitive control, with DCM identifying task-specific modulations that enhance flexibility in uncertain environments, including predictions of individual differences in response speed and age from task-evoked effective connectivity. Clinically, DCM has uncovered altered connectivity patterns in neurodegenerative and psychiatric disorders, offering insights into pathological mechanisms. 
In Alzheimer's disease, DCM of fMRI data from patients at different progression stages showed reduced effective connectivity from the hippocampus to cortical regions, reflecting impaired memory retrieval and network disruption; as of 2025, longitudinal magnetoencephalography (MEG) analyses have further revealed neurophysiological progression through disrupted oscillatory coupling. For schizophrenia, spectral DCM analyses of resting-state networks indicated dysconnections in frontotemporal circuits, with weakened feedback inhibition contributing to cognitive deficits and auditory hallucinations. These findings align with broader evidence of hippocampal-prefrontal decoupling in early psychosis, where DCM quantified diminished top-down regulation. Recent applications in 2025 have expanded to social neuroscience, particularly examining cerebello-cerebral interactions during action observation. A large-scale study involving 99 participants across four datasets revealed enhanced effective connectivity from Crus II in the cerebellum to prefrontal areas during social navigation tasks, with modulations triggered by social norm violations, underscoring the cerebellum's role in mentalizing. Extensions of active vision frameworks using DCM have also progressed, integrating real-time fMRI to model predictive eye movements, where feedback loops between parietal and visual cortices adapt to dynamic scenes, building on earlier empirical validations.

Limitations and Future Directions

Methodological Constraints

Dynamic causal modeling (DCM) is inherently hypothesis-driven, requiring researchers to specify models a priori based on theoretical assumptions about neural interactions and experimental manipulations, which precludes its use for purely exploratory analyses of brain connectivity. This approach ensures mechanistic interpretability but limits DCM to testing predefined causal hypotheses rather than discovering novel network structures from data. The framework relies on bilinear approximations to model neuronal dynamics, which linearize nonlinear interactions into extrinsic inputs, intrinsic couplings, and modulatory effects, potentially failing to capture highly nonlinear or multistable cortical processes. Such approximations assume instantaneous and deterministic interactions, oversimplifying dynamics such as those involving recurrent loops or stochastic neural states. For instance, in multistable systems, DCM may inadequately evaluate switching between stable states due to these linear constraints. DCM exhibits sensitivity to the selection of regions of interest (ROIs), where outcomes depend on how ROIs are defined—typically using eigenvariates from activation peaks—which can introduce variability if spatial precision or homogeneity assumptions are violated. Parameter estimation is further influenced by prior distributions, such as Gaussian shrinkage priors on connection strengths that enforce sparsity and stability but may bias results if priors do not align with the true underlying connectivity. Scalability poses a significant constraint for standard DCM, as the number of possible connections grows quadratically with the number of regions, leading to exponential increases in computational demands and parameter estimation challenges for networks exceeding 10 regions. Pre-2025 critiques highlighted this as a barrier to whole-brain analyses, restricting applications to small-scale circuits despite the method's flexibility in connectivity specification. 
Recent extensions have begun addressing these issues through scalable approximations, though core methodological limits persist.
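
The bilinear state equation underlying these constraints, \dot{x} = (A + u_2 B)x + C u_1, can be sketched directly. The matrices below are invented for a toy two-region example, integrated with a simple Euler scheme:

```python
import numpy as np

A = np.array([[-1.0, 0.0],
              [ 0.2, -1.0]])      # intrinsic coupling (region 1 drives region 2)
B = np.array([[0.0, 0.0],
              [0.5, 0.0]])        # modulation of the 1 -> 2 connection by input u2
C = np.array([1.0, 0.0])          # driving input u1 enters region 1 only

def simulate(u1, u2, dt=0.01, T=1000):
    """Euler-integrate dx/dt = (A + u2*B) x + C*u1 to an approximate steady state."""
    x = np.zeros(2)
    for _ in range(T):
        x = x + dt * ((A + u2 * B) @ x + C * u1)
    return x

x_off = simulate(u1=1.0, u2=0.0)   # baseline coupling: x2 settles near 0.2
x_on  = simulate(u1=1.0, u2=1.0)   # modulatory input strengthens 1 -> 2: x2 near 0.7
```

The modulatory input changes the effective 1→2 coupling from 0.2 to 0.7, which is exactly the context-dependent change the \mathbf{B} matrix is meant to capture; the linearity of this form is also what the multistability critique above targets.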

Emerging Challenges and Extensions

One prominent challenge in advancing dynamic causal modeling (DCM) lies in integrating multi-modal data, such as combining electrophysiological recordings, voltage-sensitive dye imaging, and blood-oxygen-level-dependent (BOLD) signals, which differ in spatial coverage, resolution, and temporal sampling. This integration is complicated by partial observations and heterogeneous representations, often leading to inaccurate inferences of neural circuits without iterative refinement of priors across scales. To address this, a multi-modal multi-scale DCM (mms-DCM) framework has been proposed, employing shared neural state models with modality-specific observation equations and reciprocal refinement of priors to enhance connectivity estimation accuracy in virtual experiments. Another key hurdle is handling non-stationarity in signals, where time-varying parameters and degeneracy in biological systems hinder model convergence and reliable inference. Non-stationarity arises from fluctuating neural dynamics, complicating the assumption of stable parameters in traditional formulations. Recent approaches mitigate this by modeling time-varying effective connectivity through parametric expansions of the coupling parameters, enabling the capture of slow fluctuations in neuroimaging data without assuming stationarity. Extensions to DCM have incorporated probabilistic programming languages (PPLs) such as PyMC and NumPyro to facilitate Bayesian inference on nonlinear ordinary differential equations describing brain dynamics. These PPLs leverage gradient-based sampling (e.g., NUTS) and variational methods (e.g., ADVI) for faster posterior estimation, achieving up to 50% effective sample sizes and reducing computation times to minutes, while addressing multi-modality via hyperparameter tuning and chain stacking. Deep generative models have further extended DCM by enabling hypothesis generation for dynamic causal graphs, modeling time-varying interactions as superpositions of static graphs to handle non-stationarity and nonlinearity beyond linear assumptions. 
This approach improves F1-scores by 22-28% over baselines in synthetic and real data, uncovering state-dependent causal relationships. For whole-brain applications, multi-scale parcellation techniques have been developed using top-down Bayesian model comparison for hierarchical partitioning. Innovations include naïve Bayesian model reduction for scalability to thousands of regions, revealing modular structures invariant across scales in empirical fMRI data. Looking ahead, DCM holds promise for neurofeedback applications, where connectivity-based feedback could enable volitional control of brain networks during fMRI sessions, building on foundational implementations to support therapeutic interventions in disorders like anxiety. Recent advances in real-time fMRI neurofeedback underscore its potential for self-regulation, with ongoing efforts to integrate DCM for more precise, causally targeted training, such as in co-adaptive EEG-fMRI fusion protocols as of May 2025.
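
The idea of modeling time-varying interactions as superpositions of static graphs can be shown schematically. The matrices and weight functions below are invented for illustration and do not follow any published model's parameterization:

```python
import numpy as np

A1 = np.array([[-1.0, 0.0], [0.6, -1.0]])   # "state 1" graph: 1 -> 2 coupling
A2 = np.array([[-1.0, 0.3], [0.0, -1.0]])   # "state 2" graph: 2 -> 1 coupling
t = np.linspace(0, 1, 101)
w1 = 0.5 * (1 + np.cos(2 * np.pi * t))      # slowly varying mixture weights
w2 = 1 - w1

# A(t) = w1(t) A1 + w2(t) A2 as a (time, 2, 2) array
A_t = w1[:, None, None] * A1 + w2[:, None, None] * A2
```

At the start the effective graph equals A1 and midway it equals A2, so the directed structure itself changes smoothly over time rather than the parameters being fixed.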

Software Implementations

SPM Integration

Dynamic Causal Modeling (DCM) is primarily implemented within the Statistical Parametric Mapping (SPM) software suite, with full-featured support from SPM12 onward, providing a comprehensive framework for Bayesian inference on effective connectivity in neuroimaging data. This integration enables users to model directed interactions among brain regions using generative models that link neuronal dynamics to observed signals like fMRI or EEG/MEG time series. Key features include a graphical user interface (GUI) for specifying model structures, such as defining regions of interest (ROIs), endogenous and exogenous connections, and input perturbations, which streamlines the setup process without requiring extensive scripting. Parameter estimation employs variational Bayes (VB) methods to approximate posterior distributions efficiently, balancing model fit and complexity through free-energy minimization. For model comparison, Bayesian model selection (BMS) and parametric empirical Bayes (PEB) frameworks facilitate hierarchical inference, allowing evaluation of competing hypotheses at individual and group levels. These tools, introduced in SPM8 and refined in SPM12, support both deterministic and stochastic formulations of DCM. The workflow for DCM in SPM is tightly integrated with standard preprocessing and general linear model (GLM) pipelines, ensuring seamless analysis from raw data to connectivity estimates. For fMRI, users first perform spatial realignment, slice-timing correction, normalization to a standard space, and smoothing, followed by GLM specification to identify task-relevant activations. Time series are then extracted from volumes of interest (VOIs) centered on GLM-derived coordinates, formatted as .mat files for DCM input. Similar pipelines apply to EEG/MEG data, involving source reconstruction and sensor-level preprocessing before model fitting. This end-to-end integration minimizes data handling errors and leverages SPM's batch system for automation across subjects. 
Recent versions of SPM, including SPM25 (released in 2025), support resting-state DCM (rsDCM), also known as spectral DCM (spDCM), for modeling intrinsic fluctuations using stochastic differential equations and cross-spectral densities in the low-frequency range (0.01–0.1 Hz), integrated with SPM's fMRI preprocessing for group-level effective connectivity analyses without task inputs. Canonical microcircuit (CMC) models, refinements of Jansen-Rit neural mass models with four subpopulations (spiny stellate cells, superficial and deep pyramidal cells, and inhibitory interneurons), are available for cross-spectral density analyses in M/EEG, enabling detailed simulations of laminar-specific interactions. Bayesian inversion via variational Laplace approximations supports scalability for large datasets. Tutorials and community resources for SPM DCM are extensively provided by the Wellcome Centre for Human Neuroimaging (FIL) at UCL, including step-by-step guides for first- and second-level analyses using exemplar datasets like the "attention to visual motion" fMRI data. These materials cover GUI-based model specification, estimation, and inference, with scripts available in the SPM repository and annual courses at FIL. Community support occurs via the SPM mailing list and workshops, fostering adoption in neuroimaging research.

Alternative Toolboxes

The TAPAS toolbox, developed by the Translational Neuromodeling Unit at ETH Zurich, provides specialized tools for dynamic causal modeling (DCM) with a particular emphasis on resting-state functional magnetic resonance imaging (fMRI) data through its regression DCM (rDCM) implementation. This approach enables efficient inference on effective connectivity by recasting model inversion as a Bayesian linear regression on regional signals, incorporating advanced priors such as hierarchical shrinkage to handle inter-regional variability and improve model stability in low-signal scenarios. rDCM supports massively parallel computations for group-level analyses, making it suitable for large datasets while maintaining Bayesian model inversion via variational Laplace methods. The VBA toolbox offers a general framework for variational Bayesian inference applicable to nonlinear models, including customizable DCM implementations for neuroimaging and behavioral data. Hosted on GitHub, it allows users to define model structures through modular functions for specification, inversion, and posterior estimation, emphasizing flexibility in prior selection and observation models without being tied to specific imaging modalities. This generality facilitates extensions to non-standard DCM variants, such as those incorporating stochastic differential equations for dynamic noise processes. In 2025, integrations of DCM with probabilistic programming languages (PPLs) have emerged, providing wrappers for frameworks such as PyMC and NumPyro to enable Markov chain Monte Carlo sampling and more flexible hierarchical modeling. These implementations, exemplified by open-source repositories for DCM, allow declarative specification of causal models, supporting advanced features like automatic differentiation for gradient-based inference and posterior predictive checks. Benchmarks indicate that these PPL implementations achieve sampling efficiency comparable to traditional variational methods for standard DCMs, with NumPyro offering advantages in scalability for high-dimensional connectomes due to its JAX backend. 
Compared to SPM's workflows, which are tailored to neuroimaging-specific pipelines, VBA prioritizes algorithmic flexibility for bespoke model adaptations, while PPL wrappers enhance accessibility for interdisciplinary users beyond traditional neuroimaging environments. rDCM, in contrast, bridges neuroimaging specificity and computational efficiency for resting-state analyses, often outperforming general tools in computational speed for parallel fMRI inversions.
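
The computational core that rDCM-style approaches exploit is conjugate Bayesian linear regression, which has a closed-form posterior. The sketch below is generic (invented regressors, priors, and noise precision), not the TAPAS implementation:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 3
X = rng.standard_normal((n, p))              # regressors (e.g., regional signals)
beta_true = np.array([0.5, 0.0, -0.3])       # "connection strengths" to recover
y = X @ beta_true + 0.1 * rng.standard_normal(n)

tau, noise_prec = 1.0, 100.0                 # prior precision, known noise precision
S = np.linalg.inv(noise_prec * X.T @ X + tau * np.eye(p))  # posterior covariance
m = noise_prec * S @ X.T @ y                 # posterior mean, in closed form
```

Because the posterior is available in one matrix solve rather than an iterative optimization, many such regressions can be run in parallel, which is the source of the speed advantage described above.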

References

  1. [1]
  2. [2]
    Ten simple rules for dynamic causal modeling - PMC - NIH
    Dynamic causal modeling (DCM) is a generic Bayesian framework for inferring hidden neuronal states from measurements of brain activity.
  3. [3]
    Dynamic causal modeling - Scholarpedia
    Feb 15, 2010 · The aim of dynamic causal modeling (DCM) is to infer the causal architecture of coupled or distributed dynamical systems.Motivation · Model evidence and selection · Applications: fMRI · DCM developments
  4. [4]
    Dynamic causal modeling for EEG and MEG - PMC - PubMed Central
    We present a review of dynamic causal modeling (DCM) for magneto‐ and electroencephalography (M/EEG) data. DCM is based on a spatiotemporal model.
  5. [5]
    None
    Summary of each segment:
  6. [6]
    Dynamic causal modelling - PubMed
    In this paper we present an approach to the identification of nonlinear input-state-output systems. By using a bilinear approximation to the dynamics of ...
  7. [7]
    Dynamic causal modelling - ScienceDirect
    In this paper we present an approach to the identification of nonlinear input–state–output systems. By using a bilinear approximation to the dynamics of ...
  8. [8]
    Dynamic causal modeling of evoked responses in EEG and MEG
    Dynamic causal modeling of evoked responses in EEG and MEG. Neuroimage. 2006 May 1;30(4):1255-72. doi: 10.1016/j.neuroimage.2005.10.045. Epub 2006 Feb 9.Missing: 2007 | Show results with:2007
  9. [9]
    Dynamic causal modelling of evoked responses in EEG/MEG with ...
    Dynamical causal modeling (DCM) of evoked responses is a new approach to making inferences about connectivity changes in hierarchical networks.Missing: 2007 | Show results with:2007
  10. [10]
    Nonlinear dynamic causal models for fMRI - PubMed
    Aug 15, 2008 · This paper presents a nonlinear extension of DCM that models such processes (to second order) at the neuronal population level.Missing: 2007 | Show results with:2007
  11. [11]
    Dynamic causal models of steady-state responses - PubMed Central
    In this paper, we describe a dynamic causal model (DCM) of steady-state responses ... Friston K.J. Dynamic causal modeling of evoked responses in EEG and MEG.
  12. [12]
    A DCM for resting state fMRI - NeuroImage - ScienceDirect.com
    Jul 1, 2014 · ... Friston ... Identifying the default mode network structure using dynamic causal modeling on resting-state functional magnetic resonance imaging.
  13. [13]
    Dynamic causal modelling in probabilistic programming languages
    Jun 4, 2025 · 2025. Supplementary material from: Dynamic Causal Modeling in Probabilistic Programming Languages. Figshare. ( 10.6084/m9.figshare.c.7803859) ...
  14. [14]
    Multi-Scale Parcellation of Dynamic Causal Models of the Brain
    Jun 15, 2025 · Multi-Scale Parcellation of Dynamic Causal Models ... To facilitate the computations, recent developments in linear dynamic causal modeling ...
  15. [15]
    SPM—30 years and beyond - PMC - NIH
    Sep 10, 2025 · This paper marks the 30th anniversary of the Statistical Parametric Mapping (SPM) software ... SPM by using dynamic causal modeling (DCM). Second, ...
  16. [16]
    Nonlinear Dynamic Causal Models for fMRI - PMC - PubMed Central
    This paper presents a nonlinear extension of DCM that models such processes (to second order) at the neuronal population level.
  17. [17]
    Neural masses and fields in dynamic causal modeling - Frontiers
    ... steady state responses. In the second section ... Citation: Moran R, Pinotsis DA and Friston K (2013) Neural masses and fields in dynamic causal modeling.
  18. [18]
    Ten simple rules for dynamic causal modeling - ScienceDirect
    Feb 15, 2010 · Dynamic causal modeling (DCM) is a generic Bayesian framework for inferring hidden neuronal states from measurements of brain activity.
  19. [19]
  20. [20]
    Dynamic causal modeling of the neural network - Nature
    Jul 25, 2025 · Regions of interest for the DCM analysis. The ROI mask for the bilateral AMY was created using the automated anatomical labelling atlas 3 (AAL3) ...
  21. [21]
    Comparing Families of Dynamic Causal Models - Research journals
    This is similar to factorial experimental designs in psychology [36] where data from all cells are used to assess the strength of main effects and interactions.<|control11|><|separator|>
  22. [22]
    Dynamic causal modeling of evoked responses in EEG and MEG
    May 1, 2006 · In this paper, we present a new approach to modeling event-related responses measured with EEG or MEG. This approach uses a biologically informed model.
  23. [23]
    Dynamic causal modelling for EEG and MEG
    Apr 23, 2008 · DCMs for M/EEG adopt a neural mass model (David and Friston 2003) to explain source activity in terms of the ensemble dynamics of interacting ...
  24. [24]
    Dynamic causal modelling revisited - ScienceDirect.com
    Oct 1, 2019 · This paper revisits the dynamic causal modelling of fMRI timeseries by replacing the usual (Taylor) approximation to neuronal dynamics with a neural mass model.
  25. [25]
    DCM for evoked responses - SPM Documentation
    The prior location for each dipole can be found either by using available anatomical knowledge or by relying on source reconstructions of comparable studies.
  26. [26]
    [PDF] Dynamic Causal Modeling of Evoked Responses in EEG and MEG
    The estimation procedure employed in DCM is described in (Friston, 2002). The posterior moments. (conditional mean η and covariance Σ ) are updated ...Missing: microcircuit | Show results with:microcircuit
  27. [27]
  28. [28]
    Bayesian Model Selection for Group Studies - PMC - PubMed Central
    In this paper, we compare the GBF with two random effects methods for BMS at the between-subject or group level.
  29. [29]
    Bayesian model reduction and empirical Bayes for group (DCM ...
    This technical note describes some Bayesian procedures for the analysis of group studies that use nonlinear models at the first (within-subject) level.
  30. [30]
    Empirical Bayes for DCM: A Group Inversion Scheme - Frontiers
    We present an empirical Bayesian scheme for group or hierarchical models, in the setting of dynamic causal modeling (DCM).
  31. [31]
    Bayesian Model Selection for Group Studies - ScienceDirect
    Bayesian model selection (BMS) is a powerful method for comparing competing hypotheses about the mechanisms that generated observed data.
  32. [32]
    Empirical Bayes for DCM: A Group Inversion Scheme - PMC - NIH
    Nov 27, 2015 · Specifically, we present an empirical Bayesian scheme for group or hierarchical models, in the setting of dynamic causal modeling (DCM). Recent ...
  33. [33]
    Dynamic causal modeling analysis reveals the modulation of motor ...
    Mar 4, 2023 · We conducted dynamic causal modeling (DCM) analysis to explore the cross-modal interactions among the left pSTG, left precentral gyrus (PrG), ...
  34. [34]
    Aberrant effective connectivity is associated with positive symptoms ...
    Schizophrenia is a neurodevelopmental psychiatric disorder thought to result from synaptic dysfunction that affects distributed brain connectivity, ...
  35. [35]
    Fronto-thalamic structural and effective connectivity and delusions in ...
    Apr 24, 2020 · Anterior cingulate cortex-related connectivity in first-episode schizophrenia: A spectral dynamic causal modeling study with functional magnetic ...Background · Methods · Discussion<|control11|><|separator|>
  36. [36]
    Tractography-based priors for dynamic causal models - PMC
    Here, we use diffusion weighted imaging and probabilistic tractography to specify anatomically informed priors for dynamic causal models (DCMs) of fMRI data.
  37. [37]
    Dynamic causal modelling shows a prominent role of local inhibition ...
    Dec 27, 2022 · The generative model in DCM combines a biophysical and an ... neural state equations. This re-parametrization allows for sign ...
  38. [38]
  39. [39]
    Silent Expectations: Dynamic Causal Modeling of Cortical Prediction ...
    Aug 10, 2016 · We use this factorial design to test a common set of computational models representing hierarchically organized neural networks for auditory ...
  40. [40]
    Feedback from lateral occipital cortex to V1/V2 triggers object ...
    Aug 21, 2021 · Dynamic causal modeling (DCM) is a technique that provides a validated estimate of effective connectivity, reflecting the directional coupling ...
  41. [41]
    A DCM study of spectral asymmetries in feedforward and feedback ...
    This canonical microcircuit model has already been used to model EEG and MEG evoked potentials (Brown and Friston, 2012, Brown and Friston, 2013, Fogelson et al ...
  42. [42]
    Causal evidence for lateral prefrontal cortex dynamics supporting ...
    Sep 13, 2017 · These data provide causal evidence for LPFC dynamics supporting cognitive control and demonstrate the utility of combining DCM with causal manipulations.Task, Ctbs Targets, And... · Results · Materials And Methods
  43. [43]
    Causal evidence for lateral prefrontal cortex dynamics supporting ...
    These data provide causal evidence for LPFC dynamics supporting cognitive control and demonstrate the utility of combining DCM with causal manipulations to test ...Results · Network Dynamics Revisited · Dynamic Causal Modeling
  44. [44]
    Frontostriatal dynamics of cognitive control - bioRxiv
    Mar 10, 2025 · Our findings reveal a specific neural mechanism that may explain how frontostriatal circuits implement cognitive control and provide a novel ...Results · Discussion · Methods
  45. [45]
    Estimating effective connectivity in Alzheimer's disease progression
    Dec 14, 2022 · This study used Dynamic Causal Modeling (DCM) method to assess effective connectivity (EC) and investigate the changes that accompany AD progression.Abstract · Introduction · Materials and methods · Alzheimer's Disease...
  46. [46]
    Dysconnection and cognition in schizophrenia - Wiley Online Library
    Feb 28, 2023 · Schizophrenia (SZ) is a debilitating brain disorder characterized by episodes of psychosis; common symptoms include delusions, hallucinations, ...
  47. [47]
    Altered activation and connectivity in a hippocampal–basal ganglia ...
    Oct 3, 2017 · We then investigated whether effective connectivity within this network is perturbed in UHR subjects, using dynamic causal modelling (DCM).
  48. [48]
    Crus control: effective cerebello-cerebral connectivity during social ...
    Feb 15, 2025 · This dynamic causal modeling (DCM) analysis, comprising 99 participants from 4 studies, investigated effective neuronal connectivity during social action ...
  49. [49]
    Dynamic Causal Modelling of Active Vision - Journal of Neuroscience
    Aug 7, 2019 · This work draws from recent theoretical accounts of active vision and provides empirical evidence for changes in synaptic efficacy consistent with these ...
  50. [50]
    [PDF] Dynamic Causal Modelling - FIL | UCL
    16:465-483. Friston KJ, Harrison L and Penny W. (2002) Dynamic Causal Modelling. NeuroImage. under revision. Gerstein GL and Perkel DH. (1969) Simultaneously ...
  51. [51]
    Dynamic causal models of neural system dynamics: current state ...
    Due to the Laplace approximation, the posterior distributions are defined by their posterior mode or maximum a posteriori (MAP) estimate and their posterior ...
  52. [52]
  53. [53]
  54. [54]
  55. [55]
    Dynamic Causal Models of Time-Varying Connectivity - ResearchGate
    Nov 25, 2024 · This paper introduces a novel approach for modelling time-varying connectivity in neuroimaging data, focusing on the slow fluctuations in ...
  56. [56]
    Generating Hypotheses of Dynamic Causal Graphs in Neuroscience
    May 1, 2025 · "Ten simple rules for dynamic causal modeling." Neuroimage 49.4 ... Summary: This work introduces a deep generative factor model that ...
  57. [57]
  58. [58]
    Dynamic Causal Modeling for fMRI - SPM Documentation
    Dynamic Causal Modelling (DCM) is a method for making inferences about neural processes that underlie measured time series, e.g. fMRI data.
  59. [59]
    Dynamic Causal Modelling for M/EEG - SPM Documentation
    The two key methods contributions can be found in (David et al. 2006) and (Kiebel, David, and Friston 2006). Two other contributions using the model for testing ...
  60. [60]
    Dynamic Causal Modelling for resting state fMRI - SPM Documentation
    This chapter provides an extension to the framework of Dynamic Causal Modelling (DCM) for modelling intrinsic dynamics of a resting state network.
  61. [61]
    DCM for cross-spectral densities - SPM Documentation
    The CMC-type neural mass model comprises four subpopulations. It is a refinement of the Jansen and Rit convolution models that explicitly accommodates the ...
  62. [62]
    Overview of DCM - SPM Documentation - FIL | UCL
    DCM is used for investigating effective connectivity - the directed effects of neural populations on one another.
  63. [63]
    SPM Tutorials - SPM Documentation - FIL | UCL
    The tutorials are organised by data modality and illustrate how to use SPM to analyse exemplar data sets.
  64. [64]
  65. [65]
    TAPAS: An Open-Source Software Package for Translational ...
    The massively parallel dynamic causal modeling (mpdcm) toolbox (152) implemented in TAPAS renders sampling-based model inversion in the context of DCM for fMRI ...
  66. [66]
    Regression dynamic causal modeling for resting‐state fMRI - PMC
    ... dynamic causal modeling (DCM; Friston, Harrison, & Penny, 2003), and variants of DCM have been established to model the resting state, including stochastic ...
  67. [67]
    mpdcm: A toolbox for massively parallel dynamic causal modeling
    Jan 15, 2016 · The mpdcm toolbox is available under the GPL license as part of the open source TAPAS software at www.translationalneuromodeling.org/software .
  68. [68]
    VBA: A Probabilistic Treatment of Nonlinear Models for ... - NIH
    Jan 23, 2014 · In this paper, we have exposed the main algorithmic components of the VBA toolbox, which implements a probabilistic treatment of nonlinear ...
  69. [69]
    Dynamic Causal Modeling
    Matrices A, B, C and D correspond to connection strengths, input modulations of connections, input-state couplings and state modulations of connections, ...
  70. [70]
    MBB-team/VBA-toolbox - GitHub
    The toolbox can be used to simulate data, perform statistical data analysis, optimize the experimental design, etc.
  71. [71]
    Dynamic causal modelling in probabilistic programming languages
    Jun 4, 2025 · Dynamic causal modelling (DCM) presents a statistical framework that embraces causal relationships among brain regions and their responses to experimental ...
  72. [72]
    ins-amu/DCM_PPLs: Implementing a DCM model of ERP in PPLs.
    The aim is to provide inference services for Dynamical Causal Modeling of Event-Related Potentials (ERPs) measured with EEG/MEG, using SATO Probabilistic ...
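Entry [69] describes the roles of the A, B, C and D matrices. For orientation, the standard form in which these matrices enter the DCM neuronal state equation (the bilinear model of Friston et al. 2003, with the nonlinear D-term extension) is:

\[
\dot{x} = \left( A + \sum_{j=1}^{m} u_j B^{(j)} + \sum_{k=1}^{n} x_k D^{(k)} \right) x + C u
\]

Here \(x\) is the vector of \(n\) regional neuronal states and \(u\) the vector of \(m\) experimental inputs: \(A\) encodes intrinsic (baseline) coupling between regions, each \(B^{(j)}\) the modulation of those couplings by input \(u_j\), \(C\) the direct driving effect of inputs on states, and each \(D^{(k)}\) the modulation of couplings by state \(x_k\); the purely bilinear model is recovered by setting all \(D^{(k)} = 0\).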