DCM for complex-valued data: cross-spectra, coherence and phase-delays.  
MedLine Citation:

PMID: 21820062 Owner: NLM Status: MEDLINE 
Abstract/OtherAbstract:

This note describes an extension of Bayesian model inversion procedures for the Dynamic Causal Modeling (DCM) of complex-valued data. Modeling complex data can be particularly useful in the analysis of multivariate ergodic (stationary) time-series. We illustrate this with a generalization of DCM for steady-state responses that models both the real and imaginary parts of sample cross-spectra. DCM allows one to infer the underlying biophysical parameters generating data (like synaptic time constants, connection strengths and conduction delays). Because transfer functions and complex cross-spectra can be generated from these parameters, one can also describe the implicit system architecture in terms of conventional (linear systems) measures, like coherence, phase-delay or cross-correlation functions. Crucially, these measures can be derived in both sensor- and source-space. In other words, one can examine the cross-correlation or phase-delay functions between hidden neuronal sources using non-invasive data and relate these functions to synaptic parameters and neuronal conduction delays. We illustrate these points using local field potential recordings from the subthalamic nucleus and globus pallidus, with a special focus on the relationship between conduction delays and the ensuing phase relationships and cross-correlation time lags between population activities. 
Authors:

K J Friston; A Bastos; V Litvak; K E Stephan; P Fries; R J Moran 
Publication Detail:

Type: Journal Article; Research Support, Non-U.S. Gov't Date: 2011-07-28 
Journal Detail:

Title: NeuroImage Volume: 59 ISSN: 1095-9572 ISO Abbreviation: Neuroimage Publication Date: 2012 Jan 
Date Detail:

Created Date: 2011-10-17 Completed Date: 2012-02-13 Revised Date: 2014-02-24 
Medline Journal Info:

Nlm Unique ID: 9215515 Medline TA: Neuroimage Country: United States 
Other Details:

Languages: eng Pagination: 439-55 Citation Subset: IM 
Copyright Information:

Copyright © 2011 Elsevier Inc. All rights reserved. 
MeSH Terms  
Descriptor/Qualifier:

Algorithms* Bayes Theorem Brain / physiology* Brain Mapping / methods* Humans Models, Neurological* Models, Theoretical* 
Grant Support  
ID/Acronym/Agency:

088130//Wellcome Trust; 091593//Wellcome Trust; //Wellcome Trust 
Comments/Corrections 
Full Text  
Journal Information Journal ID (nlm-ta): Neuroimage ISSN: 1053-8119 ISSN: 1095-9572 Publisher: Academic Press 
Article Information © 2012 Elsevier Inc. Received: 10 January 2011; Revised: 13 July 2011; Accepted: 16 July 2011. Print publication date: 2 January 2012. Volume: 59 Issue: 1 First Page: 439 Last Page: 455 PMC ID: 3200431 PubMed Id: 21820062 Publisher Id: YNIMG8525 DOI: 10.1016/j.neuroimage.2011.07.048 
DCM for complex-valued data: Cross-spectra, coherence and phase-delays  
K.J. Fristona  
A. Bastosbc  
V. Litvaka  
K.E. Stephanad  
P. Friesc  
R.J. Morana⁎  Email: r.moran@fil.ion.ucl.ac.uk 
aThe Wellcome Trust Centre for Neuroimaging, University College London, Queen Square, London WC1N 3BG, UK 

bCenter for Neuroscience and Center for Mind and Brain, University of California, Davis, Davis, CA 95618, USA 

cErnst Strüngmann Institute in Cooperation with Max Planck Society, Deutschordenstraße 46, 60528 Frankfurt, Germany 

dLaboratory for Social and Neural Systems Research, Dept. of Economics, University of Zurich, Switzerland 

⁎Corresponding author at: The Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, 12 Queen Square, London, WC1N 3BG, UK. r.moran@fil.ion.ucl.ac.uk 
This technical note describes a dynamic causal or generative model for time-series, under ergodic assumptions. It is based on a linearization of mean-field models of coupled dynamical systems; in our case, neuronal subpopulations. Under the assumption that the system is driven by exogenous fluctuations with known (or parametric) spectral densities and uniform phase-distributions, it is possible to predict the coherence and phase-differences observed among system responses; in our case, electrophysiological measurements. This enables the model to be optimized using empirical measures of (complex) cross-spectral densities and thereby access hidden parameters governing the data-generating process (e.g., coupling parameters and rate constants). We validate this model using simulations and illustrate its application using real local field potential (LFP) data.
The contributions of this work are threefold. First, it generalizes the variational Bayesian techniques (Variational Laplace: ^{Friston et al., 2007}) used to invert models of empirical data, so that they can be applied to complex data. This generalization can be applied in any setting that uses variational Laplace schemes to select and optimize models; it is illustrated here in the context of Dynamic Causal Modeling of steady-state responses in electrophysiology. Second, we use the generalization to show how conventional time-series measures of activity and coupling in the frequency domain can be derived from Dynamic Causal Models that have a biophysically plausible form. Finally, we show how this enables the computation of conventional measures, such as coherence and phase-delay functions, between hidden states; in other words, between sources as opposed to sensors.
Current applications of Dynamic Causal Modeling have been used to explain real-valued data features, including evoked transients, induced responses and power spectra (^{Kiebel et al., 2008a,b}), using biologically informed mean-field models of coupled dynamical systems. The ability to model complex-valued data features offers two key advantages. First, while current DCMs can estimate conduction delays, the estimates do not have access to phase information. In principle, models that can predict or generate complex data enable the phase relationships among observed responses to inform and constrain estimates of model parameters, like conduction delays. More importantly, the extension to complex-valued data bridges the technical divide between model-based and model-free analyses (^{Brown et al., 2004; Kay and Marple, 1981}), as we hope to show.
The generalization to complex data was motivated by DCM for steady-state responses, as applied to electrophysiological time-series acquired under particular brain states (^{Moran et al., 2007, 2008}). Our focus here is on the potential importance of complex-valued data features and the implications for the inversion or optimization of models of those features. We try to illustrate the potential of this scheme by looking at how precisely axonal conduction delays can be estimated when fitting complex cross-spectra, relative to real-valued cross-spectra. More generally, we anticipate that this DCM will provide a useful link between the generative modeling of biological time-series and the conventional (linear systems) characterizations (e.g. ^{Kay and Marple, 1981}) that predominate in electrophysiology. This link rests upon the fact that conventional measures (e.g., coherence, phase-delay and autocorrelation functions) are caused by neuronal circuits with particular biophysical parameters (e.g., synaptic efficacy, time constants and conduction delays). This means that biophysical parameter estimates can be used to create conditional estimates of coherence and cross-correlation functions. In turn, this means that it is possible to infer which biophysical parameters are responsible for observed coherence or phase-delays. Conceptually, the difference between DCM and conventional measures of coupling (i.e., functional connectivity) lies in the fact that DCM appeals to an explicit generative or forward model of how data features are caused (i.e., effective connectivity). In this instance, the data features are the spectral behavior of observed time-series, which is usually the end point of conventional analyses. The advantage of having an underlying generative model is that one can estimate the spectral behavior and relationships, not just among observed sensors or data channels, but between the neuronal sources generating those data. Furthermore, one can map quantitatively between the underlying biophysical parameters and spectral summaries. We will illustrate these points using simulated and real data.
Dynamic Causal Modeling for steady-state responses has been used to make inferences about hidden neuronal states and parameters using both invasive and non-invasive data. There has been a considerable effort to validate this approach using simulations, developmental manipulations and psychopharmacological interventions (^{Moran et al., 2008, 2009}). There is now a large literature on Dynamic Causal Modeling in electrophysiology (^{Chen et al., 2008; Daunizeau et al., 2009; David et al., 2006a,b; Kiebel et al., 2007}) and, in particular, on models for steady-state activity (^{Moran et al., 2007, 2008, 2009, 2011}). We take DCM for steady-state responses as the starting point for the causal modeling of complex-valued data, because the current scheme uses only the absolute value of the cross-spectrum between channels. However, the cross-spectrum is a complex quantity, which means that it has both a modulus (absolute value or amplitude) and an argument (angle or phase). This means that one throws away information when using absolute measures to invert or fit generative models. In what follows, we look at the advantages of using both amplitude and phase information.
The phase of the cross-spectrum is usually taken to indicate something about systematic lags or delays between two signals. If one signal appears in the other after a constant time lag, then the phase-difference scales with frequency. In practice, this notion has been used, for example, in epilepsy research, where authors have used phase differences between signals from different intracerebral electrodes or EEG channels to estimate conduction or propagation delays (^{Brazier, 1972; Gotman, 1981}). A regression of the phase-difference on frequency is also often used to estimate temporal delays over a particular frequency range (^{Rosenberg et al., 1989}). More generally, one can divide the phase-difference by frequency to quantify time lags as a function of frequency. A further advantage of working with the complex cross-spectrum is that its inverse Fourier transform provides the cross-correlation function between two time-series. If the correlation structure is dominated by a single time delay, the latency of this delay can often be inferred from the timing of a peak in the cross-correlation. In short, a generative model of complex-valued data features (e.g. cross-spectra) provides a more complete model of data and furnishes conditional estimates of coherence, phase-delay and cross-correlation functions that are implicitly constrained by the functional architectures inducing those correlations. The examples in this paper attempt to illustrate this in a practical fashion.
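These two routes to a time lag can be sketched numerically. The following is an illustrative Python/numpy example (not code from the paper, and all values are invented): for a pure delay, dividing the phase of the cross-spectrum by angular frequency recovers the lag, and the inverse FFT of the cross-spectrum yields the (circular) cross-correlation, which peaks at the same lag.

```python
import numpy as np

# Illustrative sketch: a pure 10 ms delay between two noisy signals.
rng = np.random.default_rng(0)
fs = 1000.0                       # sampling rate (Hz)
lag = 10                          # delay in samples (10 ms at 1 kHz)
n = 2 ** 14
x = rng.standard_normal(n + lag)
s1 = x[lag:]                      # leading signal
s2 = x[:n]                        # delayed copy: s2(t) = s1(t - lag)
S1, S2 = np.fft.rfft(s1), np.fft.rfft(s2)
g12 = S1 * np.conj(S2)            # sample cross-spectrum (single segment)
f = np.fft.rfftfreq(n, 1.0 / fs)

# Time lag from the phase-delay at one frequency: tau = phase / (2*pi*f).
k = np.argmin(np.abs(f - 20.0))   # inspect the 20 Hz bin
tau_ms = 1000.0 * np.angle(g12[k]) / (2 * np.pi * f[k])

# Cross-correlation as the inverse Fourier transform of the cross-spectrum.
xc = np.fft.irfft(g12, n)
lag_cc = (n - np.argmax(xc)) % n  # peak position, as a positive lag in samples
```

Both `tau_ms` and `lag_cc` should recover the 10 ms delay; with bidirectional coupling (discussed below) this simple reading breaks down.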
This note comprises four sections. In the first, we consider the nature of cross-spectra and their relationship to coherence and phase-delays. This section is used to frame conventional measures of coupling in terms of the underlying transfer functions between sources generating data. Crucially, it shows that although coherence is sensitive to the dispersion of phase-differences between two sources or sensors, it does not provide a complete picture of coupling, because it is insensitive to phase-delays per se. As indicated above, a more comprehensive summary rests upon the complex cross-spectra, whose real and imaginary parts embody phase-delays. However, the phase-delay does not report the time-delays between two signals directly and can only be interpreted in relation to a model of how time-delays are manifest in data. The models we use here are biologically plausible (neural mass) models, based on delay differential equations that make time-delays an explicit model parameter. This section concludes with a brief description of these neural mass models, which constitute a generative model for steady-state responses. The second section briefly reviews the inversion of these models, with a special focus on constructing free-energy bounds on model log-evidence for complex-valued data. In the third section, we present an illustrative analysis of simulated data, in which we know the coupling strengths and time-delays among a small number of neuronal populations. These simulations are used to verify that the various conditional estimates of coupling, in time and frequency space, can recover the true values in a reasonably precise way. Our special interest here is in the direction of coupling and conduction delays, as inferred through the conditional distribution over time-delays, and how they manifest in phase-delay and cross-correlation functions. 
There are many interesting aspects of the mapping between model parameters (effective connectivity) and spectral characterizations (functional connectivity): we have chosen conduction delays as one of the more pertinent, because there is a non-trivial relationship between (axonal) conduction delays and the delays that manifest in terms of phase-delays and cross-correlation lags. In the final section, we repeat the analysis of the third section using real data from local field potential recordings in rats.
In this section, we look at the nature of conventional measures of functional connectivity from the point of view of phase-differences and their distribution. The main points made in this treatment are: (i) coherence is a function of the absolute value of the cross-spectra and, as such, provides an incomplete picture of spectral dependencies among stationary time-series; (ii) furthermore, even if we consider complex cross-spectra, their phase information cannot be interpreted in terms of time-delays, unless we know (or model) how they were caused. Specifically, when the coupling between two sources of data is bidirectional, there is no straightforward correspondence between phase- and time-delays. The resulting ambiguity can only be resolved by reference to a model of how phase-differences are generated. This section concludes by briefly reprising the generative models used in DCM of steady-state responses, which resolve this ambiguity.
Where possible, we will denote Fourier transforms by upper case letters, such that S_{i}(ω) ∈ C is the Fourier transform of a stationary time-series s_{i}(t) ∈ R, and j = √−1 denotes the imaginary unit. The complex coherence function between two wide-sense stationary signals s_{i}(t) and s_{j}(t) is equal to their cross power spectrum g_{ij}(ω) = 〈S_{i}S_{j}*〉 ∈ C divided by the square root of the product of the two auto power spectra (^{Carter, 1987; Priestley, 1981}). The magnitude-squared coherence (^{Carter, 1987}), C_{ij}(ω) ∈ R, is then

(1) C_{ij} = |g_{ij}(ω)|² / (g_{ii}(ω)g_{jj}(ω)) ∈ R 
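A minimal numerical sketch of this definition in Python/numpy (illustrative only, not SPM code) estimates the spectra by averaging over segments; the shared 20 Hz rhythm in the two channels is an assumed example.

```python
import numpy as np

# Magnitude-squared coherence, Eq. (1): C_ij = |g_ij|^2 / (g_ii g_jj),
# with the cross- and auto-spectra averaged over data segments.
rng = np.random.default_rng(1)
fs, nseg, nper = 256, 64, 256
t = np.arange(nper) / fs
g12 = np.zeros(nper // 2 + 1, dtype=complex)
g11 = np.zeros(nper // 2 + 1)
g22 = np.zeros(nper // 2 + 1)
for _ in range(nseg):
    # A 20 Hz rhythm common to both channels, plus independent noise.
    shared = np.sin(2 * np.pi * 20 * t + rng.uniform(0, 2 * np.pi))
    S1 = np.fft.rfft(shared + rng.standard_normal(nper))
    S2 = np.fft.rfft(shared + rng.standard_normal(nper))
    g12 += S1 * np.conj(S2)       # accumulate <S1 S2*>
    g11 += np.abs(S1) ** 2        # auto-spectra
    g22 += np.abs(S2) ** 2
C = np.abs(g12) ** 2 / (g11 * g22)
f = np.fft.rfftfreq(nper, 1.0 / fs)
```

By the Cauchy-Schwarz inequality C lies in [0, 1]; it approaches 1 at the shared 20 Hz rhythm and stays near 1/nseg where the channels are independent.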
The coherence can be factorized into the correlation between the signal amplitudes and the (circular) dispersion of their phase-differences (e.g. ^{Priestley, 1981; Friston et al., 1997}, (Eq. A3)):

(2) C_{ij} = (〈α_{i}α_{j}〉² / 〈α_{i}²〉〈α_{j}²〉) × Φ_{ij}

Φ_{ij} = (∫_{−π}^{π} p(δ_{ij})sin(δ_{ij})dδ_{ij})² + (∫_{−π}^{π} p(δ_{ij})cos(δ_{ij})dδ_{ij})²

Here, α_{i} = |S_{i}| corresponds to signal amplitude, ϕ_{i} = arg{S_{i}} to phase (where radial frequency is the rate of change of phase, ω := ϕ̇_{i}) and δ_{ij} = ϕ_{i} − ϕ_{j} to the phase-difference.
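The dispersion factor Φ_{ij} is just the squared resultant length of the phase-difference distribution, which a short Monte-Carlo sketch makes concrete (the three example distributions below are assumptions for illustration, not from the paper):

```python
import numpy as np

# Dispersion factor of Eq. (2): Phi = (E[sin d])^2 + (E[cos d])^2.
# It equals 1 for a constant phase-difference and tends to 0 as the
# phase-differences become uniform on [-pi, pi].
rng = np.random.default_rng(2)

def phi(delta):
    """Monte-Carlo estimate of the squared resultant length."""
    return np.mean(np.sin(delta)) ** 2 + np.mean(np.cos(delta)) ** 2

locked = np.full(10000, 0.7)                 # perfectly phase-locked
jitter = 0.7 + rng.normal(0, 0.5, 10000)     # concentrated phase-differences
uniform = rng.uniform(-np.pi, np.pi, 10000)  # no consistent phase relation
```

Note that Φ depends only on how concentrated the phase-differences are, not on their mean: a perfectly consistent non-zero delay still gives Φ = 1, which is why coherence is blind to phase-delays per se.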
In what follows, we will be concerned with generative models of data features that disclose the mapping between some exogenous (neuronal) fluctuations or innovations u_{k}(t) ∈ R and the observed signals s_{i}(t) ∈ R. The cross-spectral density is the sum of cross-spectra induced by each innovation, where there is an innovation for each source of activity that contributes to observed signals. The cross-spectrum due to the kth innovation is simply the product of the transfer functions (Fourier transforms) of the corresponding kernels, K_{i}^{k}(ω, θ), and the spectral density, γ_{k}(ω) = 〈U_{k}U_{k}*〉, of the (statistically independent) innovations:

(3) g_{ij}(ω, θ) = ∑_{kl} |K_{i}^{k}||K_{j}^{l}|·exp(j(ϕ_{i}^{k} − ϕ_{j}^{l}))·〈U_{k}U_{l}*〉 = ∑_{k} |K_{i}^{k}||K_{j}^{k}|·exp(j(ϕ_{i}^{k} − ϕ_{j}^{k}))·γ_{k} = ∑_{k} g_{ij}^{k}(ω, θ)

g_{ij}^{k}(ω, θ) = |K_{i}^{k}||K_{j}^{k}|·exp(j(ϕ_{i}^{k} − ϕ_{j}^{k}))·γ_{k} = K_{i}^{k}·K_{j}^{k}*·γ_{k}

K_{i}^{k}(ω, θ) = ∫ κ_{i}^{k}(t, θ)e^{−jωt}dt

The cross terms (k ≠ l) vanish because the innovations are statistically independent.
Here, ϕ_{i}^{k} = arg{K_{i}^{k}} is the phase-delay induced by the kernel mapping the kth innovation to the ith channel. Eq. (3) just means that the predicted cross-spectrum is a linear mixture of cross-spectra induced by each innovation. This mixture depends on the mapping from each innovation or source to the channels in question. For example, in local field potential recordings, the number of innovations and channels could be the same. However, in non-invasive electromagnetic studies, the number of channels can be much greater than the number of sources.
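This linear mixture can be evaluated directly. A toy Python/numpy evaluation of Eq. (3) at a single frequency (all numbers below are invented for illustration) builds the predicted cross-spectral matrix from outer products of transfer-function columns, weighted by the innovation spectral densities:

```python
import numpy as np

# Eq. (3) at one frequency: G[i, j] = sum_k K[i, k] * conj(K[j, k]) * gamma_k,
# i.e. a mixture over innovations of rank-one cross-spectral contributions.
K = np.array([[1.0 + 0.5j, 0.2 - 0.1j],   # K[i, k]: innovation k -> channel i
              [0.3 + 0.2j, 0.8 - 0.4j]])
gamma = np.array([2.0, 0.5])              # innovation spectral densities
G = sum(gamma[k] * np.outer(K[:, k], np.conj(K[:, k])) for k in range(2))
G_vec = (K * gamma) @ K.conj().T          # equivalent vectorized form
```

The resulting matrix is Hermitian with real, positive auto-spectra on its diagonal, as any valid cross-spectral density must be.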
Eqs. (2) and (3) provide a generative model of sample cross-spectra. We have exploited this sort of model for steady-state responses extensively, when trying to infer the neuronal architectures generating local field potentials and other electromagnetic signals (^{Moran et al., 2007, 2008}). However, these models used real-valued cross-spectra |g_{ij}(ω)| ∈ R and therefore discarded the phase information. The expected phase-difference between two signals obtains by averaging over the (complex Gaussian) innovations:

(4) 〈δ_{ij}〉 = ∫ δ_{ij}p(δ_{ij})dδ_{ij} = ∫ δ_{ij}(U)p(U)dU

δ_{ij}(U) = arg{S_{i}} − arg{S_{j}} : mod 2π

S_{i}(U) = ∑_{k} |K_{i}^{k}|·|U_{k}|·exp(j(arg{U_{k}} + ϕ_{i}^{k}))

p(U) = ∏_{k} p(U_{k}), p(Re(U_{k})) = N(0, γ_{k}), p(Im(U_{k})) = N(0, γ_{k})
This is a rather complicated integral to evaluate (involving Gauss hypergeometric forms; see ^{Lee et al., 1994}). Fig. 1 (left panel) shows the implicit density over phase-differences for a simple example with asymmetries in how innovations drive the signals. It can be seen that the density peaks at the phase-difference induced by each innovation. If we now plot the phase-delay function arg{g_{ij}(ω, θ)} against the average phase-difference (the numerical integral in Eq. (4)), we see that the two are closely related (but are not the same, because the phase of an average is not the average of a phase). Fig. 1 (right panel) shows the relationship (dots) between the phase of the cross-spectrum and the average phase-difference; this relationship was disclosed by varying the relative power of the two innovations, where ln γ_{2}/γ_{1} ∈ [−8, 8], while keeping the time-delays fixed.
Fig. 1 illustrates the key problem with interpreting phase-delays in terms of time-delays, when the sources of data are reciprocally coupled: the phase-delay function arg{g_{ij}(ω, θ)} is zero for certain combinations of power, even though the time-delays did not change. Put simply, the phase-delays induced by different innovations can cancel each other out. Only when the power of one innovation predominates is the symmetry broken. In this situation, the phase-delay function reflects the time-delays associated with the larger innovation. This means that the phase-delay function is a lower bound on the phase-delays caused by each innovation (which depend on the time-delays between sources). This can be seen in the right panel of Fig. 1, which shows that phase-delays are bounded by the phase-differences induced by the two innovations. This means that the strength and time-delay of connections among distributed sources of data cannot be recovered from cross-spectra (or phase-delay functions) in the absence of a generative model that specifies how sources are connected.
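This cancellation is easy to reproduce numerically. The following Python sketch uses illustrative numbers (not the parameters behind Fig. 1): two innovations induce opposite phase-differences, and the phase of their summed cross-spectral contributions tracks the relative innovation power while the underlying delays stay fixed.

```python
import numpy as np

# Two innovations inducing opposite phase-differences, as with reciprocal
# coupling. The phase of the summed cross-spectrum depends on the power
# ratio gamma_2 / gamma_1, not on the (fixed) time-delays.
phi1, phi2 = 0.6, -0.6                 # phase-delays induced by innovations 1, 2
phases = []
for log_ratio in (-4.0, 0.0, 4.0):     # ln(gamma_2 / gamma_1)
    gamma1, gamma2 = 1.0, np.exp(log_ratio)
    g12 = gamma1 * np.exp(1j * phi1) + gamma2 * np.exp(1j * phi2)
    phases.append(np.angle(g12))
```

With equal power the two contributions cancel exactly (zero phase); as one innovation dominates, the phase approaches, but never exceeds, the phase-difference it induces.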
In summary, both the real and imaginary parts of cross-spectra contain useful information about the underlying system. The modulus relates to the measure of coherence, while the argument (phase or angle) is a complicated function of the phase-delays induced by exogenous fluctuations (innovations). However, neither provides a unique or complete description of how data are generated, and both may be better thought of as data features that have yet to be explained by a generative model. In what follows, we will therefore take the sample cross-spectra g_{ij}(ω) as the key data feature and treat their real and imaginary parts on an equal footing. We will use conditional predictions of |g_{ij}(ω, θ)|, arg{g_{ij}(ω, θ)} and FT^{−1}{g_{ij}(ω, θ)} to report the coherence, phase-delay and cross-correlation functions for pairs of hidden neuronal states and observed signals. To generate these predictions we need the system's kernels, κ_{i}^{k}(τ, θ). These are specified in a straightforward way by the form of the model and its parameters, as described next.
The kernels obtain analytically from the Jacobian I = ∂f/∂x of the equations of motion ẋ = f(x, u, θ), using the chain rule:

(5) κ_{i}^{k}(τ, θ) = ∂s_{i}(t)/∂u_{k}(t − τ) = (∂s_{i}(t)/∂x(t))·(∂x(t)/∂x(t − τ))·(∂x(t − τ)/∂ẋ(t − τ))·(∂ẋ(t − τ)/∂u_{k}(t − τ)) = (∂g_{i}/∂x)·exp(Iτ)·I^{−1}·(∂f/∂u_{k})
This means the kernels are functions of the model's equations of motion and output mapping. The output mapping may be a simple gain function (for LFP data) or an electromagnetic forward model (for EEG and MEG data). The use of the chain rule follows from the fact that the only way past inputs can affect current outputs is through hidden states. The particular equations of motion used here correspond to a neural-mass model that has been used extensively in the causal modeling of electromagnetic data (^{David and Friston, 2003}; ^{Jansen and Rit, 1995; Moran et al., 2008}). These equations implement a simple but biologically motivated (alpha-function based) model (^{Jansen and Rit, 1995}) that captures the key aspects of synaptic processing: fast excitation and inhibition in layered cortical sources (^{Moran et al., 2011}). The equations for a single source are summarized in Fig. 2.
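The frequency-domain counterpart of the kernel in Eq. (5) can be sketched with a generic linear-systems example (this is not the SPM implementation, and the damped oscillator standing in for a neural mass, with its 10 Hz resonance and damping of 0.1, is an assumption):

```python
import numpy as np

# For a linearization dx/dt = J x + B u with output s = C x, the transfer
# function K(w) = C (jw I - J)^(-1) B is the Fourier transform of the
# kernel in Eq. (5). Here J, B and C play the roles of df/dx, df/du and
# the observer gradient dg/dx.
f0, zeta = 10.0, 0.1                          # resonance (Hz) and damping
w0 = 2 * np.pi * f0
J = np.array([[0.0, 1.0],
              [-w0 ** 2, -2 * zeta * w0]])    # Jacobian df/dx
B = np.array([0.0, 1.0])                      # df/du
C = np.array([1.0, 0.0])                      # dg/dx (observer)
w = 2 * np.pi * np.linspace(1.0, 40.0, 400)   # frequencies of interest
K = np.array([C @ np.linalg.solve(1j * wi * np.eye(2) - J, B) for wi in w])
peak_hz = w[np.argmax(np.abs(K))] / (2 * np.pi)
```

The modulus of `K` peaks near the natural frequency, while its argument supplies the phase-delays ϕ_{i}^{k} = arg{K_{i}^{k}} that enter Eq. (3).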
In a DCM comprising N sources, firing rates provide endogenous inputs from subpopulations that are intrinsic or extrinsic to the source (see Fig. 2). These firing rates are a sigmoid function of depolarization, which we approximate with a linear gain function (evaluated at the system's fixed point; ^{Moran et al., 2007}). Subpopulations within each source are coupled by intrinsic connections (with a conduction delay of 4 ms: ^{Lumer et al. (1997)}), whose strengths are parameterized by γ = {γ_{1}, …, γ_{5}} ⊂ θ. These intrinsic connections can arise from any subpopulation. Conversely, in accordance with cortical anatomy, extrinsic connections arise only from the excitatory pyramidal cells of other sources. The strengths of these connections are parameterized by the forward, backward and lateral extrinsic connection matrices A^{F}, A^{B}, A^{L} ∈ R^{N×N}, while the extrinsic conduction delays are parameterized by the matrix Δ ∈ R^{N×N}.
The innovations correspond to exogenous fluctuations u(t) ∈ R^{N×1}, whose spectral densities are parameterized as a mixture of white and pink (1/f) components:

(6) γ_{k}(ω) = α_{u} + β_{u}/ω

As noted above, one innovation is associated with each neuronal node or source.
The observer function is a mapping from the N sources to the data features observed at M channels, s(x, θ) = L(η)x̃, where x̃(t) ∈ R^{N×1} contains the (pyramidal) source states generating the signal and L(η) ∈ R^{M×N} is a gain or lead-field matrix with parameters η.
This completes the description of the neuronal model and, implicitly, the generative model for modeled cross-spectra. This model contains unknown parameters θ ⊃ {γ, A, Δ, α, β, η, …} controlling the strengths and delays of intrinsic and extrinsic connections, the auto-spectra of the innovations and the electromagnetic forward model. These parameters define the kernels and associated cross-spectra in Eq. (3). To complete our specification of the generative model, we assume the data (sample cross-spectra) are a mixture of the predicted cross-spectra, channel noise and Gaussian prediction error (see ^{Moran et al., 2009} for details):
(7) g_{ij}(ω) = g_{ij}(ω, θ) + α_{s} + β_{s}/ω + ε_{ij}(ω)

Re(ε_{ij}) ∼ N(0, Π_{ε}(ω)^{−1}), Im(ε_{ij}) ∼ N(0, Π_{ε}(ω)^{−1})
The channel noise, like the innovations, is parameterized in terms of (unknown) white (α) and pink (β) components, which can include channel-specific and non-specific components. Please see ^{Moran et al. (2009)} for more details.
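The white-plus-pink parameterization shared by Eqs. (6) and (7) can be sketched in a few lines (assuming the pink component scales as 1/f; the α and β values here are arbitrary illustrations, not estimates from the paper):

```python
import numpy as np

# Spectral density as a flat (white) component alpha plus a 1/f (pink)
# component beta, as in gamma_k(w) = alpha_u + beta_u / w.
def noise_spectrum(f, alpha, beta):
    """White component plus 1/f pink component."""
    return alpha + beta / f

f = np.linspace(2.0, 100.0, 99)               # frequencies of interest (Hz)
g = noise_spectrum(f, alpha=0.1, beta=4.0)    # monotonically decaying spectrum
```

The spectrum decays monotonically toward the white floor α as frequency increases, so the two parameters separate low-frequency drift from broadband noise.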
This section has motivated the use of complex cross-spectra as data features that summarize the behavior of ergodic time-series. We have seen that only the absolute values of cross-spectra are used to form measures of coherence. Although coherence depends upon the dispersion of phase-differences, it is not sensitive to the expected or systematic phase-differences that could be introduced by neuronal dynamics or conduction delays. A simple solution is to use a generative model of both the real and imaginary parts of the cross-spectra and fit these predictions to sample cross-spectra. To do this, we need a Bayesian model inversion scheme that can handle complex-valued data.
In this section, we consider a generalization of the variational scheme (^{Friston et al., 2007}) used to invert Dynamic Causal Models, such that it can handle complex-valued data. In what follows, we briefly summarize the overall principles of model inversion and list the specific differences that attend the analysis of complex-valued data.
Almost universally, the fitting or inversion of Dynamic Causal Models uses a variational free-energy bound on the log-evidence for a model m (see ^{Friston et al., 2007} for details). This bound is optimized with respect to a variational density q(θ) (which we assume to be Gaussian) over the unknown model parameters. By construction, the free-energy bound ensures that, when the variational density maximizes free-energy, it approximates the true posterior density over parameters: q(θ) ≈ p(θ|y, m). At the same time, the free-energy itself approximates the log-evidence: F(y, q) ≈ ln p(y|m).
Usually, one first compares different models (e.g., with and without particular connections) using their log-evidence and then turns to inference on parameters, under the model selected (for an overview of procedures for inference on model structure and parameters in DCM, see ^{Stephan et al., 2010}). Here, we focus on the use of the conditional density, given a single model, which we assume has a Gaussian form: q(θ) = N(μ, Σ).
The free-energy is the average of the log-likelihood and log-prior of the model under the variational density, plus its entropy (see ^{Friston et al., 2007; Kiebel et al., 2008a,b}). For nonlinear models, under Gaussian assumptions about the variational density and observation noise, this has a very simple form:

(8) F = 〈ln p(g(ω), θ) − ln q(θ)〉_{q} = G(μ) + ½ln|∂_{μμ}G|

G = −½Re(ε)^{T}Π_{ε}Re(ε) − ½Im(ε)^{T}Π_{ε}Im(ε) − ½ν^{T}Π_{ν}ν + ½ln|Π_{ε}| + ½ln|Π_{ν}|

ε = g(ω, μ) − g(ω), ν = μ − υ
Here, g(ω, μ) represents any nonlinear prediction or mapping from model parameters to data features (cf. Eq. (6)) and ε(μ) ∈ C are the corresponding prediction errors (i.e., discrepancies between the sampled and predicted cross-spectra). Similarly, ν(μ) ∈ R are prediction errors on the parameters, in relation to their prior density p(θ|m) = N(υ, Π_{ν}^{−1}). The gradients and curvatures of G(μ) with respect to the parameters are:

(9) ∂_{μ}G = −Re(∂_{μ}ε)^{T}Π_{ε}Re(ε) − Im(∂_{μ}ε)^{T}Π_{ε}Im(ε) − Π_{ν}ν

∂_{μμ}G = −Re(∂_{μ}ε)^{T}Π_{ε}Re(∂_{μ}ε) − Im(∂_{μ}ε)^{T}Π_{ε}Im(∂_{μ}ε) − Π_{ν}
The gradients in Eq. (9) are used in a Gauss-Newton scheme to optimize the parameters iteratively, until the free-energy has been maximized. In practice, things are a little more complicated, because one often makes a mean-field assumption when estimating the parameters of the model and the noise precision, Π_{ε} (inverse covariance). In other words, the precision of the prediction error is usually assumed to be conditionally independent of the parameters. The gradient ascent then becomes a coordinate ascent that optimizes the conditional expectations of the model and precision parameters respectively. This is called Variational Laplace, which reduces to classical expectation maximization under some simplifying assumptions. A full description of these schemes, and their relationship to each other, can be found in ^{Friston et al. (2007)}.
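A minimal sketch of one such Gauss-Newton step, using the gradients of Eq. (9), can be written as follows (this is not the SPM Variational Laplace code; all function and variable names are illustrative). The key point is that the real and imaginary parts of the complex prediction error enter the gradient and curvature separately:

```python
import numpy as np

# One Gauss-Newton ascent step on G(mu), with the complex prediction error
# split into real and imaginary parts as in Eq. (9).
def gauss_newton_step(mu, predict, jacobian, y, Pi_e, Pi_v, prior_mu):
    e = predict(mu) - y                        # complex prediction error
    dedm = jacobian(mu)                        # complex d(prediction)/d(mu)
    v = mu - prior_mu                          # prediction error on parameters
    dG = (-dedm.real.T @ Pi_e @ e.real
          - dedm.imag.T @ Pi_e @ e.imag
          - Pi_v @ v)
    ddG = (-dedm.real.T @ Pi_e @ dedm.real
           - dedm.imag.T @ Pi_e @ dedm.imag
           - Pi_v)
    return mu - np.linalg.solve(ddG, dG)       # ascend G (ddG negative definite)
```

For a model that is linear in its parameters, a single step recovers the (prior-regularized) least-squares solution; for the nonlinear DCM predictions, the step is iterated until the free-energy converges.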
In this section, we have considered the central role of the free-energy bound on log-evidence used in model selection and inversion. The only thing we have to worry about, when dealing with complex-valued data, is to separate the real and imaginary parts of the data (and, implicitly, the prediction errors) when evaluating the free-energy and its gradients. Having done this, we can use standard schemes to optimize the parameters of any Dynamic Causal Model and select among competing models to find the one with the highest free-energy (log-evidence). We now illustrate the application of this scheme using simulated data.
In this section, we use simulated data from four sources, with known directed connections and delays, to establish the face validity of the inversion scheme of the previous section. Our particular focus is on the improvement in the precision of parameter estimates when including the phase information in complex cross-spectra. To illustrate this, we will look closely at the conditional density over conduction delays. We will then be in a position to compare these estimates with the true values, and to see how the conduction delays translate into phase-delays and time lags at the level of simulated population dynamics.
To simulate data, we used the neural mass model above (^{David and Friston, 2003}) to generate four sources, organized into two pairs. The sources within each pair were coupled with lateral connections, whereas there was an asymmetric directed coupling between the first and second pair. This allowed us to look at predicted and estimated cross-spectra within and between pairs and to illustrate the consequences of reciprocal connections between sources. The data were generated using the model parameters estimated from the empirical data of the next section. The only difference was that we suppressed the backward connections to enforce an asymmetric (directed) coupling between the two pairs. The data were modeled as arising from pairs of sources in the globus pallidus (GP) and subthalamic nucleus (STN). The connections from the GP to the STN constitute the forward (GABAergic) connections of the indirect basal ganglia pathway, while the reciprocal (glutamatergic) connections are from the STN to the GP. To suppress the backward connections, we set them to one half, while the forward connections were given a value of four. The lateral connections were given intermediate values of one. This means our forward connections were 2 × 4 = 8 times stronger than the backward connections (i.e., the first pair of sources drove the second). The conduction delays were as estimated from the empirical data. See Fig. 3 (left panels) for a schematic of this small network of sources.
Data were simulated over 4 s with time bins of 4 ms. Cross-spectra were generated directly in frequency space, assuming that each source was driven by random fluctuations and that LFP data were observed with a signal-to-noise ratio of about 10%. The spectral characteristics of the innovations and channel noise were controlled by mixing white and pink noise components (see Eqs. (6) and (7)), using the conditional parameter estimates from the empirical analysis reported in the next section.
An example of these simulated data (sample cross-spectra) is shown in Fig. 3 (right panels) and illustrates the characteristic beta coupling seen in patients with Parkinson's disease and animal lesion models thereof (^{Lehmkuhle et al., 2009; Silberstein et al., 2005}). These simulated cross-spectra were then used to invert the neural mass model described. Because connection strengths and time delays (and other model parameters) are non-negative quantities, their prior mean is scaled by a free parameter with a log-normal distribution. We refer to these as the log-scale parameters, such that a log-scaling of zero returns the prior mean, under Gaussian priors p(θ|m) = N(υ, Π_{ν}^{−1}) on the log-scale parameters.
Fig. 3 (right panels) shows the simulated sample cross-spectra, in terms of their real and imaginary parts (upper panels) and the corresponding absolute values or modulus (lower panel). The auto- and cross-spectra for all four simulated channels are shown as dotted (colored) lines. Following optimization of the model parameters, the modeled cross-spectra are shown as solid lines and illustrate the goodness of fit or accuracy of model inversion (they are barely distinguishable in many cases). The key thing to take from this example is the pronounced cross-spectral density in the beta range (20 Hz), which can be seen in both the real and imaginary parts. The relative contribution of the imaginary part is only about 10% of the real part, but it is concentrated in the frequency ranges over which coherence induced by coupling among the sources is expressed. The imaginary part of the complex cross-spectra (upper right panel) contains the information that enables the estimation of phase-delays. These predicted cross-spectra are based on the conditional means of the parameters shown in the next figure.
The upper panel of Fig. 4 shows the true and prior values of the key coupling parameters in this DCM (for clarity, only the strengths and conduction delays of the extrinsic connections among sources are shown). The lower two panels show the posterior or conditional densities after fitting the model to complex (middle panel) and absolute (lower panel) cross-spectra from the four regions. These estimates allow us to quantify any improvement in the accuracy or precision of parameter estimates, in relation to the true values, when inverting complex data relative to absolute data (Moran et al., 2009). The model comprises four forward connections (A^F) from the GP to STN (Fig. 3), four backward connections (A^B) from the STN to GP, and four lateral connections (A^L). Given the conditional densities over these parameters, we can not only assess whether the complex scheme provides estimates that are closer to the true parameter values than the corresponding modulus-based estimates, but also whether the conditional precision or confidence increases. In the upper panel, we see that a priori all log-scale parameters are zero; these priors regularize the estimates and induce a “shrinkage” effect on the posterior estimates. The pink bars correspond to the prior 90% confidence interval. The true values of the simulated parameters are shown as blue crosses. The true strengths disclose the asymmetry in this directed connectivity, which we hoped would subtend substantial phase-delays.
First we consider the first twelve parameters, corresponding to the connection strengths A^F, A^B and A^L. These are shown in the lower panels, where the gray bars report the posterior mean and pink bars denote 90% posterior confidence intervals. In the middle panel of Fig. 4, one can see that the asymmetry in the GP-STN network has been detected using the complex spectra, with larger values for the forward connections than for the backward connections. However, the shrinkage priors have precluded the forward connections from attaining their true values of log(4). Interestingly, the inversion has failed to decrease the backward connections to their true value of −log(2). These posterior densities should be compared with the lower panel in Fig. 4, illustrating the equivalent densities obtained after fitting the absolute cross-spectra. In this example, the conditional estimates using the complex and modulus schemes are roughly the same.
However, Fig. 4 reveals a greater improvement in the estimation of the delays (Δ) for the complex compared to the modulus-data scheme. These are the second set of twelve parameter estimates. In particular, we observe that the 90% confidence intervals encompass the true simulated values in ten of the twelve parameters in the complex scheme, compared to only seven of twelve parameters in the modulus scheme. Crucially, the estimates of the delays are more uncertain for conventional modulus-based schemes than when using complex-valued data. We now look at this more closely.
The upper panel of Fig. 5 shows the posterior uncertainty (covariance) for all (127) unknown or free parameters in this DCM. Here, we have plotted the conditional uncertainty after fitting the modulus data against the equivalent uncertainty when using complex cross-spectra. As might be anticipated, the uncertainties about estimates that are informed by the modulus only are higher than when both real and imaginary parts are used. These differences are particularly marked for the estimates of conduction delays (marked by red dots). In some instances, there has been more than a doubling of the conditional precision or certainty when using complex data. This is exactly the sort of behavior we hoped to observe and reflects the improvement afforded by generative models of complex data. The bottom panel provides the full conditional density on the conduction delay for one lateral (within-pair) connection (the connection from the third source to the last). This is the parameter that showed the greatest change in conditional covariance under the two inversions (indicated by the connecting line in Fig. 4). The true value of the conduction delay was about 5 ms and falls within the posterior (shown in blue) when using complex-valued data. This is very distinct from the broader prior density shown in red. Interestingly, the posterior density obtained when using the modulus data has an intermediate value and fails to include the true value within its 90% posterior confidence interval. These results illustrate the increased accuracy and precision of posterior inferences, particularly on delay parameters, that are afforded by using complex-valued data with both real and imaginary parts.
We now turn to the implicit coherence, phase-delay and cross-correlation functions predicted by the parameter estimates. In what follows, we will actually use the cross-covariance functions to present, quantitatively, the shared variance in two signals. Furthermore, we will divide the angular phase-delay by frequency and display it in units of milliseconds.
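The two derived quantities just described can be sketched numerically. The cross-spectrum below is synthetic (channel y is channel x delayed by d seconds, with unit amplitude), so both the phase-delay in milliseconds and the lag of the cross-covariance peak should recover the delay; the sampling rate and delay are illustrative values, not those of the simulations:

```python
import numpy as np

fs, n = 250.0, 256                     # 250 Hz sampling (as for the LFP data), 256 bins
d = 0.004                              # an assumed 4 ms delay between the two channels

f = np.arange(1.0, 65.0)               # frequencies (Hz) for the phase-delay display
S_xy = np.exp(-2j * np.pi * f * d)     # cross-spectrum of a pure delay (unit amplitude)

# angular phase-delay divided by (angular) frequency, displayed in milliseconds;
# the sign convention makes a positive delay of y relative to x come out positive
phase_delay_ms = -np.angle(S_xy) / (2 * np.pi * f) * 1000.0

# cross-covariance: inverse Fourier transform of the (full) cross-spectrum
f_all = np.fft.rfftfreq(n, 1.0 / fs)
ccf = np.fft.irfft(np.exp(-2j * np.pi * f_all * d), n)
lag_ms = np.argmax(ccf) / fs * 1000.0  # lag of the peak, in milliseconds

assert np.allclose(phase_delay_ms, 4.0)
assert lag_ms == 4.0
```

For a pure delay the phase-delay function is flat across frequency; the point of the sections that follow is that, with reciprocal coupling and source dynamics, it is not.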
The upper right panels in Fig. 6 show the sample (dotted lines) and modeled (solid lines) coherence among the four simulated channels. The panels below the leading diagonal show the corresponding phase-delay functions in milliseconds (calculated as the angular phase-delay divided by frequency). The leading diagonal panels (pertaining to autospectra) have been omitted, because the associated phase-delay is zero and the coherence is one for all frequencies. Note that the coherence between one channel and another is the same in both directions. There are two important points to be made using these results. First, there is a relatively poor correspondence between the sample and modeled coherence. This is because coherence (defined in Eq. (1); technically the magnitude-squared coherence) is a highly nonlinear function of the original data features (the complex cross-spectra). From Eq. (1) we can see that the nonlinearity results from normalizing the absolute squared cross-spectra by the product of the autospectra from the two channels (Carter, 1987). It is possible that the large gamma coherence (~0.4 in some cases) observed in the channel data, and not recapitulated by our estimates, results from unstable ratios at frequencies with low power in the autospectra. This contrasts with the modeled coherence based upon the modeled cross-spectra, which was produced by a biologically motivated model (the DCM). The second point to note here is that the phase-delay functions are not constant over frequencies, despite the fact that the conduction delays were fixed during data generation. Like the coherences, the most interesting excursions are contained within the (beta) frequencies mediated by simulated interactions among the underlying sources. However, these are not estimates of conduction delays; as shown in the previous sections, they are lower bounds. For example, in the highlighted panel in Fig. 6 we see how the phase-delay function would suggest that this lateral (within-source pair) connection has a conduction delay of 5 ms, even though the reciprocal connections have different time delays (5 ms for the connection from source 3 to 4, and 16 ms for the connection from source 4 to 3). This illustrates that there is no one-to-one mapping between the phase-delay and the underlying conduction delay. One can see this immediately by noting that in general the phase-delay (at any frequency) between two nodes is by definition antisymmetric (a sign-reversal), even though there may be a greater conduction delay from one source to a second, compared with the conduction delay from the second to the first. The bottom line here is that it is extremely difficult to infer the direction of coupling or delays from phase-delay functions in the setting of reciprocal connections. In contrast, the conditional estimates of the parameters of the DCM afford an unambiguous characterization of conduction delays. We will pursue this in the next section, but in the context of coupling among hidden sources, as opposed to channels.
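The nonlinearity of the coherence normalization can be made concrete with a small sketch. The numbers are illustrative; the second assertion shows why sample coherence from too little averaging (a single segment) is uninformative, one plausible source of the unstable ratios mentioned above:

```python
import numpy as np

# magnitude-squared coherence (cf. Eq. (1)): absolute squared cross-spectrum
# normalized by the product of the two autospectra
def msc(S_xy, S_xx, S_yy):
    return np.abs(S_xy) ** 2 / (S_xx * S_yy)

# a channel pair with true coherence 0.25 at some frequency (assumed values)
S_xx, S_yy = 2.0, 3.0
S_xy = 0.5 * np.sqrt(S_xx * S_yy) * np.exp(1j * 0.3)   # |S_xy|^2 = 0.25 * S_xx * S_yy
assert np.isclose(msc(S_xy, S_xx, S_yy), 0.25)

# with a single data segment the sample coherence is identically 1, whatever
# the true coupling; low autospectral power makes the ratio unstable as well
rng = np.random.default_rng(1)
x, y = rng.standard_normal(2) + 1j * rng.standard_normal(2)
assert np.isclose(msc(x * np.conj(y), np.abs(x) ** 2, np.abs(y) ** 2), 1.0)
```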
To highlight the distinction between measures of coupling in channel and source space we will focus on a forward connection (between the first and fourth regions). Fig. 7 reports on this coupling between channels (left panels) and sources (right panels). For this pair of channels (resp. sources) the first panel shows the sample and modeled covariance as a function of lag in milliseconds (noting that the covariance function can be recapitulated in terms of a cross-correlation measure). The second panel shows the corresponding sample and modeled coherence as a function of frequency and the third panel shows the sample and modeled phase-delay in milliseconds as a function of frequency. Finally, the fourth panel shows the conditional density over its conduction delay. In all panels, the solid blue lines represent the true values used to generate the simulated data. The thin blue lines correspond to the conditional expectation, while the gray regions correspond to 90% posterior confidence intervals. The modeled covariance, coherence and phase-delay functions are all functions of the modeled cross-spectra, which depend upon the parameters of the generative model. In other words, there is a direct mapping from any set of parameter values to a particular covariance, coherence or phase-delay function. This means that we can compute the posterior confidence intervals simply by sampling parameters from the posterior or conditional density to produce a density on these functions. We have shown the conditional densities in channel and source space side by side to emphasize some key points.
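The sampling scheme just described can be sketched as follows. The mapping from parameters to spectra here is a toy stand-in for the DCM transfer functions (its form, the parameter statistics and the function name are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.arange(1.0, 65.0)
mu, sigma = 0.0, 0.1                         # conditional mean / sd of a log-scale parameter (assumed)

def spectral_response(theta, f):
    # hypothetical mapping from one parameter to a spectral function
    return np.exp(theta) / (1.0 + (f / 20.0) ** 2)

# sample parameters from the (Gaussian) conditional density, push each sample
# through the mapping, and take pointwise percentiles of the resulting functions
samples = np.array([spectral_response(rng.normal(mu, sigma), f) for _ in range(1000)])
lower, upper = np.percentile(samples, [5, 95], axis=0)   # 90% confidence band
assert np.all(lower <= upper)
```

The same recipe applies to any function of the cross-spectra (covariance, coherence or phase-delay), since each is a deterministic map of the sampled parameters.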
The channel-space characterizations include both specific and nonspecific instrumentation or channel noise that has both white and colored components (see Eq. (7)). In contrast, the source-space functions are what would have been observed in the absence of channel noise (and with unit gain on the LFP electrodes). This means the characterizations in channel space (left panels) are a mixture of neuronal and non-neuronal spectral features, whereas the source-space results in the right panels reflect the components or coupling due only to neuronal fluctuations or innovations. Specifically, one can see that the modeled cross-covariance function in channel space is higher, tighter and estimated with a greater conditional confidence than the corresponding modeled and sample covariance function in source space. This is because the channel data contain a substantial amount of white noise that is common to all electrodes, resulting in a more peaked cross-covariance function. When removed, one can see more clearly the underlying cross-covariances due to the neuronal fluctuations. These have a clear oscillatory pattern in the beta range (note the peaks around delays of 50 and −50 ms) that has been shaped by the neuronal transfer functions associated with each source. Similarly, the modeled and sample coherence in channel space are much smaller than in source space. This is due to the channel-specific noise component, which disperses the phase-differences and suppresses coherence. When this effect is removed, the coherence increases markedly, particularly at higher frequencies. In terms of phase- and conduction-delays, it can be seen that the modeled phase-delay increases when considering sources in relation to channels. This effect can be explained in terms of nonspecific channel noise that changes the distribution of phase-differences, so that most of its probability mass is centered at zero lag. This means the (average) phase-delay shrinks towards zero (Daffertshofer and Stam, 2007).
The implication here is that the phase-delay between channels represents a lower bound on the neuronal phase-delay between sources. For example, in the range 20–30 Hz, the phase-delay between channels does not exceed 5 ms, whereas it is nearly 10 ms between sources. Crucially, this is not the conduction delay (which would be the same for all frequencies). The true conduction delay in this example was ~15 ms and was estimated to be about 20 ms. Happily, the true value fell within the 90% conditional confidence interval (note that the conduction delays are the same for sensors and sources, because they are an attribute of the underlying system, not its measurement). It is also important to note that one could not deduce the conduction delay from peaks in the modeled or sample cross-covariance functions (zero and the conduction delays are shown as vertical lines). Although there is a small peak in the (sample and modeled) cross-covariance function between the two channels, there is no hint of such a peak in the modeled cross-covariance between sources. This speaks to the complicated relationship between the true (conduction) delay and how it is expressed both in terms of phase-delay functions and cross-covariance (and cross-correlation) functions (for an example from corticomuscular recordings, see Williams and Baker, 2009).
In summary, we have used simulations to show that it is possible to recover the biophysical parameters of a reasonably realistic model of distributed responses from complex-valued data, summarized in terms of their sample cross-spectra. We have also seen that some parameters (especially conduction delays) are estimated more precisely when one uses complex cross-spectra, as opposed to their modulus. By identifying the system in terms of its parameters, one can derive coherence, phase-delay and other functions used in conventional measures of functional connectivity. However, it can be difficult to map back from these spectral characterizations to the architectures that caused them. In the next section, we consider an analysis of real data.
In this section, we apply the analysis of the previous section to real data to demonstrate the reconstruction of conditional estimates of conventional measures of coupling among hidden sources and to highlight the complex relationship between these measures and underlying conduction delays. It should be noted that we are not presenting this analysis to draw any neurobiological conclusions but just to illustrate some technical points (an analysis of these data can be found in Mallet et al., 2008a,b). These data were acquired from adult male (6-OHDA-lesioned) Sprague–Dawley rats (Charles River, Margate, UK) in accordance with the Animals (Scientific Procedures) Act, 1986 (UK). Briefly, anesthesia was induced with 4% v/v isoflurane (Isoflo™, Schering-Plough Ltd., Welwyn Garden City, UK) in O₂, and maintained with urethane (1.3 g/kg, i.p.; ethyl carbamate, Sigma, Poole, UK), and supplemental doses of ketamine (30 mg/kg, i.p.; Ketaset™, Willows Francis, Crawley, UK) and xylazine (3 mg/kg, i.p.; Rompun™, Bayer, Germany). Extracellular recordings of LFPs in the external GP and STN were made simultaneously using ‘silicon probes’ (NeuroNexus Technologies, Ann Arbor, MI). Each probe had one or two vertical arrays of recording contacts (impedance of 0.9–1.3 MΩ measured at 1000 Hz; area of ~400 μm²). Neuronal activity was recorded during episodes of spontaneous ‘cortical activation’, defined according to ECoG activity. For the present paper, we used 4 s of data (downsampled to 250 Hz) from a single rat, comprising two (arbitrary) channels from the GP and STN probes. The cross-spectra were constructed from these time series using a vector autoregressive model (with order p = 8, chosen to reflect the order of the neural state-space used in DCM; see Moran et al., 2008). We then treated these empirical data in exactly the same way as the simulated data, i.e., we inverted the model with the same structure and priors as above. The results of this analysis are shown in Figs. 8 to 11, using the same format as for the simulated data.
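Forming cross-spectra from a fitted vector autoregressive model can be sketched as follows. The coefficient and innovations matrices below are illustrative (a stable bivariate VAR(1), not the order-8 model used here), and rely on the standard identity S(f) = H(f) Σ H(f)ᴴ with H(f) the inverse of the frequency-domain coefficient matrix:

```python
import numpy as np

fs = 250.0                                   # sampling rate (Hz)
A = [np.array([[0.5, 0.2],
               [0.0, 0.4]])]                 # VAR coefficient matrices A_1..A_p (assumed)
Sigma = np.eye(2)                            # innovations covariance (assumed)

def var_cross_spectra(A, Sigma, f, fs):
    """S(f) = H(f) Sigma H(f)^H, H(f) = (I - sum_k A_k e^{-2 pi i f k / fs})^{-1}."""
    p, d = len(A), A[0].shape[0]
    I = np.eye(d)
    S = []
    for fi in f:
        Af = I - sum(A[k] * np.exp(-2j * np.pi * fi * (k + 1) / fs) for k in range(p))
        H = np.linalg.inv(Af)
        S.append(H @ Sigma @ H.conj().T)
    return np.array(S)

f = np.arange(1.0, 65.0)
S = var_cross_spectra(A, Sigma, f, fs)
# autospectra are real and positive; off-diagonal cross-spectra are complex
assert np.all(S[:, 0, 0].real > 0) and np.allclose(S[:, 0, 0].imag, 0)
```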
Fig. 8 shows the estimated extrinsic connection strengths and predicted data features (cross-spectra) using the real data from the four LFP channels described above. The free-energy objective function maximized during estimation (Eq. (8)) ensures maximum accuracy under complexity constraints, where complexity is the divergence between the prior and posterior densities (Penny et al., 2004). In other words, to avoid overfitting, the model is constrained by priors over the parameters. In the present analyses, it is noteworthy that despite these constraints, the predictions in Fig. 8 are very accurate, capturing most of the salient features in both the real and imaginary parts of these cross-spectra (with the exception of frequencies above 50 Hz). The images (lower panels) show the conditional estimates of the extrinsic coupling strengths for forward, backward and lateral connections respectively (on the left, middle and right). The connection strengths and the posterior probability of exceeding their prior mean (of 32, 16 and 4 [arbitrary units] for forward, backward and lateral connections, respectively) are displayed alongside the connections in the left panel: the strongest connection was from the second pallidal source to the second subthalamic nucleus source. In terms of backward connections, the most prominent was from the first subthalamic to the second pallidal source (although both backward connections were weaker than their forward homologues). The most salient aspect of the ensuing architecture is a predominantly forward connectivity from GP to STN sources. This is consistent with the role of these connections in the indirect pathway. The predictions of this architecture, in terms of absolute cross-spectra, are shown in Fig. 9.
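The accuracy-minus-complexity decomposition referred to here can be written in the standard variational form (a generic statement of the free-energy bound, not a reproduction of Eq. (8) itself):

```latex
F \;=\; \underbrace{\big\langle \ln p(y \mid \theta, m) \big\rangle_{q(\theta)}}_{\text{accuracy}}
\;-\; \underbrace{D_{\mathrm{KL}}\!\big[\, q(\theta) \,\big\|\, p(\theta \mid m) \,\big]}_{\text{complexity}}
```

Maximizing F with respect to the approximate posterior q(θ) rewards fit to the data while penalizing departure from the priors, which is what precludes overfitting in the analyses above.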
Fig. 9 shows the modeled (solid lines) and sample (dotted lines) absolute cross-spectra among the four channels. The autospectra along the leading diagonal are gathered together on the lower left. In these data, we see a pronounced spectral peak at 20 Hz in most channels, although it is relatively suppressed in the cross-spectra involving the fourth channel. The corresponding modeled coherence and phase-delay functions among the underlying sources are shown in Fig. 10. This figure follows the same format as Fig. 6 but presents the modeled coherence and phase-delay functions in source space (as opposed to sensor space), having removed channel noise. The most salient feature of these results is the marked phase-delay (more than 10 ms) between the second STN source and the remaining sources. Interestingly, the greatest coherence between this source and the remaining sources is seen in the gamma range (40–60 Hz in these data), whereas beta (20 Hz) coherence appears to be restricted to exchanges between the globus pallidus and first subthalamic source. In some cases these sources appear to have coherence approaching one. Fig. 11 uses the same display format as in Fig. 7 and shows the covariance, coherence and phase-delay functions for the connection between the first globus pallidus source and the second subthalamic nucleus source. The asymmetry in this bidirectional coupling has induced a profound asymmetry in the modeled cross-covariance function, with greater covariances at lags up to about 30 ms. The modeled phase-delay function peaks at around 12 ms and is upper-bounded by the conditional estimate of the (forward) conduction delay (just above 15 ms). From a linear systems perspective, the coupling here appears to be mediated by gamma coherence (upper right panel).
This is consistent with the (asymmetrical) peaks of the cross-covariance function, where the first peak (for positive lag) occurs around 25 ms: this lag is not inconsistent with the high coherence at 40 Hz shown on the upper right. However, it would be a mistake to interpret these results as showing that signals from the GP to the STN source are delayed by 25 ms. Furthermore, the differential phase-delay (of 12 ms) in the beta range and (5 ms) in the gamma range does not suggest that fast frequencies are propagated with a smaller conduction delay than slow frequencies: the conduction delay is the same for all frequencies (about 15 ms). The frequency dependency of phase-delays is a result of interactions within and between sources, modeled here in terms of linear differential equations.
In this section, we have applied DCM for complex cross-spectra to LFP recordings from subcortical sources that are constituents of the (indirect) cortico-basal ganglia-thalamic pathways. The conditional estimates of connection strengths suggest that the extrinsic (between-region) coupling was asymmetrical and directed, consistently for the two pairs of sources considered. The associated conduction delays were fairly long (15 ms), which may reflect real axonal propagation delays or the inherent slowness of population dynamics in relation to the firing of individual neurons. The modeled responses to neuronal fluctuations produced some interesting and complicated spectral behaviors; with the coherence between two sources being mediated largely by gamma coherence with a phase-delay that was substantially less than the phase-delay at beta frequencies predominant in the source region. This again highlights the complicated mapping between the underlying functional architecture generating signals and classical measures based on linear systems theory.
In summary, we have described a way to access conditional estimates of coherence, phase-delay and covariance functions between time-series in sensor or source space. This rests on the inversion of Dynamic Causal Models as generative models for complex-valued data. The benefits of a generative model include the ability to see how various model parameters affect coherence and phase-delays in a frequency-specific manner. Furthermore, one can reconstitute the conditional coherence and related functions, not just between data channels but between any hidden states that are included in the model. In the examples above, we were able to look at the phase-delays between specific subpopulations comprising one of several sources of local field potential data. An important consequence of this is that we can access conditional coherence and phase-delay functions among sources that are observed noninvasively with EEG and MEG.
One important conclusion of this work is that one should be careful in interpreting estimates of phase-delays as an expression of conduction delays. Conduction delays are undoubtedly of major importance for understanding large-scale neuronal dynamics (Breakspear et al., 2006; Deco et al., 2009; Jirsa and Ding, 2004; Roberts and Robinson, 2008), and it is tempting to infer them by assuming a direct relationship with phase differences in recorded signals (e.g., Brazier, 1972; Gotman, 1981). However, as shown above, the relation between phase differences and conduction delays is not straightforward: phase-delays can differ across frequencies, while conduction delays are determined by axonal microarchitecture and are fixed across all frequencies. Furthermore, as shown in Fig. 7, conduction delays cannot be inferred from peaks in the cross-covariance function. In summary, inference on conduction delays can only be made with a model that parameterizes these delays. As suggested by our analyses (Fig. 4), inference on conduction delays can benefit from modeling the imaginary components of recorded data.
In this paper, we used invasive LFP recordings, assuming that the signal at each channel was provided by one source. However, exactly the same scheme can be applied to EEG and MEG data, where there may be many more (or fewer) channels than sources. In this context, the ability to recover conditional estimates of coupling among sources (as opposed to channels) is crucial and finesses some of the issues associated with interpreting coherence among channels, e.g., volume conduction effects (Schoffelen and Gross, 2009; Stam et al., 2007; Winter et al., 2007) or correlated noise in the context of Granger causal estimates (see Valdes-Sosa et al., 2011 for a discussion).
In principle, any metric that has proven fruitful for connectivity analyses at the sensor level (such as phase-synchronization, transfer entropy or Granger-causal measures; e.g., Brovelli et al., 2004; Bressler et al., 2007; Dhamala et al., 2008; Lachaux et al., 1999; Rodriguez et al., 1999; Vakorin et al., 2010; Varela et al., 2001) can be derived from the conditional estimates provided by DCM. This is because conventional measures can be derived from the transfer functions that are determined uniquely by the parameters of a biophysical DCM. With the developments described in this paper, it is now possible to reproduce conventional metrics of coupling by replacing the conventional model-free (sample) estimator with a model-based (conditional) estimator. Crucially, this can be done in either sensor or source space.
Phase-synchronization is usually used to quantify the amount of nonlinear coupling between channels (e.g., Rosenblum et al., 1996; Tass and Haken, 1996). The phase-synchronization index (Eq. (2)) can be computed from the distribution of phase-differences (Eq. (4)), which is specified by the conditional estimates of a DCM. However, the underlying DCM can be linearized (as in this paper), which provides an interesting perspective on phase-synchronization. Many people (including ourselves; Chawla et al., 2001) have tried to understand how zero-lag phase-synchronization can emerge in nonlinearly coupled neuronal oscillators. However, the linear systems perspective provides a rather trivial explanation: the phase-delays induced by random fluctuations that are passed between reciprocally connected sources cancel. In fact, it is rather hard to generate non-zero-lag phase-synchronization unless one introduces substantial asymmetries in the coupling (see Fig. 1). Whether this is a useful perspective remains to be established, particularly in the context of DCMs that model nonlinear coupling (e.g., Chen et al., 2008).
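A phase-synchronization index of the kind referred to here (the length of the resultant vector of the phase-difference distribution) can be sketched as follows. The phase-difference distributions are synthetic (von Mises and uniform), not ones derived from a fitted DCM:

```python
import numpy as np

rng = np.random.default_rng(0)

def phase_sync_index(dphi):
    """|<exp(i * dphi)>|: 1 for a delta distribution of phase-differences, 0 for uniform."""
    return np.abs(np.mean(np.exp(1j * dphi)))

tight = rng.vonmises(0.0, 20.0, size=10000)       # concentrated near zero lag
uniform = rng.uniform(-np.pi, np.pi, size=10000)  # no preferred phase relation

assert phase_sync_index(tight) > 0.9
assert phase_sync_index(uniform) < 0.1
```

Note that this index, like coherence, is symmetric in the two signals; it carries no information about the direction of coupling.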
It is hoped that these developments may harmonize DCM and conventional time-series analysis. This is meant in the sense that conventional analyses in electrophysiology can now be complemented with conditional estimates of spectral behavior that are informed by the neuronal architecture generating those behaviors. This should allow intuitions about how phase relationships and coherence arise to be tested. The marriage between conventional (linear systems) time-series analysis and DCM is evident in this work at two levels. First, we use a linearization around the fixed point of the system to enable the use of linear systems theory to generate predicted spectral responses. Secondly, the data features predicted are themselves motivated by linear systems theory. However, the estimation of these sample spectra highlights the fundamental difference between the spectral characterizations used in conventional analyses and those furnished by DCM. This difference rests upon the underlying generative model. Our sample spectral data were constructed from time series using a vector autoregressive model. At this point, conventional approaches would stop and report power, coherence, and other metrics of functional connectivity and interpret these quantities directly. However, from the point of view of DCM, this autoregressive process (or spectral estimates derived from Fourier- or wavelet-based techniques) serves as a feature selection step to provide a compact summary of the data in terms of their sample cross-spectra. The desired spectral estimates are those that are conditioned upon a biologically plausible DCM, which best accounts for the sample (conventional) spectra. In short, the difference between conventional and conditional cross-spectra (in sensor space) is that the latter are constrained by a model that allows one to put formal constraints and prior beliefs into the estimation.
Furthermore, there is a unique mapping between the parameters of the underlying model and the conditional spectra provided by DCM. Employing complex-valued data features, as we have shown, becomes especially important when trying to establish spectral asymmetries in reciprocal connections (e.g., between forward and backward message-passing in the brain) and associating these asymmetries with the laminar specificity of forward and backward connections. To address these sorts of issues it will be necessary to examine conditional coherence between different subpopulations (i.e., cortical layers), which is, in principle, possible with DCM. We will pursue this in future work using ECoG recordings in awake-behaving monkeys (Rubehn et al., 2009).
Perhaps the simplest and most important point made by the analyses in this paper is that conventional characterizations of coupling among observed channel data are basically the starting point for Dynamic Causal Modeling. In other words, we are interested in establishing how particular data features like coherence and phase-delay are generated biophysically. Once one has an explicit mapping between the underlying biophysical parameters of a generative model and the predicted behavior in terms of cross-spectral density (and associated functions), the rather complicated relationship between connection strengths and conduction delays and how they manifest in terms of coherence and cross-correlation functions becomes more evident. In this sense, Dynamic Causal Modeling of observed cross-spectra may allow one to further qualify and understand the subtleties of conventional summaries. Perhaps one of the most important (and unforeseen) aspects of the analyses presented here was how channel noise can influence sample covariance and coherence functions in such a qualitative fashion. One of the key advantages of having a generative model is that one can partition observed coherence into those parts that are mediated neuronally and those parts which are not. This may represent one step towards a more quantitative assessment of coherence and phase-delays and how they relate to asymmetries in the strength and conduction delays of underlying neuronal connections.
All the inversion schemes and DCM analyses described in this paper can be implemented using Matlab routines that are available as part of our academic freeware from http://www.fil.ion.ucl.ac.uk/spm/software/spm8/.
This work was funded by the Wellcome Trust. We are indebted to Marcia Bennett for helping prepare this manuscript. We would like to thank Dr. Nicolas Mallet and Dr. Peter J. Magill of the MRC Anatomical Neuropharmacology Unit, University of Oxford, for the use of their data in Sections three and four.