Bayesian models for fMRI data
Methods & models for fMRI data analysis, 06 May 2009

Klaas Enno Stephan

Laboratory for Social and Neural Systems Research
Institute for Empirical Research in Economics
University of Zurich

Functional Imaging Laboratory (FIL)
Wellcome Trust Centre for Neuroimaging
University College London
With many thanks for slides & images to:
FIL Methods group, particularly Guillaume Flandin
The Reverend Thomas Bayes (1702-1761)
Why do I need to learn about Bayesian stats?
Because SPM is getting more and more Bayesian:
• Segmentation & spatial normalisation
• Posterior probability maps (PPMs)
– 1st level: specific spatial priors
– 2nd level: global spatial priors
• Dynamic Causal Modelling (DCM)
• Bayesian Model Selection (BMS)
• EEG: source reconstruction
[Figure: the standard SPM analysis pipeline: image time-series → realignment → smoothing (kernel) → normalisation (template) → general linear model (design matrix) → parameter estimates → statistical parametric map (SPM) → statistical inference via Gaussian field theory (p < 0.05). Bayesian components are highlighted: Bayesian segmentation and normalisation, spatial priors on activation extent, posterior probability maps (PPMs), Dynamic Causal Modelling.]
Problems of classical (frequentist) statistics

p-value: probability of getting the observed data in the effect's absence, i.e. the probability of observing the data y given no effect ($H_0$: $\theta = 0$):

$$p(y \mid H_0)$$

If small, reject the null hypothesis that there is no effect.

Limitations:
• One can never accept the null hypothesis.
• Given enough data, one can always demonstrate a significant effect.
• Correction for multiple comparisons is necessary.

Solution: infer the posterior probability of the effect given the observed data:

$$p(\theta \mid y)$$
Overview of topics
• Bayes' rule
• Bayesian update rules for Gaussian densities
• Bayesian analyses in SPM
– Segmentation & spatial normalisation
– Posterior probability maps (PPMs)
• 1st level: specific spatial priors
• 2nd level: global spatial priors
– Bayesian Model Selection (BMS)
Bayesian statistics

$$\underbrace{p(\theta \mid y)}_{\text{posterior}} \;\propto\; \underbrace{p(y \mid \theta)}_{\text{likelihood (new data)}} \cdot \underbrace{p(\theta)}_{\text{prior (prior knowledge)}}$$

Bayes' theorem allows one to formally incorporate prior knowledge into computing statistical probabilities. Priors can be of different sorts: empirical, principled or shrinkage priors.

The "posterior" probability of the parameters given the data is an optimal combination of prior knowledge and new data, weighted by their relative precision.
Bayes' rule

Given data y and parameters θ, the conditional probabilities are:

$$p(\theta \mid y) = \frac{p(y, \theta)}{p(y)}, \qquad p(y \mid \theta) = \frac{p(y, \theta)}{p(\theta)}$$

Eliminating $p(y, \theta)$ gives Bayes' rule:

$$\underbrace{p(\theta \mid y)}_{\text{Posterior}} = \frac{\overbrace{p(y \mid \theta)}^{\text{Likelihood}} \; \overbrace{p(\theta)}^{\text{Prior}}}{\underbrace{p(y)}_{\text{Evidence}}}$$
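To make the arithmetic concrete, here is a minimal Python sketch of Bayes' rule for a binary parameter (all probability values are made up for illustration):

```python
# Bayes' rule for a binary parameter: posterior = likelihood * prior / evidence.
# All probabilities below are hypothetical illustration values.
p_effect = 0.2                      # prior p(effect present)
p_y_given_effect = 0.8              # likelihood p(y | effect present)
p_y_given_no_effect = 0.1           # likelihood p(y | no effect)

# evidence p(y): marginalise over both states of the parameter
p_y = p_y_given_effect * p_effect + p_y_given_no_effect * (1 - p_effect)

# posterior p(effect | y)
p_effect_given_y = p_y_given_effect * p_effect / p_y
print(p_effect_given_y)             # 0.16 / 0.24 ≈ 0.667
```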
Principles of Bayesian inference

• Formulation of a generative model: likelihood $p(y \mid \theta)$ and prior distribution $p(\theta)$
• Observation of data y
• Update of beliefs based upon observations, given a prior state of knowledge:

$$p(\theta \mid y) \propto p(y \mid \theta)\, p(\theta)$$
Posterior mean & variance of univariate Gaussians

Likelihood & prior:

$$p(y \mid \theta) = N(y;\, \theta, \sigma_e^2), \qquad p(\theta) = N(\theta;\, \mu_p, \sigma_p^2)$$

Posterior: $p(\theta \mid y) = N(\theta;\, \mu, \sigma^2)$, with

$$\frac{1}{\sigma^2} = \frac{1}{\sigma_e^2} + \frac{1}{\sigma_p^2}, \qquad \mu = \sigma^2 \left( \frac{\mu_p}{\sigma_p^2} + \frac{y}{\sigma_e^2} \right)$$

Posterior mean = variance-weighted combination of prior mean and data mean.

[Figure: prior, likelihood and posterior densities over θ; the posterior lies between the prior mean and the data y, closer to whichever is more precise.]
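These update rules are easy to verify numerically. A minimal sketch (the function name and example values are mine, not from the slides):

```python
import numpy as np

def gaussian_posterior(y, sigma_e2, mu_p, sigma_p2):
    """Posterior mean and variance for likelihood p(y|theta) = N(y; theta, sigma_e2)
    and prior p(theta) = N(theta; mu_p, sigma_p2)."""
    # posterior precision = data precision + prior precision
    post_var = 1.0 / (1.0 / sigma_e2 + 1.0 / sigma_p2)
    # posterior mean = variance-weighted combination of prior mean and data
    post_mean = post_var * (y / sigma_e2 + mu_p / sigma_p2)
    return post_mean, post_var

# a precise datum (small sigma_e2) pulls the posterior towards y
print(gaussian_posterior(y=2.0, sigma_e2=0.5, mu_p=0.0, sigma_p2=2.0))
# -> (1.6, 0.4): posterior sits between prior mean 0 and datum 2, closer to the datum
```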
Same thing – but expressed as precision weighting

Likelihood & prior (precision = inverse variance, $\lambda = 1/\sigma^2$):

$$p(y \mid \theta) = N(y;\, \theta, \lambda_e^{-1}), \qquad p(\theta) = N(\theta;\, \mu_p, \lambda_p^{-1})$$

Posterior: $p(\theta \mid y) = N(\theta;\, \mu, \lambda^{-1})$, with

$$\lambda = \lambda_e + \lambda_p, \qquad \mu = \frac{\lambda_e}{\lambda}\, y + \frac{\lambda_p}{\lambda}\, \mu_p$$

Relative precision weighting: the posterior mean weights the data and the prior mean by their relative precisions.

[Figure: prior, likelihood and posterior densities over θ.]
Same thing – but explicit hierarchical perspective

Likelihood & prior:

$$p(y \mid \theta^{(1)}) = N(y;\, \theta^{(1)}, 1/\lambda^{(1)}), \qquad p(\theta^{(1)}) = N(\theta^{(1)};\, \theta^{(2)}, 1/\lambda^{(2)})$$

Posterior: $p(\theta^{(1)} \mid y) = N(\theta^{(1)};\, \mu, 1/\lambda)$, with

$$\lambda = \lambda^{(1)} + \lambda^{(2)}, \qquad \mu = \frac{\lambda^{(1)}}{\lambda}\, y + \frac{\lambda^{(2)}}{\lambda}\, \theta^{(2)}$$

Again a relative precision weighting, but now the prior is written as the second level of a hierarchical model.
Bayesian GLM: univariate case

Univariate linear model: $y = x\theta + e$

Normal densities:

$$p(y \mid \theta) = N(y;\, x\theta, \sigma_e^2), \qquad p(\theta) = N(\theta;\, \mu_p, \sigma_p^2)$$

Posterior: $p(\theta \mid y) = N(\theta;\, \mu_{\theta|y}, \sigma_{\theta|y}^2)$, with

$$\frac{1}{\sigma_{\theta|y}^2} = \frac{x^2}{\sigma_e^2} + \frac{1}{\sigma_p^2}, \qquad \mu_{\theta|y} = \sigma_{\theta|y}^2 \left( \frac{xy}{\sigma_e^2} + \frac{\mu_p}{\sigma_p^2} \right)$$

Again, a relative precision weighting of data and prior.
Bayesian GLM: multivariate case

General Linear Model: $y = X\theta + e$

Normal densities:

$$p(y \mid \theta) = N(y;\, X\theta, C_e), \qquad p(\theta) = N(\theta;\, \eta_p, C_p)$$

Posterior: $p(\theta \mid y) = N(\theta;\, \eta_{\theta|y}, C_{\theta|y})$, with

$$C_{\theta|y} = \left( X^T C_e^{-1} X + C_p^{-1} \right)^{-1}, \qquad \eta_{\theta|y} = C_{\theta|y} \left( X^T C_e^{-1} y + C_p^{-1} \eta_p \right)$$

One step if $C_e$ is known; otherwise iterative estimation with EM.
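A small numpy sketch of this posterior computation for the case where $C_e$ is known (the one-step case on the slide); the design and noise values are toy numbers:

```python
import numpy as np

def bayesian_glm_posterior(y, X, C_e, eta_p, C_p):
    """Posterior mean and covariance for y = X theta + e, e ~ N(0, C_e),
    with prior p(theta) = N(eta_p, C_p)."""
    Ce_inv = np.linalg.inv(C_e)
    Cp_inv = np.linalg.inv(C_p)
    # posterior covariance: (X' Ce^-1 X + Cp^-1)^-1
    C_post = np.linalg.inv(X.T @ Ce_inv @ X + Cp_inv)
    # posterior mean: C_post (X' Ce^-1 y + Cp^-1 eta_p)
    eta_post = C_post @ (X.T @ Ce_inv @ y + Cp_inv @ eta_p)
    return eta_post, C_post

# toy example: 100 scans, 2 regressors, i.i.d. noise, zero-mean prior
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 2))
y = X @ np.array([1.0, -0.5]) + 0.3 * rng.standard_normal(100)
eta, C = bayesian_glm_posterior(y, X, 0.09 * np.eye(100),
                                np.zeros(2), 10.0 * np.eye(2))
print(eta)   # close to the true effects [1.0, -0.5]
```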
Bayesian (fixed effects) group analysis

Likelihood distributions from different subjects are independent → one can use the posterior from one subject as the prior for the next:

$$p(\theta \mid y_1) \propto p(y_1 \mid \theta)\, p(\theta)$$
$$p(\theta \mid y_1, y_2) \propto p(y_2 \mid \theta)\, p(\theta \mid y_1)$$
$$\vdots$$
$$p(\theta \mid y_1, \ldots, y_N) \propto p(y_N \mid \theta) \cdots p(y_1 \mid \theta)\, p(\theta)$$

"Today's posterior is tomorrow's prior."

Under Gaussian assumptions this is easy to compute: the group posterior covariance and group posterior mean follow from the individual posterior covariances and means:

$$C_{\theta|y_1,\ldots,y_N}^{-1} = \sum_{i=1}^{N} C_{\theta|y_i}^{-1} - (N-1)\, C_\theta^{-1}$$

$$\eta_{\theta|y_1,\ldots,y_N} = C_{\theta|y_1,\ldots,y_N} \left[ \sum_{i=1}^{N} C_{\theta|y_i}^{-1}\, \eta_{\theta|y_i} - (N-1)\, C_\theta^{-1}\, \eta_\theta \right]$$
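A sketch of "today's posterior is tomorrow's prior" under Gaussian assumptions, treating each subject's estimate as a Gaussian observation of θ (all means and covariances are toy values):

```python
import numpy as np

def update(eta_prior, C_prior, eta_like, C_like):
    """Precision-weighted combination of a Gaussian prior with a
    Gaussian likelihood summary (mean, covariance) for theta."""
    C_post = np.linalg.inv(np.linalg.inv(C_prior) + np.linalg.inv(C_like))
    eta_post = C_post @ (np.linalg.solve(C_prior, eta_prior) +
                         np.linalg.solve(C_like, eta_like))
    return eta_post, C_post

# chain over subjects: the posterior after subject i is the prior for subject i+1
eta, C = np.zeros(2), 10.0 * np.eye(2)                 # initial prior
subject_estimates = [(np.array([0.9, -0.4]), 0.2 * np.eye(2)),
                     (np.array([1.1, -0.6]), 0.3 * np.eye(2))]
for eta_i, C_i in subject_estimates:
    eta, C = update(eta, C, eta_i, C_i)
print(eta, np.diag(C))   # group estimate tightens with each subject
```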
Bayesian analyses in SPM5
• Segmentation & spatial normalisation
• Posterior probability maps (PPMs)
– 1st level: specific spatial priors
– 2nd level: global spatial priors
• Dynamic Causal Modelling (DCM)
• Bayesian Model Selection (BMS)
• EEG: source reconstruction
Spatial normalisation: Bayesian regularisation

Deformations consist of a linear combination of smooth basis functions: the lowest frequencies of a 3D discrete cosine transform.
Find maximum a posteriori (MAP) estimates of the deformation parameters, i.e. simultaneously minimise
– the squared difference between template and source image (the likelihood term), and
– the squared distance between parameters and their expected values (the regularisation, i.e. prior, term).

$$\text{MAP:} \quad \log p(\theta \mid y) = \log p(y \mid \theta) + \log p(\theta) - \log p(y)$$

(The last term is constant in θ and can be ignored when maximising.)
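A schematic sketch of this MAP objective on a one-parameter toy problem; `neg_log_posterior`, the warp function, and all numbers are illustrative placeholders, not SPM's actual implementation:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_posterior(theta, warp, template, theta_prior, lam):
    """Negative log posterior = squared image difference (likelihood term)
    + weighted squared distance of parameters from their priors
    (regularisation term)."""
    data_term = np.sum((warp(theta) - template) ** 2)
    reg_term = lam * np.sum((theta - theta_prior) ** 2)
    return data_term + reg_term     # minimising this gives the MAP estimate

# toy 1D "images": the source is the template scaled by 0.8,
# and the single "deformation" parameter rescales the source
template = np.array([0.0, 1.0, 2.0, 3.0])
source = 0.8 * template
warp = lambda th: th[0] * source

res = minimize(neg_log_posterior, x0=[1.0],
               args=(warp, template, np.array([1.0]), 0.1))
print(res.x)  # ~1.247: near the pure-likelihood optimum 1.25, pulled towards the prior mean 1.0
```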
Bayesian segmentation with empirical priors

• Goal: for each voxel, compute the probability that it belongs to a particular tissue type, given its intensity.
• Likelihood model: intensities are modelled by a mixture of Gaussian distributions representing different tissue classes (e.g. GM, WM, CSF).
• Priors are obtained from tissue probability maps (segmented images of 151 subjects).

$$p(\text{tissue} \mid \text{intensity}) \propto p(\text{intensity} \mid \text{tissue}) \cdot p(\text{tissue})$$

Ashburner & Friston 2005, NeuroImage
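The voxel-wise classification rule is a direct application of Bayes' rule; here is a toy sketch with made-up class means, standard deviations and priors (SPM's actual priors come from the tissue probability maps):

```python
import numpy as np
from scipy.stats import norm

def tissue_posterior(intensity, means, sds, priors):
    """p(tissue | intensity) ∝ p(intensity | tissue) p(tissue),
    with a Gaussian intensity model per tissue class."""
    lik = norm.pdf(intensity, loc=means, scale=sds)   # p(intensity | tissue)
    post = lik * priors                               # unnormalised posterior
    return post / post.sum()                          # normalise over classes

# hypothetical GM / WM / CSF intensity models at one voxel
means = np.array([80.0, 110.0, 30.0])
sds = np.array([10.0, 8.0, 12.0])
priors = np.array([0.5, 0.3, 0.2])     # e.g. read off a tissue probability map
print(tissue_posterior(95.0, means, sds, priors))
```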
Unified segmentation & normalisation
• Circular relationship between segmentation & normalisation:
– Knowing which tissue type a voxel belongs to helps normalisation.
– Knowing where a voxel is (in standard space) helps segmentation.
• Build a joint generative model:
– model how voxel intensities result from a mixture of tissue type distributions
– model how tissue types of one brain have to be spatially deformed to match those of another brain
• Using a priori knowledge about the parameters: adopt Bayesian approach and maximise the posterior probability
Ashburner & Friston 2005, NeuroImage
Bayesian fMRI analyses

General Linear Model: $y = X\theta + e$ with $e \sim N(0, C_e)$

What are the priors?
• In "classical" SPM, no priors (= "flat" priors)
• Full Bayes: priors are predefined on a principled or empirical basis
• Empirical Bayes: priors are estimated from the data, assuming a hierarchical generative model (→ PPMs in SPM)
– Parameters of one level = priors for the distribution of parameters at the lower level
– Parameters and hyperparameters at each level can be estimated using EM
Hierarchical models and Empirical Bayes

Hierarchical model:

$$y = X^{(1)} \theta^{(1)} + e^{(1)}$$
$$\theta^{(1)} = X^{(2)} \theta^{(2)} + e^{(2)}$$
$$\vdots$$
$$\theta^{(n-1)} = X^{(n)} \theta^{(n)} + e^{(n)}$$

Equivalent single-level model (obtained by substituting each level into the one above):

$$y = e^{(1)} + X^{(1)} e^{(2)} + \ldots + X^{(1)} \cdots X^{(n-1)} e^{(n)} + X^{(1)} \cdots X^{(n)} \theta^{(n)}$$

Parametric Empirical Bayes (PEB) on the hierarchical model corresponds to Restricted Maximum Likelihood (ReML) on the single-level model: EM = PEB = ReML.
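The equivalence of the two forms is easy to check numerically for two levels, where substitution gives $y = X^{(1)}X^{(2)}\theta^{(2)} + X^{(1)}e^{(2)} + e^{(1)}$ and hence $\mathrm{Cov}(y \mid \theta^{(2)}) = C_1 + X^{(1)} C_2 X^{(1)T}$. A toy simulation (all matrices are arbitrary illustration values):

```python
import numpy as np
rng = np.random.default_rng(1)

# two-level model: y = X1 th1 + e1,  th1 = X2 th2 + e2
X1 = rng.standard_normal((6, 3))
X2 = rng.standard_normal((3, 1))
C1 = 0.5 * np.eye(6)                  # Cov(e1)
C2 = 2.0 * np.eye(3)                  # Cov(e2)
th2 = np.array([1.5])

# single-level form: mean X1 X2 th2, covariance C1 + X1 C2 X1'
mean_single = X1 @ X2 @ th2
C_single = C1 + X1 @ C2 @ X1.T

# compare against simulated two-level data
ys = np.array([X1 @ (X2 @ th2 + np.sqrt(2.0) * rng.standard_normal(3))
               + np.sqrt(0.5) * rng.standard_normal(6)
               for _ in range(20000)])
print(np.abs(ys.mean(0) - mean_single).max())    # small sampling error
print(np.abs(np.cov(ys.T) - C_single).max())     # small sampling error
```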
Posterior Probability Maps (PPMs)

Posterior distribution $p(\beta \mid y)$: probability of the effect given the data.
– mean: size of effect
– precision: variability

Posterior probability map $p(\beta > \gamma \mid y)$: images of the probability (confidence) that an activation exceeds some specified threshold γ, given the data y.

Two thresholds:
• activation threshold γ: percentage of whole brain mean signal (physiologically relevant size of effect)
• probability that voxels must exceed to be displayed (e.g. 95%)
PPMs vs. SPMs

Bayesian test (PPM): threshold the posterior $p(\beta \mid y) \propto p(y \mid \beta)\, p(\beta)$ (posterior ∝ likelihood · prior) and display $p(\beta > \gamma \mid y)$.

Classical t-test (SPM): compute a statistic $t = f(y)$ and threshold its null distribution, $p(t > u \mid \beta = 0)$.
2nd level PPMs with global priors

1st level (GLM): $y = X\beta^{(1)} + \varepsilon^{(1)}$
2nd level (shrinkage prior): $\beta^{(1)} = 0 + \varepsilon^{(2)}$, i.e. $p(\beta) = N(0, C_\beta)$

In the absence of evidence to the contrary, parameters will shrink to zero.

Basic idea: use the variance of β over voxels as the prior variance of β at any particular voxel.

2nd level: 0 is the average effect over voxels, $\varepsilon^{(2)}$ is the voxel-to-voxel variation. $\beta^{(1)}$ reflects regionally specific effects → assume that it sums to zero over all voxels → shrinkage prior at the second level → the variance of this prior is implicitly estimated by estimating $\varepsilon^{(2)}$.

[Figure: shrinkage priors illustrated for four cases: small & variable effect, large & variable effect, small but clear effect, large & clear effect.]
2nd level PPMs with global priors (continued)

1st level (GLM): $y = X\beta + \varepsilon^{(1)}$
2nd level (shrinkage prior): $p(\beta) = N(0, C_\beta)$

Once $C_\varepsilon$ and $C_\beta$ are known, we can apply the usual rule for computing the posterior mean & covariance:

$$C_{\beta|y} = \left( X^T C_\varepsilon^{-1} X + C_\beta^{-1} \right)^{-1}, \qquad m_{\beta|y} = C_{\beta|y}\, X^T C_\varepsilon^{-1} y$$

We are looking for the same effect over multiple voxels → pooled estimation of $C_\beta$ over voxels: the data term is voxel-specific, the prior covariance is a global pooled estimate.

Friston & Penny 2003, NeuroImage
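A toy sketch of building a PPM value for one voxel from these formulas, assuming $C_\varepsilon$ and $C_\beta$ are already known (in SPM they are estimated by EM; all numbers below are illustrative):

```python
import numpy as np
from scipy.stats import norm

def ppm_value(y, X, C_eps, C_beta, gamma, contrast):
    """Posterior probability that a contrast of effects exceeds gamma,
    under the zero-mean (shrinkage) prior p(beta) = N(0, C_beta)."""
    Ce_inv = np.linalg.inv(C_eps)
    C_post = np.linalg.inv(X.T @ Ce_inv @ X + np.linalg.inv(C_beta))
    m_post = C_post @ (X.T @ Ce_inv @ y)           # prior mean is zero
    c_mean = contrast @ m_post                     # posterior mean of contrast
    c_sd = np.sqrt(contrast @ C_post @ contrast)   # posterior sd of contrast
    return norm.sf(gamma, loc=c_mean, scale=c_sd)  # p(c' beta > gamma | y)

# toy voxel: 50 scans, one task regressor plus a constant
rng = np.random.default_rng(2)
X = np.column_stack([rng.standard_normal(50), np.ones(50)])
y = X @ np.array([0.8, 0.1]) + 0.5 * rng.standard_normal(50)
print(ppm_value(y, X, 0.25 * np.eye(50), 4.0 * np.eye(2),
                gamma=0.5, contrast=np.array([1.0, 0.0])))
```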
PPMs and multiple comparisons
No need to correct for multiple comparisons:
Thresholding a PPM at 95% confidence: in every voxel displayed, the posterior probability of an activation exceeding γ is at least 95%.

At most, 5% of the voxels identified could have activations less than γ.

Independent of the search volume, thresholding a PPM thus puts an upper bound on the false discovery rate.
[Figure: side-by-side SPM results windows for the same PET dataset and design matrix. Left: PPM, height threshold P = 0.95, effect-size threshold γ = 2.06, extent threshold k = 0 voxels. Right: classical SPM{T} map, height threshold T = 5.50, extent threshold k = 0 voxels.]
PPMs: show activations greater than a given size.
SPMs: show voxels with non-zero activations.
PPMs: pros and cons

Advantages:
• One can infer that a cause did not elicit a response.
• Inference is independent of search volume.
• SPMs conflate effect-size and effect-variability; PPMs do not.

Disadvantages:
• Estimating priors over voxels is computationally demanding.
• Practical benefits are yet to be established.
• Thresholds other than zero require justification.
1st level PPMs with local spatial priors

• Neighbouring voxels are often not independent.
• Spatial dependencies vary across the brain.
• But spatial smoothing in SPM is uniform.
• Matched filter theorem: SNR is maximal when smoothing the data with a kernel that matches the smoothness of the true signal.
• Basic idea: estimate regional spatial dependencies from the data and use this as a prior in a PPM → regionally specific smoothing → markedly increased sensitivity.

[Figure: contrast map and AR(1) map.]

Penny et al. 2005, NeuroImage
The generative spatio-temporal model

$$Y = XW + E$$

with E following a voxel-wise AR(P) noise process (AR coefficient matrix A), and with the following priors (hyperparameters $q_1, q_2$, $u_1, u_2$, $r_1, r_2$):

$$p(\alpha) = \prod_{k=1}^{K} \mathrm{Ga}(\alpha_k;\, q_1, q_2), \qquad p(W \mid \alpha) = \prod_{k=1}^{K} N\!\left(w_k^T;\, 0,\; \alpha_k^{-1} (S^T S)^{-1}\right)$$

$$p(\beta) = \prod_{p=1}^{P} \mathrm{Ga}(\beta_p;\, r_1, r_2), \qquad p(A \mid \beta) = \prod_{p=1}^{P} N\!\left(a_p^T;\, 0,\; \beta_p^{-1} (S^T S)^{-1}\right)$$

$$p(\lambda) = \prod_{n=1}^{N} \mathrm{Ga}(\lambda_n;\, u_1, u_2)$$

α = spatial precision of parameters; λ = observation noise precision; β = precision of AR coefficients.

Penny et al. 2005, NeuroImage
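A toy forward simulation of this generative model with a first-order AR noise process (dimensions and parameter values are arbitrary; the full model also draws W and A from their spatial priors):

```python
import numpy as np
rng = np.random.default_rng(3)

T, N, K = 200, 10, 2                     # scans, voxels, regressors
X = rng.standard_normal((T, K))          # design matrix
W = rng.standard_normal((K, N))          # regression coefficients per voxel
a = rng.uniform(0.1, 0.5, size=N)        # AR(1) coefficient per voxel
lam = 4.0                                # observation noise precision

E = np.zeros((T, N))
for t in range(1, T):                    # AR(1) errors: e_t = a * e_{t-1} + z_t
    E[t] = a * E[t - 1] + rng.standard_normal(N) / np.sqrt(lam)

Y = X @ W + E                            # the spatio-temporal data model Y = XW + E
```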
The spatial prior

Prior for the k-th parameter map:

$$p(w_k^T) = N\!\left(w_k^T;\, 0,\; \alpha_k^{-1} (S^T S)^{-1}\right)$$

The zero mean makes this a shrinkage prior; S is the spatial kernel matrix; the spatial precision $\alpha_k$ determines the amount of smoothness.

Different choices are possible for the spatial kernel matrix S. Currently used in SPM: Laplacian prior (same as in LORETA).
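A sketch of a graph-Laplacian kernel on a 1D chain of voxels, as a simplified stand-in for the 3D voxel neighbourhood used in practice (note the pure Laplacian is singular, so real implementations work with a suitably regularised form):

```python
import numpy as np

def laplacian_kernel_1d(n):
    """Graph Laplacian S for n voxels on a 1D chain: S[i, i] = number of
    neighbours of voxel i, S[i, j] = -1 for neighbouring voxels i, j."""
    S = np.zeros((n, n))
    for i in range(n):
        S[i, i] = 2.0 if 0 < i < n - 1 else 1.0
        if i > 0:
            S[i, i - 1] = -1.0
        if i < n - 1:
            S[i, i + 1] = -1.0
    return S

S = laplacian_kernel_1d(5)
w = np.array([0.0, 1.0, 0.0, -1.0, 0.0])   # a non-smooth parameter map
# the prior energy alpha * w' S'S w penalises differences between
# neighbouring voxels, encouraging spatially smooth parameter maps
print(w @ S.T @ S @ w)
```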
Example: application to event-related fMRI data

[Figure: contrast maps for familiar vs. non-familiar faces, obtained with (i) smoothing, (ii) a global spatial prior, (iii) a Laplacian prior.]
Bayesian model selection (BMS)

Given competing hypotheses on the structure & functional mechanisms of a system, which model is the best?

• For which model m does p(y|m) become maximal?
• Which model represents the best balance between model fit and model complexity? (Pitt & Myung 2002, TICS)

Model evidence:

$$p(y \mid m) = \int p(y \mid \theta, m)\, p(\theta \mid m)\, d\theta$$

Various approximations, e.g.:
– negative free energy
– AIC
– BIC

Penny et al. (2004) NeuroImage
Bayesian model selection (BMS)

Bayes' rule at the level of model parameters:

$$p(\theta \mid y, m) = \frac{p(y \mid \theta, m)\, p(\theta \mid m)}{p(y \mid m)}$$

The model evidence p(y|m)
• accounts for both accuracy and complexity of the model
• allows for inference about the structure (generalisability) of the model

Model comparison via the Bayes factor:

$$BF = \frac{p(y \mid m_1)}{p(y \mid m_2)}$$
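Because model evidences are tiny numbers, Bayes factors are best computed from log evidences (e.g. free-energy approximations); the values below are hypothetical:

```python
import numpy as np

def bayes_factor(log_ev_1, log_ev_2):
    """BF_12 = p(y|m1) / p(y|m2), computed in log space to avoid
    numerical underflow of the raw evidences."""
    return np.exp(log_ev_1 - log_ev_2)

# hypothetical log evidences for two models
print(bayes_factor(-1200.3, -1208.3))   # exp(8) ≈ 2981: strong evidence for m1
```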
Example: BMS of dynamic causal models

Questions addressed by comparing models M1–M4:
• modulation of backward or forward connection?
• additional driving effect of attention on PPC?
• bilinear or nonlinear modulation of the forward connection?

[Figure: four DCMs (M1–M4) of the attention-to-motion network, each with stim driving V1, connections among V1, V5 and PPC, and attention entering the model at different points.]

Results: M2 better than M1 (BF = 2966); M3 better than M2 (BF = 12); M4 better than M3 (BF = 23).

Stephan et al. (2008) NeuroImage