
FEAT 2 - Advanced Single-Subject Statistics

• Temporal autocorrelation
• Contrast estimability and orthogonality
• Parametric designs and trends
• Factorial designs and interactions
• Contrast masking
• HRF variability
• Perfusion FMRI

FEAT Schematic


GLM Estimation

• Estimates of the regression parameters are obtained using Least Squares

Parameter estimates (PEs) reflect how much of the data is explained by each regressor

Y = Xβ + ε        β̂ = pinv(X) Y

jacklancaster
Sticky Note
remember regressors are also called explanatory variables or EVs
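A minimal numpy sketch of the estimation step above, with an invented boxcar design and invented noise, showing that pinv(X) recovers the PEs:

```python
import numpy as np

rng = np.random.default_rng(0)

n_timepoints = 200
# Toy design matrix X: one boxcar EV plus a constant (baseline) column
boxcar = (np.arange(n_timepoints) // 20) % 2          # alternating off/on blocks
X = np.column_stack([boxcar, np.ones(n_timepoints)])

true_betas = np.array([3.0, 100.0])                   # made-up effect size and baseline
Y = X @ true_betas + rng.normal(0, 1, n_timepoints)   # Y = X*beta + noise

# Parameter estimates (PEs): beta_hat = pinv(X) Y
beta_hat = np.linalg.pinv(X) @ Y
print(beta_hat)  # should be close to [3.0, 100.0]
```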

[Plot: power vs. frequency for coloured/autocorrelated noise and for white/independent noise]

Non-independent / Autocorrelated / Coloured FMRI Noise

• Even after high-pass filtering, FMRI noise has extra power at low frequencies (positive autocorrelation or temporal smoothness)
• Uncorrected, this causes:
  - biased stats (increased false positives)
  - decreased sensitivity

jacklancaster
Sticky Note
needed to ensure that the standard deviation for each PE and COPE is calculated correctly

FMRIB’s Improved Linear Modelling (FILM)

• FILM is used to fit the GLM voxel-wise in FEAT
• Deals with the autocorrelation locally and uses prewhitening

FILM estimates autocorrelation by looking at the residuals of the GLM fit:

residuals = Y − Xβ̂

Y = Xβ + ε

jacklancaster
Sticky Note
locally means that autocorrelation is assessed for each voxel


jacklancaster
Sticky Note
i.e. determine a smooth model of the residual power spectrum (step 2 below)


FMRIB’s Improved Linear Modelling (FILM)

1) Fit the GLM and estimate the autocorrelation on the residuals
2) Spatially and spectrally smooth the data
3) Construct prewhitening filter to “undo” autocorrelation
4) Apply filter to data and design matrix and refit

[Plots: power vs. frequency of the residuals at each step]

jacklancaster
Sticky Note
This filtering alters the time course of the fMRI data and the model EVs need to be adjusted to match.
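The sketch below illustrates the prewhitening idea on a single made-up voxel with a simple AR(1) noise model; FILM's actual estimator additionally smooths the autocorrelation estimates spatially and spectrally (step 2), which is omitted here:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300

# Toy design: one boxcar EV plus an intercept
X = np.column_stack([((np.arange(n) // 25) % 2).astype(float), np.ones(n)])

# Simulate AR(1) ("coloured") noise and data
phi_true = 0.4
noise = np.zeros(n)
for t in range(1, n):
    noise[t] = phi_true * noise[t - 1] + rng.normal()
Y = X @ np.array([2.0, 50.0]) + noise

# 1) Fit the GLM and estimate the autocorrelation from the residuals
beta = np.linalg.pinv(X) @ Y
res = Y - X @ beta
phi = np.sum(res[1:] * res[:-1]) / np.sum(res[:-1] ** 2)   # lag-1 autocorrelation

# 3) Construct an AR(1) prewhitening filter: v_t - phi * v_(t-1)
def whiten(v, phi):
    w = v.astype(float).copy()
    w[1:] = v[1:] - phi * v[:-1]
    return w

# 4) Apply the filter to the data AND the design matrix, then refit
Yw = whiten(Y, phi)
Xw = np.column_stack([whiten(X[:, j], phi) for j in range(X.shape[1])])
beta_white = np.linalg.pinv(Xw) @ Yw
print(phi, beta_white)   # PEs re-estimated on the whitened data
```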

Choosing High-Pass Filter Cut-off

• High-pass filter is used to remove worst of the low frequency noise so that the autocorrelation modelling works well

• A cut-off of 100 secs generally satisfies this requirement (counter-productive to use a shorter cut-off)


• FEAT GUI shows the effects of the high-pass filter on the EVs

• Need to ensure cut-off > period of oscillation of slowest varying signal


Choosing High-Pass Filter Cut-off

• Example: Boxcar EV with period 100s
  - Cut-off = 100s: negligible effect on the EV, so use a cut-off of 100s
• Example: Boxcar EV with period 250s
  - Cut-off = 100s: substantial effect on the EV, so a longer cut-off is needed
  - Cut-off = 250s: negligible effect on the EV, so use a cut-off of 250s

jacklancaster
Sticky Note
remember the high-pass filter is used to remove things like scanner drift, but we want to preserve the expected BOLD response modelled by the EVs
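To see why a 100 s cut-off is fine for a 100 s boxcar but not for a 250 s one, here is a rough illustration using a crude FFT-based high-pass (this is not FEAT's own filter, and the TR and run length are made up):

```python
import numpy as np

TR = 1.0                      # assumed repetition time in seconds
n = 500
t = np.arange(n) * TR

def boxcar(period_s):
    """Boxcar EV with the given period (half on, half off)."""
    return ((t % period_s) < period_s / 2).astype(float)

def highpass(signal, cutoff_s):
    """Crude FFT high-pass: zero all frequencies below 1/cutoff (illustration only)."""
    f = np.fft.rfftfreq(n, d=TR)
    spec = np.fft.rfft(signal - signal.mean())
    spec[(f > 0) & (f < 1.0 / cutoff_s)] = 0
    return np.fft.irfft(spec, n)

for period in (100, 250):
    ev = boxcar(period)
    filtered = highpass(ev, cutoff_s=100)
    # Fraction of EV variance surviving a 100 s cut-off
    kept = np.var(filtered) / np.var(ev - ev.mean())
    print(f"period {period:>3d}s: {kept:.2f} of variance kept")
```

The 100 s boxcar keeps essentially all of its variance, while the 250 s boxcar loses its fundamental frequency and most of its variance, which is why a longer cut-off is needed in the second example.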

Orthogonality of EVs

• It is the component of an EV that is orthogonal to the other EVs that is used to obtain the PE for that EV
• So some orthogonality between EVs is needed to provide sufficient information to divide up the explained signal between them


jacklancaster
Sticky Note
in the orthogonal version, EV2 is shifted by 1/4 of a period relative to EV1
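A quick numeric check of the note above, with an invented boxcar EV: shifting EV2 by a quarter of a period makes it orthogonal to EV1, whereas an identical EV2 is completely redundant:

```python
import numpy as np

n, period = 400, 40
t = np.arange(n)
ev1 = ((t % period) < period / 2).astype(float)   # boxcar EV1
ev2_same = ev1.copy()                             # identical EV2: fully correlated
ev2_shift = np.roll(ev1, period // 4)             # EV2 shifted by 1/4 period

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

print(corr(ev1, ev2_same))    # 1.0  -> no unique information, PEs cannot be separated
print(corr(ev1, ev2_shift))   # ~0.0 -> (near) orthogonal, PEs well determined
```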

Design Matrix Rank Deficiency

• A design matrix is rank deficient when a linear combination of EVs is exactly zero


jacklancaster
Sticky Note
EV1 − EV2 = 0: positively correlated
jacklancaster
Sticky Note
EV1 + EV2 = 0: negatively correlated
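A small numpy check for exact and near rank deficiency on invented EVs, using the matrix rank and the condition number:

```python
import numpy as np

n = 100
ev1 = np.sin(np.linspace(0, 10, n))
ev2 = -ev1                                                     # EV1 + EV2 = 0: exactly rank deficient
ev3 = -ev1 + 0.01 * np.random.default_rng(2).normal(size=n)   # nearly rank deficient

X_bad = np.column_stack([ev1, ev2])
X_nearly = np.column_stack([ev1, ev3])

print(np.linalg.matrix_rank(X_bad))      # 1 (< number of EVs) -> rank deficient
print(np.linalg.matrix_rank(X_nearly))   # 2, but...
print(np.linalg.cond(X_nearly))          # huge condition number -> close to rank deficient
```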

Close to Rank Deficient Design Matrices

• Good News: the statistics always take care of being close to rank deficient
• Bad News: the ignorant experimenter may have found no significant effect, because:
  a) the effect size was too small
  b) being close to rank deficient meant finding an effect would have required a HUGE effect size

Design Efficiency


jacklancaster
Sticky Note
We actually specify the z-score threshold in FEAT

[FEAT design efficiency report: the % change required for each contrast to pass the specified z threshold, the EV correlation matrix, its eigenvalues, and the settings for the design efficiency calculations]

jacklancaster
Sticky Note
This is the diagonalised correlation matrix. The raw correlation matrix for the four EVs is a symmetric 4x4 matrix with nonzero off-diagonal entries.
jacklancaster
Sticky Note
the eigenvalues relate to the correlation structure after removing shared covariance; principal component analysis does this


When do we have a problem?

• Depends on SNR, and crucially the contrasts we are interested in:

• [1 -1] ?? - no chance: 3.3%

• [1 1] ?? - no problems: 0.84%

• [1 0] or [0 1] ?? - no chance: 3.3%

Effect size required

jacklancaster
Sticky Note
Seek a design with a similar effect-size requirement for all contrasts.
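The scaling behind these numbers can be sketched as follows: the standard deviation of a contrast estimate grows with c(XᵀX)⁻¹cᵀ, so for two highly correlated EVs the [1 -1] and [1 0] contrasts need a much larger effect than [1 1]. The noise level and z threshold below are made up, so only the relative sizes matter:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200

# Two highly (positively) correlated EVs -- close to rank deficient
base = rng.normal(size=n)
ev1 = base + 0.1 * rng.normal(size=n)
ev2 = base + 0.1 * rng.normal(size=n)
X = np.column_stack([ev1, ev2])

sigma = 1.0        # assumed noise standard deviation
z_required = 5.0   # assumed z threshold to pass

# std of a contrast estimate c'beta_hat is sigma * sqrt(c (X'X)^-1 c')
XtX_inv = np.linalg.inv(X.T @ X)
for c in ([1, -1], [1, 1], [1, 0], [0, 1]):
    c = np.asarray(c, float)
    contrast_std = sigma * np.sqrt(c @ XtX_inv @ c)
    effect_needed = z_required * contrast_std   # effect size needed to reach the threshold
    print(c, round(effect_needed, 3))
# [1 -1] and [1 0] need a much larger effect than [1 1] when the EVs are highly correlated
```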

Orthogonalising EVs in FEAT

• EV1 is a regressor of interest
• EV2 is a confound regressor
• Orthogonalising EV2 with respect to EV1 removes from EV2 that part which is correlated with EV1

jacklancaster
Sticky Note
From the Stats Tab select "Full model setup" for this.


jacklancaster
Sticky Note
A confound might be motion or other things that were not part of the experimental design. Do not do this with an EV that was part of the experiment's design. The optimal design is one in which the main EVs are already orthogonal.
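A sketch of what this orthogonalisation does, using an invented EV of interest and an invented confound: the part of EV2 explained by EV1 is regressed out (both EVs are demeaned here so that correlation and orthogonality coincide):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
ev1 = ((np.arange(n) // 20) % 2).astype(float)   # EV of interest (boxcar)
ev1 -= ev1.mean()
ev2 = 0.6 * ev1 + rng.normal(0, 1, n)            # confound, partly correlated with EV1
ev2 -= ev2.mean()

# Orthogonalise EV2 with respect to EV1: remove the component of EV2 explained by EV1
k = (ev1 @ ev2) / (ev1 @ ev1)
ev2_orth = ev2 - k * ev1

print(round(np.corrcoef(ev1, ev2)[0, 1], 2))        # clearly non-zero
print(round(np.corrcoef(ev1, ev2_orth)[0, 1], 10))  # ~0: EV2 no longer shares signal with EV1
```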


Understanding Orthogonalisation

• Recall: it is the orthogonal components of the EVs that are used to estimate the PEs
• Consider an example where we fit a GLM with 2 EVs:

Y = X₁β₁ + X₂β₂ + ε

• We then orthogonalise EV1 wrt EV2 and refit:

Y = X₁′B₁ + X₂B₂ + ε

• How do the PEs change?

A) β₁ = B₁, β₂ = B₂
B) β₁ = B₁, β₂ ≠ B₂
C) β₁ ≠ B₁, β₂ = B₂
D) β₁ ≠ B₁, β₂ ≠ B₂

jacklancaster
Sticky Note
Before orthogonalisation we can break X1 into two orthogonal components as follows: X1 = X1' + k*X2. After orthogonalisation, X1 -> X1', which no longer contains the k*X2 part. Since X1' was the component of X1 orthogonal to X2, we see that B1 does not change. However, since X1' is different from X1, we see that B2 changes.
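A small simulation of the quiz above (all EVs and effect sizes invented): after orthogonalising X1 with respect to X2 and refitting, B1 matches the original β1 while B2 absorbs the shared part and changes:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 300
x2 = rng.normal(size=n)
x1 = 0.8 * x2 + rng.normal(size=n)           # X1 shares variance with X2
Y = 2.0 * x1 + 3.0 * x2 + rng.normal(size=n)

def fit(*cols):
    X = np.column_stack(cols)
    return np.linalg.pinv(X) @ Y

beta = fit(x1, x2)                           # original fit: [beta1, beta2]

# Orthogonalise X1 wrt X2:  X1 = X1' + k*X2, keep X1'
k = (x2 @ x1) / (x2 @ x2)
x1_orth = x1 - k * x2
B = fit(x1_orth, x2)                         # refit: [B1, B2]

print(beta)   # ~[2, 3]
print(B)      # B1 ~ beta1 (unchanged), B2 ~ beta2 + k*beta1 (changed)
```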

Using Motion Parameters as Confound EVs

• Perfect motion correction can still leave motion artefacts ("spin history")

• Can use ICA (MELODIC) cleanup to remove artefacts
• Or can tell FEAT to use motion parameters as confound EVs

Without motion parameter EVs

With motion parameter EVs

jacklancaster
Sticky Note
if motion is correlated with the task(s), then the motion EVs need to be orthogonalised to remove the part related to the task(s)

Modelling Different Levels of Stimulation

• Simple 2-strength model, e.g. low and high pain

• Pre-supposes relationship between stimulation strength and response

• Can only ask the question about the pre-supposed relationship

[Design: blocks of low pain, “rest”, and high pain, with EV heights 1 and 2]

jacklancaster
Sticky Note
Only one contrast is possible with this type of design matrix

Modelling Different Levels of Stimulation

• Now, no pre-supposition about relationship between stimulation strength and response

• Same design as for two completely different stimuli

• t-contrast [1 -1] : "is the response to high pain greater than that to low pain ?"

• t-contrast [-1 1] : "is the response to low pain greater than that to high pain ?"

[Design matrix: separate EVs for low pain and high pain]

Parametric Variation - Linear Trends

• Is there a linear trend between the BOLD response and some task variable? For example, the motor system:

[Plot: BOLD signal effect size vs. force of hand squeeze (light, medium, hard)]


Parametric Variation - Linear Trends

• A three-strength experiment
• Is there a linear trend between the BOLD response and some task variable?
• t-contrast [-1 0 1] : linear trend

[Plots: BOLD effect size vs. force of hand for two example responses - the slope is the same for both]

jacklancaster
Sticky Note
and β relates to the slope

Parametric Variation - Linear Trends

• A four-strength experiment

• t-contrast [-3 -1 1 3] : Positive linear trend

jacklancaster
Sticky Note
note that the contrast had to change by 2 between each strength level, since a change of 2 is needed to go from -1 to +1 and we want to preserve the linear trend
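A sketch of testing a positive linear trend with this contrast on an invented four-strength design, using the standard GLM t-statistic:

```python
import numpy as np

rng = np.random.default_rng(6)
n_per_level, levels = 50, 4
# One EV per strength level (indicator columns), stacked block-wise for simplicity
X = np.kron(np.eye(levels), np.ones((n_per_level, 1)))
true_means = np.array([1.0, 2.0, 3.0, 4.0])          # linear increase with strength
Y = X @ true_means + rng.normal(0, 1, levels * n_per_level)

beta = np.linalg.pinv(X) @ Y
res = Y - X @ beta
dof = len(Y) - np.linalg.matrix_rank(X)
sigma2 = res @ res / dof

c = np.array([-3.0, -1.0, 1.0, 3.0])                 # positive linear trend contrast
cope = c @ beta                                      # contrast of parameter estimates
varcope = sigma2 * c @ np.linalg.pinv(X.T @ X) @ c
t = cope / np.sqrt(varcope)
print(round(t, 2))                                   # large positive t -> linear trend present
```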

Factorial design

• Allows you to characterise interactions between component processes

• i.e. effect that one component has on another

[2×2 factorial design: Vision (no/yes) × Touch (no/yes)]

No Interaction Effect

[Bar plots for the four conditions: No Vision/No Touch, Vision/No Touch, No Vision/Touch, Vision/Touch]

No interaction - effects add linearly

Positive Interaction Effect

[Bar plots for the four conditions]

Positive interaction - “superadditive”

Negative Interaction Effect

[Bar plots for the four conditions]

Negative interaction - “subadditive”

Nonlinear Interactions Between EVs

• EV1 models vision on/off
• EV2 models touch on/off

[2×2 table of conditions: No Vision / Vision × No Touch / Touch]
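One simple way to build such an interaction regressor is to multiply the two (here demeaned) EVs; FEAT's interaction option offers related choices, and the block timings and lengths below are invented:

```python
import numpy as np

n = 400
t = np.arange(n)
vision = ((t // 50) % 2).astype(float)     # vision on/off blocks (EV1)
touch = ((t // 100) % 2).astype(float)     # touch on/off blocks (EV2)

# One simple way to build an interaction regressor: product of demeaned EVs
interaction = (vision - vision.mean()) * (touch - touch.mean())

X = np.column_stack([vision, touch, interaction, np.ones(n)])
# A positive PE on the interaction column would indicate a superadditive
# (vision AND touch together > sum of the separate effects) response;
# a negative PE would indicate a subadditive response.
print(X.shape)
```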

Contrast Masking

• After thresholding all contrasts, it may be desirable to further mask a thresholded z image for a chosen contrast using the thresholded z image from one or more other contrasts
• For example, say we had two t contrasts C1 (1 0) and C2 (0 1). We may be interested in only those voxels which are significantly “active” for both contrasts

• Rather than masking with voxels which survive thresholding, it may be desirable to mask using positive z-statistic voxels instead
• For example, say that we have two t contrasts C3 (1 -1) and C1 (1 0). It may be desirable to see those voxels for which EV1 is bigger than EV2, only when EV1 is positive
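A toy numpy sketch of both kinds of masking on made-up z images:

```python
import numpy as np

rng = np.random.default_rng(7)
shape = (4, 4, 4)                       # tiny toy "brain"
z_c1 = rng.normal(2, 1, shape)          # z map for C1 (1 0)
z_c3 = rng.normal(0, 1, shape)          # z map for C3 (1 -1)
z_thresh = 2.3

# Mask the C3 (EV1 > EV2) z image with voxels that survive thresholding for C1 ...
masked_by_sig = np.where(z_c1 > z_thresh, z_c3, 0.0)

# ... or, as on the second slide, mask with voxels where the C1 z statistic is merely positive
masked_by_pos = np.where(z_c1 > 0, z_c3, 0.0)

print(int((masked_by_sig != 0).sum()), int((masked_by_pos != 0).sum()))
```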

Dealing with Variations in Haemodynamics

• The haemodynamic responses vary between subjects and areas of the brain

• How do we allow haemodynamics to be flexible but remain plausible?

Reminder: the haemodynamic response function (HRF) describes the BOLD response to a short burst of neural activity

Using Parameterised HRFs

• We need to allow flexibility in the shape of the fitted HRF
• Parameterise the HRF shape and fit the shape parameters to the data
  - Needs nonlinear fitting - HARD

Temporal Derivatives

• A very simple approach to providing HRF variability is to include, alongside each EV, that EV's temporal derivative

• Including the temporal derivative of an EV allows for a small shift in time of that EV

• This is based upon a first order Taylor series expansion

[Plot: an EV and its temporal derivative; data shown with the model fit without the derivative and the model fit with the derivative]
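A minimal sketch of adding a temporal-derivative EV to the design matrix; the "EV" here is just a boxcar standing in for a properly convolved regressor, and the TR is made up:

```python
import numpy as np

TR, n = 2.0, 200
t = np.arange(n) * TR
ev = ((t % 60) < 30).astype(float)   # toy EV (a boxcar stands in for a convolved regressor)

# Temporal derivative EV: numerical derivative of the EV, same length
ev_deriv = np.gradient(ev, TR)

X = np.column_stack([ev, ev_deriv, np.ones(n)])
# First-order Taylor idea: ev(t - dt) ~ ev(t) - dt * ev'(t),
# so a fitted weight on ev_deriv absorbs a small timing shift of the response.
print(X.shape)
```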


Using Basis Sets

• We need to allow flexibility in the shape of the fitted HRF
  - Parameterise the HRF shape and fit the shape parameters to the data: needs nonlinear fitting - HARD
  - Use linear basis sets to span the space of expected HRF shapes: linear fitting (use the GLM) - EASY


How do HRF Basis Sets Work?

Different linear combinations of the basis functions can be used to create different HRF shapes, e.g.:

1.0 × (basis fn 1) + 0.3 × (basis fn 2) − 0.1 × (basis fn 3) = HRF
0.7 × (basis fn 1) + 0.5 × (basis fn 2) − 0.2 × (basis fn 3) = HRF

Setting up EVs from HRF Basis Sets

[Figure: the Stimulus/Neural Activity convolved with the HRF basis functions = basis function stimulus responses, which are subsampled to the FMRI temporal resolution to give the basis function EVs]

The fitted linear combination of these EVs explains the size and shape of the BOLD response to the underlying stimulus
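A sketch of this pipeline with an invented three-function basis set (gamma-like bumps standing in for a real set such as FLOBS): convolve the stimulus with each basis function at high temporal resolution, then subsample to the TR:

```python
import numpy as np

# Toy HRF basis set built from gamma-like bumps (stand-ins for a real basis set)
dt = 0.1                                   # high temporal resolution (s)
tt = np.arange(0, 30, dt)

def bump(peak):
    h = tt ** peak * np.exp(-tt)           # simple gamma-like shape peaking at t = peak
    return h / h.max()

basis = np.column_stack([bump(5), bump(7), bump(9)])   # 3 basis functions

# Stimulus / neural activity at the same high resolution: 20 s on / 20 s off blocks
n_hi = 3000
stim = ((np.arange(n_hi) * dt) % 40 < 20).astype(float)

# Convolve the stimulus with each basis function, then subsample to the FMRI TR
TR = 2.0
step = int(TR / dt)
evs = np.column_stack([np.convolve(stim, basis[:, j])[:n_hi][::step]
                       for j in range(basis.shape[1])])
print(evs.shape)      # (volumes, 3 basis-function EVs) ready to go into the design matrix
```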


How do we Test for Significance?

Recall that F-tests allow us to test whether a significant amount of power is explained by linear combinations of contrasts

f-contrast matrix:

1 0 0
0 0 1
0 1 0

But note: an f-test cannot distinguish between positive and negative activation
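A compact sketch of the F-statistic for such a contrast matrix on made-up basis-function EVs, using the standard GLM formula F = (Cβ̂)ᵀ[C(XᵀX)⁻¹Cᵀ]⁻¹(Cβ̂) / (r·σ̂²):

```python
import numpy as np

rng = np.random.default_rng(8)
n, p = 150, 3
X = rng.normal(size=(n, p))                  # stand-in for 3 basis-function EVs
Y = X @ np.array([1.0, 0.3, -0.1]) + rng.normal(size=n)

beta = np.linalg.pinv(X) @ Y
res = Y - X @ beta
dof = n - np.linalg.matrix_rank(X)
sigma2 = res @ res / dof

C = np.eye(3)                                # f-contrast matrix: one row per basis-fn EV
middle = np.linalg.inv(C @ np.linalg.inv(X.T @ X) @ C.T)
F = (C @ beta) @ middle @ (C @ beta) / (C.shape[0] * sigma2)
print(round(F, 1))   # large F -> some linear combination of the EVs explains real signal
# Note: F is unsigned, so it cannot tell positive from negative activation.
```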

FMRIB’s Linear Optimal Basis Set (FLOBS)

Using FLOBS we can:

• Specify a priori expectations of parameterised HRF shapes

• Generate an appropriate basis set

Generating FLOBS

(1) Take samples of the HRF


Generating FLOBS

(2) Perform SVD
(3) Select the top eigenvectors as the optimal basis set

[The top basis functions resemble the “canonical HRF”, its temporal derivative, and its dispersion derivative]

The resulting basis set can then be used in FEAT
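A sketch of the SVD step on made-up HRF samples (FLOBS itself samples from a parameterised HRF family with user-specified ranges; the shapes and ranges below are only illustrative):

```python
import numpy as np

rng = np.random.default_rng(9)
t = np.arange(0, 30, 0.5)

# (1) Take samples of the HRF: vary delay and width of a simple gamma-like shape
samples = []
for _ in range(500):
    delay = rng.uniform(4, 8)
    width = rng.uniform(0.8, 1.5)
    h = (t / delay) ** (delay / width) * np.exp(-(t - delay) / width)
    samples.append(h / np.abs(h).max())
H = np.array(samples)                       # (n_samples, n_timepoints)

# (2) Perform SVD on the sample matrix
U, s, Vt = np.linalg.svd(H, full_matrices=False)

# (3) Select the top eigenvectors (rows of Vt) as the optimal basis set
basis = Vt[:3]      # roughly: a canonical-like shape plus derivative-like shapes
print(basis.shape)
```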


HRF Basis Functions in FEAT

The FEAT GUI allows a range of different basis functions to choose from

[Figure: Stimulus/Neural Activity convolved with the chosen HRF basis functions gives the basis function stimulus responses, which are subsampled to the FMRI resolution to give the basis function EVs]


HRF Basis Functions in FEAT

In FEAT the GUI allows contrasts to be set up on “Original EVs” or “Real EVs”

• “Original EVs” represent the underlying experimental conditions
• “Real EVs” represent the actual basis function EVs in the design matrix

How do we Test for Significant Differences?

We want to test for a significant difference between two underlying experimental conditions:

- Basis fn EVs for condition 1
- Basis fn EVs for condition 2

• The f-test combines [1 -1] t-contrasts for the corresponding basis fn EVs - this will find significance for size or shape differences

Basis Functions and Group Studies

• Using basis fns and f-tests is problematic when it comes to doing inference on groups of subjects
• This is because we are typically interested only in size (not shape) at the group level

• Options:

1) Only use the “canonical HRF” EV’s PE in the group inference - e.g. when using EVs with temporal derivatives, only use the main EV’s PE in the group inference

2) Calculate a “size” statistic from the basis function EVs’ PEs to use in the group inference
   - this requires permutation testing to do the group inference

Perfusion FMRI using Arterial Spin Labelling (ASL)

• Blood is tagged in the arteries (e.g. in the neck) using an RF pulse
• After a delay to allow tagged blood to flow into the imaging region, the image is read out
• A control image is also collected without the tag. The subtraction of the two images gives a perfusion-weighted image

[Diagram: tag and control acquisitions, showing the tag region and the imaging region]

Perfusion FMRI using Arterial Spin Labelling (ASL)

• Alternative to BOLD
• Noisier than BOLD for high frequency designs
• Potentially less noisy than BOLD for low frequency designs
• More quantitative
• Only a few slices


Perfusion FMRI Modelling

• Timeseries alternates between “control” (up) and “tag” (down)
• Activation is seen as a modulation of the control-tag difference
• There are two GLM approaches available in FEAT:

1) Pre-subtract the data (using sinc interpolation)
2) Use a full model of the unsubtracted data
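A sketch of both approaches on synthetic alternating tag/control data; note that FEAT's pre-subtraction uses sinc interpolation, whereas this illustration simply subtracts adjacent tag/control pairs, and the regressors in the full model are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(10)
n = 200                                                    # volumes, alternating tag/control
tagcontrol = np.where(np.arange(n) % 2 == 0, -0.5, 0.5)    # tag (down) / control (up)
task = ((np.arange(n) // 40) % 2).astype(float)            # toy block task

# Synthetic unsubtracted ASL timeseries: baseline perfusion difference plus
# a task-modulated change in the control-tag difference, plus noise
Y = 100 + tagcontrol * (2.0 + 1.0 * task) + rng.normal(0, 0.5, n)

# Approach 1 (simplified): pre-subtract adjacent control-tag pairs
diff = Y[1::2] - Y[0::2]                  # one perfusion-weighted value per pair
task_pairs = task[1::2]
X1 = np.column_stack([task_pairs, np.ones(len(diff))])
print(np.linalg.pinv(X1) @ diff)          # activation PE ~1, baseline difference ~2

# Approach 2: full model of the unsubtracted data
X2 = np.column_stack([tagcontrol * task,  # activation: task-modulated tag/control difference
                      tagcontrol,         # baseline perfusion (constant tag/control difference)
                      task,               # any BOLD-like task effect
                      np.ones(n)])        # overall mean
print(np.linalg.pinv(X2) @ Y)
```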

Simultaneous BOLD and Perfusion FMRI Modelling

• Dual-echo sequences are commonly used to extract BOLD and perfusion changes simultaneously
• Traditionally, separate analysis of the low TE (for perfusion) and the high TE (for BOLD) gives biased results
• FABBER uses nonlinear simultaneous modelling of both TEs to give uncontaminated, more sensitive information

That’s All Folks
