Statistical Significance of
Functional Networks in the
Brain
Jiating Zhu
Master of Science
Artificial Intelligence
School of Informatics
University of Edinburgh
2016
Abstract
This thesis analyses the functional networks in the brain of one hundred subjects
using their functional magnetic resonance imaging (fMRI) data, scanned during
the resting state. Standard pre-processing techniques are explained and
implemented on the data. The functional connectivity in the brain is measured
by temporal correlations among voxels selected in every cubic region. The brain
network is constructed at a selected threshold, the one at which the second
largest connected component merges with the largest connected component. We find
that the second largest component is scale-free. To explore differences in the
functional network across subjects, k-means clustering on the clustering
coefficient and small-worldness of the second largest components is carried out.
We compared the clustering results with degree distribution parameters and found
that subjects in different clusters show distinguishable differences in the
considered network properties. Therefore, the pipeline and method proposed in
this project are suitable for future analysis on discriminating the functional
brains of healthy controls and Major Depression Disorder patients.
Keywords: Voxel-based functional network, Resting state fMRI, Second
largest component, K-means clustering, Depression.
Acknowledgements
I would like to express my sincere thanks to my supervisor, Dr Michael Herrmann,
for his support and guidance throughout my time as his student. His enthusiasm,
encouragement and teaching are priceless.
I thank Heather Sibley from the Royal Edinburgh Hospital for providing me with
the fMRI data and additional information for analysis. I would also like to
thank Shen Xueyi, who gave me a lot of advice on understanding the brain images.
I thank Thomas Nikson for helping me log in to the computer at the Royal
Edinburgh Hospital. I am grateful to the people who worked on the same floor
with me at the Royal Edinburgh Hospital. Their kindness made me feel relaxed
and helped me a lot.
I would like to thank Thomas Joyce, who gave me very useful feedback on my
analysis and report, even very close to the deadline.
I am so grateful to the people mentioned above. I have learned a lot from
them, and their help is very important to me.
Declaration
I declare that this thesis was composed by myself, that the work contained herein
is my own except where explicitly stated otherwise in the text, and that this work
has not been submitted for any other degree or professional qualification except
as specified.
(Jiating Zhu)
Contents
1 Introduction
2 Background
2.1 Resting state, Default mode network
2.2 Functional network during rest
2.3 Data from Stratifying Resilience and Depression Longitudinally (STRADL) project
2.4 Resting state fMRI in Major Depression Disorder
3 Methods
3.1 Data description
3.2 Data pre-processing
3.2.1 Realignment
3.2.2 Slice timing
3.2.3 Coregistration
3.2.4 Segmentation
3.2.5 Normalisation
3.2.6 Smoothing
3.3 Brain extraction
3.3.1 Pre-processed results
3.3.2 Brain representation
3.4 Network extraction
3.5 Network properties
3.5.1 The properties of functional network connectivity
3.5.2 Nodal properties
3.5.3 Global network properties
3.5.4 Metrics selection
3.6 Network analysis
3.6.1 Network construction
3.6.2 Second largest component
3.6.3 Random graph
3.6.4 Power-law degree distribution
3.7 Clustering evaluation
3.7.1 Clustering with clustering coefficient and small-worldness
3.7.2 Clustering analysis
4 Discussion
4.1 Pre-processing procedure
4.2 Voxel-based network
4.3 Dimension reduction
4.4 Correlations in the network
4.5 Network properties
4.6 Results interpreted from resting state
5 Conclusion
Bibliography
Chapter 1
Introduction
When someone is performing a particular task, or no task at all, we can
obtain information about the corresponding neural activity from the person's
brain image. Relations between small regions in the brain can be inferred by
constructing functional networks from the brain image. Different networks will
be generated from a particular brain under different parameters, such as the
number of trials, the time windows for averaging, and the thresholds for
activity and correlation. Intuitively, the overall characteristics of the
networks reconstructed from a particular brain should be similar. As
neurological and psychiatric disorders often share underlying brain network
pathology (Deco and Kringelbach, 2014), we assume that subjects with a similar
mental health condition will have similar functional networks.
In recent decades, it has become popular to investigate brain activity during
rest across a wide range of mental illnesses (Cabral et al., 2014). The resting
state dynamics are consistent across healthy subjects, and their reproducibility
is high (Cole et al., 2010). Resting state activity has also been used for the
classification of mental diseases such as schizophrenia (Arbabshirani et al., 2013).
The goal of this project is to analyze the resting state functional networks
of the brains of Major Depression Disorder (MDD) patients. To study whether
common properties are shared by networks constructed from MDD brains, the
project aims to propose an approach to analysing the networks by identifying
networks from subjects with different MDD conditions and from healthy subjects
based on graph properties.
Most studies in neurobiology focus on the network properties that are affected
by mental disease and by aging separately (Lynall et al., 2010; Arbabshirani
et al., 2013; Deco and Kringelbach, 2014; Fair et al., 2008; Cao et al., 2014),
while this project aims to offer a way of estimating networks that can tell how
different brains are in general. That is to say, this project will consider
network properties that are important in both mental disease and aging
problems. This is a reasonable attempt, as MDD manifests mainly as low mood,
unlike other mental diseases that differ greatly from the healthy state.
In this project, the pipeline for functional network comparison across subjects
is: data pre-processing, brain extraction, brain network construction, network
property calculation, and finally clustering of subjects with the different
properties as predictors.
A voxel-based functional network analysis is proposed in this project to study
depression patients and healthy controls. This approach can reveal more
information about brain connectivity than the mainstream region-based network
(further discussed in section 3.6.1 and section 4.2). It is a novel attempt to
infer the difference between depressed and healthy subjects from the voxel-based
functional network. This project also proposes to investigate the brain through
the second largest connected component of the functional network. The
consistency of the second largest functional networks across subjects is
explored in section 3.6.2. To our knowledge, this is the first time the second
largest functional network in the brain has been analysed under the resting
state. Furthermore, complex network properties of the second largest components
are calculated for clustering the subjects (see section 3.7). The results imply
that the properties of the second largest component are indicative for
discriminating subjects with different mental health conditions, and that the
proposed method is promising for future analysis of depression.
Chapter 2
Background
2.1 Resting state, Default mode network
When a subject is performing a task such as looking at a photo, the increased
neural activity in the visual cortex increases blood flow in that region. This
robust relationship, in which changing mental activity is reflected in the
changing oxygen needs of the brain, has been well known for over 100 years
(Raichle et al., 2001).
Traditionally, the main focus of brain activity analyses has been task-related,
which helps to identify and characterise functionally distinct areas in the
human brain. Recently, researchers have become interested in the resting state
of the brain. When a person is in the resting state, he or she is not
consciously performing any task, but is somehow on standby for external stimuli,
whether auditory or visual: ready to suddenly turn around at a disrupting sound
or to look towards a shining point. It is reported that the patterns of brain
activity at rest are distinguishable from those observed while performing tasks
and while sleeping (Cabral et al., 2014).
The resting state network (RSN) refers to a network of brain regions which are
anatomically separated but strongly functionally connected and activated during
rest. Multiple RSNs can be identified by Independent Component Analysis (ICA).
As ICA decomposes the brain data at rest into a number of components, the
network for each component is an RSN module. Moussa et al. (2012) found that
the sensory/motor module, the basal ganglia module, the visual module and the
Default Mode Network (DMN) module are consistent across subjects.
Sometimes, the RSN refers specifically to the DMN. The DMN is active in the
resting state but consistently decreases its activity during attention-demanding
and goal-directed tasks (Raichle, 2015). Unlike the DMN, other task-positive
RSNs, including the vision, language and basal ganglia RSNs, exhibit stronger
functional connectivity during the corresponding tasks.
It is natural that researchers try to associate the DMN with underlying
high-order cognitive processes, such as daydreaming, mind wandering and
recalling past memories. However, the presence of the DMN in monkeys and rats
(Raichle, 2015) makes it unlikely that the patterns in the DMN are caused by
unconstrained conscious cognition (Raichle, 2015; Cabral et al., 2014).
2.2 Functional network during rest
The first and most widely used technique in the field of exploring brain
activity during rest is resting state functional magnetic resonance imaging
(R-fMRI), which measures fluctuations in the blood-oxygen-level dependent
(BOLD) signal in subjects at rest (Cabral et al., 2014). This technique is
popular due to its ability to measure correlations in neural activity between
distant brain regions. Activity fluctuations across the brain can be
transformed into a network representation. In such a network, anatomically
distinct brain areas can be represented as nodes, which are "functionally
connected" to each other if their activity correlates above a threshold
(Lord et al., 2013).
The graph theoretic approach is the most powerful and flexible method to
study R-fMRI (Power et al., 2011; Bullmore and Sporns, 2009). Cabral et al.
(2014) pointed out that some complex network properties, such as
small-worldness and modularity, are consistent over time and spatial scales.
More interesting findings are that disrupted small-world properties are
reported in pathological conditions, and that modularity changes with normal
aging.
An increasing number of pathological conditions also appear to be reflected in
the functional connectivity between particular regions (Power et al., 2011).
Functional connectivity during rest deteriorates in the progression of
Alzheimer's disease and shows a widespread decrease in schizophrenia patients
(Cabral et al., 2014).
2.3 Data from Stratifying Resilience and Depression Longitudinally (STRADL) project
The brain image data used here were made available by the Stratifying Resilience
and Depression Longitudinally (STRADL) project (Fernandez-Pujals et al., 2015)
and were kindly offered by the University of Edinburgh's Division of Psychiatry.
Though clinical depression is a chronic worldwide health problem affecting
millions of people, little is known about what makes people vulnerable or
resilient to the condition. Rather than being one disease, clinical depression
is a collection of different disorders with one common symptom: low mood. The
STRADL project aims to identify the causes and mechanisms of clinical depression
by studying groups of people either with or without depression.
In the STRADL study, participants are asked to complete questionnaires on
their mental health and resilience. A subset of the participants is selected
for MRI brain scanning. As the image data are still growing, the mental health
information is under a non-disclosure agreement. In this project, we therefore
regard the subjects as a group of people with unknown mental health conditions.
2.4 Resting state fMRI in Major Depression Disorder
Wang et al. (2012) reviewed 16 resting state fMRI studies on Major Depression
Disorder patients and healthy controls. They concluded that interactions between
the DMN and other task-positive RSNs, and the cortico-limbic mood regulating
circuit, should play an important role in further MDD research. Three commonly
used methods for R-fMRI in MDD are region-of-interest (ROI) analysis, ICA and
Regional Homogeneity (ReHo). In an ROI analysis, temporal correlations between
selected ROI regions and other brain regions constitute the functional
connectivity of the network. ICA decomposes the whole brain into separate
components, and each component depicts a functional network. ReHo maps the
whole brain by comparing the time series of a given voxel to those of the
voxels around it.
Both the ROI and ICA methods mentioned above are region-based and can only
measure brain functional connectivity among restricted regions (further
discussed in section 4.2). ReHo, on the other hand, depends on neighbouring
voxels and cannot capture the functional connections between spatially distant
structures. In this project, we propose a voxel-based network construction
method, which can gain both inter-regional and intra-regional connection
information.
Chapter 3
Methods
3.1 Data description
In this project, data from one hundred subjects from the STRADL project will be
analyzed. The resting fMRIs were acquired while the participants were instructed
to relax. For each subject, the resting state fMRI scan is stored in the NIfTI
(Neuroimaging Informatics Technology Initiative) file format, which contains a
four-dimensional tensor and some other properties. Each NIfTI file is about
51.1 MB, which amounts to a total of 5.38 GB.
The first three dimensions of the tensor measure the length, width, and height
of the brain respectively. We can represent these three dimensions on the
y, x, z axes correspondingly. Voxels of the brain are the equivalent of pixels
in 3D space. The fourth dimension of the tensor is the time dimension. During
scanning, the scanner moves from the bottom of the brain to the top; thus, for
each time step, the activity of one brain volume (the whole brain) is recorded.
For a 64 × 64 × 32 × 195 tensor from STRADL, we can say it has 195 time
steps, with 32 slices of 64 × 64 2D images for each time step. Multiplying all
four dimensions of the tensor gives 25,559,040 voxel values in total per brain
scan. Figure 3.1 illustrates the 4D tensor in STRADL.
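The bookkeeping above can be checked with a short sketch. Python is used here for illustration only (the project itself works with Matlab/SPM; the shapes are taken from the text, not from real data):

```python
import math

# Shape of one STRADL scan: (x, y, z, time). These numbers come from the text;
# the real data are NIfTI files read with SPM in Matlab, so this is only a
# sketch of the bookkeeping, not the project's actual code.
shape = (64, 64, 32, 195)

n_time_steps = shape[3]                    # 195 volumes recorded over time
slices_per_volume = shape[2]               # 32 slices of 64 x 64 2D images
voxels_per_volume = math.prod(shape[:3])   # 131,072 voxels per brain volume
total_values = math.prod(shape)            # 25,559,040 values per brain scan

# After normalisation (section 3.2.5) the tensor grows to 95 x 79 x 79 x 195:
total_after_norm = math.prod((95, 79, 79, 195))  # 115,614,525
```

The same products are quoted in the text for the raw and normalised tensors.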
3.2 Data pre-processing
All pre-processing is done with the Statistical Parametric Mapping toolbox
(SPM) (Ashburner et al., 2014). SPM is written in Matlab and can analyze fMRI
imaging data. The standard pre-processing steps - realignment, slice timing,
coregistration,
Figure 3.1: Data illustration. (a) One brain volume (x, y, z axes, with one
slice highlighted). (b) The 4D tensor: brain volumes of 2D slices recorded over
the time steps.
segmentation, normalization and smoothing will be reviewed in detail.
3.2.1 Realignment
In order to compare the same part of the brain across time, artefacts caused by
head movements during scanning should be removed. The realignment module
aims to make the same voxels have the same 3D positions throughout the time
steps. It minimizes the movements by two steps. Firstly, it estimates differences
between the mean image at the current time step and the one at the first time
step. Secondly, it uses the parameters estimated in the previous step to re-slice
the images at the current time point.
The estimation parameters measure translations and rotations in a 3D coor-
dinate system. An example of the estimation is shown in Figure 3.2. In SPM,
three translations are recorded in an x, y, z coordinate system, while three rota-
tions along each axis are represented by pitch, roll, and yaw.
3.2.2 Slice timing
Slice timing corrects for differences in slice acquisition times. Without slice
timing correction, the data on adjacent slices are recorded with an interval of
1/2 repetition time (TR). The whole brain scan is adjusted so that it can be
considered to have been acquired at the moment immediately after the nominal
slice timing, i.e. as if there were no time delay between the recordings of the
slices.
Slice timing effects are more pronounced for long TRs (TR > 2 s) and
task-related fMRIs (Sladky et al., 2011). In our project, however, the TR is
1.56 s and the fMRIs we study are acquired in the resting state. Thus, we do
not apply slice timing to our data. This keeps the pre-processing simple and
avoids the
Figure 3.2: Estimation produced by the SPM realignment pre-processing for
subject No.1. In this example, two bigger movements around the 50th and 140th
time steps (images) can be observed, along with some minor movements throughout
the recording time.
risk of introducing artefacts caused by the temporal interpolation in slice timing
correction.
3.2.3 Coregistration
Coregistration matches modalities in individual subjects. For each subject, a
functional magnetic brain scan (fMRI) and a structural magnetic brain scan
(sMRI) are recorded separately. Coregistration maps fMRI and sMRI in the
same space. In Figure 3.3, at the same position (the cross point of two axises) in
a 3D space, the fMRI image shows the functional information corresponding to
the the anatomical position in the sMRI image.
(a) Mean fMRI across time. (b) Structural MRI.
Figure 3.3: fMRI and structural MRI of one subject in STRDAL after coregis-
tration.
3.2.4 Segmentation
The segmentation function maps the sMRI onto an sMRI template by registering
grey and white matter. This makes it possible to compare the structure of the
brain across subjects.
3.2.5 Normalisation
Brains of different subjects vary in shape and size. Normalisation brings them
all into a common anatomical space by creating the mapping between one fMRI
scan and the sMRI template. Traditionally, normalisation depends on
coregistration and segmentation. Alternatively, the new SPM version (SPM12)
(Ashburner et al., 2014) normalizes the fMRI scan directly into the template
sMRI space. After normalisation, the size of a 4D tensor changes: in our case,
a 64 × 64 × 32 × 195 tensor becomes a 95 × 79 × 79 × 195 tensor. This means
115,614,525 voxel values need to be processed per brain scan after
normalization.
3.2.6 Smoothing
Smoothing is the last step in pre-processing. It suppresses noise and artefacts
by applying a Gaussian kernel to the 3D brain scan images. Different parts of
the brain have their own suitable kernels. If a brain is not smoothed, some
highly activated voxels are in fact noise caused by the magnetic equipment
during scanning. On the other hand, if the brain is smoothed too much, some
important signals will be blurred. Danev (2016) reported that smoothing the
whole brain image made everything appear relatively more active. Thus, for
simplification, we do not apply smoothing in this project.
3.3 Brain extraction
3.3.1 Pre-processed results
The one hundred resting state brain scans in STRADL are pre-processed by the
realignment and normalisation steps. Realignment for one subject can be done
in SPM, and the same holds for normalization; we therefore wrote scripts in
Matlab for batch processing (calling the realignment and normalization
functions in SPM multiple times). Realignment for the 100 subjects took three
and a half hours, and roughly the same amount of time was taken for
normalization. These pre-processing steps are standard procedures and are
unavoidable, especially normalization: the head position in the brain image
varies across subjects, which makes the images incomparable before
normalization.
Figure 3.4 shows two normalised fMRIs and the sMRI template at the same 3D
position in the same space. Both normalised fMRIs in Figure 3.4 show the images
at the anterior commissure point, as the sMRI template does. This shows that
the alignment of subjects is acceptable.
Figure 3.4: Normalisation results. (a) Normalized fMRI for subject No.1.
(b) Normalized fMRI for subject No.85. (c) Structural MRI template at the
anterior commissure point. The cross point in the sMRI template is the anterior
commissure point; correspondingly, at the same 3D position, the two normalized
fMRIs show the brain activity around the anterior commissure point as well.
3.3.2 Brain representation
Noise and low-activity voxels are irrelevant to the analysis, so voxels with
low values are removed. A simple method is to ignore a given voxel if its time
series is all zeros. Holes in the brain images are filled by the image
processing methods of dilation and erosion (Gonzalez and Woods, 2008; Danev,
2016). The voxel information of the brain is represented by a matrix (the time
series of one voxel in 3D is stored in one column) and an array of the original
3D coordinates.
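The representation described above (one column per voxel time series, plus an array of 3D coordinates) can be sketched as follows. This is a Python illustration with a toy tensor; the project's actual code is in Matlab, so the names and shapes here are assumptions:

```python
import numpy as np

# Toy 4D scan (x, y, z, t); the real scans are read from NIfTI files with SPM
# in Matlab, so this only illustrates the representation, not the real pipeline.
scan = np.arange(4 * 4 * 3 * 10, dtype=np.float32).reshape(4, 4, 3, 10)
scan[0, 0, 0, :] = 0.0          # one voxel whose time series is all zeros

x, y, z, t = scan.shape
flat = scan.reshape(-1, t)                 # one row per voxel, columns are time
coords = np.argwhere(np.ones((x, y, z)))   # 3D coordinate of each row, same order

keep = ~np.all(flat == 0, axis=1)          # drop all-zero (noise) voxels
brain_matrix = flat[keep].T                # time series of one voxel per column
brain_coords = coords[keep]                # original 3D coordinates of kept voxels

print(brain_matrix.shape)                  # (10, 47): 10 time steps, 47 voxels kept
```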
The statistical information for subject No.1 is shown in Table 3.1. As we can
see, the size of the raw data reduces from 25,559,040 voxels to 115,192 voxels
after simple noise removal. After normalization, though the number of voxels
becomes bigger, it is much smaller than in the normalized data before simple
noise reduction, which has 115,614,525 voxels. Negative voxel values occur due
to misalignment in the normalization process, which makes the boundary of the
brain fuzzy. We can simply set the negative values to zero, which does not make
much difference to the voxels we get from the brain image: Table 3.1 shows that
the mean and the standard deviation remain close to the values obtained without
neglecting the negative values.
As we only consider highly activated voxels, we set a threshold slightly higher
than the mean voxel value and remove any voxels below it. This is implemented
before the dilation and erosion procedures. The last row in Table 3.1 shows
that the mean becomes higher and the standard deviation smaller after this
high-value selection. More significantly, the number of voxels is reduced by
half.
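The two cleaning steps described above (zeroing the negative values, then keeping only the highly activated voxels) can be sketched as follows. The voxel values are invented; only the threshold of 600 is taken from Table 3.1:

```python
import numpy as np

# Invented voxel values standing in for one normalised scan; the threshold of
# 600 is the one quoted in Table 3.1 for subject No.1.
voxels = np.array([-168.0, 0.0, 120.0, 650.0, 900.0, 2728.0])

voxels = np.clip(voxels, 0.0, None)  # negative values (from misalignment) -> zero
kept = voxels[voxels >= 600.0]       # keep only the highly activated voxels

# kept is now [650., 900., 2728.]: the low-value voxels are discarded.
```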
Table 3.1: Voxel statistics for subject No.1, with brain holes filled and
simple noise removed.

                                        max     mean    min   std. dev.   size
raw                                     2613   298.08     0    456.46   115192
normalized                              2728   566.12  -168    525.86   470713
normalized (negatives set to zero)      2728   560.00     0    526.22   475604
normalized (voxels below 600 removed,
  negatives set to zero)                2728  1018.30     0    369.37   232432
Figure 3.5 shows a slice before brain extraction (no threshold applied and
brain holes not filled). The colour in Figure 3.5 shows the activity intensity
in the brain: higher values are in warm colours and lower values in cold
colours. In Figure 3.5a we can see some light blue around the boundary, which
is noise in the skull region. After brain extraction, most of this noise is
removed, as we can see in Figure 3.5b. Figures 3.5c and 3.5d show the same
brain as in Figure 3.5b from different views. The coronal section is the brain
section obtained when we fix the y-axis in the brain volume; similarly, the
sagittal section is the one with the x-axis fixed.
Figure 3.5: Brain slices for subject No.1. (a) Mid-horizontal slice before
brain extraction. (b) Mid-horizontal slice after brain extraction.
(c) Mid-sagittal section after brain extraction. (d) Mid-coronal section after
brain extraction.
3.4 Network extraction
A correlation matrix C_{np} represents the link between two brain areas n and p
(n, p = 1, ..., N) in a brain with N cortical regions. A connection in a
functional brain network is weighted, corresponding to the correlation value
between the two brain areas. The connectivity can be binarized at a given
threshold th to convert the correlation matrix C into an adjacency matrix A
(A_{np} = 1 if C_{np} > th, otherwise A_{np} = 0).
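The binarisation step can be sketched as follows. This is a Python illustration with an invented 4 × 4 correlation matrix; the project's matrices are built in Matlab and are much larger (e.g. 9322 × 9322 in Figure 3.7), and the threshold 0.96 is the one used in Figure 3.6:

```python
import numpy as np

# Invented correlation matrix C for N = 4 regions.
C = np.array([[1.00, 0.97, 0.10, 0.50],
              [0.97, 1.00, 0.20, 0.98],
              [0.10, 0.20, 1.00, 0.30],
              [0.50, 0.98, 0.30, 1.00]])
th = 0.96

A = (C > th).astype(int)   # A[n, p] = 1 if C[n, p] > th, else 0
np.fill_diagonal(A, 0)     # a node is not linked to itself

print(A)
# [[0 1 0 0]
#  [1 0 0 1]
#  [0 0 0 0]
#  [0 1 0 0]]
```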
Highly correlated regions have a stronger connection, while less correlated
regions have a weaker connection. The correlation value ranges from 0 to 1. The
execution time for a correlation matrix calculation in Matlab is long and
increases rapidly as the size grows: for an 11000 × 11000 correlation matrix,
the calculation takes about 3 minutes (Danev, 2016). To reduce the correlation
matrix size, we can select the voxels in every 5 × 5 patch in the horizontal
direction, keeping the one voxel at the center of each patch.
As we take all the vertically neighbouring voxels into account in the 5 × 5
horizontal patch selection procedure, many vertically connected pairs appear in
Figure 3.6b, due to the fact that adjacent voxels strongly affect each other.
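The 5 × 5 horizontal patch selection can be sketched as follows. This Python illustration uses a toy 95 × 79 horizontal slice (the z and time axes are omitted for clarity); the values and the exact indexing convention are assumptions, not the project's Matlab code:

```python
import numpy as np

# Toy horizontal slice of the normalised in-plane size.
slice_values = np.arange(95 * 79, dtype=float).reshape(95, 79)

# Take the center voxel (in-patch index 2) of every 5 x 5 horizontal patch,
# keeping roughly 1/25 of the in-plane voxels.
centers = slice_values[2::5, 2::5]

print(slice_values.shape, '->', centers.shape)  # (95, 79) -> (19, 16)
```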
Figure 3.7 shows the correlation matrix of the extracted network shown in
Figure 3.6.
3.5 Network properties
In this section, we give the definitions of the network properties that have
been mentioned in connection with mental disease and aging problems in Cao
et al. (2014) and Lynall et al. (2010). We will implement some of them and give
the analysis results in later sections.
The following properties are very common and often used in the community. The
property information is not only comparable across subjects, but also further
reduces the data: we can compare functional networks across subjects with a
handful of property values instead of thousands of pieces of connection
information.
3.5.1 The properties of functional network connectivity
For the network connectivity measurements, we use the measures defined in
Cao et al. (2014).
• network connection density
Figure 3.6: Extracted network at threshold = 0.96 for subject No.1.
(a) Extracted network shown on the mid-horizontal slice. (b) Extracted network
shown in 3D.
Figure 3.7: Correlation matrix for subject No.1 with 5 × 5 horizontal patch
selection. The size of the correlation matrix is 9322 × 9322.
For a network G with N nodes and K edges,

D(G) = \frac{2K}{N(N-1)} \qquad (3.1)

• network mean connectivity strength

Str(G) = \frac{W}{2K} \qquad (3.2)

where W is the total weight of the network.

• network mean anatomical distance

Dis(G) = \frac{\sum_{1 \le i,j \le N} d_{r_{ij}}}{2K} \qquad (3.3)

where d_{r_{ij}} is the anatomical distance of edge r_{ij}.
• dynamic connectivity

A sliding window approach can be applied to adapt connectivity strength and
efficiency to time-varying brain images (Yu et al., 2015). This takes the time
factor into account, which is not common for ordinary networks, but the
computation of such a property is complex.
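The first two measures above can be computed directly from their definitions. A minimal Python sketch (the thesis's computations are in Matlab; the toy network and its weights below are invented):

```python
# Toy weighted network: N = 4 nodes, undirected edges as (i, j, weight).
# This only illustrates Eqs. (3.1) and (3.2).
edges = [(0, 1, 0.9), (1, 2, 0.8), (0, 2, 0.7)]
N = 4
K = len(edges)                          # number of edges

density = 2 * K / (N * (N - 1))         # Eq. (3.1): D(G) = 2K / (N(N-1))

W = 2 * sum(w for _, _, w in edges)     # total weight, each undirected edge counted twice
strength = W / (2 * K)                  # Eq. (3.2): Str(G) = W / (2K)

print(density)   # 0.5
```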
3.5.2 Nodal properties
• degree distribution parameter

The degree of node i is k(i) = \sum_j A_{ij}, where A is the binary adjacency
matrix derived from the corresponding functional connectivity (correlation)
matrix C with a threshold. The degree distribution counts how many nodes of
each degree value appear in the network. The distribution can be visualized as
a curve with a long tail, which can be estimated by a power function (Eguiluz
et al., 2005). In other words, P(k) \sim k^{-\gamma}, where P(k) is the
probability that a node has degree k (Bullmore and Sporns, 2009). We call the
exponent \gamma the degree distribution parameter (Lynall et al., 2010).
• weighted degree centrality

rFCS(i) = \frac{1}{N} \sum_{1 \le j \le N,\, j \ne i} w_{r_{ij}} \qquad (3.4)

where w_{r_{ij}} is the strength (the Pearson correlation coefficient) of edge
r_{ij}. It measures the connectivity strength of region i to all the other
regions in the network (Cao et al., 2014) (Figure 3.8(B)).

• connectivity patterns among functional hubs

Regions with higher rFCS values are hubs. The correlation density among hubs
and their connections (a sub-network) can be calculated via the rich club
(Figure 3.8(D)) coefficient \Phi (see Cao et al. (2014) for details).
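The degree distribution parameter γ described above can be estimated, for illustration, with a log-log least-squares fit. A minimal Python sketch (the thesis's computations are in Matlab; the toy adjacency matrix is invented and far too small for a meaningful fit, so only the mechanics are shown):

```python
import math
from collections import Counter

# Toy binary adjacency matrix A (5 nodes, undirected, no self-loops).
A = [[0, 1, 1, 1, 0],
     [1, 0, 1, 0, 0],
     [1, 1, 0, 0, 0],
     [1, 0, 0, 0, 1],
     [0, 0, 0, 1, 0]]

degrees = [sum(row) for row in A]           # k(i) = sum_j A[i][j]
counts = Counter(degrees)                   # nodes per degree value
N = len(A)
P = {k: c / N for k, c in counts.items()}   # empirical P(k)

# Least-squares slope of log P(k) against log k approximates -gamma
# in P(k) ~ k^(-gamma). On this toy graph the value is meaningless.
xs = [math.log(k) for k in P]
ys = [math.log(p) for p in P.values()]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
gamma = -slope
```

In practice such fits are done over many nodes and a wide range of degrees; a straight-line fit in log-log space is the simplest possible estimator.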
3.5.3 Global network properties
• topological efficiency

Global efficiency:

E_{glob}(G) = \frac{1}{N(N-1)} \sum_{1 \le i,j \le N,\, i \ne j} \frac{1}{L_{ij}} \qquad (3.5)

where L_{ij} is the characteristic (shortest) path length between nodes i and j
in the network G (Figure 3.8(A)). It measures the global efficiency of
information propagation in the network (Cao et al., 2014).

Local efficiency:

E_{loc}(G) = \frac{1}{N} \sum_{1 \le i \le N} E_{glob}(G_i) \qquad (3.6)
Figure 3.8: Illustration of network properties; figures from Cao et al. (2014).
(A) Characteristic path length from node a to node b (red line); the
characteristic path is assumed to be functionally important. (B) Connectivity
strength of node c (average weight of the red lines); node c is a hub, which
has strong connections in the network (hubs are marked as large dots in the
figure). (C) Clustered modules (shaded areas). (D) Rich club organization (red
dots and lines).
where G_i denotes the sub-graph composed of the nearest neighbors of node i. It
measures the efficiency of the network when node i is removed (Cao et al.,
2014).
• clustering coefficient

The clustering coefficient measures the segregation of the network. The
clustering coefficient of a network is the average of the clustering
coefficients C(i) of its nodes. For a node i,

C(i) = \frac{\delta_i}{\tau_i} \qquad (3.7)

where \delta_i is the number of connected triangles and \tau_i is the number of
connected triples at node i.
• small-worldness

\sigma = \frac{C/C_R}{E_{glob}(G_R)/E_{glob}(G)} \qquad (3.8)

where C and C_R are the clustering coefficients of networks with characteristic
path lengths L and L_R (L_R > L). C_R, L_R and E_{glob}(G_R) come from a
comparable random graph. The network is a "small world" when \sigma > 1
(Lynall et al., 2010).
• modularity
For a given partition p of a weighted network, the modularity index Q(p) can be calculated to measure the modular structure (Figure 3.8(C)); see Cao et al. (2014) for details.
• robustness
The robustness parameter ρ is defined as the area under the curve of s against n, where s is the size of the largest connected component and n is the number of nodes removed (Lynall et al., 2010).
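The metrics above can be sketched with networkx as follows. The small-worldness follows Eq. 3.8 with a density-matched Erdős–Rényi graph as the reference, and the robustness curve is summarized here as the mean largest-component size over removal steps; normalization conventions for ρ vary between papers, so treat this as an illustrative sketch rather than the exact definition of Lynall et al. (2010).

```python
import networkx as nx

def global_efficiency(G):
    """Eq. 3.5: average of 1/L_ij over ordered node pairs i != j
    (unreachable pairs contribute 0)."""
    n = G.number_of_nodes()
    lengths = dict(nx.all_pairs_shortest_path_length(G))
    total = sum(1.0 / lengths[i][j] for i in G for j in lengths[i] if i != j)
    return total / (n * (n - 1))

def small_worldness(G, G_ref):
    """Eq. 3.8: sigma = (C / C_R) / (E_glob(G_R) / E_glob(G)), where the
    reference quantities come from a comparable random graph."""
    C, C_R = nx.average_clustering(G), nx.average_clustering(G_ref)
    E, E_R = global_efficiency(G), global_efficiency(G_ref)
    return (C / C_R) / (E_R / E)

def robustness(G):
    """Robustness sketch: remove nodes one at a time and track the largest
    component size; summarized as the mean size over removal steps
    (one of several normalization conventions for rho)."""
    H = G.copy()
    sizes = []
    for v in list(H.nodes()):
        H.remove_node(v)
        sizes.append(max((len(c) for c in nx.connected_components(H)), default=0))
    return sum(sizes) / len(sizes)

# demo: a small-world ring lattice vs. a density-matched random reference
G = nx.watts_strogatz_graph(100, 6, 0.1, seed=1)
G_ref = nx.gnm_random_graph(100, G.number_of_edges(), seed=1)
sigma = small_worldness(G, G_ref)   # sigma > 1 indicates small-world structure
```

The clustering coefficient of Eq. 3.7, averaged over nodes, is provided directly by `nx.average_clustering`.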
3.5.4 Metrics selection
We now take an overall view of the network metrics reviewed above. Some
metrics are designed for weighted networks, whereas we only consider the
binarized network in our study. Some metrics measure the components of the
network, which is not helpful as we will only look at a certain component
(discussed later). Some metrics are complex and need subjective parameter
settings, such as dynamic connectivity and the connectivity patterns among
functional hubs. Therefore, we only choose the clustering coefficient and
small-worldness for the later analysis.
3.6 Network analysis
3.6.1 Network construction
One approach to constructing a whole-brain network is to depict the correlations
between regions identified by ICA (Mørup et al., 2010). Another popular
approach is to construct a macro-scale functional network whose nodes are
brain regions selected by a random parcellation algorithm or defined
anatomically (Zalesky et al., 2010; Cao et al., 2014; Lynall et al., 2010).
Both approaches focus on the interactions between regions, rather than voxels.
This is understandable: connections between adjacent voxels are less
important, as they are highly correlated and carry no further implications for
functional connectivity at the macro-scale of the brain. As discussed before,
selecting voxels in every 5 × 5 horizontal patch produces isolated, strongly
connected pairs in the vertical direction (Figure 3.6b), which is not informative.
In this project, however, we construct a voxel-based network instead of a
region-based one. We choose one voxel at the center of every 5 × 5 × 5 cubic
region. In this way, the selected voxels can be considered representative of
their corresponding cubic regions, and the interactions between neighbouring
voxels are neglected. One advantage of voxel-based networks is that they are
reported to be more robust against network fragmentation (fewer fragments at
higher thresholds) than region-based networks (Hayasaka and Laurienti, 2010).
The density of the network is affected by the threshold value. The higher the
threshold, the lower the density of the network, and the more connected
components appear. In an Erdős–Rényi random graph, the sudden emergence of the
"giant component" is regarded as a phase transition, since the size of the
largest component increases sharply (Bollobas et al., 2007). Similarly, the
threshold at which the second largest connected component is biggest is a
point of interest. At this threshold, the size of the largest connected
component increases suddenly, and just below it the second largest component
merges into the largest component. Therefore, we analyse the second largest
component at this critical threshold.
The advantage of studying the second largest component is that, being smaller,
it contains less noise than the largest connected component.
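The construction and threshold selection can be sketched as follows: binarize a correlation matrix over a grid of thresholds and pick the threshold at which the second largest connected component is biggest, just before it merges into the giant component. A toy block-structured correlation matrix stands in for real voxel correlations here.

```python
import numpy as np
import networkx as nx

def component_sizes(corr, threshold):
    """Binarize a correlation matrix at `threshold` and return the sizes
    of the largest and second largest connected components."""
    A = (np.abs(corr) > threshold).astype(int)
    np.fill_diagonal(A, 0)
    G = nx.from_numpy_array(A)
    sizes = sorted((len(c) for c in nx.connected_components(G)), reverse=True)
    sizes += [0, 0]                      # pad in case there are < 2 components
    return sizes[0], sizes[1]

def critical_threshold(corr, thresholds):
    """Pick the threshold where the second largest component is biggest,
    i.e. just before it merges into the giant component."""
    seconds = [component_sizes(corr, t)[1] for t in thresholds]
    return thresholds[int(np.argmax(seconds))]

# toy matrix: two tight 3-voxel blocks (intra-block correlations 0.9 and 0.8)
# weakly linked at 0.5
corr = np.full((6, 6), 0.5)
corr[:3, :3] = 0.9
corr[3:, 3:] = 0.8
np.fill_diagonal(corr, 1.0)

t_star = critical_threshold(corr, [0.4, 0.6, 0.85])
```

At 0.4 the whole toy network is one component; at 0.6 it splits into two 3-node components; at 0.85 the weaker block dissolves, so the sweep selects 0.6.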
3.6.2 Second largest component
As Figure 3.9 shows, every subject exhibits a consistent drop in the size of
the second largest component across thresholds. Since the drop point varies
between subjects, we cannot simply select one threshold value for all of them.
For instance, 0.76 is the threshold that gives the highest node coverage of
the second largest component for subject No.86, but at the same threshold the
node coverage for subject No.96 is around zero, so no meaningful second
largest component is obtained (see Figures 3.9a and 3.9b).
[Figure 3.9 plots omitted: node coverage versus threshold (0.5–1) for the largest and second largest components. Panels: (a) Subject No.86; (b) Subject No.96; (c) subjects' average.]
Figure 3.9: The sizes of the largest component and the second largest component
change with different thresholds.
As Figure 3.10 shows, the selected thresholds range from 0.66 to 0.88 and
center around the mean of 0.783 (Figure 3.10b). This suggests that the
selected thresholds are meaningful, as they do not vary greatly across
subjects.
[Figure 3.10 plots omitted. Panels: (a) stem plot of the selected threshold for each subject; (b) box plot of the selected thresholds over all subjects.]
Figure 3.10: The thresholds selected across subjects. The maximum threshold is
0.88, the mean is 0.783, the minimum is 0.66 and the standard deviation is 0.045.
The whole-brain network includes more noise than sub-networks such as the
second largest component. The density of the whole-brain network is much
smaller than that of the second largest component (Figure 3.11), because many
small clustered sub-networks (such as pairs of linked nodes) are present in
the whole-brain network, which makes the analysis more difficult and complicated.
An example of an extracted second largest network is shown in Figure 3.13.
Although some small noise (nodes outside the brain) appears in the bottom
right corner of the coronal section (Figure 3.12b), the overall distribution
of the network is reasonable, as it mostly lies in the visual and motor
regions (middle and lateral
[Figure 3.11 plot omitted.]
Figure 3.11: Network density for the whole-brain network and the second
largest component across subjects.
part of the brain), and it forms a distributed network that is not confined
to a single functional brain region (such as the visual cortex).
3.6.3 Random graph
Because fMRI-derived graphs behave far from randomly (Mørup et al., 2010), we
can obtain statistical evidence of the differences across individual brains by
comparing the extracted network of each subject with a random graph.
Figure 3.14 shows that fMRI-derived graphs look different from a random graph
with respect to degree values.
In an Erdős–Rényi random graph G(n, p), each pair of the n nodes is connected
independently with probability p.
In Figure 3.14, although the edge probability of the Erdős–Rényi random graph
is set to the density of the extracted second largest network, the resulting
densities of the random and the extracted networks differ slightly (0.028 and
0.026 respectively).
When calculating small-worldness, the comparable random graph is an
Erdős–Rényi random graph with the average number of nodes and the average
density of the second largest components from the 100 subjects in STRADL.
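Such a comparable G(n, p) graph can be drawn with networkx. The n and p below are illustrative stand-ins for the STRADL averages rather than the exact values, and, as noted for Figure 3.14, the realized density of any particular draw differs slightly from p.

```python
import networkx as nx

n_avg, p_avg = 160, 0.026   # illustrative stand-ins, not the exact STRADL averages

# Erdos-Renyi G(n, p): each of the n*(n-1)/2 node pairs is linked
# independently with probability p
G_rand = nx.gnp_random_graph(n_avg, p_avg, seed=0)
realized_density = nx.density(G_rand)   # fluctuates around p across seeds
```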
[Figure 3.13 plots omitted. Panels: (a) the extracted second largest network in horizontal view; (b) coronal view; (c) sagittal view; (d) 3D view.]
Figure 3.13: The extracted second largest network for subject No.5. Voxels are
selected in every 5× 5× 5 cube.
[Figure 3.14 plot omitted.]
Figure 3.14: Node degrees in the extracted second largest network for subject
No.1. The pink line shows the extracted second largest network; the blue area
represents an Erdős–Rényi random graph with the same number of vertices as the
extracted network and edge probability set to its density. The degree
distributions of a random graph and of the extracted second largest networks
are shown in Figures 3.15 and 3.16.
3.6.4 Power-law degree distribution
As we can see from Figure 3.16 and Figure 3.15 that the degree distribution of the
second largest functional in the brain is very different from a random graph. The
number of nodes and the density parameter of the random graph are the averages
from the 100 subjects in STRADL. We can also tell from Figure 3.17a that the
average degree distribution of the second largest network in brains has a “heavy
tail”, which obey the power law. The curve in Figure 3.17a is a fitted power
function estimated from Equation 3.10. Figure 3.17b shows the same degree
distribution in Figure 3.17a, but in a logarithmic scale.
We applied maximum likelihood estimation (MLE) for the truncated Pareto
distribution (White et al., 2008) to estimate the exponent λ (λ = −γ).
According to White et al. (2008), the MLE of λ solves Equation 3.9:

\overline{\ln x} = -\frac{1}{\lambda+1} + \frac{b^{\lambda+1}\ln b - a^{\lambda+1}\ln a}{b^{\lambda+1} - a^{\lambda+1}} \quad (3.9)

where \overline{\ln x} is the sample mean of \ln x over the observed degrees x, with a ≤ x ≤ b, λ ≠ −1, a ≥ 0, b ≥ 0.
The probability density function (PDF) is

f(x) = (\lambda+1)\left(b^{\lambda+1} - a^{\lambda+1}\right)^{-1} x^{\lambda} \quad (3.10)
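Equation 3.9 can be solved for λ numerically. The sketch below uses bisection, assuming the root lies inside the bracketed interval, and checks itself on samples drawn via inverse-CDF sampling from a truncated Pareto with a known exponent; this is an illustrative implementation, not the exact code used for Figure 3.17.

```python
import numpy as np

def truncated_pareto_mle(x, a, b, lo=-5.0, hi=-1.001):
    """Solve Eq. 3.9 for lambda by bisection. Assumes lambda != -1 and
    that the root lies inside (lo, hi), where g is increasing."""
    target = np.log(np.asarray(x, dtype=float)).mean()

    def g(lam):                  # model mean of ln(x) minus the sample mean
        p = lam + 1.0
        return (-1.0 / p
                + (b**p * np.log(b) - a**p * np.log(a)) / (b**p - a**p)
                - target)

    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# self-check: inverse-CDF samples from a truncated Pareto with known exponent
rng = np.random.default_rng(0)
lam_true, a, b = -2.0, 1.0, 100.0            # i.e. gamma = 2
p = lam_true + 1.0
u = rng.random(20000)
samples = (a**p + u * (b**p - a**p)) ** (1.0 / p)
lam_hat = truncated_pareto_mle(samples, a, b)
```

With 20,000 samples the recovered exponent lands close to the true λ = −2.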
[Figure 3.15 plot omitted.]
Figure 3.15: Degree distribution of the comparable random graph.
Figure 3.16: Degree distributions of the second largest components constructed
from the STRADL subjects. The label "Count/Size" in the figure (where size is
the total degree count for each subject) is equivalent to the probability of
each degree.
3.7 Clustering evaluation
In this section, we are going to cluster subjects into groups by k-means clustering
with different network properties as the predictor. Intuitively, we assume that
[Figure 3.17 plots omitted. Panels: (a) average degree distribution for 100 subjects from STRADL; (b) the same distribution on a logarithmic scale.]
Figure 3.17: Average degree distribution of the second largest components con-
structed from STRADL subjects. The degree distribution parameter γ is 1.239.
there are 3 clusters among the 100 subjects from STRADL. The largest cluster
might reveal the average mental health condition of the 100 subjects, while
the other two clusters might deviate from the average toward two different
extremes.
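The clustering step can be sketched with scikit-learn's k-means on a single scalar feature per subject. The synthetic feature values below (three well-separated Gaussian groups) merely stand in for the real clustering coefficients or small-worldness values.

```python
import numpy as np
from sklearn.cluster import KMeans

# synthetic per-subject features: three well-separated groups (20/50/30)
rng = np.random.default_rng(0)
features = np.concatenate([rng.normal(0.20, 0.02, 20),   # low extreme
                           rng.normal(0.45, 0.02, 50),   # "average" group
                           rng.normal(0.70, 0.02, 30)])  # high extreme

# k = 3 clusters, matching the assumption stated above
km = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = km.fit_predict(features.reshape(-1, 1))
cluster_sizes = np.bincount(labels)
```

With well-separated groups, k-means recovers the three intended clusters; on real data the boundaries are of course less clear-cut.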
3.7.1 Clustering with clustering coefficient and small-worldness
Let us call the clusters obtained with the clustering coefficient as the
predictor the CC clusters (results shown in Figure 3.18a), and the clusters
obtained with small-worldness as the predictor the SW clusters (results shown
in Figure 3.19c). The number of subjects in each cluster is reported on the
right side of Table 3.2.
The small-worldness values of all subjects are greater than 1, which means all
the second largest components from the STRADL subjects are small-world
networks. In Figure 3.18b, subjects clustered into one group by their
clustering coefficients are marked with the same shape, consistent with the
clustering results in Figure 3.18a. The distribution of the CC clusters along
the small-worldness axis in Figure 3.18b is clearly similar to that along the
clustering coefficient axis. The clusters at the bottom, middle and top of
both figures (Figures 3.18a and 3.18b) are CC cluster2, CC cluster1 and CC
cluster3 respectively. The same observation holds for Figures 3.19c and 3.19d:
though some subjects in SW cluster2 are mixed with subjects in SW cluster3,
the distribution of the SW clusters along the clustering coefficient axis in
Figure 3.19d is roughly the same as in Figure 3.19c. This indicates that the
clustering results with the two different predictors are roughly the same.
3.7.2 Clustering analysis
We also calculate the intersections of the subjects in the clusters of the two
different clustering predictors. We find that most subjects (86%) are assigned
to the same group by both clusterings. Table 3.2 shows the overlap of the two
clustering approaches in detail. For instance, of the 50 subjects in CC
cluster1 and the 38 subjects in SW cluster3, 36 are grouped in both clusters.
Note that CC cluster1 and SW cluster3 both occupy the middle part of
Figures 3.18a and 3.19c, so they are comparable.
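The intersection counts of Table 3.2 amount to tallying (CC cluster, SW cluster) label pairs per subject; a small sketch with illustrative labels for ten subjects:

```python
from collections import Counter

def cluster_overlap(labels_cc, labels_sw):
    """Count subjects assigned to each (CC cluster, SW cluster) pair."""
    return Counter(zip(labels_cc, labels_sw))

# illustrative assignments for 10 subjects (not the real STRADL labels)
cc = ['CC1'] * 5 + ['CC2'] * 2 + ['CC3'] * 3
sw = ['SW3'] * 4 + ['SW2'] * 3 + ['SW1'] * 3
overlap = cluster_overlap(cc, sw)

# subjects grouped consistently by both predictors (CC1<->SW3, etc.)
agree = overlap[('CC1', 'SW3')] + overlap[('CC2', 'SW2')] + overlap[('CC3', 'SW1')]
coverage = 100.0 * agree / len(cc)
```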
Furthermore, we compare the mean degree distributions of the three clusters
(covering 86% of the subjects) from Table 3.2. The degree distribution parameters
[Figure 3.18 plots omitted. Panels: (a) clustering coefficients of the second largest components from the 100 STRADL subjects, with the clustering coefficient as the clustering predictor; (b) small-worldness values of the same components, with the clustering coefficient as the clustering predictor.]
[Figure 3.19 plots omitted. Panels: (c) small-worldness values of the second largest components from the 100 STRADL subjects, with small-worldness as the clustering predictor; (d) clustering coefficients of the same components, with small-worldness as the clustering predictor.]
Figure 3.19: K-means clustering results with clustering coefficient or small-
worldness as the predictor. All subjects have been clustered into three groups
and marked in different shapes in the figures.
Table 3.2: Comparison of the clustering results by two clustering predictors. The
subjects that are clustered in both selected source clusters (clusters shown in
Figure 3.19) are picked. For example, subjects in cluster1 are subjects in both
CC cluster1 and SW cluster3.
           Subjects coverage (%)   Source        Subjects coverage (%)
Cluster1   36                      CC cluster1   50
                                   SW cluster3   38
Cluster2   20                      CC cluster2   20
                                   SW cluster2   34
Cluster3   28                      CC cluster3   30
                                   SW cluster1   28
of cluster1, cluster2 and cluster3 are 1.242, 1.695 and 0.988 respectively
(see Figure 3.20). Interestingly, the mean degree distribution parameter
increases as the clustering coefficient and small-worldness increase.
To summarize, the clustering results with the clustering coefficient and with
small-worldness support each other, as they are highly similar. The degree
distribution parameters of the subjects overlapping in the two types of
clusters also show distinct differences, which means the three network
properties we consider are reliable for clustering the subjects. The largest
cluster is also close to the average in all three metrics across subjects,
showing the network features of the average mental health condition in the
analyzed subject group.
[Figure 3.20 plot omitted.]
Figure 3.20: Degree distribution of clusters in logarithmic scale. The clusters
marked in the figure are the ones from Table 3.2. The degree distribution for
cluster1 (the largest cluster) is close to the average, while the other two clusters
deviate from the average. The degree distribution parameters for the cluster1,
cluster2 and cluster3 are 1.242, 1.695 and 0.988 respectively.
Chapter 4
Discussion
4.1 Pre-processing procedure
The choice of pre-processing techniques has a big influence on the later
analysis. In section 3.2 we discussed the trade-offs of implementing slice
timing and smoothing, and decided not to implement them. The pre-processing
steps of realignment and normalization are already sufficient for
comparability across brains. However, in section 3.3.2, we found that after
normalisation some slight misalignments around the boundary of the head
introduce negative voxel values. This suggests that the traditional
normalisation (based on coregistration and segmentation) might be a better
option. But the traditional procedure is more time-consuming, as we would need
to align the fMRI and sMRI for each subject before adjusting the fMRIs to the
sMRI template space. On the other hand, simply neglecting the negative values
loses only 1% of the voxels (see section 3.3.2). Therefore, although our
normalisation results are not perfect, they are acceptable.
4.2 Voxel-based network
Most functional networks in resting-state fMRI studies are region-based. The
most common brain parcellation divides the brain into 90 anatomical regions
with the Automated Anatomical Labeling (AAL) template and averages the voxels
of each region (Lynall et al., 2010; Cao et al., 2014). A similar approach is
to parcellate the brain into a random number of regions (Cao et al., 2014).
Functional connectivity between ICA regions has also been explored (Mørup
et al., 2010). Region-based functional networks are computationally easier to
analyze, as a smaller number of nodes is considered. However, the region-based
network is computationally expensive to construct, and it limits the
evaluation of inter-regional connectivity and restricts the analysis to
certain regions. In contrast, the voxel-based network, which we implemented in
our project, can measure both inter-regional and intra-regional connectivity.
Previously, van den Heuvel et al. (2008) analyzed voxel-based resting-state
functional networks in healthy subjects and found that the networks are
scale-free, but they did not make any further inference.
4.3 Dimension reduction
For computational efficiency, we reduce the dimensionality with a cubic
selection approach. We find that the larger the cube, the more connected
components are formed. However, the density of the whole-brain functional
network does not change with different cube sizes (3 × 3 × 3, 4 × 4 × 4 and
5 × 5 × 5) across different thresholds. The number of connected components
increases as the threshold increases, and the increasing curve is similar for
the different cube sizes. Also, the sudden merge of the second largest
component into the largest component is observed consistently across the three
cube sizes we studied. This implies that the choice of the 5 × 5 × 5 cube size
for dimension reduction is feasible.
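The cubic selection can be sketched as strided indexing: take the center voxel of every cube as its representative and discard incomplete blocks at the edges. A toy integer volume stands in for a real fMRI volume here.

```python
import numpy as np

def cube_centers(volume, cube=5):
    """Keep one representative voxel per cube x cube x cube block:
    the block's center, discarding incomplete blocks at the boundary."""
    off = cube // 2                      # center offset within each block
    return volume[off::cube, off::cube, off::cube]

# toy 10x10x10 "volume" whose value encodes the voxel index
vol = np.arange(10 * 10 * 10).reshape(10, 10, 10)
reps = cube_centers(vol, cube=5)         # 2 blocks per axis -> 8 voxels
```

For 4D resting-state data the same slicing would be applied to the three spatial axes, keeping the full time axis for each representative voxel.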
4.4 Correlations in the network
In this project, we use correlations to measure the interrelations between
processes. Though there are other measures, such as Granger causality,
independent components and mutual information, we believe correlations are the
proper measure for our large-scale analysis.
Mutual information is commonly used to measure dependencies between regions
defined by ICA (Dodel, 2002; Mørup et al., 2010). The mutual information of
two variables is calculated from the joint and marginal probability
distributions, which depend heavily on the observations: the choice of bin
widths and of the time scale for counting observations influences the accuracy
of the mutual information. Granger causality has the same disadvantage of
dependence on the observed variables (Zhou et al., 2014). Granger causality
quantifies the usefulness of one voxel's time series for predicting another
voxel or a selected region (Roebroeck et al., 2005); in other words, it
identifies whether the past of one variable can predict the present of
another. This causal information about the connectivity between voxels is less
interpretable than correlations. Therefore, Granger causality, independent
components and mutual information are statistically less reliable and
computationally more complex, which makes them less suitable for big data
analysis.
4.5 Network properties
The network properties we consider cover global and local features of the
network. The degree distribution parameter, the clustering coefficient and
small-worldness can each be computed as a single quantity. With enough data,
these quantities can be estimated with statistical significance. Another
advantage is that they are computationally accessible, which is suitable for
big data analysis.
In addition, these properties are widely used in the community, so our results
are comparable with previous work (e.g. Mørup et al. (2010); Cao et al.
(2014); Lynall et al. (2010)). Though the scale of the networks varies between
studies, the results on these network properties are still worth comparing.
4.6 Results interpreted from resting state
In this project we simply assume that the highly active voxels are of the
greatest interest. However, this is not always the case. Voxels with high
values might be caused by the MRI scanner (low-frequency fluctuations) or by
respiratory and cardiac pulsations, which are the main sources of noise in the
resting state (Cordes et al., 2001). Thus, in the future, we might implement a
band-pass filter to minimize the influence of such noise (van den Heuvel
et al., 2008).
Another assumption we made in this project is that the subjects were scanned
in the resting state. But this remains unknown, as the subjects might have
been performing unconstrained cognitive tasks, such as mind wandering, during
the rest. As the network we extracted is not the Default Mode Network (see
section 2.1), other explanations of the brain activity at rest are possible
and should be explored in future work.
Chapter 5
Conclusion
In this project we performed a complete analysis of fMRI brain scans of one
hundred subjects in the resting state. We discussed the pros and cons of the
standard pre-processing techniques. Realignment and normalization are applied
to make the subjects comparable, though they introduce some artefacts and
noise.
The functional network of the brain is constructed at the voxel level, which
reveals both inter-regional and intra-regional information. Voxels in every
5 × 5 × 5 cube are selected and a correlation matrix for these voxels is
calculated. A threshold is set to binarize the correlation matrix. We find a
consistent phase transition at which the second largest connected component
merges into the largest connected component. The threshold that causes this
sudden transition is selected, and the second largest component is extracted
from the network constructed at that threshold. The selected thresholds for
the subjects range from 0.66 to 0.88.
Basic network properties are discussed, and three suitable metrics, the degree
distribution parameter, the clustering coefficient and small-worldness, are
selected for analysis. We find that the second largest networks we generate
are scale-free, because their degree distributions follow a power law. The
degree distributions of different subjects can then be compared via the
characterising exponent (the degree distribution parameter). We clustered the
subjects into three groups by k-means clustering with the clustering
coefficient and with small-worldness. The clustering results with the two
types of clustering predictors are similar in terms of the distribution and
the number of overlapping subjects. Moreover, the overlapping subjects in the
three clusters show distinguishable degree distribution parameters, which
supports the reliability of the clustering results.
Though we do not yet have mental health data for the studied subjects, we can
still conclude that the largest cluster reveals the network characteristics of
the average mental health condition among the subjects. With more information
for each subject in the future, we might give more accurate classification
results on healthy controls and Major Depression Disorder patients with the
proposed network properties.
Bibliography
Arbabshirani, M. R., Kiehl, K. A., Pearlson, G. D., and Calhoun, V. D. (2013).
Classification of schizophrenia patients based on resting-state functional net-
work connectivity. Frontiers in Neuroscience, 7(7):133.
Ashburner, J., Barnes, G., Chen, C., and Daunizeau, J. (2014). SPM12 Manual.
Bollobas, B., Janson, S., and Riordan, O. (2007). The phase transition in
inhomogeneous random graphs. Random Structures & Algorithms, 31(1):3–122.
Bullmore, E. and Sporns, O. (2009). Complex brain networks: graph theoretical
analysis of structural and functional systems. Nature Reviews Neuroscience,
10(3):186–198.
Cabral, J., Kringelbach, M. L., and Deco, G. (2014). Exploring the network
dynamics underlying brain activity during rest. Progress in Neurobiology,
114:102–131.
Cao, M., Wang, J.-H., Dai, Z.-J., Cao, X.-Y., Jiang, L.-L., Fan, F.-M., Song, X.-
W., Xia, M.-R., Shu, N., Dong, Q., Milham, M. P., Castellanos, F. X., Zuo, X.-
N., and He, Y. (2014). Topological organization of the human brain functional
connectome across the lifespan. Developmental Cognitive Neuroscience, 7:76–
93.
Cole, D., Smith, S., and Beckmann, C. (2010). Advances and pitfalls in the
analysis and interpretation of resting-state FMRI data. Frontiers in Systems
Neuroscience, 4:8.
Cordes, D., Haughton, V. M., Arfanakis, K., Carew, J. D., Turski, P. A., Moritz,
C. H., Quigley, M. A., and Meyerand, M. E. (2001). Frequencies Contributing
to Functional Connectivity in the Cerebral Cortex in “Resting-state” Data.
American Journal of Neuroradiology, 22(7):1326–1333.
Danev, L. (2016). Exploratory analysis of the brain's resting state functional
connectivity in fMRI data. MInf project (part 1) interim report, University of
Edinburgh.
Deco, G. and Kringelbach, M. L. (2014). Great Expectations: Using Whole-Brain
Computational Connectomics for Understanding Neuropsychiatric Disorders.
Neuron, 84(5):892–905.
Dodel, S. (2002). Data driven analysis of brain activity and functional connectivity.
PhD thesis, University of Göttingen.
Eguiluz, V. M., Chialvo, D. R., Cecchi, G. A., Baliki, M., and Apkarian,
A. V. (2005). Scale-Free Brain Functional Networks. Physical Review Letters,
94(1):018102.
Fair, D. A., Cohen, A. L., Dosenbach, N. U. F., Church, J. A., Miezin, F. M.,
Barch, D. M., Raichle, M. E., Petersen, S. E., and Schlaggar, B. L. (2008).
The maturing architecture of the brain’s default network. Proceedings of the
National Academy of Sciences of the United States of America, 105(10):4028–
4032.
Fernandez-Pujals, A. M., Adams, M. J., Thomson, P., McKechanie, A. G., Black-
wood, D. H., Smith, B. H., Dominiczak, A. F., Morris, A. D., Matthews, K.,
Campbell, A., and et al. (2015). Epidemiology and Heritability of Major De-
pressive Disorder, Stratified by Age of Onset, Sex, and Illness Course in Gen-
eration Scotland: Scottish Family Health Study (GS:SFHS).
Gonzalez, R. C. and Woods, R. E. (2008). Digital Image Processing. Prentice Hall.
Hayasaka, S. and Laurienti, P. J. (2010). Comparison of characteristics between
region-and voxel-based network analyses in resting-state fMRI data. Neuroim-
age, 50(2):499–508.
Lord, L.-D., Expert, P., Huckins, J. F., and Turkheimer, F. E. (2013). Cerebral
energy metabolism and the brain’s functional network architecture: an integra-
tive review. Journal of Cerebral Blood Flow & Metabolism, 33(9):1347–1354.
Lynall, M.-E., Bassett, D. S., Kerwin, R., McKenna, P. J., Kitzbichler, M.,
Muller, U., and Bullmore, E. (2010). Functional Connectivity and Brain Net-
works in Schizophrenia. The Journal of Neuroscience, 30(28):9477–9487.
Mørup, M., Madsen, K., and Dogonowski, A. (2010). Infinite relational modeling
of functional connectivity in resting state fMRI. Advances in Neural Information
Processing Systems 23.
Moussa, M. N., Steen, M. R., Laurienti, P. J., and Hayasaka, S. (2012). Con-
sistency of Network Modules in Resting-State fMRI Connectome Data. PLoS
One, 7(8):e44428.
Power, J. D., Cohen, A. L., Nelson, S. M., Wig, G. S., Barnes, K. A., Church,
J. A., Vogel, A. C., Laumann, T. O., Miezin, F. M., Schlaggar, B. L., and
Petersen, S. E. (2011). Functional Network Organization of the Human Brain.
Neuron, 72(4):665–678.
Raichle, M. E. (2015). The Brain’s Default Mode Network. Annual Review of
Neuroscience, 38(1):433–447.
Raichle, M. E., MacLeod, A. M., Snyder, A. Z., Powers, W. J., Gusnard, D. A.,
and Shulman, G. L. (2001). A default mode of brain function. Proceedings of the
National Academy of Sciences of the United States of America, 98(2):676–682.
Roebroeck, A., Formisano, E., and Goebel, R. (2005). Mapping directed influence
over the brain using Granger causality and fMRI. Neuroimage, 25(1):230–242.
Sladky, R., Friston, K. J., Trostl, J., Cunnington, R., Moser, E., and Windis-
chberger, C. (2011). Slice-timing effects and their correction in functional MRI.
Neuroimage, 58(2):588–594.
van den Heuvel, M. P., Stam, C. J., Boersma, M., and Hulshoff Pol, H. E. (2008).
Small-world and scale-free organization of voxel-based resting-state functional
connectivity in the human brain. Neuroimage, 43(3):528–539.
Wang, L., Hermens, D. F., Hickie, I. B., and Lagopoulos, J. (2012). A systematic
review of resting-state functional-MRI studies in major depression. Journal of
Affective Disorders, 142(1-3):6–12.
White, E. P., Enquist, B. J., and Green, J. L. (2008). On estimating the
exponent of power-law frequency distributions. Ecology, 89(4):905–912.
Yu, Q., Erhardt, E. B., Sui, J., Du, Y., He, H., Hjelm, D., Cetin, M. S.,
Rachakonda, S., Miller, R. L., Pearlson, G., and Calhoun, V. D. (2015). As-
sessing dynamic brain graphs of time-varying connectivity in fMRI data: Ap-
plication to healthy controls and patients with schizophrenia. Neuroimage,
107:345–355.
Zalesky, A., Fornito, A., Harding, I. H., Cocchi, L., Yucel, M., Pantelis, C., and
Bullmore, E. T. (2010). Whole-brain anatomical networks: Does the choice of
nodes matter? Neuroimage, 50(3):970–983.
Zhou, D., Zhang, Y., Xiao, Y., and Cai, D. (2014). Reliability of the Granger
causality inference. New Journal of Physics, 16(4):043016.