Statistical analysis of network data and evolution on GPUs: High-performance statistical computing

Thomas Thorne & Michael P.H. Stumpf

Theoretical Systems Biology Group

25/01/2012

Network Analysis on GPUs Thomas Thorne & Michael P.H. Stumpf 1 of 22

Nature Precedings : doi:10.1038/npre.2012.6874.1 : Posted 9 Feb 2012



Trends in High-Performance Computing

CUDA OpenCL MPI OpenMPI PVM

Network Analysis on GPUs Thomas Thorne & Michael P.H. Stumpf Introduction 2 of 22


(Hidden) Costs of Computing

Prototyping/Development Time is typically reduced for scripting languages such as R or Python.

Run Time On single threads C/C++ (or Fortran) have better performance characteristics, but for specialized tasks other languages, e.g. Haskell, can show good characteristics.

Energy Requirements Every Watt we use for computing we also have to extract with air conditioning.

The role of GPUs
• GPUs can be accessed from many different programming languages (e.g. PyCUDA).
• GPUs have a comparatively small footprint and relatively modest energy requirements compared to clusters of CPUs.
• GPUs were designed for consumer electronics: computer gamers have different needs from the HPC community.

Network Analysis on GPUs Thomas Thorne & Michael P.H. Stumpf Introduction 3 of 22


Evolving Networks

[Figure: schematic of an evolving network, annotated with the model parameters α, γ and δ.]

Network Analysis on GPUs Thomas Thorne & Michael P.H. Stumpf Network Evolution 4 of 22


Evolving Networks

Model-Based Evolutionary Analysis
• For sequence data we use models of nucleotide substitution in order to infer phylogenies in a likelihood or Bayesian framework.
• None of these models, even the general time-reversible model, are particularly realistic; but by allowing for complicating factors, e.g. rate variation, we capture much of the variability observed across a phylogenetic panel.
• Modes of network evolution will be even more complicated and exhibit high levels of contingency; moreover the structure and function of different parts of the network will be intricately linked.
• Nevertheless we believe that modelling the processes underlying the evolution of networks can provide useful insights; in particular we can study how functionality is distributed across groups of genes.

Network Analysis on GPUs Thomas Thorne & Michael P.H. Stumpf Network Evolution 4 of 22


Network Evolution Models

(a) Duplication attachment
(b) Duplication attachment with complementarity
(c) Linear preferential attachment
(d) General scale-free

[Figure: schematic networks for each model; in (c) and (d) nodes carry weights w_i, w_j.]

Network Analysis on GPUs Thomas Thorne & Michael P.H. Stumpf Network Evolution 5 of 22


ABC on Networks

Summarizing Networks
• Data are noisy and incomplete.
• We can simulate models of network evolution, but this does not allow us to calculate likelihoods for all but very trivial models.
• There is also no sufficient statistic that would allow us to summarize networks, so ABC approaches require some thought.
• Many possible summary statistics of networks are expensive to calculate.

Full likelihood: Wiuf et al., PNAS (2006).
ABC: Ratmann et al., PLoS Comp. Biol. (2008).
Stumpf & Wiuf, J. Roy. Soc. Interface (2010).

Network Analysis on GPUs Thomas Thorne & Michael P.H. Stumpf Network Evolution 6 of 22


Graph Spectrum

[Figure: example graph on nodes a, b, c, d, e with adjacency matrix]

        a b c d e
    a [ 0 1 1 1 0 ]
    b [ 1 0 1 1 0 ]
A = c [ 1 1 0 0 0 ]
    d [ 1 1 0 0 1 ]
    e [ 0 0 0 1 0 ]

Graph Spectra
Given a graph G comprised of a set of nodes N and edges (i, j) ∈ E with i, j ∈ N, the adjacency matrix, A, of the graph is defined by

    a_{i,j} = 1 if (i, j) ∈ E, 0 otherwise.

The eigenvalues, λ, of this matrix provide one way of defining the graph spectrum.

Network Analysis on GPUs Thomas Thorne & Michael P.H. Stumpf Network Evolution 7 of 22
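As a concrete illustration (not code from the talk), the spectrum of the example adjacency matrix above can be computed with NumPy:

```python
import numpy as np

# Adjacency matrix of the example graph (nodes a-e from the slide).
A = np.array([
    [0, 1, 1, 1, 0],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 0, 0],
    [1, 1, 0, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)

# The adjacency matrix of an undirected graph is symmetric, so its
# eigenvalues are real; eigvalsh returns them in ascending order.
spectrum = np.linalg.eigvalsh(A)
print(spectrum)
```

Since the diagonal of A is zero, the eigenvalues must sum to the trace, i.e. to zero, which is a quick sanity check on the computed spectrum.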


Spectral Distances

A simple distance measure between graphs having adjacency matrices A and B, known as the edit distance, is to count the number of edges that are not shared by both graphs,

    D(A, B) = Σ_{i,j} (a_{i,j} − b_{i,j})².

However for unlabelled graphs we require some mapping h from i ∈ N_A to i′ ∈ N_B that minimizes the distance,

    D(A, B) ≥ D′_h(A, B) = Σ_{i,j} (a_{i,j} − b_{h(i),h(j)})².

Given a spectrum (which is relatively cheap to compute) we have

    D′(A, B) = Σ_l (λ_l^(α) − λ_l^(β))².

Network Analysis on GPUs Thomas Thorne & Michael P.H. Stumpf Network Evolution 8 of 22
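A minimal NumPy sketch of the spectral distance D′ above (the function name is mine, not from the slides). Because the spectrum is invariant under node relabelling, two labellings of the same graph have distance (numerically) zero:

```python
import numpy as np

def spectral_distance(A, B):
    """Squared Euclidean distance between the sorted spectra of two
    symmetric adjacency matrices of the same size."""
    lam_a = np.linalg.eigvalsh(A)  # ascending order
    lam_b = np.linalg.eigvalsh(B)
    return float(np.sum((lam_a - lam_b) ** 2))

# Two labellings of the same 3-node path graph: their edit distance is
# nonzero, but their spectral distance vanishes (up to rounding).
P1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
perm = [2, 0, 1]
P2 = P1[np.ix_(perm, perm)]
print(spectral_distance(P1, P2))  # ~0 for isomorphic graphs
```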


ABC using Graph Spectra

For an observed network, N, and a simulated network, S_θ, we use the distance between the spectra,

    D′(N, S_θ) = Σ_l (λ_l^(N) − λ_l^(S))²,

in our ABC SMC procedure. Note that this distance is a close lower bound on the distance between the raw data; we therefore do not have to bother with summary statistics.
Also, calculating graph spectra costs as much as calculating other O(N³) statistics (such as all shortest paths, the network diameter or the within-reach distribution).

Network Analysis on GPUs Thomas Thorne & Michael P.H. Stumpf Network Evolution 9 of 22
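The slides use an ABC SMC scheme; the following is a much simpler ABC rejection sketch using the spectral distance, just to make the role of D′ concrete. The model (an Erdős–Rényi graph), the uniform prior and the quantile-based tolerance are illustrative assumptions, not the models of the talk:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_er(n, p):
    """Illustrative stand-in model: an Erdos-Renyi random graph as a
    symmetric 0/1 adjacency matrix (the talk's models are richer)."""
    upper = np.triu(rng.random((n, n)) < p, k=1)
    return (upper | upper.T).astype(float)

def spectral_distance(A, B):
    la, lb = np.linalg.eigvalsh(A), np.linalg.eigvalsh(B)
    return float(np.sum((la - lb) ** 2))

n, p_true, draws = 30, 0.3, 500
observed = simulate_er(n, p_true)

# ABC rejection: sample from the prior, simulate, and keep the draws
# whose simulated spectrum lies closest to the observed one (here the
# tolerance is set to the 5% quantile of the observed distances).
ps = rng.random(draws)                      # uniform prior on [0, 1]
ds = np.array([spectral_distance(observed, simulate_er(n, p)) for p in ps])
accepted = ps[ds <= np.quantile(ds, 0.05)]

print(accepted.mean())
```

The accepted parameter values concentrate around p_true because the leading eigenvalue of an Erdős–Rényi graph grows with p, so the spectral distance is strongly informative about it.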


Simulated Data

[Figure: ABC SMC posterior distributions for the model parameters δ, α, p and m recovered from simulated data.]

Network Analysis on GPUs Thomas Thorne & Michael P.H. Stumpf Network Evolution 10 of 22


Protein Interaction Network Data

Species           Proteins  Interactions  Genome size  Sampling fraction
S. cerevisiae     5035      22118         6532         0.77
D. melanogaster   7506      22871         14076        0.53
H. pylori         715       1423          1589         0.45
E. coli           1888      7008          5416         0.35

[Figure: posterior model probabilities (0.0–0.5) for the models DA, DAC, LPA, SF, DACL and DACR, by organism: S. cerevisiae, D. melanogaster, H. pylori and E. coli.]

Model Selection
• Inference here was based on all the data, not summary statistics.
• Duplication models receive the strongest support from the data.
• Several models receive support and no model is chosen unambiguously.

Network Analysis on GPUs Thomas Thorne & Michael P.H. Stumpf Network Evolution 11 of 22


Protein Interaction Network Data

[Figure: posterior densities of the parameters δ, α, p and m under the DA, DAC, DACL and DACR models, for S. cerevisiae, D. melanogaster, H. pylori and E. coli.]

Network Analysis on GPUs Thomas Thorne & Michael P.H. Stumpf Network Evolution 11 of 22


GPUs in Computational Statistics

Performance
• With highly optimized libraries (e.g. BLAS/ATLAS) we can run numerically demanding jobs relatively straightforwardly.
• Whenever poor man's parallelization is possible, GPUs offer considerable advantages over multi-core CPU systems (at comparable cost and energy requirements).
• Performance depends crucially on our ability to map tasks onto the hardware.

Challenges
• GPU hardware was initially conceived for different purposes: computer gamers need fewer random numbers than MCMC or SMC procedures, and the address space accessible to single-precision numbers suffices to kill zombies etc.
• Combining several GPUs requires additional work, using e.g. MPI.

Network Analysis on GPUs Thomas Thorne & Michael P.H. Stumpf GPU and the Alternatives 12 of 22


Alternatives to GPUs: FPGAs

Field Programmable Gate Arrays are electronic circuits that are partly (re)configurable. Here the hardware is adapted to the problem at hand and encapsulates the programme in its programmable logic arrays.

[Figures from D.B. Thomas, W. Luk & M. Stumpf:
Fig. 9. A comparison of the raw swap-compare performance between the Virtex-4 hardware and a Quad Opteron software implementation. Hardware performance is shown both for a single graph labelling unit, and for an xc4vlx60 device filled with the maximum number of units that will fit.
Fig. 10. Practical performance when labelling random graphs for a Virtex-4 xc4vlx60 device compared with that of a Quad Opteron. The FPGA performance includes cycles lost due to loading the graph and extracting the label, and is also bandwidth limited to 133 MBytes/s to simulate a connection over a PCI bus. For all graphs of size less than 12 the FPGA is bandwidth limited (except in the Brute Force case), and for wv = 2 and wv = 3 the computational load stays so low relative to IO, due to the small number of swaps per graph, that the FPGA is bandwidth limited for all graph sizes.]

Network Analysis on GPUs Thomas Thorne & Michael P.H. Stumpf GPU and the Alternatives 13 of 22


Alternatives to GPUs: CPUs (and MPI. . . )

CPUs with multiple cores are flexible, have large address spaces and a wealth of flexible numerical routines. This makes implementation of numerically demanding tasks relatively straightforward. In particular there is less of an incentive to consider how a problem is best implemented in software that takes advantage of hardware features.

[Figure: run time against problem size (100–50000) for a 6-core Xeon CPU versus an M2050 GPU (448 cores), programmed in OpenCL.]

Network Analysis on GPUs Thomas Thorne & Michael P.H. Stumpf GPU and the Alternatives 14 of 22


(Pseudo) Random Numbers

The Mersenne Twister is one of the standard random number generators for simulation. MT19937 has a period of 2^19937 − 1 ≈ 4 × 10^6001.

But MT does not have cryptographic strength (once 624 iterates have been observed, all future states are predictable), unlike Blum-Blum-Shub,

    x_{n+1} = x_n² mod M,

where M = pq with p, q large prime numbers. But it is too slow for simulations.

Parallel Random Number Generation
• Running RNGs in parallel does not produce reliable sets of random numbers.
• We need algorithms that produce large numbers of parallel streams of "good" random numbers.
• We also need better algorithms for weighted sampling.

Network Analysis on GPUs Thomas Thorne & Michael P.H. Stumpf Random Numbers on GPUs 15 of 22
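A toy Blum-Blum-Shub generator, just to show the recurrence. The tiny primes are for illustration only; real use requires large primes p, q with p ≡ q ≡ 3 (mod 4) and a seed coprime to M:

```python
# Toy Blum-Blum-Shub generator; illustrative only (real use needs
# large primes p, q with p % 4 == q % 4 == 3).
def bbs_bits(seed, p=11, q=23, n_bits=16):
    M = p * q                 # modulus M = pq
    x = seed % M
    bits = []
    for _ in range(n_bits):
        x = (x * x) % M       # the recurrence x_{n+1} = x_n^2 mod M
        bits.append(x & 1)    # emit the least significant bit
    return bits

print(bbs_bits(3))
```

Each output bit requires a modular squaring of a large number, which is exactly why the generator is too slow for simulation work.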


What GPUs are Good At

• GPUs are good for linear threads involving mathematical functions.
• We should avoid loops and branches in the code.
• In an optimal world we should aim for all threads to finish at the same time.

[Figure: execution cycle.]

Network Analysis on GPUs Thomas Thorne & Michael P.H. Stumpf Random Numbers on GPUs 16 of 22
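The "avoid branches" advice can be sketched on the host side with NumPy: a data-dependent if/else is replaced by arithmetic selection, which mirrors how divergence-free GPU kernels are structured (an illustrative sketch, not code from the talk):

```python
import numpy as np

x = np.linspace(-2.0, 2.0, 9)

# Branching version: a per-element if/else, a poor fit for lockstep
# GPU threads because elements take different paths.
branched = np.array([xi if xi > 0 else 0.0 for xi in x])

# Branch-free version: the condition becomes a 0/1 mask used in
# arithmetic, so every "thread" executes the same instructions.
mask = (x > 0).astype(x.dtype)
branchless = mask * x

assert np.allclose(branched, branchless)
```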


Generating RNGs in Parallel

We assume that we have a function x_{n+1} = f(x_n) which generates a stream of (pseudo) random numbers,

    x_0, x_1, x_2, . . . , x_r, . . . , x_s, . . . , x_t, . . . , x_z.

Parallel Streams: separate streams x_0, x_1, . . . ; x′_0, x′_1, . . . ; x″_0, x″_1, . . .

Sub-Streams: a single stream partitioned into blocks x_0, . . . , x_{r−1}; x_r, . . . , x_{s−1}; x_s, . . . , x_{t−1}.

Network Analysis on GPUs Thomas Thorne & Michael P.H. Stumpf Random Numbers on GPUs 17 of 22
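One practical realization of the "parallel streams" idea today is NumPy's SeedSequence spawning, which derives statistically independent child streams from a single root seed (shown purely as an illustration; it is not what the talk used):

```python
import numpy as np

# One root seed, spawned into independent child sequences: each child
# can safely drive its own generator in a separate thread or GPU block.
root = np.random.SeedSequence(12345)
children = root.spawn(4)
streams = [np.random.default_rng(s) for s in children]

draws = [rng.random(3) for rng in streams]
for d in draws:
    print(d)
```

Spawning is deterministic given the root seed, so a parallel run is reproducible even though the streams never communicate.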


Counter-Based RNGs

We can represent RNGs as state-space processes,

  y_{n+1} = f(y_n)                         with f: Y → Y,
  x_{n+1} = g_{k, n mod J}(y_{⌊(n+1)/J⌋})  with g: Y × K × Z_J → X,

where the x_n essentially behave as x ~ U[0, 1]; here K is the key space and J is the number of random numbers generated from one internal state of the RNG.

Salmon et al. (SC11) propose to use a simple form for f(·) and

  g_{k,j} = h_k ∘ b_k.

For a sufficiently complex bijection b_k we can use simple updates and leave the randomization to b_k. Here cryptographic routines come in useful.
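A pure-Python sketch of this idea, following the round structure of Philox-2×32 from Salmon et al.: the "state update" is just the counter n, and all the randomization lives in the keyed bijection. The constants are taken from the Random123 paper as we recall them; treat this as an illustration, not a validated implementation.

```python
PHILOX_M = 0xD256D193   # multiplier (Philox-2x32 specification)
WEYL = 0x9E3779B9       # per-round key increment (golden-ratio constant)
MASK32 = 0xFFFFFFFF

def philox2x32(counter, key, rounds=10):
    """Counter-based generation: x_n = b_k(n). The 64-bit counter is
    bijectively scrambled under the key; no state is carried between calls."""
    c0, c1 = counter & MASK32, (counter >> 32) & MASK32
    k = key & MASK32
    for _ in range(rounds):
        prod = PHILOX_M * c0            # 64-bit product
        hi, lo = (prod >> 32) & MASK32, prod & MASK32
        c0, c1 = hi ^ k ^ c1, lo        # one S-P network round
        k = (k + WEYL) & MASK32         # bump the round key
    return c0, c1                       # two 32-bit pseudo-random words

# any thread can evaluate an arbitrary element of the stream directly:
draws = [philox2x32(n, key=0xCAFE) for n in range(4)]
```

Because each round is invertible, distinct counters map to distinct outputs under a fixed key, which is exactly what makes the counter a safe substitute for carried state on a GPU.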


RNG Performance on CPUs and GPUs

Method                        Max. input  Min. state  Output   Intel CPU     Nvidia GPU    AMD GPU
                                                               cpB    GB/s   cpB    GB/s   cpB    GB/s
Counter-based, Cryptographic
AES(sw)                       (1+0)×16    11×16       1×16     31.2   0.4    –      –      –      –
AES(hw)                       (1+0)×16    11×16       1×16      1.7   7.2    –      –      –      –
Threefish (Threefry-4×64-72)  (4+4)×8     0           4×8       7.3   1.7    51.8   15.3   302.8  4.5
Counter-based, Crush-resistant
ARS-5(hw)                     (1+1)×16    0           1×16      0.7   17.8   –      –      –      –
ARS-7(hw)                     (1+1)×16    0           1×16      1.1   11.1   –      –      –      –
Threefry-2×64-13              (2+2)×8     0           2×8       2.0   6.3    13.6   58.1   25.6   52.5
Threefry-2×64-20              (2+2)×8     0           2×8       2.4   5.1    15.3   51.7   30.4   44.5
Threefry-4×64-12              (4+4)×8     0           4×8       1.1   11.2   9.4    84.1   15.2   90.0
Threefry-4×64-20              (4+4)×8     0           4×8       1.9   6.4    15.0   52.8   29.2   46.4
Threefry-4×32-12              (4+4)×4     0           4×4       2.2   5.6    9.5    83.0   12.8   106.2
Threefry-4×32-20              (4+4)×4     0           4×4       3.9   3.1    15.7   50.4   25.2   53.8
Philox2×64-6                  (2+1)×8     0           2×8       2.1   5.9    8.8    90.0   37.2   36.4
Philox2×64-10                 (2+1)×8     0           2×8       4.3   2.8    14.7   53.7   62.8   21.6
Philox4×64-7                  (4+2)×8     0           4×8       2.0   6.0    8.6    92.4   36.4   37.2
Philox4×64-10                 (4+2)×8     0           4×8       3.2   3.9    12.9   61.5   54.0   25.1
Philox4×32-7                  (4+2)×4     0           4×4       2.4   5.0    3.9    201.6  12.0   113.1
Philox4×32-10                 (4+2)×4     0           4×4       3.6   3.4    5.4    145.3  17.2   79.1
Conventional, Crush-resistant
MRG32k3a                      0           6×4         1000×4    3.8   3.2    –      –      –      –
MRG32k3a                      0           6×4         4×4      20.3   0.6    –      –      –      –
MRGk5-93                      0           5×4         1×4       7.6   1.6    9.2    85.5   –      –
Conventional, Crushable
Mersenne Twister              0           312×8       1×8       2.0   6.1    43.3   18.3   –      –
XORWOW                        0           6×4         1×4       1.6   7.7    5.8    136.7  16.8   81.1

Table 2: Memory and performance characteristics for a variety of counter-based and conventional PRNGs. Maximum input is written as (c+k)×w, indicating a counter type of width c×w bytes and a key type of width k×w bytes. Minimum state and output size are c×w bytes. Counter-based PRNG performance is reported with the minimal number of rounds for Crush-resistance, and also with extra rounds for a "safety margin". Performance is shown in bold in the original for recommended PRNGs that have the best platform-specific performance with significant safety margin. The multiple recursive generator MRG32k3a [21] is from the Intel Math Kernel Library and MRGk5-93 [23] is adapted from the GNU Scientific Library (GSL). The Mersenne Twister [29] on CPUs is std::mt19937_64 from the C++0x library; on NVIDIA GPUs, the Mersenne Twister is ported from the GSL (the 32-bit variant with an output size of 1×4 bytes). XORWOW is adapted from [27] and is the PRNG in the NVIDIA CURAND library.

a single Xeon core, but it is significantly faster on a GPU. On an NVIDIA GTX580 GPU, Philox-4×32-7 produces random numbers at 202 GB per second per chip, the highest overall single-chip throughput that we are aware of for any PRNG (conventional or counter-based). On an AMD HD6970 GPU, it generates random numbers at 113 GB per second per chip, which is also an impressive single-chip rate. Even Philox-4×32-10, with three extra rounds of safety margin, is as fast as the non-Crush-resistant XORWOW [27] PRNG on GPUs.

We have only tested Philox with periods up to 2^256, but as with Threefish and Threefry, the Philox SP network is specified for widths up to 16×64 bits, and corresponding periods up to 2^1024. There is every reason to expect such wider forms also to be Crush-resistant.

5. COMPARISON OF PRNGS

When choosing a PRNG, there are several questions that users and application developers ask: Will the PRNG cause my simulation to produce incorrect results? Is the PRNG easy to use? Will it slow down my simulation? Will it spill registers, dirty my cache, or consume precious memory? In Table 2, we try to provide quantitative answers to these questions for counter-based PRNGs, as well as for a few popular conventional PRNGs. [12]

All the counter-based PRNGs we consider are Crush-resistant; there is no evidence whatsoever that they produce statistically flawed output. All of them conform to an identical, naturally parallel API, so ease-of-use considerations are moot. All of them can be written and validated in portable C, but as with many algorithms, performance can be improved by relying on compiler-specific features or "intrinsics". All of them have periods in excess of 2^128 and key spaces in excess of 2^64, well beyond the practical needs …

[12] Tests were run on a 3.07 GHz quad-core Intel Xeon X5667 (Westmere) CPU, a 1.54 GHz NVIDIA GeForce GTX580 (512 "CUDA cores") and an 880 MHz AMD Radeon HD6970 (1536 "stream processors"). The same source code was used for counter-based PRNGs on all platforms and is available for download. It was compiled with gcc 4.5.3 for the Intel CPU, the CUDA 4.0.17 toolkit for the NVIDIA GPU and OpenCL 1.1 (AMD-APP-SDK-v2.4) for the AMD GPU. Software AES used OpenSSL-0.9.8e. The MRG generators were from version 10.3.2 of the Intel Math Kernel Library and version 1.12 of the GNU Scientific Library.

Taken from Salmon et al. (see References).


Computational Challenges of GPUs

• Address spaces are a potential issue: using single precision we are prone to run out of numbers for challenging MCMC/SMC applications very quickly.

• Memory is an issue. In particular, registers and memory close to the computing cores are precious.

• Bandwidth is an issue: coordination between GPU and CPU is much more challenging in statistical applications than, e.g., for texture mapping or graphics rendering tasks.

• Programming has to be much more hardware-aware than for CPUs; more precisely, programming that is not adapted to the hardware will be more obviously inefficient than on CPUs.

• There are some differences from conventional ANSI standards for mathematics (e.g. rounding).
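The single-precision point can be demonstrated with a small sketch (ours, not from the slides): emulating 32-bit rounding with the standard library and accumulating many small log-likelihood increments, the single-precision running sum drifts measurably away from the double-precision one.

```python
import struct

def as_f32(x):
    # round a Python float (double) to IEEE-754 single precision
    return struct.unpack('f', struct.pack('f', x))[0]

term = -0.0001234      # a typical tiny log-likelihood increment
n = 1_000_000

acc64 = 0.0            # double-precision accumulator
acc32 = as_f32(0.0)    # emulated single-precision accumulator
for _ in range(n):
    acc64 += term
    acc32 = as_f32(acc32 + term)

# the single-precision sum has lost digits the double still carries
print(abs(acc64 - acc32))
```

In an MCMC or SMC run this kind of drift enters every acceptance ratio and weight, which is why double precision (or compensated summation) matters more here than in graphics workloads.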


References

GPUs and ABC

• Zhou, Liepe, Sheng, Stumpf, Barnes (2011) GPU accelerated biochemical network simulation. Bioinformatics 27:874–876.

• Liepe, Barnes, Cule, Erguler, Kirk, Toni, Stumpf (2010) ABC-SysBio – approximate Bayesian computation in Python with GPU support. Bioinformatics 26:1797–1799.

GPUs and Random Numbers

• Salmon, Moraes, Dror, Shaw (2011) Parallel Random Numbers: As Easy as 1, 2, 3. In Proceedings of the 2011 International Conference for High Performance Computing, Networking, Storage and Analysis (SC11), ACM, New York; http://doi.acm.org/10.1145/2063384.2063405.

• Howes, Thomas (2007) Efficient Random Number Generation and Application Using CUDA. In GPU Gems 3, Addison-Wesley Professional, Boston; http://http.developer.nvidia.com/GPUGems3/gpugems3_ch37.html.


Acknowledgements
