
Page 1: A massively parallel solution to a massively parallel problem HPC4NGS Workshop, May 21-22, 2012 Principe Felipe Research Center, Valencia Brian Lam, PhD

GPGPU in NGS Bioinformatics
A massively parallel solution to a massively parallel problem

HPC4NGS Workshop, May 21-22, 2012
Principe Felipe Research Center, Valencia

Brian Lam, PhD

Page 2:

Talk Outline

Next-generation sequencing and current challenges

What is GPGPU computing?

The use of GPGPU in NGS bioinformatics

How it works – the BarraCUDA project

System requirements for GPU computing

What if I want to develop my own GPU code?

What to look out for in the near future?

Conclusions

Page 3:

Talk Outline

Next-generation sequencing and current challenges

What is GPGPU computing?

The use of GPGPU in NGS bioinformatics

How it works – the BarraCUDA project

System requirements for GPU computing

What if I want to develop my own GPU code?

What to look out for in the near future?

Conclusions

Page 4:

DNA sequencing

“DNA sequencing includes several methods and technologies that are used for determining the order of the nucleotide bases A, G, C, T in a molecule of DNA.” - Wikipedia

Page 5:

Next-generation DNA sequencing

Commercialised in 2005 by 454 Life Science

Sequencing in a massively parallel fashion

Page 6:

How it works

An example: Whole genome sequencing

Extract genomic DNA from blood/mouth swabs

Break into small DNA fragments of 200-400 bp

Attach DNA fragments to a surface (flow cells/slides/microtitre plates) at a high density

Perform concurrent “cyclic sequencing reactions” to obtain the sequence of each of the attached DNA fragments

An Illumina HiSeq 2000 can interrogate 825K spots/mm2

Page 7:

Capturing the sequencing signals

[Figure: fluorescent signals from individual spots are read off as sequences, e.g. GTCCTGA, TATTTTT, ATTCNGG; not to scale]

Page 8:

What do we get from the sequencers

Billions of short DNA sequences, also called sequence reads, ranging from 25 to 400 bp
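These reads typically arrive as plain-text FASTQ files, four lines per record; a minimal parser sketch in Python (FASTQ is assumed as the output format, and the read names and qualities below are made up):

```python
# Reads usually arrive as FASTQ text: 4 lines per record (header, sequence,
# '+' separator, per-base qualities). Minimal parser sketch; example read
# names are invented.
def parse_fastq(lines):
    it = iter(lines)
    for header in it:
        seq = next(it)
        next(it)                       # '+' separator line
        qual = next(it)
        yield header.strip()[1:], seq.strip(), qual.strip()

records = list(parse_fastq([
    "@read1", "GTCCTGA", "+", "IIIIIII",
    "@read2", "TATTTTT", "+", "IIIIIII",
]))
print(records[0])  # ('read1', 'GTCCTGA', 'IIIIIII')
```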

Page 9:

The throughput of NGS has increased dramatically

Page 10:

Current bioinformatics pipeline

Sequence Alignment

Image Analysis, Base Calling

Variant Calling, Peak calling

Page 11:

Just how complex is the pipeline?

Source: Genologics

Page 12:

Talk Outline

Next-generation sequencing and current challenges

What is Many-core/GPGPU computing?

The use of GPGPU in NGS bioinformatics

How it works – the BarraCUDA project

System requirements for GPU computing

What if I want to develop my own GPU code?

What to look out for in the near future?

Conclusions

Page 13:

Many-core computing architectures

A physical die that contains a ‘large number’ of processing cores

i.e. computation can be done in a massively parallel manner

Modern graphics cards (GPUs) consist of hundreds to thousands of computing cores

Page 14:

Just how fast are today’s GPUs

Page 15:

If GPUs are so fast why do we need CPUs?

GPUs are fast, but there is a catch: SIMD – Single Instruction, Multiple Data

vs

CPUs are powerful multi-purpose processors: MIMD – Multiple Instruction, Multiple Data

Page 16:

CPU vs GPU

Page 17:

SIMD – pros and cons

The very same (low-level) instruction is applied to multiple data at the same time, e.g. a GTX680 can perform an addition on 1536 data points at a time, versus 16 on a 16-core CPU.

Branching results in serialisation

The ALUs on a GPU are usually much more primitive compared to their CPU counterparts.

Page 18:

GPU in scientific computing

Scientific computing often deals with large amounts of data and, on many occasions, applies the same set of instructions to these data. Examples:
▪ Monte Carlo simulations
▪ Image analysis
▪ Next-generation sequencing data analysis
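To illustrate the first example, a Monte Carlo estimate of pi applies an identical per-sample computation to every point, exactly the access pattern SIMD hardware likes; a pure-Python sketch (the function name and seed are illustrative):

```python
# Monte Carlo estimate of pi: the same instruction sequence is applied
# independently to every sample point, an embarrassingly parallel workload
# that maps naturally onto SIMD/GPU lanes. Pure-Python sketch.
import random

def estimate_pi(n, seed=42):
    rng = random.Random(seed)          # seeded, so the result is repeatable
    inside = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        inside += (x * x + y * y) <= 1.0   # identical work for each point
    return 4.0 * inside / n

print(estimate_pi(100_000))  # close to 3.14
```

On a GPU each point would be handled by its own lane, with no inter-point communication needed until the final reduction.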

Page 19:

Why bother?

Low capital cost and energy efficient
Dell 12-core workstation: £5,000, ~1 kW
Dell 40-core computing cluster: £20,000+, ~6 kW

NVIDIA GeForce GTX680 (1536 cores): £400, <0.2 kW
NVIDIA Tesla C2070 (448 cores): £1,000, 0.2 kW

Many supercomputers also contain multiple GPU nodes for parallel computations

Page 20:

GPGPU in bioinformatics

Examples:
CUDASW++: 6.3x
MUMmerGPU: 3.5x
GPU-HMMer: 60-100x

Page 21:

Talk Outline

Next-generation sequencing and current challenges

What is Many-core/GPGPU computing?

The use of GPGPU in NGS bioinformatics

How it works – the BarraCUDA project

System requirements for GPU computing

What if I want to develop my own GPU code?

What to look out for in the near future?

Conclusions

Page 22:

A few GPGPU programs are available

Ion-torrent server (T7500 workstation) uses GPUs for base-calling

MUMmerGPU – comparisons among genomes

BarraCUDA, Soap3 – short read alignments

Page 23:

Talk Outline

Next-generation sequencing and current challenges

What is Many-core/GPGPU computing?

The use of GPGPU in NGS bioinformatics

How it works – the BarraCUDA project

System requirements for GPU computing

What if I want to develop my own GPU code?

What to look out for in the near future?

Conclusions

Page 24:

Heterogeneous computing

Page 25:

Current bioinformatics pipeline

Sequence Alignment

Image Analysis, Base Calling

Variant Calling, Peak calling

Page 26:

Sequence alignment

Sequence alignment is a crucial step in the bioinformatics pipeline for downstream analyses

This step often takes many CPU hours to perform

Usually done on HPC clusters

Page 27:

The BarraCUDA Project – an example of GPGPU computing

The main objective of the BarraCUDA project is to develop software that runs on GPU/many-core architectures

i.e. to map sequence reads in a massively parallel fashion, the same way as they come out of the NGS instrument

Page 28:

Software Pipeline

[Diagram: the CPU loads the genome and read library and copies both to the GPU; the GPU runs many alignments concurrently; the CPU then copies the alignment results back and writes them to disk]

Page 29:

Burrows-Wheeler transform

Originally intended for data compression, the BWT performs a reversible transformation of a string

In 2000, Ferragina and Manzini introduced a BWT-based index data structure for fast substring matching in O(n)

Substring matching is performed in a tree traversal-like manner

Used in major sequencing read mapping programs, e.g. BWA, Bowtie, SOAP2
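The BWT-based matching described above can be written out in a few lines of Python, a toy version for illustration only (real aligners precompute the O(.,.) occurrence table instead of rescanning the BWT at every step):

```python
# Toy FM-index style exact matching: build the BWT, then match a pattern
# back-to-front by shrinking a suffix-array interval [k, l]. This is a
# 0-based variant of the k/l update used by BWA; for illustration only.
def bwt(s):
    """Burrows-Wheeler transform of s (s must end with the sentinel '$')."""
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def backward_search(bwt_str, pattern):
    """Count occurrences of pattern, rescanning for O(a, i) each step (toy)."""
    srt = sorted(bwt_str)
    C = {a: srt.index(a) for a in set(bwt_str)}   # symbols smaller than a
    def occ(a, i):                                # a's in bwt_str[0..i]
        return bwt_str[: i + 1].count(a) if i >= 0 else 0
    k, l = 0, len(bwt_str) - 1
    for a in reversed(pattern):
        if a not in C:
            return 0
        k = C[a] + occ(a, k - 1)
        l = C[a] + occ(a, l) - 1
        if k > l:
            return 0                              # interval vanished: no hit
    return l - k + 1

print(backward_search(bwt("banana$"), "ana"))  # -> 2
```

Each character of the pattern, processed right to left, narrows the interval of suffix-array positions; the interval size at the end is the occurrence count.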

Page 30:

How it works – a backward search algorithm

matching substring ‘banan’

Page 31:

In mathematical terms

k(aW) = C(a) + O(a, k(W) - 1) + 1
l(aW) = C(a) + O(a, l(W))

where C(a) is the number of symbols in X lexicographically smaller than a, O(a, i) is the number of occurrences of a in B[0..i], and aW is a substring of X if and only if k(aW) <= l(aW)

Modified from Li & Durbin Bioinformatics 2009, 25:14, 1754-1760

Page 32:

Programming code

BWT_exactmatch(READ, i, k, l) {
    if (i < 0) then return RESULTS;
    k = C(READ[i]) + O(READ[i], k-1) + 1;
    l = C(READ[i]) + O(READ[i], l);
    if (k <= l) then BWT_exactmatch(READ, i-1, k, l);
}

main() {
    Calculate reverse BWT string B from reference string X
    Calculate arrays C(.) and O(.,.) from B
    Load READS from disk
    For every READ in READS do {
        i = |READ|;    // Position
        k = 0;         // Lower bound
        l = |X|;       // Upper bound
        BWT_exactmatch(READ, i, k, l);
    }
    Write RESULTS to disk
}

Modified from Li & Durbin Bioinformatics 2009, 25:14, 1754-1760

Page 33:

Porting to GPU

Simple data parallelism

Used the GPU mainly for matching

Page 34:

Porting to GPU

__device__ BWT_GPU_exactmatch(W, i, k, l) {
    if (i < 0) then return RESULTS;
    k = C(W[i]) + O(W[i], k-1) + 1;
    l = C(W[i]) + O(W[i], l);
    if (k <= l) then BWT_GPU_exactmatch(W, i-1, k, l);
}

__global__ GPU_kernel() {
    W = READS[thread_no];
    i = |W|;       // Position
    k = 0;         // Lower bound
    l = |X|;       // Upper bound
    BWT_GPU_exactmatch(W, i, k, l);
}

main() {
    Calculate reverse BWT string B from reference string X
    Calculate arrays C(.) and O(.,.) from B
    Load READS from disk
    Copy B, C(.) and O(.,.) to GPU
    Copy READS to GPU
    Launch GPU_kernel with <<|READS|>> concurrent threads
    Copy alignment results back from GPU
    Write RESULTS to disk
}
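The "one thread per read" kernel launch can be mimicked on the CPU with a thread pool, purely as an analogy (the names here are illustrative, not BarraCUDA code):

```python
# CPU-side analogy of the kernel launch: every read gets its own task, all
# running the same matching routine, like one GPU thread per read. This is
# an analogy only; names are illustrative.
from multiprocessing.dummy import Pool  # stdlib thread pool

def launch_kernel(match_fn, reads, n_threads=8):
    """Apply match_fn to every read concurrently, mirroring <<|READS|>>."""
    with Pool(n_threads) as pool:
        return pool.map(match_fn, reads)   # results come back in read order

lengths = launch_kernel(len, ["GTCCTGA", "TATTTTT", "AC"])
print(lengths)  # [7, 7, 2]
```

Unlike a warp, these CPU threads can diverge freely, which is exactly the difference the later slides on branch serialisation are about.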

Page 35:

How fast is that?

Very fast indeed: using a Tesla C2050, we can match 25 million 100 bp reads to the BWT in just over a minute.

But… is this relevant?

Page 36:

Inexact Matching?

matching the substring ‘anb’, where ‘b’ is substituted with an ‘a’

Page 37:

Inexact matching requires base substitution within the query substring

Search space complexity = O(9^n)!

Klus et al., BMC Res Notes 2012 5:27

Page 38:

BarraCUDA started its life as a GPU version of BWA

It partially worked! 10% faster than 8x X5472 cores @ 3 GHz

BWA uses a greedy breadth-first search approach (takes up to 40 MB per thread)

Not enough workspace for thousands of concurrent kernel threads (@ 4 KB each) – i.e. reduced accuracy – NOT GOOD ENOUGH!

Page 39:

BarraCUDA uses a depth-first search approach

[Diagram: depth-first traversal of the search tree, reporting hits]

Page 40:

SIMD – the catch

The very same (low-level) instruction is applied to multiple data at the same time, e.g. a GTX680 can perform an addition on 1536 data points at a time, versus 16 on a 16-core CPU.

Branching results in serialisation

The ALUs on a GPU are usually much more primitive compared to their CPU counterparts.
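The cost of that serialisation can be shown with a toy model of a warp executing an if/else under lane masks; the per-side costs and the odd/even predicate are invented purely for illustration:

```python
# Toy model of SIMD branch serialisation: when lanes of a warp disagree on
# a branch, the warp steps through BOTH sides with masking, so it pays for
# both. Costs and the predicate are made up for illustration.
def simd_branch_cost(values, then_cost=3, else_cost=5):
    """Instruction slots one warp spends on `if v is odd: A else: B`."""
    take_then = [v % 2 == 1 for v in values]
    cost = 0
    if any(take_then):          # some lane needs side A: all lanes step it
        cost += then_cost
    if not all(take_then):      # some lane needs side B: all lanes step it
        cost += else_cost
    return cost

print(simd_branch_cost([1, 2, 3, 4]))  # 8: a divergent warp pays for both sides
```

A warp whose lanes all agree pays for only one side; divergence costs the sum of both, which is why branchy depth-first code needs careful kernel design on a GPU.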

Page 41:

Branch serialisation

Page 42:

Branch divergence

Page 43:

Multi-kernel design

[Diagram: the alignment is split into sub-kernels A and B, with a CPU-side thread queue (Thread 1, Thread 1.1, Thread 1.2) feeding work between kernel launches; hits are reported from each kernel]

Page 44:

Mapping accuracy

           BWA 0.5.8    BarraCUDA
Mapping    89.95%       91.42%
Error      0.06%        0.05%

Klus et al., BMC Res Notes 2012 5:27

Page 45:

Mapping speed

[Bar chart, time taken (min): 5160 (1C): 153; X5670 (1C): 105; 5160 (2C): 86; 5160 (4Cs): 52; X5670 (6Cs): 21; M2090: 20]

Klus et al., BMC Res Notes 2012 5:27

Page 46:

Porting to GPU

Simple data parallelism

Used the GPU mainly for matching

Page 47:

Scalability

Klus et al., BMC Res Notes 2012 5:27

Page 48:

Talk Outline

Next-generation sequencing and current challenges

What is Many-core/GPGPU computing?

The use of GPGPU in NGS bioinformatics

How it works – the BarraCUDA project

System requirements for GPU computing

What if I want to develop my own GPU code?

What to look out for in the near future?

Conclusions

Page 49:

System requirements

Hardware
The system must have at least one decent GPU
▪ An NVIDIA GeForce 210 will not work!

One or more PCIe x16 slots

A decent power supply with appropriate power connectors
▪ 550 W for one Tesla C2075, plus 225 W for each additional card
▪ Don’t use any Molex converters!

Ideally, dedicate cards to computation and use a separate cheap card for display

Page 50:

System requirements

Page 51:

System requirements

Software
CUDA
▪ CUDA toolkit
▪ Appropriate NVIDIA drivers

OpenCL
▪ CUDA toolkit (NVIDIA)
▪ AMD APP SDK (AMD)
▪ Appropriate AMD/NVIDIA drivers

Page 52:

System requirements

For example: CUDA toolkit v4.2

Page 53:

DEMO

Page 54:

Talk Outline

Next-generation sequencing and current challenges

What is Many-core/GPGPU computing?

The use of GPGPU in NGS bioinformatics

How it works – the BarraCUDA project

System requirements for GPU computing

What if I want to develop my own GPU code?

What to look out for in the near future?

Conclusions

Page 55:

Coding effort

In the end, how much effort have we put in so far?

3300 lines of new code for the alignment core
▪ Compared to 1000 lines in BWA
▪ Still ongoing!

1000 lines for SA to linear space conversion

Page 56:

CUDA vs OpenCL

CUDA
(In general) faster and more complete
Tied to NVIDIA hardware

OpenCL
Still new; AMD and Apple are pushing hard on this
Can run on a variety of hardware, including CPUs

Page 57:

Where to start

NVIDIA developer zone: http://developer.nvidia.com/

OpenCL developer zone: http://developer.amd.com/zones/OpenCLZone/Pages/default.aspx

Page 58:

Things to consider while coding

Different hardware has different capabilities
GT200 is very different from GF100 in terms of cache arrangement and concurrent kernel execution

GF100 is also different from GK110, where the latter can do dynamic parallelism

Low-level data management is often required, e.g. earmarking data for memory type, coalescing memory access

Page 59:

OpenACC directives

The easy way out! A set of compiler directives

It allows programmers to use an ‘accelerator’ without explicitly writing GPU code

Supported by CAPS, Cray, NVIDIA and PGI

Page 60:

How it works

Source: NVIDIA

Page 61:

More information?

http://www.openacc-standard.org/

Page 62:

Talk Outline

Next-generation sequencing and current challenges

What is Many-core/GPGPU computing?

The use of GPGPU in NGS bioinformatics

How it works – the BarraCUDA project

System requirements for GPU computing

What if I want to develop my own GPU code?

What to look out for in the near future?

Conclusions

Page 63:

Future Outlook

The game is changing quickly
CPUs are also getting more cores: AMD 16-core Opteron 6200 series

Many-core platforms are evolving
Intel MIC processors (50 x86 cores)
NVIDIA Kepler platform (1536 cores)
AMD Radeon 7900 series (2048 cores)

SIMD nodes are becoming more common in supercomputers

OpenCL

Page 64:

Talk Outline

Next-generation sequencing and current challenges

What is Many-core/GPGPU computing?

The use of GPGPU in NGS bioinformatics

How it works – the BarraCUDA project

System requirements for GPU computing

What if I want to develop my own GPU code?

What to look out for in the near future?

Conclusions

Page 65:

Conclusions

Few software packages are available for NGS bioinformatics yet; more are to come

BarraCUDA is one of the first attempts to accelerate the NGS bioinformatics pipeline

A significant amount of coding is usually required, but more programming tools are becoming available

Many-core is still evolving rapidly and things can be very different in the next 2-3 years

Page 66:

Acknowledgements

IMS-MRL, Cambridge: Giles Yeo, Petr Klus, Simon Lam

NIHR-CBRC, Cambridge: Ian McFarlane, Cassie Ragnauth

Whittle Lab, Cambridge: Graham Pullan, Tobias Brandvik

Microbiology, University College Cork: Dag Lyberg

Gurdon Institute, Cambridge: Nicole Cheung

HPCS, Cambridge: Stuart Rankin

NVIDIA Corporation: Thomas Bradley, Timothy Lanfear

Page 67:

Thank you.