
Next Generation Genomics:
Petascale data in the life sciences

Guy Coates

Wellcome Trust Sanger Institute

[email protected]

Outline

DNA Sequencing and Informatics

Managing Data

Sharing Data

Adventures in the Cloud

The Sanger Institute

Funded by the Wellcome Trust, the 2nd largest research charity in the world.

~700 employees.

Based in Hinxton Genome Campus, Cambridge, UK.

Large scale genomic research: sequenced 1/3 of the human genome (the largest single contributor).

We have active cancer, malaria, pathogen and genomic variation / human health studies.

All data is made publicly available: websites, FTP, direct database access, programmatic APIs.

DNA sequencing

Next-generation Sequencing

Life sciences is drowning in data from our new sequencing machines.

Traditional sequencing: 96 sequencing reactions carried out per run.

Next-generation sequencing: 52 million reactions per run.

Machines are cheap(ish) and small: small labs can afford one.

Big labs can afford lots of them.

Economic Trends:

The cost of sequencing halves every 12 months (cf. Moore's Law).

The Human Genome Project: 13 years.

23 labs.

$500 Million.

A human genome today: 3 days.

1 machine.

$10,000.

Large centres are now doing studies with 1000s and 10,000s of genomes.

Changes in sequencing technology are going to continue this trend.Next-next generation sequencers are on their way.

A $500 genome is probable within 5 years.

Output Trends

Our peak old-generation sequencing (August 2007): 3.5 Gbases/month.

Current output (Jan 2010): 4 Tbases/month.

A 1000x increase in our sequencing output. (For comparison, in August 2007 the total size of GenBank was 200 Gbases.)

Improvements in chemistry continue to increase the output of machines.

The scary graph

[Graph: monthly sequencing output over time; instrument upgrades are marked, and peak yearly capillary sequencing is shown for comparison.]

Managing Growth

We have exponential growth in storage and compute: storage/compute doubles every 12 months (2009: ~7 PB raw).

A gigabase of sequence is not just a gigabyte of storage: sequence data takes ~16 bytes per base.

Intermediate analyses typically need 10x the disk space of the raw data.

Moore's law will not save us. Transistor/disk density doubling time: Td = 18 months; sequencing cost halving time: Td = 12 months.
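A rough back-of-the-envelope sketch of that mismatch (a minimal illustration in Python; the ~7 PB 2009 figure is from the slide above, the projection itself is hypothetical): demand that doubles every 12 months outruns disk density that doubles every 18 months, so the storage bill keeps growing even as disks get cheaper.

    # Demand doubles every 12 months; disk cost per PB halves every 18 months.
    storage_demand_pb = 7.0       # ~7 PB raw in 2009 (figure from the slide)
    relative_cost_per_pb = 1.0    # arbitrary units, 2009 = 1.0

    for year in range(2009, 2015):
        spend = storage_demand_pb * relative_cost_per_pb
        print(f"{year}: demand ~{storage_demand_pb:6.1f} PB, relative spend ~{spend:5.1f}")
        storage_demand_pb *= 2.0                 # demand doubling time: 12 months
        relative_cost_per_pb /= 2 ** (12 / 18)   # density doubling time: 18 months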

Sequencing Informatics

DNA Sequencing

TCTTTATTTTAGCTGGACCAGACCAATTTTGAGGAAAGGATACAGACAGCGCCTG

AAGGTATGTTCATGTACATTGTTTAGTTGAAGAGAGAAATTCATATTATTAATTA

TGGTGGCTAATGCCTGTAATCCCAACTATTTGGGAGGCCAAGATGAGAGGATTGC

ATAAAAAAGTTAGCTGGGAATGGTAGTGCATGCTTGTATTCCCAGCTACTCAGGAGGCTG

TGCACTCCAGCTTGGGTGACACAG CAACCCTCTCTCTCTAAAAAAAAAAAAAAAAAGG

AAATAATCAGTTTCCTAAGATTTTTTTCCTGAAAAATACACATTTGGTTTCA

ATGAAGTAAATCG ATTTGCTTTCAAAACCTTTATATTTGAATACAAATGTACTCC

250 million × 75-108 base fragments

Human genome (3 Gbases)

Alignment

Find the best match of fragments to a known genome or genomes; essentially grep for DNA sequences.

We use more sophisticated algorithms that can do fuzzy matching: real DNA has insertions, deletions and mutations.

Typical algorithms are MAQ, BWA, SSAHA and BLAST.

Look for differences: single base-pair differences (SNPs).

Larger insertions/deletions/mutations.

Typical experiment: compare cancer cell genomes with healthy ones.

Reference: ...TTTGCTGAAACCCAAGTGACGCCATCCAGCGTGACCACTGCATTTTTCTCGGTCATCACCAGCATTCTC...
Query:     CAAGTGACGCCATCCAGCGTGACCACTGCATTTTTCTAGGTCATCACCAGCA
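A toy illustration of the "fuzzy grep" idea, using the reference/query pair shown above (just the printed portion of the reference). Real aligners such as MAQ, BWA or SSAHA use indexed data structures and also handle insertions and deletions; this sketch only slides the query along the reference and reports mismatching bases.

    def best_alignment(reference, query):
        """Return (offset, mismatch positions) for the best gap-free placement."""
        best_offset, best_mismatches = None, None
        for offset in range(len(reference) - len(query) + 1):
            window = reference[offset:offset + len(query)]
            mismatches = [i for i, (r, q) in enumerate(zip(window, query)) if r != q]
            if best_mismatches is None or len(mismatches) < len(best_mismatches):
                best_offset, best_mismatches = offset, mismatches
        return best_offset, best_mismatches

    reference = ("TTTGCTGAAACCCAAGTGACGCCATCCAGCGTGACCACTGCATTTTTCT"
                 "CGGTCATCACCAGCATTCTC")
    query = "CAAGTGACGCCATCCAGCGTGACCACTGCATTTTTCTAGGTCATCACCAGCA"

    offset, mismatches = best_alignment(reference, query)
    for i in mismatches:
        print(f"candidate SNP at reference position {offset + i}: "
              f"{reference[offset + i]} -> {query[i]}")
    # Reports the single C -> A substitution visible in the example above.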

Assembly

Assemble fragments into a complete genome. Typical experiment: produce a reference genome for a new species.

De novo assembly: assemble fragments with no external data.

Harder than it looks: non-uniform coverage, low depth, non-unique sequence (repeats).

Alignment-based assembly: align fragments to a related genome.

This gives a starting scaffold which can then be refined; e.g. the H. neanderthalensis genome is being assembled against an H. sapiens sequence.
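A toy illustration of overlap-based assembly (nothing like a production assembler, which must cope with sequencing errors, repeats and uneven coverage): greedily merge the pair of fragments with the longest suffix/prefix overlap until no overlaps remain.

    def overlap(a, b, min_len=3):
        """Length of the longest suffix of a that is a prefix of b."""
        for n in range(min(len(a), len(b)), min_len - 1, -1):
            if a.endswith(b[:n]):
                return n
        return 0

    def assemble(fragments):
        frags = list(fragments)
        while len(frags) > 1:
            best_n, best_i, best_j = 0, None, None
            for i, a in enumerate(frags):
                for j, b in enumerate(frags):
                    if i != j and overlap(a, b) > best_n:
                        best_n, best_i, best_j = overlap(a, b), i, j
            if best_n == 0:   # no overlaps left: contigs cannot be joined further
                break
            merged = frags[best_i] + frags[best_j][best_n:]
            frags = [f for k, f in enumerate(frags) if k not in (best_i, best_j)]
            frags.append(merged)
        return frags

    print(assemble(["TTTGAGGAA", "GAGGAAAGGAT", "AGGATACAGAC"]))
    # -> ['TTTGAGGAAAGGATACAGAC']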

Cancer Genomes

Cancer is a disease caused by abnormalities in a cell's genome.

Mutation Details

Lung carcinoma genome (Nature 2010, 463: 184-90):

22,910 mutations

58 rearrangements

334 copy number segments

Analysing Cancer Genomes

Cancer genomes contain a lot of genetic damage; many of the mutations in cancer are incidental.

Initial mutation disrupts the normal DNA repair/replication processes.

Corruption spreads through the rest of the genome.

Today: find the driver mutations amongst the thousands of passengers. Identifying the driver mutations will give us new targets for therapies.

Tomorrow: analyse the cancer genome of every patient in the clinic. Variations in a patient's and a cancer's genetic makeup play a major role in how effective a particular drug will be.

Clinicians will use this information to tailor therapies.

International Cancer Genome Project

Many cancer mutations are rare: a low signal-to-noise ratio.

How do we find the rare but important mutations? Sequence lots of cancer genomes.

The International Cancer Genome Project: a consortium of sequencing and cancer research centres in 10 countries.

Aim of the consortium: complete genomic analysis of 50 different tumor types (50,000 genomes).

Past Collaborations

[Diagram: each sequencing centre sends its data to a central sequencing centre + data coordination centre (DCC).]

Future Collaborations

[Diagram: sequencing centres keep their own data and provide federated access to one another.]

Collaborations are short-term: 18 months to 3 years.

Genomics Data

Data size per genome:

    Intensities / raw data:    2 TB     (unstructured data: flat files)
    Sequence + quality data:   500 GB
    Alignments:                200 GB
    Variation data:            1 GB     (structured data: databases)
    Individual features:       3 MB

Sequencing informatics specialists work mainly with the bulk, unstructured end; clinical researchers and non-informaticians work with the structured end.

Where can grid technologies help us?

Managing data.

Sharing data.

Making our software resources available.

Managing Data

Bulk Data

Data size per genome, as before: intensities / raw data 2 TB, sequence + quality data 500 GB, alignments 200 GB, variation data 1 GB, individual features 3 MB.

The focus here is the bulk, unstructured data (flat files), the domain of sequencing informatics specialists.

Bulk Data Management

We thought we were really good at it: all samples that come through the sequencing lab are bar-coded and tracked (Laboratory Information Management Systems, LIMS).

Sequencing machines feed into an automated analysis pipeline.

All the data was tracked, analysed and archived appropriately.

Strict metadata controls: experiments do not start in the wet lab until the investigator has supplied all the required data privacy and archiving information. Anonymised data goes straight into the archive.

Identifiable data goes into private/controlled-access archives.

Some data held back until journal publication.

[Diagram: sequencers (Seq 1-38) write into a 500 TB staging area; "sucker" processes pull data into the compute farm for the analysis/QC pipeline and alignment/assembly; results land in the final repository (Oracle), ~100 TB/yr.]

It turned out we were looking in the wrong place.

We had been focused on the sequencing pipeline. For many investigators, data coming off the end of the sequencing pipeline is where they start.

Investigators take the mass of finished sequence data out of the archives, onto our compute farms, and do stuff.

Huge explosion of data and disk use all over the institute. We had no idea what people were doing with their data.

[Diagram: the same LIMS-managed flow (sequencers Seq 1-38 → 500 TB staging area → compute farm analysis/QC pipeline, assembly/alignment → final repository (Oracle), ~100 TB/yr), plus an unmanaged compute farm disk fed by data pulled back out of the repository and by collaborators / 3rd-party sequencing.]

Accidents waiting to happen...

From: (user who left 12 months ago): "I find the directory is removed. The original directory is '/scratch/(user who left 6 months ago)'. Where is it? If this problem cannot be solved, I am afraid that cannot be released."

An idea whose time had come

Forward-thinking groups had hacked up file tracking systems for their unstructured data, because they could not keep track of where their results were.

The problem is exacerbated by student turnover (summer students, PhD students on rotation).

Big wins with little effort: disk space usage dropped by 2/3. Previously, lots of individuals kept copies of the same data set "so I know where it is".

Team leaders are happy that their data is where they think it is, and that the important stuff is on filesystems that are backed up etc.

But: these systems are ad-hoc, quick hacks.

We want an institute-wide, standardised system, and to invest in people to maintain and develop it.
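A minimal sketch of what such a tracker amounts to (hypothetical code, not one of the Sanger systems): a small catalogue recording where each result file lives, who owns it and which project it belongs to, so the data survives staff turnover.

    import os
    import sqlite3
    import time

    db = sqlite3.connect("file_catalogue.db")
    db.execute("""CREATE TABLE IF NOT EXISTS files (
        path TEXT PRIMARY KEY, project TEXT, owner TEXT,
        size_bytes INTEGER, registered REAL)""")

    def register(path, project, owner):
        """Record a result file in the catalogue."""
        db.execute("INSERT OR REPLACE INTO files VALUES (?, ?, ?, ?, ?)",
                   (os.path.abspath(path), project, owner,
                    os.path.getsize(path), time.time()))
        db.commit()

    def locate(project):
        """List registered files for a project, flagging any that have vanished."""
        for path, owner in db.execute(
                "SELECT path, owner FROM files WHERE project = ?", (project,)):
            status = "OK" if os.path.exists(path) else "MISSING"
            print(f"{status:8} {owner:12} {path}")

    # register("alignments/sample1.bam", "cancer_study", "abc123")
    # locate("cancer_study")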

iRODS

iRODS: Integrated Rule-Oriented Data System.

Produced by DICE (Data Intensive Cyber Environments) groups at U. North Carolina, Chapel Hill.

Successor to SRB.

iRODS

[Diagram: iRODS architecture. The ICAT catalogue database and a rule engine (implementing policies) sit behind iRODS servers fronting data on disk and data in databases; user interfaces include WebDAV, the icommands and FUSE.]

Basic Features

Catalogue: put data on disk and keep a record of where it is.

Add query-able metadata to files.

Rules engine: do things to files based on file data and metadata, e.g. move data between fast and archival storage.

Implement policies, e.g. experiment A's data should be publicly viewable, but experiment B's is restricted to certain users until 6 months after deposition.

Efficient: copes with petabytes of data and 100,000,000+ files.

Fast parallel data transfers across local and wide area network links.
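For example, tagging and querying a data object with the standard icommands might look like the sketch below, driven from Python. It assumes an iRODS session has already been initialised with iinit; the zone, paths, study name and attribute names are purely illustrative.

    import subprocess

    def irods(*args):
        """Run an icommand and fail loudly if it returns an error."""
        subprocess.run(list(args), check=True)

    # Upload a file into the iRODS namespace.
    irods("iput", "lane42.bam", "/exampleZone/seq/lane42.bam")

    # Attach metadata that the ICAT catalogue can later be queried on.
    irods("imeta", "add", "-d", "/exampleZone/seq/lane42.bam", "study", "ICGC_example")
    irods("imeta", "add", "-d", "/exampleZone/seq/lane42.bam", "release_after", "2010-12-01")

    # Find every data object tagged with that study.
    irods("imeta", "qu", "-d", "study", "=", "ICGC_example")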

Advanced Features

Extensible: the system can link out to external services, e.g. external databases holding metadata, or external authentication systems.

Federated: physically and logically separated iRODS installs can be federated.

This allows a user at institute A to seamlessly access data at institute B in a controlled manner.

Supports replication: useful for disaster recovery/backup scenarios.

Policy enforcement: enforces data sharing / data privacy rules.

What are we doing with it?

We are piloting it for internal use, to help groups keep track of their data.

Move files between different storage pools: fast scratch space → warehouse disk → offsite DR centre.

Link metadata back to our LIMS/tracking databases.

We need to share data with other institutions. Public data is easy: FTP/HTTP.

Controlled data is hard:

Encrypt files and place on private FTP dropboxes.

Cumbersome to manage and insecure.

Proof of concept: use iRODS to provide controlled access to datasets. Will we get buy-in from the community?

Sharing data

Structured Data

Data size per genome, as before: intensities / raw data 2 TB, sequence + quality data 500 GB, alignments 200 GB, variation data 1 GB, individual features 3 MB.

The focus here is the structured data (databases): the small end of the stack that clinical researchers and non-informaticians work with.

Raw Genomes are not useful

TCCTCTCTTTATTTTAGCTGGACCAGACCAATTTTGAGGAAAGGATACAGACAGCGCCTGGAATTGTCAGACATATACCAAATCCCTTCTGTTGATTCTGCTGACAATCTATCTGAAAAATTGGAAAGGTATGTTCATGTACATTGTTTAGTTGAAGAGAGAAATTCATATTATTAATTATTTAGAGAAGAGAAAGCAAACATATTATAAGTTTAATTCTTATATTTAAAAATAGGAGCCAAGTATGGTGGCTAATGCCTGTAATCCCAACTATTTGGGAGGCCAAGATGAGAGGATTGCTTGAGACCAGGAGTTTGATACCAGCCTGGGCAACATAGCAAGATGTTATCTCTACACAAAATAAAAAAGTTAGCTGGGAATGGTAGTGCATGCTTGTATTCCCAGCTACTCAGGAGGCTGAAGCAGGAGGGTTACTTGAGCCCAGGAGTTTGAGGTTGCAGTGAGCTATGATTGTGCCACTGCACTCCAGCTTGGGTGACACAGCAAAACCCTCTCTCTCTAAAAAAAAAAAAAAAAAGGAACATCTCATTTTCACACTGAAATGTTGACTGAAATCATTAAACAATAAAATCATAAAAGAAAAATAATCAGTTTCCTAAGAAATGATTTTTTTTCCTGAAAAATACACATTTGGTTTCAGAGAATTTGTCTTATTAGAGACCATGAGATGGATTTTGTGAAAACTAAAGTAACACCATTATGAAGTAAATCGTGTATATTTGCTTTCAAAACCTTTATATTTGAATACAAATGTACTCC

Genomes need to be annotated: locations of genes.

Functions of genes.

Relationships between genes (homologues, functional groups)

Links to the medical/scientific literature

Ensembl

Ensembl is a system for genome annotation.

Compute pipeline: take a raw genome and run it through a compute pipeline to find genes and other features of interest.

Ensembl at Sanger/EBI provides automated analysis for 51 vertebrate genomes.

Data visualisation: www.ensembl.org

Provides web interface to genomic data.

10k visitors / 126k page views per day.

Data access and mining: OO Perl / Java APIs.

Direct SQL access.

Bulk data download.

BioMart, DAS

Software is open source (Apache license).

Data is free for download.

[Screenshots: example Ensembl annotation views.]

Sharing data with Web Services

Distributed Annotation Service

Labs may have data that they want to view with Ensembl, to put their data into context with everything else.

DAS is a web-services protocol that allows sharing of annotation information. It was developed at Cold Spring Harbor Laboratory and extended by the Sanger Institute and others.

DAS information: metadata (a description of the dataset and the features supported).

This can optionally be registered/validated at the DAS registry (www.dasregistry.org).

Data: Object type.

Co-ordinates (typically genome species/version and position).

Stylesheet (how the data should be displayed, e.g. histogram, color gradient).
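A sketch of the kind of "features" request a DAS client makes is below. The server URL and source name are placeholders, and the XML element names follow the DAS 1.5 specification as best recalled here; check the spec or your server's documentation before relying on them.

    from urllib.request import urlopen
    import xml.etree.ElementTree as ET

    SERVER = "http://example.org/das"      # placeholder DAS server
    SOURCE = "my_annotation_source"        # placeholder data source name

    # Ask for all annotated features in a 10 kb region of chromosome 1.
    url = f"{SERVER}/{SOURCE}/features?segment=1:1000000,1010000"
    tree = ET.parse(urlopen(url))

    for feature in tree.iter("FEATURE"):
        ftype = feature.findtext("TYPE", default="?")
        start = feature.findtext("START", default="?")
        end = feature.findtext("END", default="?")
        print(f'{feature.get("id")}: {ftype} {start}-{end}')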

DAS community

Currently ~600 DAS providers spread across 45 institutions and 18 countries.

Non-responsive services are removed from the registry.

BioMART

BioMart provides query-based access to structured data. It is a collaboration between CSHL, the European Bioinformatics Institute and the Ontario Institute for Cancer Research.

Example query: "Tell me the function of genes that have substitution mutations in breast-cancer samples."

This requires queries across multiple databases: mutations are stored in COSMIC, the cancer genome database.

Gene function is stored in Ensembl.

BioMart provides a unified entry point to these databases.
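A sketch of what a BioMart "martservice" query looks like in practice: an XML query document is posted to the service and the results come back as tab-separated text. The dataset, filter and attribute names below are examples of the kind used by the Ensembl mart and may differ between marts and releases.

    from urllib.request import urlopen
    from urllib.parse import urlencode

    QUERY = """<?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE Query>
    <Query virtualSchemaName="default" formatter="TSV" header="0" uniqueRows="1">
      <Dataset name="hsapiens_gene_ensembl" interface="default">
        <Filter name="chromosome_name" value="17"/>
        <Attribute name="ensembl_gene_id"/>
        <Attribute name="external_gene_id"/>
      </Dataset>
    </Query>"""

    url = "http://www.biomart.org/biomart/martservice"
    with urlopen(url, data=urlencode({"query": QUERY}).encode()) as response:
        for line in response.read().decode().splitlines()[:10]:
            print(line)   # gene ID <tab> gene name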

BioMART

[Diagram: source databases (Oracle, CSV, MySQL) are transformed/imported into marts; queries arrive via XML, a GUI, Perl, SOAP/REST and Java interfaces; because the marts use common IDs, they are federatable.]

Clouds

Disclaimer

This talk will use Amazon/EC2.

We tested it.

It is not a commercial endorsement.

Other cloud providers exist.

It is shorthand; feel free to insert your favourite cloud provider instead.

Cloud-ifying Ensembl

Website: LAMP stack.

Ports easily to Amazon.

Provides virtual world-wide co-lo.

Compute pipeline: HPTC workload.

Compute pipeline is a harder problem.

Expanding markets

There are going to be lots of new genomes that need annotating, as sequencers move into small labs and clinical settings.

These users have limited informatics/systems experience: typically postdocs or PhD students who have a real job to do.

We have already done all the hard work of installing and tuning the software. Can we package up the pipeline and put it in the cloud?

Goal: End user should simply be able to upload their data, insert their credit-card number, and press GO.

Gene Finding

DNA

HMM Prediction

Alignment with known proteins

Alignment with fragments recovered in vivo

Alignment with other genes and other species

Compute Pipeline

Architecture: OO Perl pipeline manager.

Core algorithms are C.

200 auxiliary binaries.

Workflow: the investigator describes the analysis at a high level.

The pipeline manager splits the analysis into parallel chunks: typically 50k-100k jobs.

It sorts out the dependencies and then submits jobs to a DRM, typically LSF or SGE.

Pipeline state and results are stored in a MySQL database.

The workload is embarrassingly parallel: integer, not floating point.

64-bit memory addressing is nice, but not required; 64-bit file access is required.

Single threaded jobs.

Very IO intensive.
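A minimal sketch of that chunk-and-submit pattern (not the Ensembl pipeline manager itself): split the genome into fixed-size pieces, drive them as an LSF job array, and hold a merge step until the whole array completes. The bsub options (-J, -o, -w) and the $LSB_JOBINDEX variable are standard LSF; the commands being submitted are placeholders.

    GENOME_LENGTH = 3_000_000_000     # ~3 Gbases
    CHUNK = 10_000_000                # 10 Mbase pieces -> 300 array elements
    n_chunks = (GENOME_LENGTH + CHUNK - 1) // CHUNK

    # One job array: element N analyses chunk N. The single quotes stop the
    # submitting shell expanding $LSB_JOBINDEX; it is expanded on the execution host.
    print(f'bsub -J "annotate[1-{n_chunks}]" -o logs/annotate.%I.out '
          f"run_chunk --chunk-size {CHUNK} --chunk-index '$LSB_JOBINDEX'")

    # The merge step is held until every element of the array has completed.
    print('bsub -J annotate_merge -w "done(annotate)" -o logs/merge.out merge_results')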

Running the pipeline in practice

Requires a significant amount of domain knowledge.

The software install is complicated: lots of Perl modules and dependencies.

Apache wrangling if you want to run a website.

You need a well-tuned compute cluster: the pipeline takes ~500 CPU days for a moderate genome, and Ensembl chewed up 160k CPU days last year.

Code is IO bound in a number of places.

You typically need a high-performance filesystem: Lustre, GPFS, Isilon, Ibrix etc.

You need large MySQL databases: 100 GB-TB MySQL instances, with a very high query load generated from the cluster.

How does this port to cloud environments?

Creating the software stack / machine image: creating images with the software installed is reasonably straightforward.

Getting the queuing system etc. running requires jumping through some hoops.

MySQL databases: there is lots of best practice on how to run these on EC2.

But it took time, even for experienced systems people. (You will not be firing your system administrators just yet!)

Moving data is hard

Moving large amounts of data across the public internet is hard: commonly used tools are not suited to wide-area networks. There is a reason GridFTP, FDT and Aspera exist.

Data transfer rates (GridFTP/FDT): Cambridge → EC2 US East coast: 12 Mbytes/s (96 Mbits/s).

Cambridge → EC2 Dublin: 25 Mbytes/s (200 Mbits/s).

11 hours to move 1 TB to Dublin.

23 hours to move 1 TB to the East coast.
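A quick sanity check on those figures (taking 1 TB as 10^6 Mbytes):

    terabyte_in_mbytes = 1_000_000
    for route, rate_mb_per_s in [("EC2 US East coast", 12), ("EC2 Dublin", 25)]:
        hours = terabyte_in_mbytes / rate_mb_per_s / 3600
        print(f"Cambridge -> {route}: {hours:.0f} hours per TB")
    # -> roughly 23 hours and 11 hours respectively.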

What speed should we get? Once we leave JANET (the UK academic network), finding out what the connectivity is and what we should expect is almost impossible.

IO Architecture

[Diagram: traditional HPTC architecture (CPUs on a fat network sharing a POSIX global filesystem, driven by a batch scheduler) vs the cloud architecture (CPUs with local storage on a thin network, using Hadoop/S3).]

Storage / IO is hard

No viable global filesystems on EC2.

NFS has poor scaling at the best of times, and EC2 has poor inter-node networking: with more than 8 NFS clients, everything stops.

The cloud way: store data in S3, a web-based object store (get, put, delete objects).

It is not POSIX: code needs re-writing / forking.

Limitations: cannot store objects > 5 GB.
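For a feel of the object-store model, a sketch using the classic boto library is below; the bucket and key names are placeholders, and credentials are assumed to come from the environment. Note the object-size limit above: multi-terabyte datasets have to be split.

    import boto

    conn = boto.connect_s3()      # reads AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
    bucket = conn.create_bucket("example-genome-results")

    # "put": upload an alignment file as an object.
    key = bucket.new_key("lane42/alignment.bam")
    key.set_contents_from_filename("alignment.bam")

    # "get": pull it back down on another node.
    bucket.get_key("lane42/alignment.bam").get_contents_to_filename("alignment_copy.bam")

    # "delete".
    bucket.delete_key("lane42/alignment.bam")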

Nasty hacks: Subcloud, a commercial product that allows you to run a POSIX filesystem on top of S3. Interesting performance, and you are paying by the hour...

Going forward

Cloud vs HPTC

Re-writing apps to use S3 or Hadoop/HDFS is a real hurdle. It is not an issue for new apps.

But new apps do not exist in isolation.

The barrier to entry is much lower for filesystems.

Am I being a reactionary old fart? 15 years ago, clusters of PCs were not real supercomputers...

...then Beowulf took over the world.

The big difference: porting applications between those two architectures was easy (MPI/PVM etc.).

Will the market provide traditional compute clusters in the cloud?

Networking

How do we improve data transfers across the public internet? The CERN approach: don't.

Dedicated networking has been put in between CERN and the Tier 1 centres, who get all of the CERN data.

Our collaborations are different: we have relatively short-lived and fluid collaborations (1-2 years, many institutions).

As more labs get sequencers, our potential collaborators also increase.

We need good connectivity to everywhere.

Can we turn the problem on its head?

Fixing the internet is not going to be cost effective for us.

Amazon fixing the internet may be cost-effective for them: it is core to their business model.

All we need to do is get data into Amazon, and then everyone else can get the data from there.

The cloud as a virtual co-location site: mass datastores.

Host mirror sites for our web services.

This requires us to invest in fast links to Amazon, and it changes the business dynamic.

We have effectively tied ourselves to a single provider.

It is an expensive mistake if you change your mind, or your provider goes out of business.

Identity management

Web services for linking databases together are mature, but they are currently all public.

There will be demand for restricted services: patient-identifiable data.

This is our next big challenge. There are lots of solutions: OpenID, Shibboleth, Aspis, Globus etc.

Finding consensus will be hard.

Culture shock.

Acknowledgements

Sanger Institute

Phil Butcher

ISG: James Beal

Gen-Tao Chiang

Pete Clapham

Simon Kelley

Cancer Genome Project: Adam Butler

John Teague

STFC: David Corney

Jens Jensen

Sites of interest

http://www.ensembl.org

http://www.sanger.ac.uk/cosmic

http://www.biomart.org

http://www.biodas.org

http://www.icgc.org


Disk storage (TB) by year:

    1994: 0.1      1995: 0.2      1996: 0.4      1997: 0.8
    1998: 1.75     1999: 4.5      2000: 9        2001: 24
    2002: 100      2003: 160      2004: 260      2005: 360
    2006: 500      2007: 1000     2008: 2000     2009: 6000

Sequencing output (Gbases/month), Jan 2010: capillary 3.5, Illumina 4000.