
Uploaded by: sean-ekins

Posted on 23-Jan-2017


TRANSCRIPT

Page 1: Mining 'Bigger' Datasets to Create, Validate and Share Machine Learning Models

Mining 'Bigger' Datasets to Create, Validate and Share Machine Learning Models

Sean Ekins1,2,3* and Alex M. Clark4

1 Collaborations Pharmaceuticals, Inc., 5616 Hilltop Needmore Road, Fuquay-Varina, NC 27526, USA 2 Collaborative Drug Discovery, 1633 Bayshore Highway, Suite 342, Burlingame, CA 94010, USA

3 Collaborations in Chemistry, 5616 Hilltop Needmore Road, Fuquay-Varina, NC 27526, USA 4 Molecular Materials Informatics, 1900 St. Jacques #302, Montreal H3J 2S1, Quebec, Canada

Disclosure: In addition to being an employee of the companies above, the author is funded by the NIH and EC FP7, and consults for several rare disease foundations, drug companies, consumer product companies, etc.

Page 2

Laboratories past and present

Lavoisier’s lab 18th C Edison’s lab 20th C

Author’s lab 21st C

+ Network of global collaborators

Page 3

"Rub al Khali 002" by Nepenthes

The chemistry/biology data desert outside of pharma, circa the early 2000s

Limited ADME/Tox data

Paucity of Structure Activity Data

Small datasets for modeling

Drug companies – gatekeepers of information for drug discovery

Page 4

"Oasis in Libya" by Sfivat

The growing chemistry/biology data Oasis outside of pharma circa 2015

Page 5

ADME/Tox models 15 yrs on: Then & Now

Then:
• Datasets very small (< 100 cpds)
• Heavy focus on P450
• Models rarely used
• Very limited number of properties addressed
• Few tools / algorithms used
• Limited access to models

Now:
• Much bigger datasets (1000s of cpds, > 10,000)
• Broader range of models
• Models more widely used and reported
• More accessible models
• Pharma making data available

Examples: 70 hERG models (Villoutreix and Taboureau 2015); 19 protein binding models (Lambrinidis et al 2015); 40 BBB models up to 2009

Page 6

Model resources for ADME/Tox

Page 7

CYP                  1A2               2C9              2C19
Substrate (µM)       phenacetin (10)   diclofenac (10)  omeprazole (0.5)
Inhibitor            naphthoflavone    sulfaphenazole   tranylcypromine
JSF-2019 IC50 (µM)   2.25              3.55             10.8

Retinal dehydrogenase 1

ADME SARfari predicts importance of CYP1A2, CYP2C9, CYP2C19

The Naïve Bayes model was built with 142345 compounds (training and validation) and features 135 learned classes.

Testing by Dr. Joel Freundlich

Page 8
Page 9

Just a matter of scale?

Drug Discovery’s definition of Big data

Everyone else’s definition of Big data

Page 10

Where can we get the datasets?

• Data Sources:
• PubChem
• ChEMBL
• ToxCast (over 1,800 molecules tested against over 800 endpoints)

Page 11

Open source – but much smaller

400 diverse, drug-like molecules active against neglected diseases

400 cpds selected from around 20,000 hits generated by a screening campaign of ~4 million compounds from the libraries of St. Jude Children's Research Hospital (TN, USA), Novartis and GSK.

Many screens completed

Page 12

Bigger datasets and model collections

• Profiling “big datasets” is going to be the norm.

• A recent study mined PubChem datasets for compounds that have rat in vivo acute toxicity data

• This could be used in other big data initiatives like ToxCast (> 1000 compounds x 800 assays) and Tox21 etc.

• Kinase screening data (1000s mols x 100s assays)

• GPCR datasets etc (1000s mols x 100s assays)

Zhang J, Hsieh JH, Zhu H (2014) Profiling Animal Toxicants by Automatically Mining Public Bioassay Data: A Big Data Approach for Computational Toxicology. PLoS ONE 9(6): e99863. doi:10.1371/journal.pone.0099863

Page 13
Page 14

‘Bigger’ and not ‘Big’

Page 15

[Figure: overlapping TB dataset sizes – 220,463; 102,633; 23,797; 346,893; 2,273; 1,783; 1,248; 5,304; 218,640; 102,634; 23,737; 345,011 – totaling 1,771,924 molecules]

Are bigger models better for tuberculosis ?

Ekins et al., J Chem Inf Model

54: 2157-2165 (2014)

Page 16

No relationship between internal or external ROC and the number of molecules in the training set?

PCA of combined data and ARRA (red)

Ekins et al., J Chem Inf Model

54: 2157-2165 (2014)

Internal and leave out 50%x100 ROC track each other

External ROC less correlation

Smaller models do just as well with external testing

~350,000
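The leave-out 50% × 100 validation used above can be sketched in plain Python; `roc_auc`, `leave_out_half_x100` and `train_and_score` are illustrative names, not from the published workflow:

```python
import random

def roc_auc(scores, labels):
    """ROC AUC via the Mann-Whitney statistic: the probability that a
    randomly chosen active scores higher than a randomly chosen
    inactive (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def leave_out_half_x100(mols, labels, train_and_score, n_repeats=100, seed=7):
    """Hold out a random 50% of the data, retrain on the rest, score the
    held-out half, and average the ROC over many repeats."""
    rng = random.Random(seed)
    idx = list(range(len(mols)))
    aucs = []
    for _ in range(n_repeats):
        rng.shuffle(idx)
        half = len(idx) // 2
        train, test = idx[half:], idx[:half]
        model = train_and_score([mols[i] for i in train],
                                [labels[i] for i in train])
        ys = [labels[i] for i in test]
        if 0 < sum(ys) < len(ys):  # a ROC needs both classes present
            aucs.append(roc_auc([model(mols[i]) for i in test], ys))
    return sum(aucs) / len(aucs)
```

With a trivial scorer, e.g. `train_and_score = lambda X, y: (lambda m: m)`, this returns the averaged external ROC for the repeated random splits.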

Page 17

The Opportunity

•Get pharmas to use open source molecular descriptors and algorithms

•Benefit from initial work done by Pfizer/CDD

•Avoid repetition of open source tools vs commercial tools comparisons

•Change the mindset from real data to virtual data – confirm predictions

•ADME/Tox is precompetitive

•Expand the chemical space and predictivity of models

•Share models with collaborators – Companies could share data as models

Ekins and Williams, Lab On A Chip, 10: 13-22, 2010.

Gupta RR, et al., Drug Metab Dispos, 38: 2083-2090, 2010

Page 18

Pfizer Open models and descriptors

Gupta RR, et al., Drug Metab Dispos, 38: 2083-2090, 2010

• What can be developed with very large training and test sets?

• HLM training 50,000 testing 25,000 molecules

• training 194,000 and testing 39,000

• MDCK training 25,000 testing 25,000

• MDR training 25,000 testing 18,400

• Open molecular descriptors / models vs commercial descriptors

Page 19

• Examples – Metabolic Stability

Gupta RR, et al., Drug Metab Dispos, 38: 2083-2090, 2010

                              HLM Model with CDK     HLM Model with MOE2D
                              and SMARTS Keys        and SMARTS Keys
# Descriptors                 578                    818
# Training Set compounds      193,650                193,930
Cross Validation compounds    38,730                 38,786
Training R2                   0.79                   0.77
20% Test Set R2               0.69                   0.69
Blind Data Set (2310 compounds):
  R2                          0.53                   0.53
  RMSE                        0.367                  0.367
Continuous → Categorical:
  κ                           0.40                   0.42
  Sensitivity                 0.16                   0.24
  Specificity                 0.99                   0.987
  PPV                         0.80                   0.823
Time (sec/compound)           0.252                  0.303

PCA of training (red) and test (blue) compounds: overlap in chemistry space

Page 20

• Examples – P-gp

Gupta RR, et al., Drug Metab Dispos, 38: 2083-2090, 2010

• Open source descriptors (CDK) and C5.0 algorithm
• ~60,000 molecules with P-gp efflux data from Pfizer
• Training set: MDR < 2.5 (low risk) N = 14,175; MDR > 2.5 (high risk) N = 10,820
• Test set: MDR < 2.5 N = 10,441; MDR > 2.5 N = 7,972

             CDK + fragment descriptors   MOE 2D + fragment descriptors
Kappa        0.65                         0.67
Sensitivity  0.86                         0.86
Specificity  0.78                         0.80
PPV          0.84                         0.84

Could facilitate model sharing?
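The kappa, sensitivity, specificity and PPV figures reported in these tables all derive from a 2×2 confusion matrix; a minimal sketch (the function name is hypothetical):

```python
def classification_metrics(tp, fp, tn, fn):
    """Compute the metrics reported in the model tables from a 2x2
    confusion matrix: sensitivity, specificity, PPV and Cohen's kappa."""
    n = tp + fp + tn + fn
    sensitivity = tp / (tp + fn)          # recall on actives
    specificity = tn / (tn + fp)          # recall on inactives
    ppv = tp / (tp + fp)                  # positive predictive value
    observed = (tp + tn) / n              # observed agreement
    # chance agreement estimated from the marginal totals
    expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (observed - expected) / (1 - expected)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "ppv": ppv, "kappa": kappa}
```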

Page 21

Models reside in papers, not accessible… this is undesirable.

How do we share them?

How do we use them?

Page 22

Open Extended Connectivity Fingerprints: ECFP_6, FCFP_6

• Collected, deduplicated, hashed
• Sparse integers
• Invented for Pipeline Pilot: public method, proprietary details
• Often used with Bayesian models: many published papers
• Built a new implementation: open source, Java, CDK
– stable: fingerprints don't change with each new toolkit release
– well defined: easy to document precise steps
– easy to port: already migrated to iOS (Objective-C) for TB Mobile app
• Provides core basis feature for CDD open source model service

Clark et al., J Cheminform 6:38 2014
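As a rough illustration of the idea behind ECFP-style fingerprints (a toy sketch only, not the published CDK implementation by Clark et al.): each atom's environment is grown shell by shell, hashed to a sparse integer, and the deduplicated set is the fingerprint.

```python
from hashlib import sha256

def circular_fingerprint(atoms, bonds, radius=3):
    """Toy ECFP-style fingerprint: grow each atom's environment out to
    `radius` bonds (diameter 6 for ECFP_6), hash each environment to a
    sparse integer, and return the deduplicated sorted set.
    `atoms` is a list of element symbols; `bonds` a list of index pairs."""
    neighbors = {i: [] for i in range(len(atoms))}
    for a, b in bonds:
        neighbors[a].append(b)
        neighbors[b].append(a)

    def h(text):  # stable 32-bit hash of a string
        return int.from_bytes(sha256(text.encode()).digest()[:4], "big")

    # initial identifier per atom: hash of its element symbol
    ids = [h(sym) for sym in atoms]
    features = set(ids)
    for _ in range(radius):
        # fold the sorted neighbor identifiers into each atom's own
        # identifier, widening the environment by one bond per pass
        ids = [h(str((ids[i], sorted(ids[j] for j in neighbors[i]))))
               for i in range(len(atoms))]
        features.update(ids)
    return sorted(features)
```

A real implementation also encodes atom properties (charge, degree, etc.) and canonicalizes duplicate environments, which is what makes the CDK version stable and well defined.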

Page 23

Uses Bayesian algorithm and FCFP_6 fingerprints

Bayesian models

Clark et al., J Cheminform 6:38 2014
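A Laplacian-corrected naive Bayes scorer of the kind used with these fingerprints can be sketched as follows; this is a generic reconstruction of the commonly published formula, not the exact CDD/CDK code:

```python
from collections import defaultdict
from math import log

def build_laplacian_bayesian(fingerprints, actives):
    """Laplacian-corrected naive Bayes over sparse fingerprint features.
    Each feature F contributes log((A_F + 1) / (T_F * P + 1)), where
    A_F = actives containing F, T_F = total compounds containing F, and
    P = overall fraction of actives; the score is the sum over features."""
    total = len(fingerprints)
    p_active = sum(actives) / total
    count_all = defaultdict(int)
    count_act = defaultdict(int)
    for fp, is_active in zip(fingerprints, actives):
        for f in set(fp):
            count_all[f] += 1
            count_act[f] += is_active
    weights = {f: log((count_act[f] + 1) / (count_all[f] * p_active + 1))
               for f in count_all}
    # features unseen in training contribute nothing
    return lambda fp: sum(weights.get(f, 0.0) for f in set(fp))

# Hypothetical toy data: feature 1 marks actives, feature 2 inactives
score = build_laplacian_bayesian(
    [[1, 3], [1, 3], [2, 3], [2, 3]], [1, 1, 0, 0])
```

The correction shrinks the per-feature estimate toward the baseline activity rate, which is what makes the method robust for rarely occurring fingerprint features.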

Page 24

Exporting models from CDD

Clark et al., JCIM 55: 1231-1245 (2015)

Page 25

Machine Learning – Different tools

• Models generated using: molecular function class fingerprints of maximum diameter 6 (FCFP_6), AlogP, molecular weight, number of rotatable bonds, number of rings, number of aromatic rings, number of hydrogen bond acceptors, number of hydrogen bond donors, and molecular fractional polar surface area.
• Models were validated using five-fold cross validation (leave out 20% of the database).
• Bayesian, Support Vector Machine, Recursive Partitioning Forest and single tree models built.
• RP Forest and RP Single Tree models used the standard protocol in Discovery Studio.
• 5-fold cross validation or leave-out 50% × 100-fold cross validation was used to calculate the ROC for the models generated.
• * fingerprints only

Ai et al., ADDR 86: 46-60, 2015

KCNQ1

Page 26

Ames Bayesian model built with 6512 molecules (Hansen et al., 2009)

Features important for Ames actives. Features important for Ames inactives.

Page 27

Ames Bayesian model built using CDD Models showing ROC for 3 fold cross validation. Note only FCFP_6 descriptors were used

Page 28

FCFP6 fingerprint models in CDD

Clark et al., JCIM 55: 1231-1245 (2015)

Page 29

ECFP6 fingerprint only models in MMDS

Clark et al., JCIM 55: 1231-1245 (2015)

Page 30

Using AZ-ChEMBL data for CDD Models

Page 31

• Human microsomal intrinsic clearance

• Rat hepatocyte intrinsic clearance

Page 32

What if the models were already built for you

• Instead of having to go into a database and find data

• The models are already prebuilt

• Ready to use

• Shareable

• Create a repository of models

Page 33

Previous work by others

• Using large datasets to predict targets with Bayesian algorithm

• Bayesian classifier - 698 target models (> 200,000 molecules, 561,000 measurements) Paolini et al 2006

• 246 targets (65,241 molecules) Similarity ensemble analysis Keiser et al 2007

• 2000 targets (167,000 molecules) target identification from zebrafish screen Laggner et al 2012

• 70 targets (100,269 data points) Bender et al 2007

• Many others…..

• None of these enable you to qualitatively or quantitatively predict activity for a single target.

Page 34

Recent Studies

• Bit folding – trade off between performance & efficacy

• Model cut-off selection for cross validation

• Scalability of ECFP6 and FCFP6 using ChEMBL 20 mid size datasets

• CDK codebase on GitHub (http://github.com/cdk/cdk; look for class org.openscience.cdk.fingerprint.model.Bayesian)

• Made the models accessible http://molsync.com/bayesian2

Clark and Ekins, J Chem Inf Model. 2015 Jun 22;55(6):1246-60
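Bit folding itself is simple to sketch (function name hypothetical): sparse fingerprint hashes are mapped into a fixed-width bit vector by modulo, trading hash collisions for compactness:

```python
def fold_fingerprint(sparse_hashes, n_bits=1024):
    """Fold sparse integer fingerprint hashes into a fixed-width bit
    vector by taking each hash modulo the bit count. Wider vectors mean
    fewer collisions; the study above found performance plateaus at
    4096 bits, with 1024 usable at little cost."""
    bits = [0] * n_bits
    for h in sparse_hashes:
        bits[h % n_bits] = 1   # collisions simply set the same bit
    return bits
```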

Page 35

What do 2000 ChEMBL models look like?

[Figure: average ROC vs. folding bit size]

http://molsync.com/bayesian2

Clark and Ekins, J Chem Inf Model. 2015 Jun 22;55(6):1246-60

Page 36

ChEMBL 20

• Skipped targets with > 100,000 assays and sets with < 100 measurements

• Converted data to –log

• Dealt with duplicates

• 2152 datasets

• Cutoff determination

• Balance active/ inactive ratio

• Favor structural diversity and activity distribution

Clark and Ekins, J Chem Inf Model. 2015 Jun 22;55(6):1246-60
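Converting data to –log and dealing with duplicates can be sketched as below; the averaging policy is an assumption for illustration, not necessarily the one used in the paper:

```python
from math import log10

def to_pic50(ic50_nM):
    """Convert an IC50 in nM to the -log10 of the molar concentration
    (pIC50): 1000 nM (1 uM) -> 6.0, 10 nM -> 8.0."""
    return -log10(ic50_nM * 1e-9)

def merge_duplicates(measurements):
    """One way to deal with duplicate measurements of the same compound:
    average the converted pIC50 values per compound identifier."""
    by_cpd = {}
    for cpd, value in measurements:
        by_cpd.setdefault(cpd, []).append(value)
    return {cpd: sum(v) / len(v) for cpd, v in by_cpd.items()}
```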

Page 37

Desirability score

• ROC integral for model using subset of molecules and threshold for partitioning active/inactive (higher is better)
• Second derivative of population interpolated from the current threshold (lower is better)
• Ratio of actives to inactives if the collection is partitioned: (actives+1)/(inactives+1) or its reciprocal, whichever is greater

Clark and Ekins, J Chem Inf Model. 2015 Jun 22;55(6):1246-60

Ekins et al Drug Metab Dispos 43(10):1642-5, 2015
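The ratio component can be sketched directly from the description above; the composite `desirability` function and its weight are hypothetical illustrations, since the published formula is not reproduced in full here:

```python
def balance_penalty(n_actives, n_inactives):
    """The class-balance term described above: (actives+1)/(inactives+1)
    or its reciprocal, whichever is greater (1.0 = perfectly balanced;
    larger = more imbalanced)."""
    r = (n_actives + 1) / (n_inactives + 1)
    return max(r, 1 / r)

def desirability(roc, n_actives, n_inactives, balance_weight=0.1):
    """Illustrative composite score for picking an activity threshold
    (the published score also uses a smoothness term from the second
    derivative of the population; the weight here is hypothetical):
    reward a high ROC integral, penalize class imbalance."""
    return roc - balance_weight * (balance_penalty(n_actives, n_inactives) - 1)
```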

Page 38

Models from ChEMBL data

http://molsync.com/bayesian2 Clark and Ekins, J Chem Inf Model. 2015 Jun 22;55(6):1246-60

Page 39

Results

• Bit folding – plateau at 4096; can use 1024 with little degradation
• Cut-off determination – works well
• Evaluated balanced training:test splits and "diabolical" splits where test and training sets are structurally different: easy ROC 0.83 ± 0.11, hard ROC 0.39 ± 0.23

Clark and Ekins, J Chem Inf Model. 2015 Jun 22;55(6):1246-60

Page 40

Models in mobile app

• Added atom coloring using ECFP6 fingerprints

• Red and green indicate high and low probability of activity, respectively

Clark and Ekins, J Chem Inf Model. 2015 Jun 22;55(6):1246-60

Page 41

Results for Bayesian model cross validation. 5-fold and Leave one out (LOO) validation with Bayesian models generated with Discovery Studio and Open Models implemented in the mobile app MMDS. * = previously published

Ekins et al Drug Metab Dispos 43(10):1642-5, 2015

Transporter models

Page 42

Ekins et al Drug Metab Dispos 43(10):1642-5, 2015

Transporter models

Page 43

ToxCast data

• Few studies use the ToxCast data for machine learning

• Recent reviews Sipes et al., Chem Res Toxicol. 2013 Jun 17; 26(6): 878–895.

• Liu et al., Chem Res Toxicol. 2015 Apr 20;28(4):738-51

• A set of 677 chemicals was represented by 711 in vitro bioactivity descriptors (from ToxCast assays), 4,376 chemical structure descriptors (from QikProp, OpenBabel, PaDEL, and PubChem), and three hepatotoxicity categories (from animal studies)

• Six machine learning algorithms: linear discriminant analysis (LDA), Naïve Bayes (NB), support vector machines (SVM), classification and regression trees (CART), k-nearest neighbors (KNN), and an ensemble of these classifiers (ENSMB)

• Nuclear receptor activation and mitochondrial functions were frequently found in highly predictive classifiers of hepatotoxicity

• CART, ENSMB, and SVM classifiers performed the best

Page 44

CDD Models for human P450s (NVS data) from ToxCast (n=1787) <1uM cutoff

CYP1A1 CYP1A2 CYP2B6 CYP2C18

CYP2C19 CYP2C9 CYP3A4 CYP3A5

Page 45

ToxCast models in a mobile app

IC50 1A2 = 2.25 µM; IC50 2C9 = 3.55 µM; IC50 2C19 = 10.8 µM

In vitro data

Courtesy Dr. Joel Freundlich

Page 46

PolyPharma a new free app for drug discovery

Page 47

Composite models - Binned Bayesians

Clark et al., Submitted 2015

Page 48

Summary

• Shown that open source models/descriptors are comparable to previously published models built with commercial software
• Implemented Bayesian machine learning in CDD Vault
• Can be used on private or public data
• Can enable sharing of models in CDD Vault
• Enabled export of models – can use models in 3rd party mobile apps or other tools
• Demonstrated various ADME/Tox models and transporters
• Made ToxCast data into models that can be used by anyone
• Provide more information on models and predictions
• Visualize training set molecules vs test compounds
• Use a model to predict compounds and then test them

Page 49

Future ?

+ = Big Models

Thousands of Big Models

How do you validate 1000s of models?
How do algorithms handle 500K – 1M molecules?
Need new algorithms, data visualization, mining approaches
Model sharing is here
Need for broad Biology & Chemistry knowledge – open minds, BIG thinkers

Page 50

Acknowledgments

• Alex Clark, Antony Williams

• Joel Freundlich, Robert Reynolds

• Steven Wright

• Krishna Dole and all colleagues at CDD

• Award Number 9R44TR000942-02 “Biocomputation across distributed private datasets to enhance drug discovery” from the NIH National Center for Advancing Translational Sciences.

• R41-AI108003-01 “Identification and validation of targets of phenotypic high throughput screening” from NIH National Institute of Allergy and Infectious Diseases

• Bill and Melinda Gates Foundation (Grant#49852 “Collaborative drug discovery for TB through a novel database of SAR data optimized to promote data archiving and sharing”).

Page 51

Software on github

Models can be accessed at

• http://molsync.com/bayesian1

• http://molsync.com/bayesian2

• http://molsync.com/transporters