COMPARISON OF RESOURCE PLATFORM SELECTION APPROACHES FOR SCIENTIFIC WORKFLOWS
Yogesh Simmhan, MSR & Lavanya Ramakrishnan, LBL
Science Cloud 2010


Page 1: COMPARISON OF RESOURCE PLATFORM SELECTION APPROACHES FOR SCIENTIFIC WORKFLOWS (datasys.cs.iit.edu/events/ScienceCloud2010/s10.pdf)

COMPARISON OF RESOURCE PLATFORM SELECTION APPROACHES FOR SCIENTIFIC WORKFLOWS

Yogesh Simmhan, MSR & Lavanya Ramakrishnan, LBL
Science Cloud 2010

Page 2:

Workflows for Modeling eScience

Workflows are common for in silico experiments: DAGs & dataflows
  Allow easy composition; change often
Loosely-coupled tasks vs. tightly-coupled MPI; different task characteristics
  Compute & data intensive
Challenge of eScience problem sizes
  Resource needs often exceed those available
  Scale out beyond current resources

Page 3:

Flourishing Space of Resource Platforms

Cluster, Cloud, HPC, Desktop
  Local, captive resources
  Batch systems
  On-demand platforms
Different characteristics of resource platforms

Page 4:

Cloud Platform for eScience

On-demand

Scale out

Available

Management Ease

Economical (TCO)

Simple APIs

Page 5:

Resource Selection for Workflows

Scientists need to select from existing & emerging resources
  Ad hoc, rule-of-thumb choices based on familiarity can be sub-optimal, even punitive
Different characteristics of resource platforms, dynamic over the short & long term
Different goals: makespan, usage, co$t

DAG scheduling algorithms: tasks/WFs scheduled to a platform
Automatic WF scheduling: Pegasus, Swift, Trident, etc. schedule WFs to remote resources
  Support various platforms: Cluster, HPC, EC2
Mandal, et al.: performance-based advance reservation for GrADS; Batch Queue Prediction Service
Blythe, et al.: task-level greedy algorithm vs. WF-level optimization

Page 6:

Resource Selection for Workflows

These approaches need information about WFs
  Structural & task-level details, data flow
Fine-grained details are hard to determine & specify
  Provenance mining, performance models
Different granularities of WF details
  Blackbox, Graybox, Whitebox

Fine-grained workflow specs and evolving resource platforms pose overhead for users

Page 7:

Hypothesis: Can we make intelligent resource platform selection with limited workflow information?
  Length
  Width
  Data In/Out

What are the trade-offs of running applications on different platforms?

Page 8:

OVERVIEW

Page 9:

Workflow Characteristics

Structural Information
  Pattern: Sequential, Fork-Join, Control flow
  Length: # of stages, length per stage, total length
  Width: Fan-out
Resource Usage
  Data: In/Out
  Compute: Cores required

[Workflow diagram: stages annotated with Input Data, CPU Time, Fanout, Intermediate Data, and Output Data]
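The limited workflow attributes above (length, width, data in/out) are all the later selection functions need; a hypothetical sketch of such a record in Python, with class and field names of my choosing:

```python
from dataclasses import dataclass

@dataclass
class StageInfo:
    # Per-stage attributes named on the slide; units are assumptions.
    length_secs: float   # runtime of one task in this stage
    width: int           # fan-out: number of parallel tasks
    data_mb: float       # data crossing this stage's boundary

@dataclass
class WorkflowInfo:
    stages: list  # StageInfo records in execution order

    @property
    def total_length_secs(self) -> float:
        return sum(s.length_secs for s in self.stages)

    @property
    def max_width(self) -> int:
        return max(s.width for s in self.stages)

# MOTIF-like example from the evaluation slides: fork, 135-wide compute, join
motif = WorkflowInfo([StageInfo(30, 1, 13),
                      StageInfo(90 * 60, 135, 0.1),
                      StageInfo(60, 1, 1269)])
```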

Page 10:

Resource Platforms & Characteristics

Desktop
  Full application control
  Growth of multi-core
  eScience beats Moore's Law
Cluster
  Small-to-mid clusters (~256 cores)
  Under-subscribed, instant use
  Large science apps don't fit
HPC
  Shared, national centers
  Large # of cores (>1000)
  Over-subscribed queues, policies
Cloud
  Infrastructure & Platform as a Service
  On-demand, customizable
  Virtualization impact, bandwidth

Key platform characteristics: available cores, queue/VM latency, network bandwidth, core speed

Page 11:

PLATFORM SELECTION FOR WORKFLOWS

Page 12:

Whitebox Selection (Fine Grained)

Full workflow & task details available
Total runtime due to CPU time, I/O data transfer, and queue/VM overhead
Time is accrued by each independent task

[MOTIF workflow diagram, annotated per task/stage: 90 mins, 135× fan-out, 100KB, 30 secs, 13MB, 60 secs, 500KB, 599MB×2, 71MB]

$F_{WorkflowTime} = \sum_{i=0}^{\#stages} F_{StageTime_i}$

$F_{StageTime_i} = T_{OverheadOne_i} + T_{Data_i} + T_{TaskLength_i} \times \frac{N_{TaskWidth_i}}{N_{Cores_i}}$
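The whitebox formula reads as a per-stage loop; a minimal Python sketch, assuming stages arrive as (task length, width, boundary data) tuples. The ceiling on width/cores (tasks run in waves) is my assumption; the slide writes the plain ratio.

```python
import math

def whitebox_makespan(stages, cores, overhead_secs, bandwidth_mbps):
    """Whitebox estimate: sum per-stage times; each stage pays its own
    queue/VM overhead, data-transfer time, and compute time serialized
    over the available cores."""
    total = 0.0
    for length_secs, width, data_mb in stages:
        t_data = data_mb * 8.0 / bandwidth_mbps   # MB -> Mbit over an Mbps link
        waves = math.ceil(width / cores)          # rounds of parallel tasks
        total += overhead_secs + t_data + length_secs * waves
    return total
```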

Page 13:

Graybox Selection (Hybrid)

[MOTIF workflow diagram, annotated: 90 mins, 135× fan-out, 100KB, 30 secs, 13MB, 60 secs, 500KB, 1269MB total]

Workflow stage details available

Stages are opaque: no task details available
Time is accrued by each independent stage
Overhead time is paid only for the longest stage
Queue/VM times are pipelined

$F_{WorkflowTime} = T_{OverheadMax} + \sum_{i=0}^{\#stages} F_{StageTime_i}$

$F_{StageTime_i} = T_{Data_i} + T_{StageLength_i} \times \frac{N_{StageWidth_i}}{N_{Cores_i}}$
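Read the same way, the graybox formula pays only the largest overhead, once, since queue/VM times pipeline across stages; a minimal sketch under the same assumptions as before (tuple layout and the ceiling are mine):

```python
import math

def graybox_makespan(stages, cores, max_overhead_secs, bandwidth_mbps):
    """Graybox estimate: stage-level details known, tasks opaque; queue/VM
    overheads are pipelined, so only the largest one is paid, once."""
    total = float(max_overhead_secs)
    for length_secs, width, data_mb in stages:
        t_data = data_mb * 8.0 / bandwidth_mbps   # boundary data per stage
        total += t_data + length_secs * math.ceil(width / cores)
    return total
```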

Page 14:

Blackbox Selection (Coarse Grained)

[MOTIF workflow diagram, annotated: 91.5 mins, 135× fan-out, 13MB in, 1269MB out]

Only workflow outline details available

Workflow internals are opaque
Total CPU time; data transfer only at the workflow boundary
Overhead time is paid once for the entire workflow
All cores required by the workflow are acquired

$F_{WorkflowTime} = T_{OverheadMax} + T_{Data} + T_{WorkflowLength} \times \frac{N_{WorkflowWidth}}{N_{Cores}}$
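The blackbox case collapses to a single expression over workflow-level totals; a sketch under the same assumptions (parameter names and the ceiling are mine):

```python
import math

def blackbox_makespan(total_length_secs, width, boundary_data_mb,
                      cores, max_overhead_secs, bandwidth_mbps):
    """Blackbox estimate: only the workflow outline is known; one overhead
    for the whole workflow, data transferred only at the workflow boundary."""
    t_data = boundary_data_mb * 8.0 / bandwidth_mbps
    return max_overhead_secs + t_data + total_length_secs * math.ceil(width / cores)
```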

Page 15:

EARLY EVALUATION

Page 16:

eScience Workloads for Evaluation

MOTIF Network Workflow
  Gene regulation dependency networks
  Compute & data intensive
  13MB input, 1300MB output
  90 mins long, 135 tasks wide
  3 stages: Fork, Compute fan-out, Join

GWAS Workflow
  Genome-wide association study
  Compute intensive & wide
  150MB input, 160MB output
  19 mins long, 1100 tasks wide
  6 stages: two compute fan-outs, 1100 and 150 tasks wide

Page 17:

Resource Platforms for Evaluation

Local Workstation
  1 core, 2.5GHz
  All data local
Local Cluster
  Up to 256 cores of 2.5GHz
  Data remote on client
  1Gbps LAN bandwidth
TeraGrid HPC Clusters
  SDSC & BigRed clusters
  1 – 2048 cores of 2.5GHz
  NWS Batch Queue Prediction Service (95% quantile)
  Data remote; 10Mbps WAN
Azure Cloud
  Small VM: 1 core, 1.6GHz
  VM start time ~200 + 20c secs
  Data remote; 10Mbps WAN
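For reference, the platform parameters above can be tabulated to feed the estimators; core counts, clock speeds, and bandwidths are the slide's numbers, while the dictionary layout and function name are illustrative:

```python
# Platform attributes from the evaluation setup (None = not fixed:
# HPC queue waits come from NWS predictions, Azure cores are on demand).
PLATFORMS = {
    "Desktop":    {"cores": 1,    "ghz": 2.5, "net_mbps": None},  # all data local
    "Cluster":    {"cores": 256,  "ghz": 2.5, "net_mbps": 1000},  # 1Gbps LAN
    "HPC-SDSC":   {"cores": 2048, "ghz": 2.5, "net_mbps": 10},    # 10Mbps WAN
    "HPC-BigRed": {"cores": 2048, "ghz": 2.5, "net_mbps": 10},    # 10Mbps WAN
    "Azure":      {"cores": None, "ghz": 1.6, "net_mbps": 10},    # 10Mbps WAN
}

def azure_vm_startup_secs(c):
    """VM start-time model quoted on the slide: ~200 + 20c seconds."""
    return 200 + 20 * c
```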

Page 18:

Results: MOTIF Workflow

Workflow makespans calculated with the White/Gray/Blackbox functions, using the workflow & resource specifications

[Three charts: Estimated Time (Hrs) vs. Number of Cores (0–150), one each for the MOTIF Whitebox, Graybox, and Blackbox estimates; series: Desktop, Cluster, Cloud-Azure, HPC-BigRed, HPC-SDSC]

Black & Graybox ordering of platforms is the same for different # cores
Black & Whitebox ordering is similar … except for the two HPCs

Page 19:

Results: MOTIF Workflow

Black & Graybox absolute difference is small across platforms

Black & Whitebox absolute difference for BigRed is large

[Two charts: Error % of Blackbox from Whitebox (y-axis −40% to 80%) and from Graybox (y-axis −20% to 20%) vs. Number of Cores (1, 8, 16, 32, 64, 128, 135) for MOTIF; series: Desktop, Cluster, Cloud-Azure, HPC-BigRed, HPC-SDSC]

Page 20:

Results: GWAS Workflow

Black & Graybox ordering of platforms is the same for varying # cores

[Three charts: Estimated Time (Hrs) vs. Number of Cores (0–2000), one each for the GWAS Whitebox, Graybox, and Blackbox estimates; series: Desktop, Cluster, Cloud-Azure, HPC-BigRed, HPC-SDSC]

Blackbox & Whitebox ordering is similar
BigRed: the 1-core job's queue time under Whitebox is shorter than the width-core job's under Blackbox
SDSC/Azure: Azure time grows linearly; SDSC shows a step at 1024 cores under Blackbox

Page 21:

Conclusion & Future Work

Page 22:

Conclusions & Future Work

Runtime estimated from Blackbox is good enough for relative comparison
  Absolute values vary
  Queue overhead for tasks vs. the whole WF as one job has an impact
  Azure shows linear times; HPC shows step times
  Graybox ≈ Blackbox

Future work:
  More complex workflows
  Simulation vs. calculation
  Synthetic workflow runs
  Effect of each workflow attribute on the estimate
  Other WF features that have impact, e.g. minimum required cores per stage

Page 23:

Acknowledgements: Dept of Energy | C. van Ingen, R. Barga, J. Listgarten (MSR) | K. Jackson, D. Agarwal (LBL) | E. Soroush (UW)

Comparison of Resource Platform Selection Approaches for Scientific Workflows
Yogesh Simmhan, Microsoft Research
Lavanya Ramakrishnan, Lawrence Berkeley Lab
Science Cloud Workshop, 2010