Charm++ Load Balancing Framework

Gengbin Zheng ([email protected])
Parallel Programming Laboratory
Department of Computer Science
University of Illinois at Urbana-Champaign
http://charm.cs.uiuc.edu



Page 2

Motivation

Irregular or dynamic applications:
- Initial static load balancing
- Application behaviors change dynamically
- Difficult to implement with good parallel efficiency

Versatile, automatic load balancers:
- Application independent
- No/little user effort is needed for load balancing
- Based on Charm++ and Adaptive MPI

Page 3

Parallel Objects, Adaptive Runtime System, Libraries and Tools

(Figure: applications built on Charm++ — Molecular Dynamics, Computational Cosmology, Rocket Simulation, Protein Folding, Quantum Chemistry (QM/MM), Crack Propagation, Dendritic Growth, Space-time meshes.)

Page 4

Load Balancing in Charm++

- View an application as a collection of communicating objects
- Object migration is the mechanism for adjusting load
- Measurement-based strategy
  - Principle of persistent computation and communication structure
  - Instrument CPU usage and communication
  - Overloaded vs. underloaded processors
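The overloaded/underloaded distinction above can be sketched as comparing each processor's measured load to the average. This is a minimal plain-C++ illustration of the idea, not the actual Charm++ internals; the 5% tolerance band is an invented parameter.

```cpp
#include <cassert>
#include <vector>

// Classify processors as overloaded or underloaded relative to the
// average measured load. The tolerance band is illustrative only.
struct LoadClasses {
    std::vector<int> overloaded;   // PEs above avg * (1 + tolerance)
    std::vector<int> underloaded;  // PEs below avg * (1 - tolerance)
};

LoadClasses classify(const std::vector<double>& load, double tolerance = 0.05) {
    double total = 0;
    for (double l : load) total += l;
    double avg = load.empty() ? 0 : total / load.size();
    LoadClasses c;
    for (int i = 0; i < (int)load.size(); ++i) {
        if (load[i] > avg * (1 + tolerance)) c.overloaded.push_back(i);
        else if (load[i] < avg * (1 - tolerance)) c.underloaded.push_back(i);
    }
    return c;
}
```

A strategy can then move work from the first list toward the second.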

Page 5

Load Balancing – Graph Partitioning

(Figure: in the load balancer's view, the application is a weighted object graph; load balancing is a mapping of these objects onto Charm++ PEs.)

Page 6

Load Balancing Framework

(Figure: architecture of the LB framework.)

Page 7

Centralized vs. Distributed Load Balancing

Centralized:
- Object load data are sent to processor 0
- Integrated into a complete object graph
- Migration decisions are broadcast from processor 0
- Requires a global barrier

Distributed:
- Load balancing among neighboring processors
- Builds only a partial object graph
- Migration decisions are sent to a processor's neighbors
- No global barrier

Page 8

Load Balancing Strategies

Strategy class hierarchy:
- BaseLB
  - CentralLB: DummyLB, OrbLB, MetisLB, RecBisectBfLB, GreedyLB, RandCentLB, RefineLB, GreedyCommLB, RandRefLB, RefineCommLB, GreedyRefLB
  - NborBaseLB: NeighborLB

Page 9

Strategy Example - GreedyCommLB

Greedy algorithm:
- Put the heaviest object on the most underloaded processor
- An object's load is its CPU load plus its communication cost
- Communication cost is modeled as α + βm
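The greedy idea can be sketched as follows, in plain C++ rather than the actual GreedyCommLB source: sort objects by total load (CPU plus a per-message cost modeled as α + β·m) and repeatedly assign the heaviest remaining object to the currently least-loaded processor. The α and β default values here are invented for illustration.

```cpp
#include <algorithm>
#include <cassert>
#include <queue>
#include <vector>

// Greedy assignment sketch: heaviest object goes to the least-loaded PE.
// cpu[i] is object i's measured CPU load; bytes[i] its communication volume.
// alpha/beta are illustrative per-message and per-byte costs.
std::vector<int> greedyAssign(const std::vector<double>& cpu,
                              const std::vector<double>& bytes,
                              int numPEs,
                              double alpha = 1e-4, double beta = 1e-8) {
    // Total object load = CPU load + communication cost (alpha + beta*m).
    std::vector<std::pair<double,int>> objs;  // (load, object id)
    for (size_t i = 0; i < cpu.size(); ++i)
        objs.push_back({cpu[i] + alpha + beta * bytes[i], (int)i});
    std::sort(objs.rbegin(), objs.rend());    // heaviest first

    // Min-heap of (current PE load, PE id): top() is the most underloaded PE.
    using P = std::pair<double,int>;
    std::priority_queue<P, std::vector<P>, std::greater<P>> pes;
    for (int p = 0; p < numPEs; ++p) pes.push({0.0, p});

    std::vector<int> assignment(cpu.size());
    for (auto& [load, id] : objs) {
        auto [peLoad, pe] = pes.top(); pes.pop();
        assignment[id] = pe;
        pes.push({peLoad + load, pe});  // PE's load grows by the object's load
    }
    return assignment;
}
```

With four objects of loads 4, 3, 2, 1 on two PEs, this yields two PEs of load 5 each.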

Pages 10-12: Strategy Example - GreedyCommLB (figure-only slides)

Page 13

Comparison of Strategies

Jacobi1D program with 2048 chares on 64 PEs and 10240 chares on 1024 PEs:

                      64 processors                  1024 processors
Strategy        Min load  Max load  Avg load   Min load  Max load  Avg load
(no LB)          13.952    15.505    14.388     42.801    45.971    44.784
GreedyRefLB      14.104    14.589    14.351     43.585    45.195    44.777
GreedyCommLB     13.748    14.396    14.025     40.519    46.922    43.777
RecBisectBfLB    11.701    13.771    12.709     35.907    48.889    43.953
MetisLB          14.061    14.506    14.341     41.477    48.077    44.772
RefineLB         14.043    14.977    14.388     42.801    45.971    44.783
RefineCommLB     14.015    15.176    14.388     42.801    45.971    44.783
OrbLB            11.350    12.414    11.891     31.269    44.940    38.200

Page 14

Comparison of Strategies

NAMD ATPase benchmark: 327,506 atoms; 31,811 chares (31,107 migratable); 1000 processors:

Strategy        Min load   Max load   Avg load
(no LB)         0          0.354490   0.197485
GreedyLB        0.190424   0.244135   0.197485
GreedyRefLB     0.191403   0.201179   0.197485
GreedyCommLB    0.197262   0.198238   0.197485
RefineLB        0.193369   0.200194   0.197485
RefineCommLB    0.193369   0.200194   0.197485
OrbLB           0.179689   0.220700   0.197485

Page 15

User Interfaces

Fully automatic load balancing:
- Nothing needs to be changed in the application code
- Load balancing happens periodically and transparently
- +LBPeriod controls the load balancing interval

User-controlled load balancing:
- Insert AtSync() calls at points where the application is ready for load balancing (a hint)
- The LB framework passes control back to ResumeFromSync() after migration finishes

Page 16

Migrating Objects

Moving data:
- The runtime packs object data into a message and sends it to the destination
- The runtime unpacks the data and re-creates the object
- The user needs to write a pup function for packing/unpacking object data
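The appeal of the pup approach is that one routine describes the object's state and is driven in either direction. The sketch below imitates that idea in self-contained C++; it is not the real Charm++ PUP::er API (which also supports sizing, arrays, and more), just an illustration of the pack/unpack symmetry.

```cpp
#include <cassert>
#include <cstring>
#include <vector>

// Minimal stand-in for a pup-style serializer: the same pup() routine
// either appends an object's fields to a buffer (PACK) or reads them
// back in the same order (UNPACK). Illustrative only, not Charm++ code.
class Pupper {
public:
    enum Mode { PACK, UNPACK };
    Pupper(Mode m, std::vector<char>& buf) : mode_(m), buf_(buf), pos_(0) {}

    template <typename T>
    void operator|(T& value) {               // mimics Charm++'s "p | x" syntax
        if (mode_ == PACK) {
            const char* p = reinterpret_cast<const char*>(&value);
            buf_.insert(buf_.end(), p, p + sizeof(T));
        } else {
            std::memcpy(&value, buf_.data() + pos_, sizeof(T));
            pos_ += sizeof(T);
        }
    }
private:
    Mode mode_;
    std::vector<char>& buf_;
    size_t pos_;                             // read cursor for UNPACK
};

// A migratable object describes all of its state in one pup() function,
// used both before migration (pack) and after arrival (unpack).
struct Particle {
    double x, y, z;
    int id;
    void pup(Pupper& p) { p | x; p | y; p | z; p | id; }
};
```

Because packing and unpacking share one routine, the two can never drift out of sync when fields are added.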

Page 17

Compiler Interface

Link-time options:
- -module: link load balancers as modules
- Multiple modules can be linked into one binary

Runtime options:
- +balancer: choose which load balancer to invoke
- Multiple load balancers can be specified:
  +balancer GreedyCommLB +balancer RefineLB

Page 18

NAMD case study

- Molecular dynamics: atoms move slowly
- Initial load balancing can be as simple as round-robin
- Load balancing is only needed once in a while, typically once every thousand steps
- A greedy balancer is followed by a refine strategy

Page 19

Load Balancing Steps

(Timeline figure: regular timesteps, then instrumented timesteps, a detailed/aggressive load balancing step, and later a refinement load balancing step.)

Page 20

Processor utilization against time on (a) 128 and (b) 1024 processors.

On 128 processors a single load balancing step suffices, but on 1024 processors we need a “refinement” step.

(Figure annotations: load balancing; aggressive load balancing; refinement load balancing.)

Page 21

Processor utilization across processors after (a) greedy load balancing and (b) refining.

Note that the underloaded processors are left underloaded (as they don't impact performance); refinement deals only with the overloaded ones.

(Figure annotation: some overloaded processors remain after the greedy step.)

Page 22

Profile view of a 3000-processor run of NAMD (white shows idle time).

Page 23

Load Balance Research with Blue Gene

Centralized load balancer:
- Communication bottleneck on processor 0
- Memory constraint

Fully distributed load balancer:
- Neighborhood balancing
- Works without global load information

Hierarchical distributed load balancer:
- Divide processors into groups
- Apply different strategies at each level