Irregular Graph Algorithms on Parallel Processing Systems

George M. Slota 1,2; Kamesh Madduri 1 (advisor); Sivasankaran Rajamanickam 2 (Sandia mentor)
1 Penn State University, 2 Sandia National Laboratories
[email protected], [email protected], [email protected]

SC15 PhD Forum, 19 Nov 2015

Page 1: Title slide

Page 2

Summary of accomplishments

Multistep graph connectivity algorithms; on average 2× faster than the prior state-of-the-art.

Manycore optimization methodology for graph algorithm implementation on GPUs.

FASCIA and FastPath subgraph counting and minimum-weight path finding programs; up to orders-of-magnitude execution time improvement over prior art.

PuLP partitioner; an order of magnitude faster, an order of magnitude less memory, and comparable or better partition quality than other state-of-the-art utilities. Scales to 100B+ edge graphs, which it can partition in minutes.

Distributed Graph Layout (DGL) methodology for distributed-memory graph storage; up to 12× performance improvement over naive methods.

Analysis of the largest publicly available web crawl (3.5B vertices, 129B edges) using techniques derived from the work above; the analytic suite completes in 20 minutes on 256 nodes.

Page 3

Graphs are...

Everywhere

Internet
Social networks, communication
Biology, chemistry
Scientific modeling, meshes, interactions

Figure sources: Franzosa et al. 2012, http://www.unc.edu/ unclng/Internet History.htm


Page 5

Graphs are...

Big

Internet: 50B+ pages indexed by Google, trillions of hyperlinks
Facebook: 800M users, 100B friendships
Human brain: 100B neurons, 1,000T synaptic connections

Figure sources: Facebook, Science Photo Library - PASIEKA via Getty Images


Page 7

Graphs are...

Complex

Graph analytics is listed as one of DARPA's 23 toughest mathematical challenges
Extremely variable: O(2^(n^2)) possible simple graph structures for n vertices
Real-world graph characteristics make computational analytics tough:

Skewed degree distributions
Small-world nature
Dynamically changing


Page 10

Scope of Thesis Research: Key challenges and goals

Challenge: Irregular and small-world graphs make parallelization difficult

Goal: Optimize graph algorithm design at the shared-memory level
Goal: Investigate distributed-memory graph layout (partitioning and ordering) and optimization
Goal: Introduce a methodology for core-to-system level parallelization of large-scale analytics

Challenge: Perform end-to-end execution of graph analytics on supercomputers

End-to-end: read in graph data, create a distributed representation, perform the analytic, output the results
Goal: Use lessons learned to minimize end-to-end execution times and allow scalability to massive graphs


Page 12

Shared-Memory Algorithm Optimization: Multistep for Graph Connectivity

Multistep approach to graph connectivity: weak connectivity, strong connectivity, and biconnectivity (Slota et al. 2014; Slota and Madduri 2014)

Key optimizations:

Emphasis on data locality: multiple queue levels
Minimized synchronization costs: few atomics and no critical sections
Optimized subroutines: direction-optimizing BFS, push- and queue-based color propagation
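The two-phase pattern behind Multistep can be illustrated on plain connectivity. The following is a minimal serial Python sketch of the idea, not the parallel implementation from the thesis: a BFS from a high-degree seed labels the dominant component, and label propagation then resolves the remaining vertices.

```python
from collections import deque

def multistep_cc(adj):
    """Connected components via the Multistep pattern (serial sketch).

    Phase 1: BFS from a high-degree seed labels the giant component
    (the thesis uses a direction-optimizing BFS here).
    Phase 2: min-label propagation resolves the remaining vertices.
    """
    n = len(adj)
    comp = list(range(n))  # each vertex starts in its own component
    seed = max(range(n), key=lambda v: len(adj[v]))

    # Phase 1: BFS from the seed.
    visited = [False] * n
    visited[seed] = True
    q = deque([seed])
    while q:
        u = q.popleft()
        comp[u] = seed
        for v in adj[u]:
            if not visited[v]:
                visited[v] = True
                q.append(v)

    # Phase 2: propagate minimum labels among the unvisited vertices.
    changed = True
    while changed:
        changed = False
        for u in range(n):
            if visited[u]:
                continue
            for v in adj[u]:
                if comp[v] < comp[u]:
                    comp[u] = comp[v]
                    changed = True
    return comp
```

For strong connectivity the same skeleton applies, with forward and backward reachability from the seed replacing the single BFS.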

Page 13

Shared-Memory Algorithm Optimization: Multistep for Graph Connectivity: Performance

On average 2× faster than the prior state-of-the-art for SCC; up to 7× faster for biconnectivity

SCC times are compared to the state-of-the-art (Hong et al. 2013) in terms of speedup versus the optimal serial algorithm (Tarjan's)

[Figure: speedup vs. Tarjan's algorithm on 1-16 cores for the Multistep and Hong algorithms, across the Twitter, ItWeb, WikiLinks, LiveJournal, Xyce, R-MAT_24, GNP_1, and GNP_10 graphs.]

Page 14

Shared-Memory Algorithm Optimization: Multistep for Graph Connectivity: Manycore Optimization

Optimized SCC code for high performance on manycore processors such as GPUs (Slota et al. 2015); up to 3.25× performance improvement over CPU; methodology generalizable to a broad class of graph algorithms

Key optimizations:
Loop manipulation for higher parallelism (Manhattan Collapse)
Shared-memory and other locality considerations
Warp- and team-based atomics and operations (team scan, team reduce, etc.)

[Figure: cross-platform comparison of the optimized manycore code on SNB, KNC, K20X, and K40M for the DBpedia, XyceTest, LiveJournal, and RMAT2M graphs. Performance given in billions of edges processed per second.]

Page 15

Shared-Memory Algorithm Optimization: FASCIA for Subgraph Counting and Enumeration

FASCIA: an implementation of the color-coding subgraph counting algorithm (Alon et al. 1995)

Subgraph counting in shared and distributed memory, and minimum-weight path finding on weighted networks (FastPath)

Key optimizations:

Abstraction of the key dynamic programming phase
Storage structures designed for optimal cache utilization
Communication- and computation-avoidance techniques
Multiple memory reduction strategies
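The dynamic programming phase being abstracted can be sketched for the simplest tree template, a k-vertex path. This is an illustrative serial Python version of one color-coding iteration (Alon et al. 1995); the function name and data layout are ours, not FASCIA's:

```python
def count_colorful_paths(adj, k, rng):
    """One color-coding iteration for a k-vertex path template (k >= 2).

    Vertices are colored uniformly at random with k colors; a DP over
    color sets then counts the simple paths whose k vertices all
    received distinct colors.  dp[v] maps a color bitmask S to the
    number of colorful paths ending at v that use exactly colors S.
    """
    n = len(adj)
    color = [rng.randrange(k) for _ in range(n)]
    full = (1 << k) - 1
    dp = [{1 << color[v]: 1} for v in range(n)]
    for _ in range(k - 1):  # grow paths one vertex at a time
        new = [dict() for _ in range(n)]
        for v in range(n):
            cv = 1 << color[v]
            for u in adj[v]:
                for S, cnt in dp[u].items():
                    if S & cv == 0:  # v's color must be unused in S
                        new[v][S | cv] = new[v].get(S | cv, 0) + cnt
        dp = new
    # Each path is counted once from each endpoint.
    return sum(d.get(full, 0) for d in dp) // 2
```

Averaging over many random colorings and dividing by the probability k!/k^k that a fixed path becomes colorful yields an unbiased count estimate, which is the estimator such codes scale up.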

Page 16

Shared-Memory Algorithm Optimization: FASCIA for Subgraph Counting and Enumeration: Performance

Can count nontrivial subgraphs in minutes on multi-million edge graphs in shared memory (top) and on multi-billion edge graphs in distributed memory (bottom)

Orders-of-magnitude improvement over prior art (Slota and Madduri 2013, 2014, 2015)

[Figure: single-iteration execution time (s) by template (U5-1 through U12-2) for the Portland and Orkut graphs in shared memory (top), and by template (U5-1 through U10-2) for the sk-2005 and Twitter graphs in distributed memory (bottom).]

Page 17

Distributed-Memory Layout for Graphs: Partitioning and Ordering

Partitioning: how to distribute vertices and edges among MPI tasks

Objectives: minimize both the number of edges between tasks (cut) and the maximal number of edges leaving any given task (max cut)
Constraints: balance vertices per part and edges per part
We want balanced partitions with low cut, to minimize communication, computation, and idle time among parts!

Ordering: how to order intra-part vertices and edges in memory

Ordering affects execution time by optimizing for memory access locality and cache utilization

Both are very difficult with small-world graphs
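The objectives and constraints above are straightforward to state as code. A small sketch (helper name and return values are illustrative) that evaluates a given vertex assignment:

```python
def partition_quality(adj, part, nparts):
    """Evaluate a partition: total edge cut, max per-part out-edges
    (max cut), and vertex balance relative to a perfect split."""
    cut = 0
    out = [0] * nparts    # cut edges leaving each part
    verts = [0] * nparts  # vertices per part
    for u, nbrs in enumerate(adj):
        verts[part[u]] += 1
        for v in nbrs:
            if part[u] != part[v]:
                cut += 1
                out[part[u]] += 1
    cut //= 2  # each cut edge was seen from both endpoints
    vbal = max(verts) / (len(adj) / nparts)  # 1.0 = perfectly balanced
    return cut, max(out), vbal
```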

Page 18

Distributed-Memory Layout for Graphs: Partitioning with PuLP

PuLP partitioner: generates multi-constraint, multi-objective partitions

Designed specifically to partition small-world and irregular graphs

The only partitioner available that is scalable to multi-billion edge graphs and able to satisfy multiple constraints and objectives, all without sacrificing cut quality

PuLP algorithm: initialize, balance vertices, refine, balance edges, refine, balance and minimize cut
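One refinement step of the sequence above can be sketched as a balance-constrained label propagation pass. This simplified serial loop illustrates the pattern PuLP iterates, not its actual multi-objective implementation; the imbalance threshold is an assumed parameter:

```python
def lp_refine(adj, part, nparts, max_imbalance=1.1):
    """One label-propagation refinement pass: move each vertex to the
    part holding most of its neighbors, unless the move would push
    that part past the vertex-balance constraint."""
    n = len(adj)
    sizes = [part.count(p) for p in range(nparts)]
    cap = max_imbalance * n / nparts  # max vertices allowed per part
    for u in range(n):
        counts = [0] * nparts
        for v in adj[u]:
            counts[part[v]] += 1
        best = max(range(nparts), key=lambda p: counts[p])
        if best != part[u] and sizes[best] + 1 <= cap:
            sizes[part[u]] -= 1
            sizes[best] += 1
            part[u] = best
    return part
```

Interleaving passes like this with explicit balancing phases, one per constraint or objective, gives the initialize/balance/refine structure on this slide.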

Page 19

Distributed-Memory Layout for Graphs: Partitioning with PuLP: Performance

PuLP demonstrates a 14.5× average speedup across a suite of test graphs compared to METIS and ParMETIS

PuLP uses up to 38× less memory than METIS or KaFFPa

PuLP's generated partitions are equivalent or better in quality

[Figure: execution times for PuLP relative to METIS and ParMETIS (top) and partition quality, in terms of edge cut, for PuLP and ParMETIS (bottom), across 2-128 partitions of the LiveJournal, R-MAT, uk-2005, and Twitter graphs. Partitioners: PULP, PULP-M, PULP-MM, ParMETIS, METIS-M (serial), PULP-M (serial).]

Page 20

Distributed-Memory Layout for Graphs: Performance Results

DGL: distributed graph layout (partitioning + ordering)

Developed a new fast ordering method and analyzed the graph-analytic performance impact of PuLP partitioning and our ordering strategy

PuLP partitioning can give up to a 12× speedup on communication times; our vertex ordering can give up to a 5× speedup on computation times, relative to naive methods
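The ordering side can be illustrated with a plain BFS relabeling, which already captures why locality-aware ids help: neighbors receive nearby ids, so adjacency accesses hit cache more often. This sketch is not the DGL ordering itself (that is a separate heuristic developed in the thesis); RCM additionally orders neighbors by degree and reverses the result.

```python
from collections import deque

def bfs_order(adj):
    """Relabel vertices in BFS order, seeding each component from a
    low-degree vertex; returns newid, with newid[old_id] = new_id."""
    n = len(adj)
    visited = [False] * n
    order = []
    for s in sorted(range(n), key=lambda v: len(adj[v])):
        if visited[s]:
            continue
        visited[s] = True
        q = deque([s])
        while q:
            u = q.popleft()
            order.append(u)
            for v in adj[u]:
                if not visited[v]:
                    visited[v] = True
                    q.append(v)
    newid = [0] * n
    for new, old in enumerate(order):
        newid[old] = new
    return newid
```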

[Figure: impact of partitioning quality (left: Random, PuLP, METIS) and intra-task vertex ordering (right: Random, DGL, RCM) on execution times of graph analytics, shown as speedup vs. random for the Twitter, uk-2005, and WebBase graphs.]

Page 21

Large-scale Graph Analytics

Using optimizations and techniques from the thesis research, we implemented an analytic suite for large-scale analysis (connectivity, k-core, community detection, PageRank, centrality measures)

Ran the algorithm suite on only 256 nodes of the Blue Waters system; full end-to-end execution completes in 20 minutes

Novel insights gathered from the analysis: the largest communities were identified, and community sizes appear to follow a scale-free or heavy-tailed distribution

Largest Communities Discovered (numbers in millions)

Pages  Internal Links  External Links  Rep. Page
112    2126            32              YouTube
18     548             277             Tumblr
9      516             84              Creative Commons
8      186             85              WordPress
7      57              83              Amazon
6      41              21              Flickr
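As one concrete example from the analytic suite, k-core decomposition can be sketched with the standard peeling algorithm (a serial, quadratic-time illustration; the suite's implementation is parallel and distributed):

```python
def core_numbers(adj):
    """k-core decomposition by peeling: repeatedly remove a vertex of
    minimum remaining degree; the running maximum of those degrees at
    removal time is each vertex's core number."""
    n = len(adj)
    deg = [len(a) for a in adj]
    core = [0] * n
    removed = [False] * n
    k = 0
    for _ in range(n):
        u = min((v for v in range(n) if not removed[v]),
                key=lambda v: deg[v])
        k = max(k, deg[u])  # peeling level never decreases
        core[u] = k
        removed[u] = True
        for v in adj[u]:
            if not removed[v]:
                deg[v] -= 1
    return core
```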

Page 22

Large-scale Graph Analytics: Comparison to Other Approaches

We compare our implementation approach to several popular distributed/shared-memory frameworks (GraphX, PowerGraph, PowerLyra, FlashGraph)

Across the suite of test graphs and frameworks, our PageRank implementation (left) is 37× faster on average and our WCC implementation (right) is 97× faster on average, while running on 16 nodes

Comparison of graph analysis frameworks for PageRank (left) and WCC (right).
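For reference, the PageRank computation underlying this comparison can be sketched as serial power iteration; the implementations compared here are distributed codes, and the damping factor and iteration count below are assumed defaults, so only the numerical scheme is shown:

```python
def pagerank(adj, d=0.85, iters=50):
    """Power-iteration PageRank over an out-adjacency list."""
    n = len(adj)
    pr = [1.0 / n] * n
    for _ in range(iters):
        new = [(1.0 - d) / n] * n  # teleport term
        for u in range(n):
            if adj[u]:
                share = d * pr[u] / len(adj[u])
                for v in adj[u]:
                    new[v] += share
            else:  # dangling vertex: spread its rank evenly
                for v in range(n):
                    new[v] += d * pr[u] / n
        pr = new
    return pr
```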

Page 23

Conclusions and Going Forward

Real-world graphs are big, complex, and difficult to run on effectively in parallel
Demonstrated various optimization approaches for shared and distributed memory
Hopefully this work will enable:

Implementation of more complex analytics for large networks
Scaling to larger networks and to larger future systems
Greater insight into large networks than currently possible

Thanks to NSF, Penn State (Kamesh Madduri), Sandia Labs (Siva Rajamanickam), and NCSA!

This research is part of the Blue Waters sustained-petascale computing project, which is supported by the National Science Foundation (awards OCI-0725070, ACI-1238993, and ACI-1444747) and the state of Illinois. Blue Waters is a joint effort of the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications. This work is also supported by NSF grants ACI-1253881, CCF-1439057, and the DOE Office of Science through the FASTMath SciDAC Institute. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.