
Page 1:

How to Boost the Performance of Your MPI and PGAS Applications with MVAPICH2 Libraries

The MVAPICH Team

The Ohio State University

E-mail: [email protected]

http://www.cse.ohio-state.edu/~panda

A Tutorial at MVAPICH User Group 2017

Page 2:

Parallel Programming Models Overview

[Figure: three abstract machine models, each with processes P1, P2, P3]

• Shared Memory Model (SHMEM, DSM): P1–P3 share a single memory

• Distributed Memory Model (MPI, the Message Passing Interface): each process has its own memory

• Partitioned Global Address Space (PGAS) (Global Arrays, UPC, Chapel, X10, CAF, …): per-process memories presented under a logical shared memory

• Programming models provide abstract machine models

• Models can be mapped onto different types of systems

– e.g., Distributed Shared Memory (DSM), MPI within a node, etc.

• PGAS models and hybrid MPI+PGAS models are gradually gaining importance

Page 3:

Supporting Programming Models for Multi-Petaflop and Exaflop Systems: Challenges

[Figure: co-design stack, with middleware co-design opportunities and challenges (performance, scalability, resilience) across the various layers]

• Application Kernels/Applications

• Programming Models: MPI, PGAS (UPC, Global Arrays, OpenSHMEM), CUDA, OpenMP, OpenACC, Cilk, Hadoop (MapReduce), Spark (RDD, DAG), etc.

• Communication Library or Runtime for Programming Models: point-to-point communication, collective communication, energy-awareness, synchronization and locks, I/O and file systems, fault tolerance

• Networking Technologies (InfiniBand, 40/100GigE, Aries, and Omni-Path), Multi-/Many-core Architectures, and Accelerators (GPU and MIC)

Page 4:

Designing (MPI+X) for Exascale

• Scalability for million to billion processors

– Support for highly-efficient inter-node and intra-node communication (both two-sided and one-sided)

• Scalable Collective communication – Offloaded

– Non-blocking

– Topology-aware

• Balancing intra-node and inter-node communication for next generation multi-/many-core (128-1024 cores/node)

– Multiple end-points per node

• Support for efficient multi-threading

• Integrated Support for GPGPUs and Accelerators

• Fault-tolerance/resiliency

• QoS support for communication and I/O

• Support for Hybrid MPI+PGAS programming

• MPI + OpenMP, MPI + UPC, MPI + OpenSHMEM, CAF, MPI + UPC++…

• Virtualization

• Energy-Awareness

Page 5:

Overview of the MVAPICH2 Project

• High Performance open-source MPI Library for InfiniBand, Omni-Path, Ethernet/iWARP, and RDMA over Converged Ethernet (RoCE)

– MVAPICH (MPI-1), MVAPICH2 (MPI-2.2 and MPI-3.0), Started in 2001, First version available in 2002

– MVAPICH2-X (MPI + PGAS), Available since 2011

– Support for GPGPUs (MVAPICH2-GDR) and MIC (MVAPICH2-MIC), Available since 2014

– Support for Virtualization (MVAPICH2-Virt), Available since 2015

– Support for Energy-Awareness (MVAPICH2-EA), Available since 2015

– Support for InfiniBand Network Analysis and Monitoring (OSU INAM) since 2015

– Used by more than 2,800 organizations in 85 countries

– More than 424,000 (> 0.4 million) downloads from the OSU site directly

– Empowering many TOP500 clusters (June ‘17 ranking)

• 1st, 10,649,600-core (Sunway TaihuLight) at National Supercomputing Center in Wuxi, China

• 15th, 241,108-core (Pleiades) at NASA

• 20th, 462,462-core (Stampede) at TACC

• 44th, 74,520-core (Tsubame 2.5) at Tokyo Institute of Technology

– Available with software stacks of many vendors and Linux Distros (RedHat and SuSE)

– http://mvapich.cse.ohio-state.edu

• Empowering Top500 systems for over a decade

– System-X from Virginia Tech (3rd in Nov 2003, 2,200 processors, 12.25 TFlops) ->

– Stampede at TACC (12th in Jun'16, 462,462 cores, 5.168 PFlops)

Page 6:

Architecture of MVAPICH2 Software Family

High Performance Parallel Programming Models

Message Passing Interface (MPI)

PGAS (UPC, OpenSHMEM, CAF, UPC++)

Hybrid --- MPI + X (MPI + PGAS + OpenMP/Cilk)

High Performance and Scalable Communication Runtime with Diverse APIs and Mechanisms: Point-to-point Primitives, Collective Algorithms, Energy-Awareness, Remote Memory Access, I/O and File Systems, Fault Tolerance, Virtualization, Active Messages, Job Startup, Introspection & Analysis

Support for Modern Networking Technology (InfiniBand, iWARP, RoCE, Omni-Path)

Support for Modern Multi-/Many-core Architectures (Intel-Xeon, OpenPower, Xeon-Phi (MIC, KNL), NVIDIA GPGPU)

Transport Protocols: RC, XRC, UD, DC

Modern Features: UMR, ODP, SR-IOV, Multi-Rail

Transport Mechanisms: Shared Memory, CMA, IVSHMEM

Modern Features: MCDRAM*, NVLink*, CAPI*

* Upcoming

Page 7:

• Research is done to explore new designs

• Designs are first presented in conference/journal publications

• Best-performing designs are incorporated into the codebase

• Rigorous Q&A procedure before making a release

– Exhaustive unit testing

– Various test procedures on diverse range of platforms and interconnects

– Performance tuning

– Applications-based evaluation

– Evaluation on large-scale systems

• Even alpha and beta versions go through the above testing

Strong Procedure for Design, Development and Release

Page 8:

MVAPICH2 Software Family

Requirements / Library

MPI with IB, iWARP and RoCE / MVAPICH2

Advanced MPI, OSU INAM, PGAS and MPI+PGAS with IB and RoCE / MVAPICH2-X

MPI with IB & GPU / MVAPICH2-GDR

MPI with IB & MIC / MVAPICH2-MIC

HPC Cloud with MPI & IB / MVAPICH2-Virt

Energy-aware MPI with IB, iWARP and RoCE / MVAPICH2-EA

MPI Energy Monitoring Tool / OEMT

InfiniBand Network Analysis and Monitoring / OSU INAM

Microbenchmarks for Measuring MPI and PGAS Performance / OMB

Page 9:

• Released on 08/10/2017

• Major Features and Enhancements

– Based on MPICH-3.2

– Enhance performance of point-to-point operations for CH3-Gen2 (InfiniBand), CH3-PSM, and CH3-PSM2 (Omni-Path) channels

– Improve performance for MPI-3 RMA operations

– Introduce support for Cavium ARM (ThunderX) systems

– Improve support for process to core mapping on many-core systems

• New environment variable MV2_THREADS_BINDING_POLICY for multi-threaded MPI and MPI+OpenMP applications

• Support `linear' and `compact' placement of threads

• Warn user if over-subscription of core is detected

– Improve launch time for large-scale jobs with mpirun_rsh

– Add support for non-blocking Allreduce using Mellanox SHARP

– Efficient support for different Intel Knights Landing (KNL) models

– Improve performance for Intra- and Inter-node communication for OpenPOWER architecture

– Improve support for large processes per node and huge pages on SMP systems

– Enhance collective tuning for many architectures/systems

– Enhance support for MPI_T PVARs and CVARs

MVAPICH2 2.3b

Page 10:

• Job start-up

• Point-to-point Inter-node Protocol

• Transport Type Selection

• Multi-rail and QoS

• Process Mapping and Point-to-point Intra-node Protocols

• Collectives

• MPI_T Support

Presentation Overview

Page 11:

• Near-constant MPI and OpenSHMEM

initialization time at any process count

• 10x and 30x improvement in startup time

of MPI and OpenSHMEM respectively at

16,384 processes

• Memory consumption reduced for remote

endpoint information by O(processes per

node)

• 1GB Memory saved per node with 1M

processes and 16 processes per node

Towards High Performance and Scalable Startup at Exascale

[Charts: Job Startup Performance and Memory Required to Store Endpoint Information. Legend: a: On-demand Connection, b: PMIX_Ring, c: PMIX_Ibarrier, d: PMIX_Iallgather, e: Shmem-based PMI; P: PGAS (state of the art), M: MPI (state of the art), O: PGAS/MPI (optimized)]

On-demand Connection Management for OpenSHMEM and OpenSHMEM+MPI. S. Chakraborty, H. Subramoni, J. Perkins, A. A. Awan, and D. K. Panda, 20th International Workshop on High-level Parallel Programming Models and Supportive Environments (HIPS '15)

PMI Extensions for Scalable MPI Startup. S. Chakraborty, H. Subramoni, A. Moody, J. Perkins, M. Arnold, and D. K. Panda, Proceedings of the 21st European MPI Users' Group Meeting (EuroMPI/Asia '14)

Non-blocking PMI Extensions for Fast MPI Startup. S. Chakraborty, H. Subramoni, A. Moody, A. Venkatesh, J. Perkins, and D. K. Panda, 15th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid '15)

SHMEMPMI – Shared Memory based PMI for Improved Performance and Scalability. S. Chakraborty, H. Subramoni, J. Perkins, and D. K. Panda, 16th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid '16)


Page 12:

Startup Performance on KNL + Omni-Path

[Charts: MPI_Init time on TACC Stampede-KNL (Intel MPI 2018 beta vs. MVAPICH2 2.3a) and MPI_Init & Hello World time on Oakforest-PACS (MVAPICH2 2.3a), plotted against number of processes (64 to 64K)]

• MPI_Init takes 51 seconds on 231,956 processes on 3,624 KNL nodes (Stampede, full scale)

• 8.8 times faster than Intel MPI at 128K processes (Courtesy: TACC)

• At 64K processes, MPI_Init and Hello World take 5.8s and 21s respectively (Oakforest-PACS)

• All numbers reported with 64 processes per node

New designs available in MVAPICH2-2.3a and as patch for SLURM-15.08.8 and SLURM-16.05.1
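To repeat this kind of measurement on your own cluster, a small timing harness like the sketch below can be used. This is a minimal example, not the benchmark used above: it assumes a POSIX clock_gettime() timer (MPI_Wtime() is unavailable before MPI_Init()) and approximates "Hello World" time with a barrier after initialization.

#include <mpi.h>
#include <stdio.h>
#include <time.h>

/* Wall-clock time in seconds; used because MPI_Wtime() is only
 * valid after MPI_Init() has completed. */
static double now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(int argc, char **argv)
{
    double t0 = now();
    MPI_Init(&argc, &argv);          /* startup cost being measured */
    double t1 = now();

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* "Hello World" time approximated as init + first barrier across ranks */
    MPI_Barrier(MPI_COMM_WORLD);
    double t2 = now();

    if (rank == 0)
        printf("%d processes: MPI_Init %.2f s, Hello World %.2f s\n",
               size, t1 - t0, t2 - t0);

    MPI_Finalize();
    return 0;
}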

Page 13:

• SHMEMPMI allows MPI processes to directly read remote endpoint (EP) information from the process

manager through shared memory segments

• Only a single copy per node - O(processes per node) reduction in memory usage

• Estimated savings of 1GB per node with 1 million processes and 16 processes per node

• Up to 1,000 times faster PMI Gets compared to default design

• Available for MVAPICH2 2.2rc1 and SLURM-15.08.8

Process Management Interface (PMI) over Shared Memory (SHMEMPMI)

[Charts: Time Taken by one PMI_Get vs. processes per node (Default vs. SHMEMPMI), and Memory Usage for Remote EP Information vs. processes per job (Fence/Allgather, Default vs. Shmem); 16x measured improvement, estimated 1,000x at scale]

Page 14:

On-demand Connection Management for OpenSHMEM+MPI

[Charts: Breakdown of OpenSHMEM Startup (Connection Setup, PMI Exchange, Memory Registration, Shared Memory Setup, Other) and Performance of OpenSHMEM Initialization and Hello World (Static vs. On-demand), plotted against number of processes]

• Static connection establishment wastes memory and takes a lot of time

• On-demand connection management improves OpenSHMEM initialization time by 29.6 times

• Time taken for Hello World reduced by 8.31 times at 8,192 processes

• Available since MVAPICH2-X 2.1rc1

Page 15:

Using SLURM as launcher

• Use PMI2

– ./configure --with-pm=slurm --with-pmi=pmi2

– srun --mpi=pmi2 ./a.out

• Use PMI Extensions

– Patch for SLURM available at

http://mvapich.cse.ohio-state.edu/download/

– Patches available for SLURM 15, 16, and 17

– PMI Extensions are automatically detected by

MVAPICH2

Using mpirun_rsh as launcher

• MV2_MT_DEGREE

– degree of the hierarchical tree used by

mpirun_rsh

• MV2_FASTSSH_THRESHOLD

– #nodes beyond which hierarchical-ssh scheme is

used

• MV2_NPROCS_THRESHOLD

– #nodes beyond which file-based communication

is used for hierarchical-ssh during start up

How to Get the Best Startup Performance with MVAPICH2?

• MV2_HOMOGENEOUS_CLUSTER=1 //Set for homogeneous clusters

• MV2_ON_DEMAND_UD_INFO_EXCHANGE=1 //Enable UD based address exchange

Page 16:

• Job start-up

• Point-to-point Inter-node Protocol

• Transport Type Selection

• Multi-rail and QoS

• Process Mapping and Point-to-point Intra-node Protocols

• Collectives

• MPI_T Support

Presentation Overview

Page 17:

Inter-node Point-to-Point Tuning: Eager Thresholds

• Switching point from Eager to Rendezvous transfer

• Default: architecture dependent on common platforms, chosen to balance performance and memory footprint

• The threshold can be modified by users to get smooth performance across message sizes

• mpirun_rsh -np 2 -hostfile hostfile MV2_IBA_EAGER_THRESHOLD=32K a.out

• Memory footprint can increase along with the eager threshold

[Charts: Eager vs. Rendezvous latency showing the switch at the eager threshold, and Impact of Eager Threshold on latency for eager_th = 1K to 32K across message sizes]
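To locate the eager/rendezvous switching point on a given platform, a two-process ping-pong sweep can be run while varying MV2_IBA_EAGER_THRESHOLD. The sketch below is a minimal version of such a test (iteration count and buffer sizes are arbitrary choices; the OSU Micro-Benchmarks' osu_latency reports the same quantity).

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Ping-pong latency sweep: the reported latency typically jumps near
 * MV2_IBA_EAGER_THRESHOLD, where the library switches from the eager
 * to the rendezvous protocol. Run with 2 processes on 2 nodes. */
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int iters = 1000;
    char *buf = malloc(64 * 1024);
    memset(buf, 0, 64 * 1024);

    for (int size = 1; size <= 64 * 1024; size *= 2) {
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double lat = (MPI_Wtime() - t0) * 1e6 / (2.0 * iters);
        if (rank == 0)
            printf("%7d bytes: %8.2f us\n", size, lat);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}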

Page 18:

• Application processes schedule communication operation

• Network adapter progresses communication in the background

• Application process free to perform useful compute in the foreground

• Overlap of computation and communication => Better Overall Application

Performance

• Increased buffer requirement

• Poor communication performance if used for all types of communication

operations

Analyzing Overlap Potential of Eager Protocol

[Diagram: sender and receiver schedule send/receive operations; the network interface cards progress the transfer in the background while the application processes compute, and completion is detected at the next check]

[Chart: impact of changing the eager threshold on performance of a multi-pair message-rate benchmark with 32 processes on Stampede (computation, communication, progress)]

Page 19:

• Application processes schedule communication operation

• Application process free to perform useful compute in the

foreground

• Little communication progress in the background

• All communication takes place at final synchronization

• Reduced buffer requirement

• Good communication performance if used for large

message sizes and operations where communication

library is progressed frequently

• Poor overlap of computation and communication => Poor

Overall Application Performance

Analyzing Overlap Potential of Rendezvous Protocol

[Diagram: sender and receiver schedule send/receive operations; the RTS/CTS handshake and data transfer make little progress while the processes compute, so repeated completion checks return "Not Complete" until the final synchronization]
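The difference can be observed with a small experiment: post a large non-blocking send, run a compute loop, and compare the total time with and without intermediate MPI_Test calls that let the library progress the RTS/CTS handshake. The sketch below is illustrative only; the 8 MB message size and the synthetic compute kernel are our own assumptions.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Rough overlap probe: a rendezvous-range Isend overlapped with
 * computation, with and without explicit progress via MPI_Test. */
static double compute_chunk(double x)
{
    for (int i = 0; i < 100000; i++)
        x = x * 1.0000001 + 1e-9;
    return x;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 8 * 1024 * 1024;      /* 8 MB: well above typical eager range */
    char *buf = malloc(n);
    memset(buf, 0, n);
    MPI_Request req;
    double sink = 0.0;

    for (int progress = 0; progress <= 1; progress++) {
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        if (rank == 0) {
            MPI_Isend(buf, n, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &req);
            for (int i = 0; i < 200; i++) {
                sink = compute_chunk(sink);
                if (progress) {          /* let the library progress RTS/CTS */
                    int done;
                    MPI_Test(&req, &done, MPI_STATUS_IGNORE);
                }
            }
            MPI_Wait(&req, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, n, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }
        double t1 = MPI_Wtime();
        if (rank == 0)
            printf("progress=%d: %.3f s (sink=%g)\n", progress, t1 - t0, sink);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}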

Page 20:

Dynamic and Adaptive MPI Point-to-point Communication Protocols

[Diagram: example communication pattern between processes 0–3 on Node 1 and processes 4–7 on Node 2, with per-pair eager thresholds. Default: 16 KB for every pair. Manually Tuned: 128 KB for every pair. Dynamic + Adaptive: 32 KB, 64 KB, 128 KB, and 32 KB chosen per pair]

H. Subramoni, S. Chakraborty, D. K. Panda, Designing Dynamic & Adaptive MPI Point-to-Point Communication Protocols for Efficient Overlap of Computation & Communication, ISC'17 - Best Paper

[Charts: Execution Time of Amber and Relative Memory Consumption of Amber at 128–1K processes, comparing Default, Threshold=17K, Threshold=64K, Threshold=128K, and Dynamic Threshold]

Default: poor overlap, low memory requirement (low performance, high productivity)

Manually Tuned: good overlap, high memory requirement (high performance, low productivity)

Dynamic + Adaptive: good overlap, optimal memory requirement (high performance, high productivity)

Desired Eager Threshold per process pair: 0–4: 32 KB, 1–5: 64 KB, 2–6: 128 KB, 3–7: 32 KB

Page 21:

Dynamic and Adaptive Tag Matching

[Charts: Normalized Total Tag Matching Time at 512 Processes, normalized to Default (lower is better), and Normalized Memory Overhead per Process at 512 Processes, compared to Default (lower is better)]

Adaptive and Dynamic Design for MPI Tag Matching; M. Bayatpour, H. Subramoni, S. Chakraborty, and D. K. Panda; IEEE Cluster 2016. [Best Paper Nominee]

Challenge: Tag matching is a significant overhead for receivers. Existing solutions are static, do not adapt dynamically to the communication pattern, and do not consider memory overhead.

Solution: A new tag matching design that dynamically adapts to communication patterns, uses different strategies for different ranks, and bases decisions on the number of request objects that must be traversed before hitting the required one.

Results: Better performance than other state-of-the-art tag-matching schemes, with minimum memory consumption. Will be available in future MVAPICH2 releases.

Page 22:

Intra-node Point-to-point Performance on ARMv8

Platform: ARMv8 (aarch64) processor with 96 cores, dual-socket CPU; each socket contains 48 cores.

[Charts: Small Message Latency, Large Message Latency, Bandwidth, and Bi-directional Bandwidth, MVAPICH2 vs. OpenMPI-v2.1.1]

• Small message latency: 0.74 us (MVAPICH2) vs. 0.92 us (OpenMPI), a 19.5% improvement at 4 bytes

• Large message latency: 53% better at 4K, 28% better at 2M

• Bandwidth: 38% better at 2M; bi-directional bandwidth: 1.85X at 2M

Available in MVAPICH2 2.3b

** OpenMPI version 2.2.1 used with "--mca btl self,sm"

Page 23:

• Job start-up

• Point-to-point Inter-node Protocol

• Transport Type Selection

• Multi-rail and QoS

• Process Mapping and Point-to-point Intra-node Protocols

• Collectives

• MPI_T Support

Presentation Overview

Page 24:

Hybrid (UD/RC/XRC) Mode in MVAPICH2

• Both UD and RC/XRC have benefits

• Hybrid mode gives the best of both

• Enabled by configuring MVAPICH2 with the --enable-hybrid option

• Available since MVAPICH2 1.7 as an integrated interface

[Chart: Performance with HPCC Random Ring (latency vs. number of processes, 128–1024) comparing UD, Hybrid, and RC; 26%, 40%, 30%, and 38% callouts highlighted]

• Refer to Running with Hybrid UD-RC/XRC section of MVAPICH2 user guide for more information

• http://mvapich.cse.ohio-state.edu/static/media/mvapich/mvapich2-2.3a-userguide.html#x1-690006.11

MV2_USE_UD_HYBRID: enable/disable use of the UD transport in Hybrid mode. Default: Enabled. Note: always enable.

MV2_HYBRID_ENABLE_THRESHOLD_SIZE: job size (in number of processes) beyond which hybrid mode is enabled. Default: 1024. Note: RC/XRC connections are used while the job size is below the threshold.

MV2_HYBRID_MAX_RC_CONN: maximum number of RC or XRC connections created per process; limits the amount of connection memory. Default: 64. Note: prevents HCA QP cache thrashing.

Page 25:

Minimizing Memory Footprint by Direct Connect (DC) Transport

[Diagram: processes P0–P7 on Node 0 through Node 3, connected through the IB network]

• Constant connection cost (One QP for any peer)

• Full Feature Set (RDMA, Atomics etc)

• Separate objects for send (DC Initiator) and receive (DC Target)

– DC Target identified by “DCT Number”

– Messages routed with (DCT Number, LID)

– Requires same “DC Key” to enable communication

• Available since MVAPICH2-X 2.2a

[Charts: normalized execution time of NAMD (apoa1, large data set) at 160–620 processes and connection memory footprint for Alltoall at 80–640 processes, comparing RC, DC-Pool, UD, and XRC]

H. Subramoni, K. Hamidouche, A. Venkatesh, S. Chakraborty and D. K. Panda, Designing MPI Library with Dynamic Connected Transport (DCT) of InfiniBand: Early Experiences. IEEE International Supercomputing Conference (ISC '14)

Page 26:

• Job start-up

• Point-to-point Inter-node Protocol

• Transport Type Selection

• Multi-rail and QoS

• Process Mapping and Point-to-point Intra-node Protocols

• Collectives

• MPI_T Support

Presentation Overview

Page 27:

MVAPICH2 Multi-Rail Design

• What is a rail?

– HCA, Port, Queue Pair

• Automatically detects and uses all active HCAs in a system

– Automatically handles heterogeneity

• Supports multiple rail usage policies

– Rail Sharing – Processes share all available rails

– Rail Binding – Specific processes are bound to specific rails

Page 28:

Performance Tuning on Multi-Rail Clusters

[Charts: Impact of Default Rail Binding on Message Rate (Single-Rail vs. Dual-Rail) and Impact of Advanced Multi-rail Tuning on Message Rate (Use First, Round Robin, Scatter, Bunch), plotted against message size]

MV2_IBA_HCA: manually set the HCA to be used. Default: unset. Note: to get the names of the HCAs, run ibstat | grep "^CA".

MV2_DEFAULT_PORT: select the port to use on an active multi-port HCA. Default: 0. Note: set to use a different port.

MV2_RAIL_SHARING_LARGE_MSG_THRESHOLD: threshold beyond which striping will take place. Default: 16 Kbyte.

MV2_RAIL_SHARING_POLICY: choose the multi-rail rail sharing/binding policy. Default: Rail Binding in Round Robin mode. Note: for rail sharing set to USE_FIRST or ROUND_ROBIN; set to FIXED_MAPPING for advanced rail binding options.

MV2_PROCESS_TO_RAIL_MAPPING: determines how HCAs will be mapped to the rails. Default: BUNCH. Options: SCATTER and a custom list.

• Advanced tuning can result in better performance

• Refer to the Enhanced design for Multiple-Rail section of the MVAPICH2 user guide for more information

• http://mvapich.cse.ohio-state.edu/static/media/mvapich/mvapich2-2.3a-userguide.html#x1-700006.12

[Chart: Impact of Default Message Striping on Bandwidth (Single-Rail vs. Dual-Rail); 98%, 130%, and 7% callouts highlighted]

Two 24-core Magny Cours nodes with two Mellanox ConnectX QDR adapters

Six pairs with OSU Multi-Pair bandwidth and messaging rate benchmark

Page 29:

• Job start-up

• Point-to-point Inter-node Protocol

• Transport Type Selection

• Multi-rail and QoS

• Process Mapping and Point-to-point Intra-node Protocols

• Collectives

• MPI_T Support

Presentation Overview

Page 30:

Process Mapping support in MVAPICH2

• Process-mapping support in MVAPICH2 (available since v1.4) for MPI rank-to-core binding

• Preset binding policies: bunch (default), scatter, hybrid. Granularity: core (default), socket, numanode

• User-defined binding

• MVAPICH2 detects the processor architecture at job-launch

Page 31:

Preset Process-binding Policies – Bunch

• “Core” level “Bunch” mapping (Default)

– MV2_CPU_BINDING_POLICY=bunch

• “Socket/Numanode” level “Bunch” mapping

– MV2_CPU_BINDING_LEVEL=socket MV2_CPU_BINDING_POLICY=bunch

Page 32:

Preset Process-binding Policies – Scatter

• “Core” level “Scatter” mapping

– MV2_CPU_BINDING_POLICY=scatter

• “Socket/Numanode” level “Scatter” mapping

– MV2_CPU_BINDING_LEVEL=socket MV2_CPU_BINDING_POLICY=scatter

Page 33:

• A new process binding policy – “hybrid”

– MV2_CPU_BINDING_POLICY = hybrid

• A new environment variable for co-locating Threads with MPI Processes

– MV2_THREADS_PER_PROCESS = k

– Automatically set to OMP_NUM_THREADS if OpenMP is being used

– Provides a hint to the MPI runtime to spare resources for application threads.

• New variable for thread binding with respect to the parent process and architecture

– MV2_THREADS_BINDING_POLICY = {linear | compact}

• Linear – binds MPI ranks and OpenMP threads sequentially (one after the other)

– Recommended to be used on non-hyperthreaded systems with MPI+OpenMP

• Compact – binds MPI rank to physical-core and locates respective OpenMP threads on hardware threads

– Recommended to be used on multi-/many-cores e.g., KNL, POWER8, and hyper-threaded Xeon, etc.

Process and thread binding policies in hybrid MPI+Threads
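A quick way to verify where ranks and threads actually land under these policies is a probe like the hedged sketch below (sched_getcpu() is a Linux-specific call chosen for illustration; it is not part of MVAPICH2).

#define _GNU_SOURCE
#include <mpi.h>
#include <omp.h>
#include <sched.h>
#include <stdio.h>

/* Prints which CPU each MPI rank / OpenMP thread is running on,
 * so the effect of MV2_CPU_BINDING_POLICY=hybrid and
 * MV2_THREADS_BINDING_POLICY can be inspected. */
int main(int argc, char **argv)
{
    int provided, rank;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    #pragma omp parallel
    {
        printf("rank %d, thread %d -> cpu %d\n",
               rank, omp_get_thread_num(), sched_getcpu());
    }

    MPI_Finalize();
    return 0;
}

For example, it could be launched with OMP_NUM_THREADS=4, MV2_CPU_BINDING_POLICY=hybrid, MV2_THREADS_PER_PROCESS=4, and MV2_THREADS_BINDING_POLICY=compact (or linear), and the output compared against MV2_SHOW_CPU_BINDING=1.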

Page 34:

Binding Example in Hybrid (MPI+Threads)

• MPI Processes = 4, OpenMP Threads per Process = 4

• MV2_CPU_BINDING_POLICY = hybrid

• MV2_THREADS_PER_PROCESS = 4

• MV2_THREADS_BINDING_POLICY = compact

[Diagram: compact binding. Each MPI rank (Rank0–Rank3) is bound to one physical core (Core0–Core3); its OpenMP threads are placed on that core's hardware threads (HWTs)]

• Detects hardware-threads support in architecture

• Assigns MPI ranks to physical cores and respective OpenMP Threads to HW threads

Page 35:

Binding Example in Hybrid (MPI+Threads) ---- Cont’d

• MPI Processes = 4, OpenMP Threads per Process = 4

• MV2_CPU_BINDING_POLICY = hybrid

• MV2_THREADS_PER_PROCESS = 4

• MV2_THREADS_BINDING_POLICY = linear

[Diagram: linear binding across 16 cores (Core0–Core15); each MPI rank and its 4 OpenMP threads occupy 4 consecutive cores]

• MPI Rank-0 with its 4 OpenMP threads gets bound on Core-0 through Core-3, and so on

Page 36:

User-Defined Process Mapping

• User has complete-control over process-mapping

• To run 4 processes on cores 0, 1, 4, 5:

– $ mpirun_rsh -np 4 -hostfile hosts MV2_CPU_MAPPING=0:1:4:5 ./a.out

• Use ‘,’ or ‘-’ to bind to a set of cores:

– $mpirun_rsh -np 64 -hostfile hosts MV2_CPU_MAPPING=0,2-4:1:5:6 ./a.out

• Is process binding working as expected?

– MV2_SHOW_CPU_BINDING=1

• Display CPU binding information

• Launcher independent

• Example

– MV2_SHOW_CPU_BINDING=1 MV2_CPU_BINDING_POLICY=scatter

-------------CPU AFFINITY-------------

RANK:0 CPU_SET: 0

RANK:1 CPU_SET: 8

• Refer to Running with Efficient CPU (Core) Mapping section of MVAPICH2 user guide for more information

• http://mvapich.cse.ohio-state.edu/static/media/mvapich/mvapich2-2.3a-userguide.html#x1-600006.5

Page 37:

• Job start-up

• Point-to-point Inter-node Protocol

• Transport Type Selection

• Multi-rail and QoS

• Process Mapping and Point-to-point Intra-node Protocols

• Collectives

• MPI_T Support

Presentation Overview

Page 38:

Collective Communication in MVAPICH2

Run-time flags:

• All shared-memory based collectives: MV2_USE_SHMEM_COLL (Default: ON)

• Hardware Mcast-based collectives: MV2_USE_MCAST (Default: OFF)

Collective Algorithms in MV2:

• Conventional (Flat)

• Multi-Core Aware Designs

– Inter-Node Communication: Inter-Node (Pt-to-Pt), Inter-Node (Hardware-Mcast)

– Intra-Node Communication: Intra-Node (Pt-to-Pt), Intra-Node (Shared-Memory), Intra-Node (Kernel-Assisted)

Page 39:

Hardware Multicast-aware MPI_Bcast on TACC Stampede

[Charts: MPI_Bcast latency, Default vs. Multicast: small and large messages at 102,400 cores vs. message size, and 16-byte and 32-KByte messages vs. number of nodes; 80% and 85% improvements highlighted]

• MCAST-based designs improve latency of MPI_Bcast by up to 85%

• Use MV2_USE_MCAST=1 to enable MCAST-based designs


Page 40:

MPI_Scatter - Benefits of using Hardware-Mcast

[Charts: MPI_Scatter latency for 1–16 byte messages at 512 and 1,024 processes, Scatter-Default vs. Scatter-Mcast; 57% and 75% improvements highlighted]

• Enabling MCAST-based designs for MPI_Scatter improves small-message performance by up to 75%

MV2_USE_MCAST=1: enables hardware multicast features. Default: Disabled.

--enable-mcast: configure flag to enable multicast. Default: Enabled.

Page 41:

Advanced Allreduce Collective Designs Using SHArP

osu_allreduce (OSU Micro-Benchmarks) using MVAPICH2 2.3b (*PPN: Processes Per Node)

[Charts: Allreduce latency for 4–256 byte messages at 4 PPN and 28 PPN on 16 nodes, and for 32-byte/128-byte messages across (nodes, PPN) combinations, MVAPICH2 vs. MVAPICH2-SHArP; 2.3x, 1.5x, and 1.4x improvements highlighted]

Page 42:

Benefits of SHARP at Application Level

[Charts: Avg DDOT Allreduce time of HPCG (12% improvement) and Mesh Refinement Time of MiniAMR (13% improvement), MVAPICH2-SHArP vs. MVAPICH2 at (4,28), (8,28), and (16,28) (nodes, PPN)]

SHARP support available since MVAPICH2 2.3a

MV2_ENABLE_SHARP=1: enables SHARP-based collectives. Default: Disabled.

--enable-sharp: configure flag to enable SHARP. Default: Disabled.

• Refer to Running Collectives with Hardware based SHArP support section of MVAPICH2 user guide for more information

• http://mvapich.cse.ohio-state.edu/static/media/mvapich/mvapich2-2.3b-userguide.html#x1-990006.26

Page 43:

Problems with Blocking Collective Operations

[Diagram: all application processes block in the collective; computation stops while communication proceeds]

• Communication time cannot be used for compute

– No overlap of computation and communication

– Inefficient

Page 44:

• Application processes schedule collective operation

• Check periodically if operation is complete

• Overlap of computation and communication => Better Performance

• Catch: who will progress the communication?

Concept of Non-blocking Collectives

[Diagram: each application process schedules the collective with a communication support entity, returns to computation, and periodically checks for completion while the support entity progresses the communication]

Page 45:

• Enables overlap of computation with communication

• Non-blocking calls do not match blocking collective calls

– MPI may use different algorithms for blocking and non-blocking collectives

– Blocking collectives: Optimized for latency

– Non-blocking collectives: Optimized for overlap

• A process calling a NBC operation

– Schedules collective operation and immediately returns

– Executes application computation code

– Waits for the end of the collective

• The communication is progressed by

– Application code through MPI_Test

– Network adapter (HCA) with hardware support

– Dedicated processes / thread in MPI library

• There is a non-blocking equivalent for each blocking operation

– Has an “I” in the name

• MPI_Bcast -> MPI_Ibcast; MPI_Reduce -> MPI_Ireduce

Non-blocking Collective (NBC) Operations

Page 46:

int main(int argc, char **argv)
{
    MPI_Request req;
    MPI_Status status;
    int flag;

    MPI_Init(&argc, &argv);
    …..
    MPI_Ialltoall(…, &req);

    /* Computation that does not depend on result of Alltoall */

    MPI_Test(&req, &flag, &status);   /* Check if complete (non-blocking) */

    /* Computation that does not depend on result of Alltoall */

    MPI_Wait(&req, &status);          /* Wait till complete (blocking) */

    MPI_Finalize();
}

How do I write applications with NBC?

Page 47:

Performance of P3DFFT Kernel

• Weak scaling experiments; problem size increases with job size

• RDMA-Aware delivers 19% improvement over Default @ 8,192 processes

• Default-Thread exhibits worst performance

– Possibly because threads steal CPU cycles from P3DFFT

– Do not consider for large scale experiments

• Do not consider Default-Iprobe as it requires re-design of P3DFFT

[Charts: CPU time per loop of the P3DFFT kernel. Small-scale runs (128–512 processes): Default, RDMA-Aware, Default-Thread. Large-scale runs (128–8K processes): Default, RDMA-Aware; 19% improvement highlighted]

Page 48:

• Job start-up

• Point-to-point Inter-node Protocol

• Transport Type Selection

• Multi-rail and QoS

• Process Mapping and Point-to-point Intra-node Protocols

• Collectives

• MPI_T Support

Presentation Overview

Page 49:

MPI Tools Information Interface (MPI_T)

• Introduced in MPI 3.0 standard to expose internals of MPI to tools and applications

• Generalized interface – no defined variables in the standard

• Variables can differ between

- MPI implementations

- Compilations of same MPI library (production vs debug)

- Executions of the same application/MPI library

- There could be no variables provided

• Control Variables (CVARS) and Performance Variables (PVARS)

• More about the interface: mpi-forum.org/docs/mpi-3.0/mpi30-report.pdf

Page 50:

MPI_T usage semantics

[Diagram: MPI_T usage flow. Common steps: initialize MPI-T, get the number of variables, query metadata. Performance variables (PVARs): allocate session, allocate handle, start/stop variable, read/write/reset, free handle, free session. Control variables (CVARs): allocate handle, read/write variable, free handle. Finally: finalize MPI-T]

int MPI_T_init_thread(int required, int *provided);
int MPI_T_cvar_get_num(int *num_cvar);
int MPI_T_cvar_get_info(int cvar_index, char *name, int *name_len, int *verbosity, MPI_Datatype *datatype, MPI_T_enum *enumtype, char *desc, int *desc_len, int *bind, int *scope);
int MPI_T_pvar_session_create(MPI_T_pvar_session *session);
int MPI_T_pvar_handle_alloc(MPI_T_pvar_session session, int pvar_index, void *obj_handle, MPI_T_pvar_handle *handle, int *count);
int MPI_T_pvar_start(MPI_T_pvar_session session, MPI_T_pvar_handle handle);
int MPI_T_pvar_read(MPI_T_pvar_session session, MPI_T_pvar_handle handle, void *buf);
int MPI_T_pvar_reset(MPI_T_pvar_session session, MPI_T_pvar_handle handle);
int MPI_T_pvar_handle_free(MPI_T_pvar_session session, MPI_T_pvar_handle *handle);
int MPI_T_pvar_session_free(MPI_T_pvar_session *session);
int MPI_T_finalize(void);
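As a concrete illustration of the flow above, the sketch below enumerates the performance variables an MPI library exposes. This is a minimal example: it additionally uses MPI_T_pvar_get_num and MPI_T_pvar_get_info (standard MPI_T calls not shown on this slide), and which variables appear depends on the MVAPICH2 build.

#include <mpi.h>
#include <stdio.h>

/* List the performance variables (PVARs) exported by the MPI library.
 * The set of PVARs is implementation-specific. */
int main(int argc, char **argv)
{
    int provided, num_pvar;
    MPI_Init(&argc, &argv);
    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);

    MPI_T_pvar_get_num(&num_pvar);
    printf("%d performance variables available\n", num_pvar);

    for (int i = 0; i < num_pvar; i++) {
        char name[256], desc[256];
        int name_len = sizeof(name), desc_len = sizeof(desc);
        int verbosity, var_class, bind, readonly, continuous, atomic;
        MPI_Datatype datatype;
        MPI_T_enum enumtype;

        MPI_T_pvar_get_info(i, name, &name_len, &verbosity, &var_class,
                            &datatype, &enumtype, desc, &desc_len,
                            &bind, &readonly, &continuous, &atomic);
        printf("[%d] %s : %s\n", i, name, desc);
    }

    MPI_T_finalize();
    MPI_Finalize();
    return 0;
}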

Page 51:

Co-designing Applications to use MPI-T

Example pseudo-code: optimizing the eager limit dynamically:

MPI_T_init_thread(..)
MPI_T_cvar_get_info(MV2_EAGER_THRESHOLD)
if (msg_size < MV2_EAGER_THRESHOLD + 1KB)
    MPI_T_cvar_write(MV2_EAGER_THRESHOLD, +1024)
MPI_Send(..)
MPI_T_finalize(..)
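A fuller, but still illustrative, version of this idea in C is sketched below: it looks up a control variable by name and writes a new value through a CVAR handle. The CVAR name is taken from the command line because the exact names a given MVAPICH2 build exposes (and whether they are writable at run time) must first be discovered by enumerating the CVARs.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Find a control variable by name (argv[1]) and write an integer value
 * (argv[2]) through an MPI_T CVAR handle. Whether a CVAR exists and can
 * be changed at run time depends on the MPI library and build. */
int main(int argc, char **argv)
{
    int provided, num_cvar, target = -1;
    MPI_Init(&argc, &argv);
    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);

    if (argc < 3) {
        fprintf(stderr, "usage: %s <cvar-name> <int-value>\n", argv[0]);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    MPI_T_cvar_get_num(&num_cvar);
    for (int i = 0; i < num_cvar; i++) {
        char name[256], desc[256];
        int name_len = sizeof(name), desc_len = sizeof(desc);
        int verbosity, bind, scope;
        MPI_Datatype datatype;
        MPI_T_enum enumtype;
        MPI_T_cvar_get_info(i, name, &name_len, &verbosity, &datatype,
                            &enumtype, desc, &desc_len, &bind, &scope);
        if (strcmp(name, argv[1]) == 0) { target = i; break; }
    }

    if (target >= 0) {
        MPI_T_cvar_handle handle;
        int count, value = atoi(argv[2]);
        MPI_T_cvar_handle_alloc(target, NULL, &handle, &count);
        MPI_T_cvar_write(handle, &value);   /* succeeds only if the CVAR is writable */
        MPI_T_cvar_handle_free(&handle);
        printf("wrote %d to %s\n", value, argv[1]);
    } else {
        printf("CVAR %s not found\n", argv[1]);
    }

    MPI_T_finalize();
    MPI_Finalize();
    return 0;
}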

Page 52:

Evaluating Applications with MPI-T

[Charts: communication profiles showing millions of intra-node vs. inter-node messages for ADCIRC (1,216–2,432 processes) and WRF (32–256 processes), and the maximum number of unexpected receives for UH3D (256–1,024 processes)]

• Users can gain insights into application communication characteristics!

Page 53:

● Enhance existing support for MPI_T in MVAPICH2 to expose a richer set of

performance and control variables

● Get and display MPI Performance Variables (PVARs) made available by the runtime

in TAU

● Control the runtime’s behavior via MPI Control Variables (CVARs)

● Introduced support for new MPI_T based CVARs to MVAPICH2

○ MPIR_CVAR_MAX_INLINE_MSG_SZ, MPIR_CVAR_VBUF_POOL_SIZE,

MPIR_CVAR_VBUF_SECONDARY_POOL_SIZE

● TAU enhanced with support for setting MPI_T CVARs in a non-interactive mode for

uninstrumented applications

● S. Ramesh, A. Maheo, S. Shende, A. Malony, H. Subramoni, and D. K. Panda, MPI

Performance Engineering with the MPI Tool Interface: the Integration of MVAPICH

and TAU, EuroMPI/USA ‘17, Best Paper Finalist

● More details in Prof. Malony’s talk today and poster presentations

Performance Engineering Applications using MVAPICH2 and TAU

[Figures: VBUF usage without CVAR-based tuning and with CVAR-based tuning, as displayed by ParaProf]

Page 54:

MVAPICH2 Software Family

Requirements / Library

MPI with IB, iWARP and RoCE / MVAPICH2

Advanced MPI, OSU INAM, PGAS and MPI+PGAS with IB and RoCE / MVAPICH2-X

MPI with IB & GPU / MVAPICH2-GDR

MPI with IB & MIC / MVAPICH2-MIC

HPC Cloud with MPI & IB / MVAPICH2-Virt

Energy-aware MPI with IB, iWARP and RoCE / MVAPICH2-EA

MPI Energy Monitoring Tool / OEMT

InfiniBand Network Analysis and Monitoring / OSU INAM

Microbenchmarks for Measuring MPI and PGAS Performance / OMB

Page 55:

MVAPICH2-X for Hybrid MPI + PGAS Applications

• Current model: separate runtimes for OpenSHMEM/UPC/UPC++/CAF and MPI

– Possible deadlock if both runtimes are not progressed

– Consumes more network resources

• Unified communication runtime for MPI, UPC, UPC++, OpenSHMEM, CAF

– Available since 2012 (starting with MVAPICH2-X 1.9)

– http://mvapich.cse.ohio-state.edu

Page 56:

UPC++ Support in MVAPICH2-X

[Diagram: an MPI + UPC++ application running either over the standard UPC++ runtime with a GASNet network conduit (MPI), or over the MVAPICH2-X Unified Communication Runtime (UCR) serving both the UPC++ and MPI interfaces]

• Full and native support for hybrid MPI + UPC++ applications

• Better performance compared to IBV and MPI conduits

• OSU Micro-benchmarks (OMB) support for UPC++

• Available since MVAPICH2-X (2.2rc1)

[Chart: inter-node broadcast time on 64 nodes (1 process per node) for 1K–1M byte messages, comparing GASNet_MPI, GASNET_IBV, and the MV2-X MPI runtime; up to 14x improvement]

More details in the student poster presentation

Page 57:

Application Level Performance with Graph500 and Sort

Graph500 Execution Time

[Chart: Graph500 execution time at 4K–16K processes for MPI-Simple, MPI-CSC, MPI-CSR, and Hybrid (MPI+OpenSHMEM); 7.6X and 13X improvements highlighted]

• Performance of Hybrid (MPI+OpenSHMEM) Graph500 design

– 8,192 processes: 2.4X improvement over MPI-CSR, 7.6X improvement over MPI-Simple

– 16,384 processes: 1.5X improvement over MPI-CSR, 13X improvement over MPI-Simple

J. Jose, S. Potluri, K. Tomko and D. K. Panda, Designing Scalable Graph500 Benchmark with Hybrid MPI+OpenSHMEM Programming Models, International Supercomputing Conference (ISC '13), June 2013

J. Jose, K. Kandalla, M. Luo and D. K. Panda, Supporting Hybrid MPI and OpenSHMEM over InfiniBand: Design and Performance Evaluation, Int'l Conference on Parallel Processing (ICPP '12), September 2012

Sort Execution Time

[Chart: Sort execution time for 500GB-512 through 4TB-4K (input data, number of processes), MPI vs. Hybrid; 51% improvement highlighted]

• Performance of Hybrid (MPI+OpenSHMEM) Sort application

– 4,096 processes, 4 TB input size: MPI 2408 sec (0.16 TB/min); Hybrid 1172 sec (0.36 TB/min); 51% improvement over the MPI design

J. Jose, K. Kandalla, S. Potluri, J. Zhang and D. K. Panda, Optimizing Collective Communication in OpenSHMEM, Int'l Conference on Partitioned Global Address Space Programming Models (PGAS '13), October 2013

Page 58:

OpenSHMEM Application kernels

• AVX-512 vectorization showed up to 3X improvement over default

• KNL showed on-par performance as Broadwell on Node-by-node comparison

[Charts: Scalable Integer Sort kernel (ISx) time at 2–128 processes for KNL (Default), KNL (AVX-512), KNL (AVX-512+MCDRAM), and Broadwell, with a +30% callout; and single KNL node (68 cores) vs. single Broadwell node (28 cores) for 2DHeat, HeatImg, MatMul, DAXPY, and ISx (strong scaling), with a -5% callout]

• AVX-512 vectorization and MCDRAM based optimizations for MVAPICH2-X – Will be available in future MVAPICH2-X release

Exploiting and Evaluating OpenSHMEM on KNL Architecture J. Hashmi, M. Li, H. Subramoni, D. Panda Fourth Workshop on OpenSHMEM and Related Technologies, Aug 2017.


Page 59:

Implicit On-Demand Paging (ODP)

• Introduced by Mellanox to avoid pinning the pages of registered memory regions

• An ODP-aware runtime could reduce the size of pin-down buffers while maintaining performance

M. Li, X. Lu, H. Subramoni, and D. K. Panda, “Designing Registration Caching Free High-Performance MPI Library with Implicit

On-Demand Paging (ODP) of InfiniBand”, Under Review

1

10

100

1000

10000

AWP-ODC-256 AWP-ODC-512 BT CG LU MG SP LJ EAM Boltzmann LennardExec

uti

on

Tim

e (s

)

Applications (64 Processes)

Pin-down ODP-Enhanced ODP-Optimal ODP-Native

1

10

100

1000

10000

AWP-ODC-256 AWP-ODC-512 BT CG LU MG SP LJ EAM Boltzmann LennardExec

uti

on

Tim

e (s

)

Applications (64 Processes)

Pin-down ODP-Enhanced ODP-Optimal ODP-Native

1

10

100

1000

10000

AWP-ODC-256 AWP-ODC-512 BT CG LU MG SP LJ EAM Boltzmann LennardExec

uti

on

Tim

e (s

)

Applications (64 Processes)

Pin-down ODP-Enhanced ODP-Optimal ODP-Native

0.1

1

10

100

CG EP FT IS MG LU SP AWP-ODC Graph500

Exe

cuti

on

Tim

e (

s)

Applications (256 Processes)

Pin-down Explicit-ODP Implicit-ODP

1

10

100

1000

10000

AWP-ODC-256 AWP-ODC-512 BT CG LU MG SP LJ EAM Boltzmann LennardExec

uti

on

Tim

e (s

)

Applications (64 Processes)

Pin-down ODP-Enhanced ODP-Optimal ODP-Native
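For readers unfamiliar with ODP, the stand-alone sketch below (not MVAPICH2 code) contrasts a conventional pinned registration with an on-demand-paging registration at the libibverbs level. It assumes an ODP-capable HCA and a verbs stack that exposes IBV_ACCESS_ON_DEMAND; the buffer size and access flags are illustrative only.

    /* Minimal sketch (not MVAPICH2 internals): registering a buffer with
     * on-demand paging instead of pinning, assuming an ODP-capable HCA.
     * With IBV_ACCESS_ON_DEMAND the pages are not pinned up front; the HCA
     * faults them in on first access.  Implicit ODP, which the design above
     * builds on, goes one step further and registers the whole address space
     * once (addr = NULL, length = SIZE_MAX on supporting stacks), so no
     * per-buffer registration or registration cache is needed. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs || num == 0) { fprintf(stderr, "no IB devices\n"); return 1; }

        struct ibv_context *ctx = ibv_open_device(devs[0]);
        struct ibv_pd *pd = ibv_alloc_pd(ctx);

        size_t len = 1UL << 20;
        void *buf = malloc(len);

        /* Conventional registration: pins every page of buf. */
        struct ibv_mr *pinned = ibv_reg_mr(pd, buf, len,
                                           IBV_ACCESS_LOCAL_WRITE |
                                           IBV_ACCESS_REMOTE_WRITE);

        /* ODP registration: same call plus IBV_ACCESS_ON_DEMAND, so this
         * region consumes no pin-down buffers. */
        struct ibv_mr *odp = ibv_reg_mr(pd, buf, len,
                                        IBV_ACCESS_LOCAL_WRITE |
                                        IBV_ACCESS_REMOTE_WRITE |
                                        IBV_ACCESS_ON_DEMAND);
        if (!odp)
            fprintf(stderr, "ODP registration not supported on this HCA\n");

        if (odp) ibv_dereg_mr(odp);
        if (pinned) ibv_dereg_mr(pinned);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        free(buf);
        return 0;
    }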

MVAPICH2 Software Family

Requirements                                                    Library
MPI with IB, iWARP and RoCE                                     MVAPICH2
Advanced MPI, OSU INAM, PGAS and MPI+PGAS with IB and RoCE      MVAPICH2-X
MPI with IB & GPU                                               MVAPICH2-GDR
MPI with IB & MIC                                               MVAPICH2-MIC
HPC Cloud with MPI & IB                                         MVAPICH2-Virt
Energy-aware MPI with IB, iWARP and RoCE                        MVAPICH2-EA
MPI Energy Monitoring Tool                                      OEMT
InfiniBand Network Analysis and Monitoring                      OSU INAM
Microbenchmarks for Measuring MPI and PGAS Performance          OMB

CUDA-Aware MPI: MVAPICH2-GDR 1.8-2.2 Releases

• Support for MPI communication from NVIDIA GPU device memory
• High-performance RDMA-based inter-node point-to-point communication (GPU-GPU, GPU-Host and Host-GPU)
• High-performance intra-node point-to-point communication for multiple GPU adapters per node (GPU-GPU, GPU-Host and Host-GPU)
• Takes advantage of CUDA IPC (available since CUDA 4.1) for intra-node communication with multiple GPU adapters per node
• Optimized and tuned collectives for GPU device buffers
• MPI datatype support for point-to-point and collective communication from GPU device buffers

Presentation Overview

• Support for Efficient Small Message Communication with GPUDirect RDMA
• Multi-rail Support
• Support for Efficient Intra-node Communication using CUDA IPC
• MPI Datatype Support
• On-going Work

Enhanced MPI Design with GPUDirect RDMA

• The current MPI design using GPUDirect RDMA uses the rendezvous protocol, which has higher latency for small messages
• Can an eager protocol be supported to improve performance for small messages? (a small-message ping-pong sketch follows below)
• Two schemes proposed and used:
  - Loopback (using the network adapter to copy data)
  - Fastcopy/GDRCOPY (using the CPU to copy data)

[Diagram: Rendezvous protocol (rndz_start, rndz_reply, data, fin) vs. eager protocol (send) message exchange between sender and receiver]

R. Shi, S. Potluri, K. Hamidouche, M. Li, J. Perkins, D. Rossetti and D. K. Panda, Designing Efficient Small Message Transfer Mechanism for Inter-node MPI Communication on InfiniBand GPU Clusters, IEEE International Conference on High Performance Computing (HiPC '14)
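The small-message pattern these designs target is easiest to see with a GPU-buffer ping-pong of the kind the OSU micro-benchmarks use. The sketch below is illustrative only (it is not the benchmark itself) and assumes a CUDA-aware MPI such as MVAPICH2-GDR, so MPI_Send/MPI_Recv can be passed cudaMalloc'd device pointers directly; the message size and iteration count are arbitrary.

    /* Illustrative GPU-to-GPU ping-pong between two ranks, assuming a
     * CUDA-aware MPI (e.g., MVAPICH2-GDR) that accepts device pointers.
     * Small sizes exercise the eager/GDRCOPY/loopback paths. */
    #include <mpi.h>
    #include <cuda_runtime.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int size = 8;          /* small message: eager range */
        const int iters = 1000;
        char *d_buf;
        cudaMalloc((void **)&d_buf, size);

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(d_buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(d_buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(d_buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(d_buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();
        if (rank == 0)
            printf("avg one-way latency: %.2f us\n", (t1 - t0) * 1e6 / (2.0 * iters));

        cudaFree(d_buf);
        MPI_Finalize();
        return 0;
    }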

Alternative Approaches

• Can we substitute the cudaMemcpy with a better design?

[Diagram: Host buffer in host memory and GPU buffer in GPU memory, both reachable by the HCA over PCI-E]

• cudaMemcpy: default scheme
  - Big overhead for small messages (a timing sketch follows below)
• Loopback-based design: uses the GDR feature
  - Process establishes a self-connection
  - Copy H-D ⇒ RDMA write (H, D); Copy D-H ⇒ RDMA write (D, H)
  - P2P bottleneck ⇒ good for small and medium sizes
• GDRCOPY-based design: new module for fast copies
  - Involves GPU PCIe BAR1 mapping
  - CPU performs the copy ⇒ blocks until completion
  - Very good performance for H-D for small and medium sizes
  - Very good performance for D-H only for very small sizes
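To see why cudaMemcpy is a poor fit for very small transfers, a few lines of timing code are enough. This is a stand-alone sketch (not part of MVAPICH2); the per-call overhead it exposes is what the loopback and GDRCOPY paths are designed to avoid. The 8-byte size and iteration count are arbitrary.

    /* Stand-alone sketch: per-call overhead of cudaMemcpy for tiny D2H copies.
     * The fixed launch/synchronization cost dominates for small messages. */
    #include <cuda_runtime.h>
    #include <stdio.h>

    int main(void)
    {
        const int iters = 1000;
        char h_buf[8];
        char *d_buf;
        cudaMalloc((void **)&d_buf, sizeof(h_buf));

        /* Warm up the context and the copy path. */
        cudaMemcpy(h_buf, d_buf, sizeof(h_buf), cudaMemcpyDeviceToHost);

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start, 0);
        for (int i = 0; i < iters; i++)
            cudaMemcpy(h_buf, d_buf, sizeof(h_buf), cudaMemcpyDeviceToHost);
        cudaEventRecord(stop, 0);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        printf("8-byte D2H cudaMemcpy: %.2f us per call\n", ms * 1000.0f / iters);

        cudaEventDestroy(start);
        cudaEventDestroy(stop);
        cudaFree(d_buf);
        return 0;
    }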

Performance of MVAPICH2-GPU with GPU-Direct RDMA (GDR)

[Figure: GPU-GPU inter-node latency, bandwidth and bi-directional bandwidth for small messages (1 byte to 4 KB/8 KB), MV2 (NO-GDR) vs. MV2-GDR 2.3a; GDR delivers about 11X lower small-message latency (down to 1.91 us) and roughly 9x higher bandwidth and 10x higher bi-directional bandwidth in this range]

Testbed: MVAPICH2-GDR 2.3a on an Intel Haswell (E5-2687W) node with 20 cores, NVIDIA Pascal P100 GPU, Mellanox ConnectX-5 EDR HCA, CUDA 8.0 and Mellanox OFED 4.0 with GPU-Direct-RDMA

MVAPICH2-GDR: Performance on OpenPOWER (NVLink + Pascal)

[Figure: Intra-node GPU-GPU latency (small and large messages) and bandwidth for intra-socket (NVLink) and inter-socket paths, plus inter-node latency (small and large messages) and bandwidth]

Platform: OpenPOWER (ppc64le) nodes equipped with a dual-socket CPU, 4 Pascal P100-SXM GPUs, and 4X-FDR InfiniBand interconnect

• Intra-node bandwidth: 32 GB/sec (NVLink); intra-node latency: 16.8 us (without GPUDirect RDMA)
• Inter-node latency: 22 us (without GPUDirect RDMA); inter-node bandwidth: 6 GB/sec (FDR)
• Will be available in an upcoming MVAPICH2-GDR release

Tuning GDRCOPY Designs in MVAPICH2-GDR

Parameter | Significance | Default | Notes
MV2_USE_GPUDIRECT_GDRCOPY | Enable/disable the GDRCOPY-based designs | 1 (enabled) | Always enable
MV2_GPUDIRECT_GDRCOPY_LIMIT | Message size up to which GDRCOPY is used | 8 KByte | Tune for your system (GPU type, host architecture); impacts eager performance
MV2_GPUDIRECT_GDRCOPY_LIB | Path to the GDRCOPY library | Unset | Always set
MV2_USE_GPUDIRECT_D2H_GDRCOPY_LIMIT | Message size up to which GDRCOPY is used at the sender | 16 Bytes | Tune for your system (CPU and GPU type)

• Refer to the Tuning and Usage Parameters section of the MVAPICH2-GDR user guide for more information
  http://mvapich.cse.ohio-state.edu/userguide/gdr/#_tuning_and_usage_parameters

Tuning Loopback Designs in MVAPICH2-GDR

Parameter | Significance | Default | Notes
MV2_USE_GPUDIRECT_LOOPBACK | Enable/disable the loopback-based designs | 1 (enabled) | Always enable
MV2_GPUDIRECT_LOOPBACK_LIMIT | Message size up to which loopback is used | 8 KByte | Tune for your system (GPU type, host architecture and HCA); impacts eager performance; sensitive to the P2P issue

• Refer to the Tuning and Usage Parameters section of the MVAPICH2-GDR user guide for more information
  http://mvapich.cse.ohio-state.edu/userguide/gdr/#_tuning_and_usage_parameters

Tuning GPUDirect RDMA (GDR) Designs in MVAPICH2-GDR

Parameter | Significance | Default | Notes
MV2_USE_GPUDIRECT | Enable/disable the GDR-based designs | 1 (enabled) | Always enable
MV2_GPUDIRECT_LIMIT | Message size up to which GPUDirect RDMA is used | 8 KByte | Tune for your system; GPU type, host architecture and CUDA version impact pipelining overheads and P2P bandwidth bottlenecks
MV2_USE_GPUDIRECT_RECEIVE_LIMIT | Message size up to which the 1-hop design is used (GDR write at the receiver) | 256 KBytes | Tune for your system (GPU type, HCA type and configuration)

• Refer to the Tuning and Usage Parameters section of the MVAPICH2-GDR user guide for more information
  http://mvapich.cse.ohio-state.edu/userguide/gdr/#_tuning_and_usage_parameters

Application-Level Evaluation (HOOMD-blue)

• Platform: Wilkes (Intel Ivy Bridge + NVIDIA Tesla K20c + Mellanox Connect-IB)
• HOOMD-blue version 1.0.5
• GDRCOPY enabled: MV2_USE_CUDA=1 MV2_IBA_HCA=mlx5_0 MV2_IBA_EAGER_THRESHOLD=32768 MV2_VBUF_TOTAL_SIZE=32768 MV2_USE_GPUDIRECT_LOOPBACK_LIMIT=32768 MV2_USE_GPUDIRECT_GDRCOPY=1 MV2_USE_GPUDIRECT_GDRCOPY_LIMIT=16384

[Figure: Average time steps per second (TPS) on 4-32 processes for 64K and 256K particles, MV2 vs. MV2+GDR; about 2X improvement in both cases]

Presentation Overview

• Support for Efficient Small Message Communication with GPUDirect RDMA
• Multi-rail Support
• Support for Efficient Intra-node Communication using CUDA IPC
• MPI Datatype Support
• On-going Work

Tuning Multi-rail Support in MVAPICH2-GDR

• Automatic rail and CPU binding depending on the GPU selection
• The user selects the GPU and MVAPICH2-GDR selects the best HCA (avoids the P2P bottleneck)
• Multi-rail selection for large message sizes for better bandwidth utilization (pipeline design)

Parameter | Significance | Default | Notes
MV2_RAIL_SHARING_POLICY | How the rails are bound/selected by processes | Shared | Sharing gives the best performance for the pipeline design
PROCESS_TO_RAIL_MAPPING | Explicit binding of the HCAs to the CPU | First HCA |

• Refer to the Tuning and Usage Parameters section of the MVAPICH2-GDR user guide for more information
  http://mvapich.cse.ohio-state.edu/userguide/gdr/#_tuning_and_usage_parameters

Performance of MVAPICH2-GDR with GPU-Direct RDMA and Multi-Rail Support

[Figure: GPU-GPU inter-node MPI uni-directional bandwidth (up to 8,759 MB/s) and bi-directional bandwidth (up to 15,111 MB/s) for 1 byte to 4 MB messages, MV2-GDR 2.1 vs. MV2-GDR 2.1 RC2; gains of roughly 20-40% for large messages]

Testbed: MVAPICH2-GDR 2.2b on an Intel Ivy Bridge (E5-2680 v2) node with 20 cores, NVIDIA Tesla K40c GPU, Mellanox Connect-IB dual-FDR HCA, CUDA 7 and Mellanox OFED 2.4 with GPU-Direct-RDMA

Presentation Overview

• Support for Efficient Small Message Communication with GPUDirect RDMA
• Multi-rail Support
• Support for Efficient Intra-node Communication using CUDA IPC
• MPI Datatype Support
• On-going Work

Multi-GPU Configurations

• Multi-GPU node architectures are becoming common
• Until CUDA 3.2
  - Communication between processes staged through the host
  - Shared memory (pipelined) or network loopback (asynchronous)
• CUDA 4.0 and later
  - Inter-Process Communication (IPC): host bypass, handled by a DMA engine
  - Low latency and asynchronous
  - Requires creation, exchange and mapping of memory handles, which adds overhead (see the sketch below)

[Diagram: Two processes on one node, each driving one of two GPUs attached to the same I/O hub, alongside the CPU, memory and HCA]
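The handle creation/exchange/mapping step that MVAPICH2-GDR automates (and caches) looks roughly like the following when done by hand. This is a hedged illustration of the CUDA IPC mechanism itself, not MVAPICH2 code; it assumes exactly two ranks on the same node, each bound to a GPU under the same I/O hub, and the buffer size is arbitrary.

    /* Hand-rolled CUDA IPC between two MPI ranks on the same node.
     * Rank 0 exports a device buffer; rank 1 maps it and reads it directly. */
    #include <mpi.h>
    #include <cuda_runtime.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const size_t len = 1 << 20;
        void *d_src = NULL;

        if (rank == 0) {
            cudaMalloc(&d_src, len);
            cudaMemset(d_src, 7, len);

            /* 1. Create an IPC handle for the allocation ... */
            cudaIpcMemHandle_t handle;
            cudaIpcGetMemHandle(&handle, d_src);

            /* 2. ... and exchange it through MPI (the handle is plain host data). */
            MPI_Send(&handle, sizeof(handle), MPI_BYTE, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            cudaIpcMemHandle_t handle;
            MPI_Recv(&handle, sizeof(handle), MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);

            /* 3. Map rank 0's allocation into this process and read it directly. */
            void *d_peer = NULL;
            cudaIpcOpenMemHandle(&d_peer, handle, cudaIpcMemLazyEnablePeerAccess);

            unsigned char first = 0;
            cudaMemcpy(&first, d_peer, 1, cudaMemcpyDeviceToHost);
            printf("rank 1 read %d from rank 0's GPU buffer\n", (int)first);

            cudaIpcCloseMemHandle(d_peer);
        }

        /* Keep the exporting allocation alive until the importer is done. */
        MPI_Barrier(MPI_COMM_WORLD);
        if (rank == 0)
            cudaFree(d_src);

        MPI_Finalize();
        return 0;
    }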

Tuning IPC designs in MVAPICH2-GDR

• Works between GPUs within the same socket or IOH
• Leads to significant benefits in appropriate scenarios

[Figure: Intra-node small-message latency (1-1024 bytes) for the SHARED-MEM, IPC and SMP-IPC paths]

Parameter | Significance | Default | Notes
MV2_CUDA_IPC | Enable/disable the CUDA IPC-based designs | 1 (enabled) | Always leave set to 1
MV2_CUDA_SMP_IPC | Enable/disable the CUDA IPC fastpath design for short messages | 0 (disabled) | Benefits device-to-device transfers; hurts device-to-host/host-to-device transfers; set to 1 if the application involves only device-to-device transfers
MV2_IPC_THRESHOLD | Message size at which the IPC code path is used | 16 KBytes | Tune for your system

Alternative Designs

• Double-buffering scheme
  - Uses intermediate buffers (IPC pinned)
  - Control information exchanged through host memory
  - Handles exchanged through the host for IPC completion
  - Works for all CUDA versions (since 5.5)
  - Memory overhead
• Cache-based design
  - Rendezvous-based design
  - Caches the IPC handles at the source and destination (through the control messages)
  - On a cache hit ⇒ direct data movement
  - Requires CUDA 6.5 and onwards
  - High performance and memory efficiency

Tuning IPC designs in MVAPICH2-GDR (Cont.)

• Works between GPUs within the same socket or IOH

Parameter | Significance | Default | Notes
MV2_CUDA_ENABLE_IPC_CACHE | Enable/disable the CUDA IPC-cache-based designs | 1 (enabled) | Always leave set to 1; best performance; enables one-sided semantics
MV2_CUDA_IPC_BUFFERED | Enable/disable the CUDA IPC buffered design | 1 (enabled) | Used for a subset of operations; backup for the IPC-cache design; uses the double-buffering scheme; used for efficient managed-memory support
MV2_CUDA_IPC_MAX_CACHE_ENTRIES | Number of entries in the cache | 64 | Tune for your application; depends on the communication pattern; increase the value for irregular applications
MV2_CUDA_IPC_STAGE_BUF_SIZE | Size of the staging buffers in the double-buffering scheme | |

IPC-Cache Communication Enhancement

[Figure: Intra-node latency (0 bytes to 512 KB) and bandwidth (1 byte to 1 MB) for MV2 2.1a vs. the enhanced design]

• Two processes sharing the same K80 GPU card
• The proposed designs achieve a 4.7X improvement in latency and a 7.8X improvement in bandwidth
• Available with the latest release of MVAPICH2-GDR 2.2

Presentation Overview

• Support for Efficient Small Message Communication with GPUDirect RDMA
• Multi-rail Support
• Support for Efficient Intra-node Communication using CUDA IPC
• MPI Datatype Support
• On-going Work

Non-contiguous Data Exchange

• Multi-dimensional data
  - Row-based organization
  - Contiguous in one dimension, non-contiguous in the other dimensions
• Halo data exchange
  - Duplicate the boundary
  - Exchange the boundary in each iteration

[Figure: Halo data exchange between neighboring subdomains]

MPI Datatype support in MVAPICH2

• Datatype support in MPI
  - Operate on customized datatypes to improve productivity
  - Enables the MPI library to optimize non-contiguous data
• At the sender (a complete, runnable example follows below):
    MPI_Type_vector(n_blocks, n_elements, stride, old_type, &new_type);
    MPI_Type_commit(&new_type);
    MPI_Send(s_buf, size, new_type, dest, tag, MPI_COMM_WORLD);
• Inside MVAPICH2
  - Uses datatype-specific CUDA kernels to pack data in chunks
  - Efficiently moves data between nodes using RDMA
  - In progress: currently optimizes vector and hindexed datatypes
  - Transparent to the user

H. Wang, S. Potluri, D. Bureddy, C. Rosales and D. K. Panda, GPU-aware MPI on RDMA-Enabled Clusters: Design, Implementation and Evaluation, IEEE Transactions on Parallel and Distributed Systems, Accepted for Publication.
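Filling in the fragment above, a minimal sender/receiver pair using a vector datatype might look as follows. The array size and stride are illustrative; with a CUDA-aware build such as MVAPICH2-GDR, s_buf and r_buf could also be device pointers and the packing would be done by the library's CUDA kernels.

    /* Minimal complete example of the vector-datatype fragment above:
     * rank 0 sends every other double of a 2*N array to rank 1. */
    #include <mpi.h>
    #include <stdio.h>

    #define N 1024

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        static double s_buf[2 * N], r_buf[N];

        /* N blocks of 1 element each, separated by a stride of 2 elements. */
        MPI_Datatype new_type;
        MPI_Type_vector(N, 1, 2, MPI_DOUBLE, &new_type);
        MPI_Type_commit(&new_type);

        if (rank == 0) {
            for (int i = 0; i < 2 * N; i++) s_buf[i] = (double)i;
            /* One instance of the vector type describes the strided layout. */
            MPI_Send(s_buf, 1, new_type, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* Receive into a contiguous buffer of N doubles. */
            MPI_Recv(r_buf, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("r_buf[0]=%g r_buf[1]=%g\n", r_buf[0], r_buf[1]);  /* 0 and 2 */
        }

        MPI_Type_free(&new_type);
        MPI_Finalize();
        return 0;
    }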

MPI Datatype Processing (Computation Optimization)

• Comprehensive support
  - Targeted kernels for regular datatypes: vector, subarray, indexed_block (a subarray example follows below)
  - Generic kernels for all other irregular datatypes
• Separate non-blocking stream for kernels launched by the MPI library
  - Avoids stream conflicts with application kernels
• Flexible set of parameters for users to tune the kernels
  - Vector: MV2_CUDA_KERNEL_VECTOR_TIDBLK_SIZE, MV2_CUDA_KERNEL_VECTOR_YSIZE
  - Subarray: MV2_CUDA_KERNEL_SUBARR_TIDBLK_SIZE, MV2_CUDA_KERNEL_SUBARR_XDIM, MV2_CUDA_KERNEL_SUBARR_YDIM, MV2_CUDA_KERNEL_SUBARR_ZDIM
  - Indexed_block: MV2_CUDA_KERNEL_IDXBLK_XDIM
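For reference, the kind of subarray datatype these kernels pack (for example, one face of a 3D block, as in the Stencil3D results that follow) can be described as below. This is a sketch only: the dimensions are made up, and with MVAPICH2-GDR the block may live in GPU memory so the packing kernel runs on the device.

    /* Sketch: describing the X-face of a DIMZ x DIMY x DIMX block with
     * MPI_Type_create_subarray, the pattern a Stencil3D halo exchange uses.
     * Dimensions are illustrative. */
    #include <mpi.h>

    #define DIMZ 16
    #define DIMY 16
    #define DIMX 16

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int sizes[3]    = { DIMZ, DIMY, DIMX };  /* full local block        */
        int subsizes[3] = { DIMZ, DIMY, 1    };  /* one X boundary plane    */
        int starts[3]   = { 0, 0, DIMX - 1   };  /* the plane at x = DIMX-1 */

        MPI_Datatype x_face;
        MPI_Type_create_subarray(3, sizes, subsizes, starts,
                                 MPI_ORDER_C, MPI_DOUBLE, &x_face);
        MPI_Type_commit(&x_face);

        /* A halo exchange would now post, e.g.:
         *   MPI_Isend(block, 1, x_face, right_nbr, tag, comm, &req);
         * and MVAPICH2 packs the non-contiguous face with a targeted
         * CUDA kernel when block is a device buffer. */

        MPI_Type_free(&x_face);
        MPI_Finalize();
        return 0;
    }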

Performance of Stencil3D (3D subarray)

• Stencil3D communication kernel on 2 GPUs with various X, Y, Z dimensions using MPI_Isend/Irecv
• DT: direct transfer, TR: targeted kernel
• The optimized design gains up to 15%, 15% and 22% compared to TR, and more than 86% compared to DT, on X, Y and Z respectively

[Figure: Latency (ms) of the Stencil3D exchange as DimX, DimY and DimZ are varied from 1 to 256 (the other two dimensions fixed at 16), comparing DT, TR and the enhanced design]

MPI Datatype Processing (Communication Optimization)

Common scenario (A, B, ... contain a non-contiguous MPI datatype):

    MPI_Isend(A, ..., datatype, ...);
    MPI_Isend(B, ..., datatype, ...);
    MPI_Isend(C, ..., datatype, ...);
    MPI_Isend(D, ..., datatype, ...);
    ...
    MPI_Waitall(...);

• In the existing design, each MPI_Isend initiates its packing kernel and then waits for the kernel (WFK) before starting the send, wasting computing resources on both the CPU and the GPU
• In the proposed design, the packing kernels for consecutive MPI_Isends are launched on streams and the sends are progressed without blocking, so the exchange finishes earlier

[Diagram: Timeline of CPU and GPU activity for the existing design (Initiate Kernel, Wait For Kernel, Start Send serialized per Isend) vs. the proposed design (kernel launches and sends overlapped across Isend(1)-Isend(3)), with the proposed design finishing earlier]

Application-Level Evaluation (COSMO) and Weather Forecasting in Switzerland

[Figure: Normalized execution time on the Wilkes GPU cluster (4-32 GPUs) and the CSCS GPU cluster (16-96 GPUs) for the Default, Callback-based and Event-based designs]

• 2X improvement on 32 GPU nodes
• 30% improvement on 96 GPU nodes (8 GPUs/node)

C. Chu, K. Hamidouche, A. Venkatesh, D. Banerjee, H. Subramoni, and D. K. Panda, Exploiting Maximal Overlap for Non-Contiguous Data Movement Processing on Modern GPU-enabled Systems, IPDPS '16

On-going collaboration with CSCS and MeteoSwiss (Switzerland) in co-designing MV2-GDR and the COSMO application
COSMO model: http://www2.cosmo-model.org/content/tasks/operational/meteoSwiss/

Enhanced Support for GPU Managed Memory

• CUDA managed memory ⇒ no memory pin-down, no IPC support for intra-node communication, no GDR support for inter-node communication
• Significant productivity benefits due to abstraction of explicit allocation and cudaMemcpy() (see the sketch below)
• Initial and basic support in MVAPICH2-GDR
  - For both intra- and inter-node transfers, "pipeline through" host memory
• Enhanced intra-node managed-memory support using IPC
  - Double-buffering pair-wise IPC-based scheme
  - Brings IPC performance to managed memory
  - High performance and high productivity: 2.5X improvement in bandwidth
• OMB extended to evaluate the performance of point-to-point and collective communication using managed buffers

[Figure: Intra-node bandwidth (32 KB to 2 MB) for the enhanced design vs. MV2-GDR 2.2b, showing a 2.5X improvement; and 2D stencil halo-exchange time for halo width 1 with device vs. managed buffers]

D. S. Banerjee, K. Hamidouche, and D. K. Panda, Designing High Performance Communication Runtime for GPU Managed Memory: Early Experiences, GPGPU-9 Workshop, held in conjunction with PPoPP '16
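As a reminder of what the productivity claim refers to, the sketch below communicates directly from a cudaMallocManaged() buffer. It assumes a CUDA-aware MPI with managed-memory support (such as MVAPICH2-GDR); the buffer size is arbitrary and the host-side initialization stands in for real GPU work.

    /* Sketch: MPI communication from CUDA managed memory, assuming a
     * CUDA-aware MPI with managed-memory support (e.g., MVAPICH2-GDR).
     * No explicit cudaMemcpy() between host and device is needed. */
    #include <mpi.h>
    #include <cuda_runtime.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int n = 1 << 20;
        double *buf;
        cudaMallocManaged(&buf, n * sizeof(double));   /* visible to CPU and GPU */

        if (rank == 0) {
            for (int i = 0; i < n; i++)                /* initialize on the host */
                buf[i] = 1.0;
            /* In a real code a GPU kernel would update buf here. */
            cudaDeviceSynchronize();
            MPI_Send(buf, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(buf, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received buf[0] = %g\n", buf[0]);
        }

        cudaFree(buf);
        MPI_Finalize();
        return 0;
    }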

Presentation Overview

• Support for Efficient Small Message Communication with GPUDirect RDMA
• Multi-rail Support
• Support for Efficient Intra-node Communication using CUDA IPC
• MPI Datatype Support
• On-going Work

Overview of GPUDirect aSync (GDS) Feature: Current MPI+CUDA Interaction

    CUDA_Kernel_a<<<...>>>(A, ..., stream1);
    cudaStreamSynchronize(stream1);
    MPI_Isend(A, ..., req1);
    MPI_Wait(req1);
    CUDA_Kernel_b<<<...>>>(B, ..., stream1);

• 100% CPU control
  - Limits the throughput of a GPU
  - Limits asynchronous progress
  - Wastes GPU cycles

MVAPICH2-GDS: Decouple GPU Control Flow from CPU

    CUDA_Kernel_a<<<...>>>(A, ..., stream1);
    MPI_Isend(A, ..., req1, stream1);     /* proposed stream-based extension */
    MPI_Wait(req1, stream1);              /* non-blocking from the CPU */
    CUDA_Kernel_b<<<...>>>(B, ..., stream1);

• The CPU offloads the compute, communication and synchronization tasks to the GPU
  - CPU is out of the critical path
  - Tight interaction between GPU and HCA
  - Hides the kernel launch overhead
• Requires MPI semantics extensions
  - All operations are asynchronous from the CPU
  - MPI semantics are extended with stream-based semantics

MVAPICH2-GDS: Preliminary Results

[Figure: Latency (us) for the latency-oriented pattern (Send+kernel and Recv+kernel, 8-512 bytes) and the throughput-oriented pattern (back-to-back, 16-1024 bytes), Default vs. Enhanced-GDS]

• Latency oriented: able to hide the kernel launch overhead
  - 25% improvement at 256 bytes compared to the default behavior
• Throughput oriented: asynchronously offloads and queues the communication and computation tasks
  - 14% improvement at 1 KB message size
  - Requires some tuning; better performance expected for applications with different kernels

Testbed: Intel Sandy Bridge, NVIDIA K20 and Mellanox FDR HCA

MVAPICH2 Software Family (requirements-to-library mapping as listed in the earlier MVAPICH2 Software Family slide)

Energy-Aware MVAPICH2 & OSU Energy Management Tool (OEMT)

• MVAPICH2-EA 2.1 (Energy-Aware)
  - A white-box approach
  - New energy-efficient communication protocols for point-to-point and collective operations
  - Intelligently applies the appropriate energy-saving techniques
  - Application-oblivious energy saving
• OEMT
  - A library utility to measure energy consumption for MPI applications
  - Works with all MPI runtimes
  - PRELOAD option for precompiled applications
  - Does not require root permission: a safe kernel module reads only a subset of MSRs

Designing Energy-Aware (EA) MPI Runtime

[Diagram: Overall application energy expenditure splits into energy spent in communication routines (point-to-point, collective and RMA routines) and energy spent in computation routines; the MVAPICH2-EA designs cover MPI two-sided and collectives (e.g., MVAPICH2) and impact MPI-3 RMA implementations (e.g., MVAPICH2), other PGAS implementations (e.g., OSHMPI) and one-sided runtimes (e.g., ComEx)]

MVAPICH2-EA: Application-Oblivious Energy-Aware MPI (EAM)

• An energy-efficient runtime that provides energy savings without application knowledge
• Automatically and transparently uses the best energy lever
• Provides guarantees on maximum degradation, with 5-41% savings at <=5% degradation
• Pessimistic MPI applies the energy-reduction lever to each MPI call

A Case for Application-Oblivious Energy-Efficient MPI Runtime, A. Venkatesh, A. Vishnu, K. Hamidouche, N. Tallent, D. K. Panda, D. Kerbyson, and A. Hoisie, Supercomputing '15, Nov 2015 [Best Student Paper Finalist]

MPI-3 RMA Energy Savings with Proxy-Applications (Graph500)

[Figure: Graph500 execution time (s) and energy usage (Joules) at 128, 256 and 512 processes for the optimistic, pessimistic and EAM-RMA runtimes; EAM-RMA saves up to 46% energy]

• MPI_Win_fence dominates application execution time in Graph500
• Between 128 and 512 processes, EAM-RMA yields between 31% and 46% savings with no degradation in execution time in comparison with the default optimistic MPI runtime

MPI-3 RMA Energy Savings with Proxy-Applications (SCF)

[Figure: SCF execution time (s) and energy usage (Joules) at 128, 256 and 512 processes for the optimistic, pessimistic and EAM-RMA runtimes; EAM-RMA saves up to 42% energy]

• The SCF (self-consistent field) calculation spends nearly 75% of its total time in MPI_Win_unlock calls
• With 256 and 512 processes, EAM-RMA yields 42% and 36% savings at 11% degradation (close to the permitted degradation ρ = 10%)
• 128 processes is an exception due to the interaction between two-sided and one-sided communication
• MPI-3 RMA energy-efficient support will be available in an upcoming MVAPICH2-EA release

MVAPICH2 Software Family (requirements-to-library mapping as listed in the earlier MVAPICH2 Software Family slide)

Overview of OSU INAM

• A network monitoring and analysis tool that can analyze traffic on the InfiniBand network with inputs from the MPI runtime
  - http://mvapich.cse.ohio-state.edu/tools/osu-inam/
• Monitors IB clusters in real time by querying various subnet management entities and gathering input from the MPI runtimes
• OSU INAM v0.9.1 released on 05/13/16
  - Significant enhancements to the user interface to enable scaling to clusters with thousands of nodes
  - Improved database insert times by using bulk inserts
  - Capability to look up the list of nodes communicating through a network link
  - Capability to classify data flowing over a network link at job-level and process-level granularity in conjunction with MVAPICH2-X 2.2rc1
  - "Best practices" guidelines for deploying OSU INAM on different clusters
  - Capability to analyze and profile node-level, job-level and process-level activities for MPI communication (point-to-point, collectives and RMA)
  - Ability to filter data based on the type of counters using a drop-down list
  - Remotely monitor various metrics of MPI processes at user-specified granularity
  - "Job Page" to display jobs in ascending/descending order of various performance metrics in conjunction with MVAPICH2-X
  - Visualize the data transfer happening in a "live" or "historical" fashion for the entire network, a job or a set of nodes

OSU INAM Features

• Show the network topology of large clusters
• Visualize traffic patterns on different links
• Quickly identify congested links and links in an error state
• See the history unfold: play back the historical state of the network

[Screenshots: Clustered view of Comet@SDSC (1,879 nodes, 212 switches, 4,377 network links) and finding routes between nodes]

OSU INAM Features (Cont.)

• Job-level view (e.g., visualizing a 5-node job)
  - Show different network metrics (load, error, etc.) for any live job
  - Play back historical data for completed jobs to identify bottlenecks
• Node-level view: details per process or per node
  - CPU utilization for each rank/node
  - Bytes sent/received for MPI operations (point-to-point, collective, RMA)
  - Network metrics (e.g., XmitDiscard, RcvError) per rank/node
• Estimated link-utilization view
  - Classify data flowing over a network link at job-level and process-level granularity in conjunction with MVAPICH2-X 2.2rc1

More details in the tutorial/demo session tomorrow

OSU Microbenchmarks

• Available since 2004
• Suite of microbenchmarks to study the communication performance of various programming models
• Benchmarks available for the following programming models
  - Message Passing Interface (MPI)
  - Partitioned Global Address Space (PGAS): Unified Parallel C (UPC), Unified Parallel C++ (UPC++), OpenSHMEM
• Benchmarks available for multiple accelerator-based architectures
  - Compute Unified Device Architecture (CUDA)
  - OpenACC Application Program Interface
• Part of various national resource procurement suites such as the NERSC-8 / Trinity benchmarks
• Continuing to add support for newer primitives and features
• Please visit the following link for more information
  - http://mvapich.cse.ohio-state.edu/benchmarks/

Applications-Level Tuning: Compilation of Best Practices

• The MPI runtime has many parameters
• Tuning a set of parameters can help you extract higher performance
• A list of such contributions is compiled on the MVAPICH website
  - http://mvapich.cse.ohio-state.edu/best_practices/
• Initial list of applications
  - Amber
  - HOOMD-blue
  - HPCG
  - LULESH
  - MILC
  - Neuron
  - SMG2000
• Soliciting additional contributions; send your results to mvapich-help at cse.ohio-state.edu and we will link these results with credit to you

Amber: Impact of Tuning the Eager Threshold

• Tuning the eager threshold has a significant impact on application performance by avoiding the synchronization of the rendezvous protocol, yielding better communication-computation overlap
• 19% improvement in overall execution time at 256 processes
• Library version: MVAPICH2 2.2
• MVAPICH flags used
  - MV2_IBA_EAGER_THRESHOLD=131072
  - MV2_VBUF_TOTAL_SIZE=131072
• Input files used
  - Small: MDIN
  - Large: PMTOP

[Figure: Execution time (s) at 64, 128 and 256 processes, Default vs. Tuned; 19% improvement at 256 processes]

Data submitted by Dong Ju Choi @ UCSD

MiniAMR: Impact of Tuning the Eager Threshold

• Tuning the eager threshold has a significant impact on application performance by avoiding the synchronization of the rendezvous protocol, yielding better communication-computation overlap
• 8% reduction in total communication time
• Library version: MVAPICH2 2.2
• MVAPICH flags used
  - MV2_IBA_EAGER_THRESHOLD=32768
  - MV2_VBUF_TOTAL_SIZE=32768

[Figure: MiniAMR communication time (s) as the eager threshold is varied from 128 bytes to 1 MB; the best setting reduces communication time by 8%]

Data submitted by Karen Tomko @ OSC and Dong Ju Choi @ UCSD

SMG2000: Impact of Tuning the Transport Protocol

• UD-based transport protocol selection benefits the SMG2000 application
  - 22% and 6% improvement at 1,024 and 4,096 cores, respectively
• Library version: MVAPICH2 2.1
• MVAPICH flags used
  - MV2_USE_ONLY_UD=1
• System details
  - Stampede @ TACC
  - Sandy Bridge architecture with dual 8-core nodes and ConnectX-3 FDR network

[Figure: Execution time (s) at 1,024, 2,048 and 4,096 processes, Default vs. Tuned; 22% improvement at 1,024 processes]

Data submitted by Jerome Vienne @ TACC

Neuron: Impact of Tuning the Transport Protocol

• UD-based transport protocol selection benefits the Neuron application
  - 15% and 27% improvement at 768 and 1,024 processes, respectively
• Library version: MVAPICH2 2.2
• MVAPICH flags used
  - MV2_USE_ONLY_UD=1
• Input file
  - YuEtAl2012
• System details
  - Comet @ SDSC
  - Haswell nodes with dual 12-core sockets per node and Mellanox FDR (56 Gbps) network

[Figure: Execution time (s) at 384, 512, 768 and 1,024 processes, Default vs. Tuned; 27% improvement at 1,024 processes]

Data submitted by Mahidhar Tatineni @ SDSC

HPCG: Impact of Collective Tuning for the MPI+OpenMP Programming Model

• The partial-subscription nature of hybrid MPI+OpenMP programming requires a new level of collective tuning
  - For PPN=2 (processes per node), the tuned version of MPI_Reduce shows 51% improvement on 2,048 cores
  - 24% improvement on 512 cores (8 OpenMP threads per MPI process)
• Library version: MVAPICH2 2.1
• MVAPICH flags used
  - The tuning parameters for hybrid MPI+OpenMP programming models are on by default from MVAPICH2 2.1 onward
• System details
  - Stampede @ TACC
  - Sandy Bridge architecture with dual 8-core nodes and ConnectX-3 FDR network

[Figure: Normalized execution time of HPCG, Default vs. Tuned; 24% improvement]

Data submitted by Jerome Vienne and Carlos Rosales-Fernandez @ TACC

LULESH: Impact of Collective Tuning for the MPI+OpenMP Programming Model

• The partial-subscription nature of hybrid MPI+OpenMP programming requires a new level of collective tuning
  - For PPN=2 (processes per node), the tuned version of MPI_Reduce shows 51% improvement on 2,048 cores
  - 4% improvement on 512 cores (8 OpenMP threads per MPI process)
• Library version: MVAPICH2 2.1
• MVAPICH flags used
  - The tuning parameters for hybrid MPI+OpenMP programming models are on by default from MVAPICH2 2.1 onward
• System details
  - Stampede @ TACC
  - Sandy Bridge architecture with dual 8-core nodes and ConnectX-3 FDR network

[Figure: Normalized execution time of LULESH, Default vs. Tuned; 4% improvement]

Data submitted by Jerome Vienne and Carlos Rosales-Fernandez @ TACC

MILC: Impact of User-mode Memory Registration (UMR) Based Tuning

• Non-contiguous data processing is very common in HPC applications; MVAPICH2 offers efficient designs for MPI datatype support using novel hardware features such as UMR
• UMR-based protocol selection benefits the MILC application
  - 4% and 6% improvement in execution time at 512 and 640 processes, respectively
• Library version: MVAPICH2-X 2.2
• MVAPICH flags used
  - MV2_USE_UMR=1
• System details
  - The experimental cluster consists of 32 Ivy Bridge compute nodes interconnected by Mellanox FDR
  - The Intel Ivy Bridge processors consist of dual ten-core Xeon sockets operating at 2.80 GHz with 32 GB RAM and Mellanox OFED version 3.2-1.0.1.1

[Figure: Execution time (s) at 128, 256, 512 and 640 processes, Default vs. Tuned; 6% improvement at 640 processes]

Data submitted by Mingzhe Li @ OSU

HOOMD-blue: Impact of GPUDirect RDMA Based Tuning

• HOOMD-blue is a molecular dynamics simulation using a custom force field
• GPUDirect-specific feature selection and tuning significantly benefit the HOOMD-blue application: we observe a factor of 2X improvement on 32 GPU nodes, with both 64K and 256K particles
• Library version: MVAPICH2-GDR 2.2
• MVAPICH-GDR flags used
  - MV2_USE_CUDA=1
  - MV2_USE_GPUDIRECT=1
  - MV2_GPUDIRECT_GDRCOPY=1
• System details
  - Wilkes @ Cambridge
  - 128 Ivy Bridge nodes, each with dual 6-core sockets and Mellanox FDR

[Figure: Average time steps per second (TPS) on 4-32 processes for 64K and 256K particles, Default (MV2) vs. Tuned (MV2+GDR); 2X improvement at 32 processes]

Data submitted by Khaled Hamidouche @ OSU

MVAPICH2 - Plans for Exascale

• Performance and memory scalability toward 1-10M cores
• Hybrid programming (MPI + OpenSHMEM, MPI + UPC, MPI + CAF, ...)
  - MPI + Task*
• Enhanced optimization for GPU support and accelerators
• Taking advantage of advanced features of Mellanox InfiniBand
  - Tag Matching*
  - Adapter Memory*
• Enhanced communication schemes for upcoming architectures
  - Knights Landing with MCDRAM*
  - NVLINK*
  - CAPI*
• Extended topology-aware collectives
• Extended energy-aware designs and virtualization support
• Extended support for the MPI Tools Interface (as in MPI 3.0)
• Extended FT support
• Features marked * will be available in future MVAPICH2 releases


Thank You!

Network-Based Computing Laboratory http://nowlab.cse.ohio-state.edu/

The MVAPICH2 Project http://mvapich.cse.ohio-state.edu/