Overview of the MVAPICH Project: Latest Status and Future Roadmap
MVAPICH2 User Group (MUG) Meeting
by
Dhabaleswar K. (DK) Panda
The Ohio State University
E-mail: panda@cse.ohio-state.edu
http://www.cse.ohio-state.edu/~panda
MVAPICH User Group Meeting (MUG) 2018 2Network Based Computing Laboratory
Parallel Programming Models Overview
[Figure: three abstract machine models - Shared Memory Model (SHMEM, DSM), Distributed Memory Model (MPI - Message Passing Interface), and Partitioned Global Address Space (PGAS: Global Arrays, UPC, UPC++, Chapel, CAF, ...) with logical shared memory over distributed memories.]
• Programming models provide abstract machine models
• Models can be mapped onto different types of systems – e.g., Distributed Shared Memory (DSM), MPI within a node, etc.
• PGAS models and hybrid MPI+PGAS models are gradually gaining importance
MVAPICH User Group Meeting (MUG) 2018 3Network Based Computing Laboratory
Supporting Programming Models for Multi-Petaflop and Exaflop Systems: Challenges
[Figure: co-design stack showing middleware co-design opportunities and challenges across layers -
Application Kernels/Applications
Programming Models: MPI, PGAS (UPC, Global Arrays, OpenSHMEM), CUDA, OpenMP, OpenACC, Cilk, Hadoop (MapReduce), Spark (RDD, DAG), etc.
Communication Library or Runtime for Programming Models: point-to-point communication, collective communication, energy-awareness, synchronization and locks, I/O and file systems, fault tolerance
Networking Technologies (InfiniBand, 40/100GigE, Aries, and Omni-Path); Multi-/Many-core Architectures; Accelerators (GPU and MIC)
Cross-cutting goals: Performance, Scalability, Resilience]
MVAPICH User Group Meeting (MUG) 2018 4Network Based Computing Laboratory
Designing (MPI+X) at Exascale
• Scalability for million to billion processors
– Support for highly-efficient inter-node and intra-node communication (both two-sided and one-sided)
– Scalable job start-up
– Low memory footprint
• Scalable collective communication
– Offload
– Non-blocking
– Topology-aware
• Balancing intra-node and inter-node communication for next-generation nodes (128-1024 cores)
– Multiple end-points per node
• Support for efficient multi-threading (a minimal MPI+OpenMP sketch follows below)
• Integrated support for accelerators (GPGPUs and FPGAs)
• Fault-tolerance/resiliency
• QoS support for communication and I/O
• Support for hybrid MPI+PGAS programming (MPI + OpenMP, MPI + UPC, MPI + OpenSHMEM, MPI + UPC++, CAF, ...)
• Virtualization
• Energy-awareness
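Since MPI+X pairs MPI across nodes with threading within a node, a minimal hybrid MPI+OpenMP sketch (illustrative only, not taken from the slides) shows the thread-support negotiation that the runtime must handle efficiently:

```c
/* Minimal hybrid MPI+OpenMP sketch: each rank requests a thread level
 * before spawning OpenMP threads. Illustrative only. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank;

    /* Request MPI_THREAD_MULTIPLE so any thread may make MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (provided < MPI_THREAD_MULTIPLE && rank == 0)
        printf("runtime granted a lower thread level: %d\n", provided);

    #pragma omp parallel
    {
        /* Per-thread computation; communication may overlap across threads. */
        printf("rank %d, thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}
```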
MVAPICH User Group Meeting (MUG) 2018 5Network Based Computing Laboratory
Overview of the MVAPICH2 Project
• High-performance open-source MPI library for InfiniBand, Omni-Path, Ethernet/iWARP, and RDMA over Converged Ethernet (RoCE)
– MVAPICH (MPI-1), MVAPICH2 (MPI-2.2 and MPI-3.1), Started in 2001, First version available in 2002
– MVAPICH2-X (MPI + PGAS), Available since 2011
– Support for GPGPUs (MVAPICH2-GDR) and MIC (MVAPICH2-MIC), Available since 2014
– Support for Virtualization (MVAPICH2-Virt), Available since 2015
– Support for Energy-Awareness (MVAPICH2-EA), Available since 2015
– Support for InfiniBand Network Analysis and Monitoring (OSU INAM) since 2015
– Used by more than 2,925 organizations in 86 countries
– More than 485,000 (> 0.48 million) downloads from the OSU site directly
– Empowering many TOP500 clusters (Jul ‘18 ranking)
• 2nd ranked 10,649,640-core cluster (Sunway TaihuLight) at NSC, Wuxi, China
• 12th, 556,104 cores (Oakforest-PACS) in Japan
• 15th, 367,024 cores (Stampede2) at TACC
• 24th, 241,108-core (Pleiades) at NASA and many others
– Available with software stacks of many vendors and Linux Distros (RedHat and SuSE)
– http://mvapich.cse.ohio-state.edu
• Empowering Top500 systems for over a decade
MVAPICH User Group Meeting (MUG) 2018 6Network Based Computing Laboratory
[Figure: MVAPICH Project Timeline (Oct-02 through Sep-15) - introduction and end-of-life dates for MVAPICH (EOL), MVAPICH2, OMB, MVAPICH2-X, MVAPICH2-GDR, MVAPICH2-MIC, MVAPICH2-Virt, MVAPICH2-EA, and OSU-INAM.]
MVAPICH User Group Meeting (MUG) 2018 7Network Based Computing Laboratory
[Figure: MVAPICH2 Release Timeline and Downloads - cumulative downloads from the OSU site, Sep-04 through May-18, rising toward 500,000; annotated with releases MV 0.9.4, MV2 0.9.0, MV2 0.9.8, MV2 1.0, MV 1.0, MV2 1.0.3, MV 1.1, MV2 1.4, MV2 1.5, MV2 1.6, MV2 1.7, MV2 1.8, MV2 1.9, MV2-GDR 2.0b, MV2-MIC 2.0, MV2-GDR 2.3a, MV2-X 2.3b, MV2-Virt 2.2, MV2 2.3, and OSU INAM 0.9.3.]
MVAPICH User Group Meeting (MUG) 2018 8Network Based Computing Laboratory
Architecture of MVAPICH2 Software Family
• High-performance parallel programming models: Message Passing Interface (MPI), PGAS (UPC, OpenSHMEM, CAF, UPC++), and Hybrid --- MPI + X (MPI + PGAS + OpenMP/Cilk)
• High-performance and scalable communication runtime with diverse APIs and mechanisms: point-to-point primitives, collectives algorithms, energy-awareness, remote memory access, I/O and file systems, fault tolerance, virtualization, active messages, job startup, introspection & analysis
• Support for modern networking technology (InfiniBand, iWARP, RoCE, Omni-Path) and modern multi-/many-core architectures (Intel Xeon, OpenPOWER, Xeon Phi (MIC, KNL), NVIDIA GPGPU)
• Transport protocols: RC, XRC, UD, DC; modern features: UMR, ODP, SR-IOV, multi-rail
• Transport mechanisms: shared memory, CMA, IVSHMEM; modern features: NVLink*, CAPI*, XPMEM* (* upcoming)
MVAPICH User Group Meeting (MUG) 2018 9Network Based Computing Laboratory
• Research is done to explore new designs
• Designs are first presented in conference/journal publications
• Best-performing designs are incorporated into the codebase
• Rigorous QA procedure before making a release
– Exhaustive unit testing
– Various test procedures on diverse range of platforms and interconnects
– Test 19 different benchmarks and applications including, but not limited to
• OMB, IMB, MPICH Test Suite, Intel Test Suite, NAS, ScaLAPACK, and SPEC
– Spend about 18,000 core hours per commit
– Performance regression and tuning
– Applications-based evaluation
– Evaluation on large-scale systems
• All versions (alpha, beta, RC1 and RC2) go through the above testing
Strong Procedure for Design, Development and Release
MVAPICH User Group Meeting (MUG) 2018 10Network Based Computing Laboratory
MVAPICH2 Software Family (Requirements → Library)
• MPI with IB, iWARP, Omni-Path, and RoCE → MVAPICH2
• Advanced MPI features/support, OSU INAM, PGAS and MPI+PGAS with IB, Omni-Path, and RoCE → MVAPICH2-X
• MPI with IB, RoCE & GPU, and support for Deep Learning → MVAPICH2-GDR
• HPC Cloud with MPI & IB → MVAPICH2-Virt
• Energy-aware MPI with IB, iWARP, and RoCE → MVAPICH2-EA
• MPI energy monitoring tool → OEMT
• InfiniBand network analysis and monitoring → OSU INAM
• Microbenchmarks for measuring MPI and PGAS performance → OMB
MVAPICH User Group Meeting (MUG) 2018 11Network Based Computing Laboratory
MVAPICH2 2.3-GA
• Released on 07/23/2018
• Major Features and Enhancements
– Based on MPICH v3.2.1
– Introduce basic support for executing MPI jobs in Singularity
– Improve performance for MPI-3 RMA operations
– Enhancements for Job Startup
• Improved job startup time for OFA-IB-CH3, PSM-CH3, and PSM2-CH3
• On-demand connection management for PSM-CH3 and PSM2-CH3 channels
• Enhance PSM-CH3 and PSM2-CH3 job startup to use non-blocking PMI calls
• Introduce capability to run MPI jobs across multiple InfiniBand subnets
– Enhancements to point-to-point operations
• Enhance performance of point-to-point operations for CH3-Gen2 (InfiniBand), CH3-PSM, and CH3-PSM2 (Omni-Path) channels
• Improve performance for Intra- and Inter-node communication for OpenPOWER architecture
• Enhanced tuning for OpenPOWER, Intel Skylake and Cavium ARM (ThunderX) systems
• Improve performance for host-based transfers when CUDA is enabled
• Improve support for large processes per node and hugepages on SMP systems
– Enhancements to collective operations
• Enhanced performance for Allreduce, Reduce_scatter_block, Allgather, Allgatherv
– Thanks to Danielle Sikich and Adam Moody @ LLNL for the patch
• Add support for non-blocking Allreduce using Mellanox SHARP
– Enhance tuning framework for Allreduce using SHArP
• Enhanced collective tuning for IBM POWER8, IBM POWER9, Intel Skylake, Intel KNL, Intel Broadwell
– Enhancements to process mapping strategies and automatic architecture/network detection
• Improve performance of architecture detection on high core-count systems
• Enhanced architecture detection for OpenPOWER, Intel Skylake and Cavium ARM (ThunderX) systems
• New environment variable MV2_THREADS_BINDING_POLICY for multi-threaded MPI and MPI+OpenMP applications
• Support 'spread', 'bunch', 'scatter', 'linear' and 'compact' placement of threads
– Warn the user if oversubscription of cores is detected
• Enhance MV2_SHOW_CPU_BINDING to enable display of CPU bindings on all nodes
• Added support for MV2_SHOW_CPU_BINDING to display number of OMP threads
• Added logic to detect heterogeneous CPU/HFI configurations in PSM-CH3 and PSM2-CH3 channels
– Thanks to Matias Cabral@Intel for the report
• Enhanced HFI selection logic for systems with multiple Omni-Path HFIs
• Introduce run time parameter MV2_SHOW_HCA_BINDING to show process to HCA bindings
– Miscellaneous enhancements and improved debugging and tools support
• Enhance support for MPI_T PVARs and CVARs
• Enhance debugging support for PSM-CH3 and PSM2-CH3 channels
• Update to hwloc version 1.11.9
• Tested with CLANG v5.0.0
MVAPICH User Group Meeting (MUG) 2018 12Network Based Computing Laboratory
Highlights of MVAPICH2 2.3-GA Release
• Support for highly-efficient inter-node and intra-node communication
• Scalable start-up
• Optimized collectives using SHArP and multi-leaders
• Support for OpenPOWER and ARM architectures
• Performance engineering with MPI_T
• Application scalability and best practices
MVAPICH User Group Meeting (MUG) 2018 13Network Based Computing Laboratory
One-way Latency: MPI over IB with MVAPICH2
[Figure: small-message and large-message one-way latency of MPI over IB with MVAPICH2 on TrueScale-QDR, ConnectX-3-FDR, ConnectIB-Dual FDR, ConnectX-5-EDR, and Omni-Path; reported small-message latencies are 0.98, 1.04, 1.11, 1.15, and 1.19 us across the adapters. Platforms: 3.1 GHz deca-core (Haswell) Intel nodes with PCIe Gen3 and IB/Omni-Path switches (ConnectX-3-FDR: 2.8 GHz deca-core IvyBridge).]
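These latencies are the kind reported by osu_latency from OMB; a much-simplified ping-pong sketch (illustrative only, not the actual benchmark) captures the measurement idea:

```c
/* Sketch: ping-pong estimate of one-way small-message latency between
 * rank 0 and rank 1 (a simplified osu_latency-style loop). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char buf[1] = {0};
    const int iters = 10000;

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)   /* one-way latency is half of the round-trip time */
        printf("one-way latency: %.2f us\n", (t1 - t0) / (2.0 * iters) * 1e6);

    MPI_Finalize();
    return 0;
}
```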
MVAPICH User Group Meeting (MUG) 2018 14Network Based Computing Laboratory
Bandwidth: MPI over IB with MVAPICH2
[Figure: unidirectional and bidirectional bandwidth on the same platforms (TrueScale-QDR, ConnectX-3-FDR, ConnectIB-Dual FDR, ConnectX-5-EDR, Omni-Path). Peak unidirectional bandwidths reported: 3,373, 6,356, 12,358, 12,366, and 12,590 MBytes/sec; peak bidirectional bandwidths: 6,228, 12,161, 21,983, 22,564, and 24,136 MBytes/sec.]
MVAPICH User Group Meeting (MUG) 2018 15Network Based Computing Laboratory
• Near-constant MPI and OpenSHMEM initialization time at any process count
• 10x and 30x improvement in startup time of MPI and OpenSHMEM respectively at 16,384 processes
• Memory consumption reduced for remote endpoint information by O(processes per node)
• 1GB Memory saved per node with 1M processes and 16 processes per node
Towards High Performance and Scalable Startup at Exascale
[Figure: job startup performance and memory required to store endpoint information, comparing state-of-the-art PGAS (P) and MPI (M) startup with the optimized PGAS/MPI design (O); labeled techniques: (a) on-demand connection, (b) PMIX_Ring, (c) PMIX_Ibarrier, (d) PMIX_Iallgather, (e) shmem-based PMI.]
(a) On-demand Connection Management for OpenSHMEM and OpenSHMEM+MPI. S. Chakraborty, H. Subramoni, J. Perkins, A. A. Awan, and D. K. Panda, 20th International Workshop on High-level Parallel Programming Models and Supportive Environments (HIPS '15)
(b) PMI Extensions for Scalable MPI Startup. S. Chakraborty, H. Subramoni, A. Moody, J. Perkins, M. Arnold, and D. K. Panda, Proceedings of the 21st European MPI Users' Group Meeting (EuroMPI/Asia '14)
(c, d) Non-blocking PMI Extensions for Fast MPI Startup. S. Chakraborty, H. Subramoni, A. Moody, A. Venkatesh, J. Perkins, and D. K. Panda, 15th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid '15)
(e) SHMEMPMI – Shared Memory based PMI for Improved Performance and Scalability. S. Chakraborty, H. Subramoni, J. Perkins, and D. K. Panda, 16th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid '16)
MVAPICH User Group Meeting (MUG) 2018 16Network Based Computing Laboratory
Startup Performance on KNL + Omni-Path
[Figure: MPI_Init and Hello World time with MVAPICH2-2.3a on Oakforest-PACS (KNL + Omni-Path), 64 to 64K processes.]
• MPI_Init takes 22 seconds on 229,376 processes on 3,584 KNL nodes (Stampede2 - full scale)
• 8.8 times faster than Intel MPI at 128K processes (Courtesy: TACC)
• At 64K processes, MPI_Init and Hello World take 5.8s and 21s respectively (Oakforest-PACS)
• All numbers reported with 64 processes per node
New designs available since MVAPICH2-2.3a and as patch for SLURM-15.08.8 and SLURM-16.05.1
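For context, startup cost can be observed with a trivial timing harness such as the sketch below (illustrative only; the slide's measurements were taken on Stampede2 and Oakforest-PACS with the designs described above):

```c
/* Minimal sketch for timing MPI_Init ("startup") and a Hello World point.
 * Illustrative only; not the harness used for the slide's numbers. */
#include <mpi.h>
#include <stdio.h>
#include <sys/time.h>

static double now(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec * 1e-6;
}

int main(int argc, char **argv)
{
    double t0 = now();
    MPI_Init(&argc, &argv);        /* dominated by PMI exchange / connection setup */
    double t1 = now();

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    MPI_Barrier(MPI_COMM_WORLD);   /* all ranks up: the "Hello World" point */
    double t2 = now();

    if (rank == 0)
        printf("ranks=%d  MPI_Init=%.2fs  hello=%.2fs\n", size, t1 - t0, t2 - t0);

    MPI_Finalize();
    return 0;
}
```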
MVAPICH User Group Meeting (MUG) 2018 17Network Based Computing Laboratory
Advanced Allreduce Collective Designs Using SHArP
[Figures: mesh-refinement time of MiniAMR and average DDOT Allreduce time of HPCG at (4,28), (8,28), and (16,28) (nodes, PPN) - MVAPICH2 with the SHArP-based design reduces the two metrics by up to 13% and 12% respectively.]
M. Bayatpour, S. Chakraborty, H. Subramoni, X. Lu, and D. K. Panda, Scalable Reduction Collectives with Data Partitioning-based Multi-Leader Design, Supercomputing '17.
SHArP Support (blocking and non-blocking) is available
since MVAPICH2 2.3a
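The non-blocking Allreduce path that SHArP can offload is driven through the standard MPI-3 interface; a minimal sketch (illustrative only, not from the slides) overlaps the reduction with independent work:

```c
/* Sketch: non-blocking Allreduce overlapped with computation. With SHArP
 * enabled, MVAPICH2 can offload the reduction to the switch hierarchy;
 * the application code itself is plain MPI-3. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    double local_dot = 1.0, global_dot = 0.0;   /* e.g., a DDOT partial result */
    MPI_Request req;

    MPI_Iallreduce(&local_dot, &global_dot, 1, MPI_DOUBLE,
                   MPI_SUM, MPI_COMM_WORLD, &req);

    /* ... independent computation here overlaps with the reduction ... */

    MPI_Wait(&req, MPI_STATUS_IGNORE);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) printf("global dot = %f\n", global_dot);

    MPI_Finalize();
    return 0;
}
```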
MVAPICH User Group Meeting (MUG) 2018 18Network Based Computing Laboratory
Intra-node Point-to-Point Performance on OpenPOWER
[Figures: intra-socket small-message latency (0.30 us with MVAPICH2-2.3), large-message latency, bandwidth, and bi-directional bandwidth, comparing MVAPICH2-2.3, SpectrumMPI-10.1.0.2, and OpenMPI-3.0.0.]
Platform: two OpenPOWER (Power8-ppc64le) nodes using a Mellanox EDR (MT4115) HCA
MVAPICH User Group Meeting (MUG) 2018 19Network Based Computing Laboratory
Inter-node Point-to-Point Performance on OpenPOWER
[Figures: inter-node small-message latency, large-message latency, bandwidth, and bi-directional bandwidth, comparing MVAPICH2-2.3, SpectrumMPI-10.1.0.2, and OpenMPI-3.0.0.]
Platform: two OpenPOWER (Power8-ppc64le) nodes using a Mellanox EDR (MT4115) HCA
MVAPICH User Group Meeting (MUG) 2018 20Network Based Computing Laboratory
Intra-node Point-to-Point Performance on ARM Cortex-A72
[Figures: small-message latency (0.27 us at 1 byte), large-message latency, bandwidth, and bi-directional bandwidth with MVAPICH2-2.3.]
Platform: dual-socket ARM Cortex-A72 (aarch64) CPU with 64 cores (32 cores per socket)
MVAPICH User Group Meeting (MUG) 2018 21Network Based Computing Laboratory
● Enhance existing support for MPI_T in MVAPICH2 to expose a richer set of performance and control variables
● Get and display MPI Performance Variables (PVARs) made available by the runtime in TAU
● Control the runtime's behavior via MPI Control Variables (CVARs)
● Introduced support for new MPI_T based CVARs in MVAPICH2
○ MPIR_CVAR_MAX_INLINE_MSG_SZ, MPIR_CVAR_VBUF_POOL_SIZE, MPIR_CVAR_VBUF_SECONDARY_POOL_SIZE
● TAU enhanced with support for setting MPI_T CVARs in a non-interactive mode for uninstrumented applications
● S. Ramesh, A. Maheo, S. Shende, A. Malony, H. Subramoni, and D. K. Panda, MPI Performance Engineering with the MPI Tool Interface: the Integration of MVAPICH and TAU, EuroMPI/USA ‘17, Best Paper Finalist
● More details in Sameer Shende’s talk today and poster presentations
Performance Engineering Applications using MVAPICH2 and TAU
[Figures: VBUF usage without and with CVAR-based tuning, as displayed by ParaProf.]
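The CVARs named above are reachable through the standard MPI_T interface; a minimal sketch (illustrative only, error handling omitted, and assuming the variable is exposed as an integer in the running build) locates and reads MPIR_CVAR_VBUF_POOL_SIZE:

```c
/* Sketch: find and read an MPI_T control variable (CVAR) by name. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    int provided, ncvar;
    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
    MPI_Init(&argc, &argv);

    MPI_T_cvar_get_num(&ncvar);
    for (int i = 0; i < ncvar; i++) {
        char name[256], desc[256];
        int name_len = sizeof(name), desc_len = sizeof(desc);
        int verbosity, bind, scope;
        MPI_Datatype dtype;
        MPI_T_enum etype;

        MPI_T_cvar_get_info(i, name, &name_len, &verbosity, &dtype, &etype,
                            desc, &desc_len, &bind, &scope);

        if (strcmp(name, "MPIR_CVAR_VBUF_POOL_SIZE") == 0) {
            MPI_T_cvar_handle h;
            int count, value;
            MPI_T_cvar_handle_alloc(i, NULL, &h, &count);
            MPI_T_cvar_read(h, &value);          /* assumes an integer-valued CVAR */
            printf("%s = %d\n", name, value);
            MPI_T_cvar_handle_free(&h);
        }
    }

    MPI_Finalize();
    MPI_T_finalize();
    return 0;
}
```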
MVAPICH User Group Meeting (MUG) 2018 22Network Based Computing Laboratory
Performance of SPEC MPI 2007 Benchmarks (KNL + Omni-Path)
[Figure: execution time of milc, leslie3d, pop2, lammps, wrf2, tera_tf, and lu with Intel MPI 18.0.0 vs. MVAPICH2 2.3; per-benchmark improvements of roughly 1-10%.]
• MVAPICH2 outperforms Intel MPI by up to 10%
• 448 processes on 7 KNL nodes of TACC Stampede2 (64 ppn)
MVAPICH User Group Meeting (MUG) 2018 23Network Based Computing Laboratory
Performance of SPEC MPI 2007 Benchmarks (Skylake + Omni-Path)
[Figure: execution time of milc, leslie3d, pop2, lammps, wrf2, GaP, tera_tf, and lu with Intel MPI 18.0.0 vs. MVAPICH2 2.3; per-benchmark differences range from -4% to 38%.]
• MVAPICH2 outperforms Intel MPI by up to 38%
• 480 processes on 10 Skylake nodes of TACC Stampede2 (48 ppn)
MVAPICH User Group Meeting (MUG) 2018 24Network Based Computing Laboratory
Application Scalability on Skylake and KNL
MiniFE (1300x1300x1300 ~ 910 GB)
Runtime parameters: MV2_SMPI_LENGTH_QUEUE=524288 PSM2_MQ_RNDV_SHM_THRESH=128K PSM2_MQ_RNDV_HFI_THRESH=128K
[Figures: MVAPICH2 execution time for MiniFE, NEURON (YuEtAl2012), and Cloverleaf (bm64, MPI+OpenMP with NUM_OMP_THREADS = 2), scaling from 48 to 8,192 processes on the KNL (64/68 ppn) and Skylake (48 ppn) partitions.]
Courtesy: Mahidhar Tatineni @ SDSC, Dong Ju (DJ) Choi @ SDSC, and Samuel Khuvis @ OSC. Testbed: TACC Stampede2 using MVAPICH2-2.3b
MVAPICH User Group Meeting (MUG) 2018 25Network Based Computing Laboratory
• Dynamic and Adaptive Tag Matching
• Dynamic and Adaptive Communication Protocols
• Support for FPGA-based Accelerators
MVAPICH2 Upcoming Features
MVAPICH User Group Meeting (MUG) 2018 26Network Based Computing Laboratory
Dynamic and Adaptive Tag Matching
[Figures: normalized total tag-matching time and normalized memory overhead per process at 512 processes, relative to the default scheme (lower is better).]
Adaptive and Dynamic Design for MPI Tag Matching; M. Bayatpour, H. Subramoni, S. Chakraborty, and D. K. Panda; IEEE Cluster 2016 [Best Paper Nominee]
Challenge: tag matching is a significant overhead for receivers; existing solutions are static, do not adapt dynamically to the communication pattern, and do not consider memory overhead.
Solution: a new tag-matching design that adapts dynamically to communication patterns, uses different strategies for different ranks, and bases decisions on the number of request objects that must be traversed before hitting the required one.
Results: better performance than other state-of-the-art tag-matching schemes, with minimum memory consumption.
Will be available in future MVAPICH2 releases
MVAPICH User Group Meeting (MUG) 2018 27Network Based Computing Laboratory
Dynamic and Adaptive MPI Point-to-point Communication Protocols
[Figure: eager thresholds for an example communication pattern between process pairs on two nodes - Default: 16 KB for every pair; Manually Tuned: 128 KB for every pair; Dynamic + Adaptive: 32 KB, 64 KB, 128 KB, and 32 KB selected per pair.]
H. Subramoni, S. Chakraborty, D. K. Panda, Designing Dynamic & Adaptive MPI Point-to-Point Communication Protocols for Efficient Overlap of Computation & Communication, ISC'17 - Best Paper
[Figures: execution time and relative memory consumption of Amber at 128-1K processes for Default, Threshold=17K, Threshold=64K, Threshold=128K, and Dynamic Threshold configurations.]
Design trade-offs:
• Default: poor overlap, low memory requirement - low performance, high productivity
• Manually Tuned: good overlap, high memory requirement - high performance, low productivity
• Dynamic + Adaptive: good overlap, optimal memory requirement - high performance, high productivity
Desired eager threshold per process pair: 0-4: 32 KB, 1-5: 64 KB, 2-6: 128 KB, 3-7: 32 KB
(A minimal sketch of how the eager threshold affects a simple exchange follows below.)
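To make the eager-threshold discussion concrete, the sketch below (illustrative only) shows an exchange whose underlying protocol depends on whether the message size is below (eager) or above (rendezvous) the per-pair threshold; a static threshold can be set with the MV2_IBA_EAGER_THRESHOLD environment variable mentioned later in this talk, while the design above selects it per pair automatically:

```c
/* Sketch: messages below the eager threshold are sent eagerly; larger ones
 * use a rendezvous handshake. Sizes and the 64 KB threshold are illustrative. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    size_t small = 16 * 1024;    /* below a 64 KB threshold: eager path        */
    size_t large = 256 * 1024;   /* above the threshold: rendezvous handshake  */
    char *buf = malloc(large);

    if (rank == 0) {
        MPI_Send(buf, (int)small, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        MPI_Send(buf, (int)large, MPI_CHAR, 1, 1, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(buf, (int)small, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Recv(buf, (int)large, MPI_CHAR, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}
```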
MVAPICH User Group Meeting (MUG) 2018 28Network Based Computing Laboratory
• FPGAs are emerging in the market for HPC and Deep Learning
• How can FPGA functionality be exploited to accelerate the MPI library?
• Funded Collaboration with Pattern Computers
Support for FPGA-Based Accelerators
MVAPICH User Group Meeting (MUG) 2018 29Network Based Computing Laboratory
MVAPICH User Group Meeting (MUG) 2018 30Network Based Computing Laboratory
MVAPICH2-X for Hybrid MPI + PGAS Applications
• Current model – separate runtimes for OpenSHMEM/UPC/UPC++/CAF and MPI
– Possible deadlock if both runtimes are not progressed
– Consumes more network resources
• Unified communication runtime for MPI, UPC, UPC++, OpenSHMEM, and CAF
– Available since 2012 (starting with MVAPICH2-X 1.9)
– http://mvapich.cse.ohio-state.edu
[Figure: High Performance and Scalable Unified Communication Runtime - parallel programming models (MPI, PGAS (UPC, OpenSHMEM, CAF, UPC++), and hybrid MPI + X (MPI + PGAS + OpenMP/Cilk)) layered over diverse APIs and mechanisms (optimized point-to-point primitives, blocking and non-blocking collective algorithms, remote memory access, fault tolerance, active messages, scalable job startup, introspection & analysis with OSU INAM), with support for modern networking technologies (InfiniBand, iWARP, RoCE, Omni-Path), efficient intra-node communication (POSIX SHMEM, CMA, LiMIC, XPMEM), and modern multi-/many-core architectures (Intel Xeon, Intel Xeon Phi, OpenPOWER, ARM).]
MVAPICH User Group Meeting (MUG) 2018 31Network Based Computing Laboratory
• Advanced MPI Features/Support – InfiniBand Features (DC, UMR, ODP, SHArP, and Core-Direct)
– Kernel-assisted collectives
• Hybrid MPI+PGAS (OpenSHMEM, UPC, UPC++, and CAF)
• Allows combining programming models (pure MPI, MPI+OpenMP, MPI+OpenSHMEM, MPI+UPC, MPI+UPC++, ...) in the same program to get the best performance and scalability (a minimal hybrid sketch follows below)
Major Features of MVAPICH2-X
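As a minimal illustration (not from the slides, and assuming MVAPICH2-X-style support for initializing both runtimes in one program), the hybrid model mixes MPI collectives with OpenSHMEM one-sided operations over the unified runtime:

```c
/* Sketch: hybrid MPI + OpenSHMEM in a single program over a unified runtime. */
#include <mpi.h>
#include <shmem.h>
#include <stdio.h>

static long counter = 0;             /* symmetric (remotely accessible) variable */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    shmem_init();                     /* both models run on the same set of processes */

    int me = shmem_my_pe();
    int npes = shmem_n_pes();

    /* One-sided put of my rank into the next PE's counter. */
    shmem_long_p(&counter, me, (me + 1) % npes);
    shmem_barrier_all();

    long sum = 0;                     /* MPI collective over the same processes */
    MPI_Allreduce(&counter, &sum, 1, MPI_LONG, MPI_SUM, MPI_COMM_WORLD);
    if (me == 0) printf("sum of received ranks = %ld (PEs = %d)\n", sum, npes);

    shmem_finalize();
    MPI_Finalize();
    return 0;
}
```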
MVAPICH User Group Meeting (MUG) 2018 32Network Based Computing Laboratory
Optimized CMA-based Collectives for Large Messages
[Figures: Performance of MPI_Gather on KNL nodes (64 PPN) - latency on 2, 4, and 8 nodes (128, 256, and 512 processes) for MVAPICH2-2.3a, Intel MPI 2017, OpenMPI 2.1.0, and MVAPICH2-X; MVAPICH2-X is roughly 2.5x-17x better at large message sizes.]
• Significant improvement over existing implementations for Scatter/Gather with 1 MB messages (up to 4x on KNL, 2x on Broadwell, 14x on OpenPOWER); a minimal MPI_Gather sketch follows below
• New two-level algorithms for better scalability
• Improved performance for other collectives (Bcast, Allgather, and Alltoall)
S. Chakraborty, H. Subramoni, and D. K. Panda, Contention Aware Kernel-Assisted MPI Collectives for Multi/Many-core Systems, IEEE Cluster '17, Best Paper Finalist
Available since MVAPICH2-X 2.3b
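For reference, the collective measured above is the standard MPI_Gather; a minimal usage sketch (illustrative only, mirroring what an osu_gather-style benchmark exercises):

```c
/* Sketch: the root gathers one 1 MB block from every rank. With CMA-based
 * designs the library can move these blocks through the kernel without
 * intermediate copies. */
#include <mpi.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int block = 1 << 20;                     /* 1 MB per rank, as on the slide */
    char *sendbuf = malloc(block);
    char *recvbuf = (rank == 0) ? malloc((size_t)block * size) : NULL;
    memset(sendbuf, rank, block);

    MPI_Gather(sendbuf, block, MPI_CHAR,
               recvbuf, block, MPI_CHAR, 0, MPI_COMM_WORLD);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}
```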
MVAPICH User Group Meeting (MUG) 2018 33Network Based Computing Laboratory
Advanced Allreduce Collective Designs Using SHArP and Multi-Leaders
• Socket-based design can reduce the communication latency by 23% and 40% on Broadwell + IB-EDR nodes
• Support is available since MVAPICH2-X 2.3b
[Figures: OSU micro-benchmark Allreduce latency (16 nodes, 28 PPN, 4 bytes-4 KB) and HPCG communication latency (28 PPN, 56-448 processes) for MVAPICH2, the proposed socket-based design, and MVAPICH2+SHArP; lower is better, with labeled reductions of 23% and 40%.]
M. Bayatpour, S. Chakraborty, H. Subramoni, X. Lu, and D. K. Panda, Scalable Reduction Collectives with Data Partitioning-based Multi-Leader Design, Supercomputing '17.
MVAPICH User Group Meeting (MUG) 2018 34Network Based Computing Laboratory
Performance of MPI_Allreduce On Stampede2 (10,240 Processes)
[Figures: OSU micro-benchmark MPI_Allreduce latency at 10,240 processes (64 PPN) for 4-byte to 256 KB messages, comparing MVAPICH2, MVAPICH2-OPT, and IMPI; MVAPICH2-OPT shows up to a 2.4x advantage.]
• MPI_Allreduce latency with 32K bytes reduced by 2.4X
MVAPICH User Group Meeting (MUG) 2018 35Network Based Computing Laboratory
Application-Level Performance with Graph500 and Sort
J. Jose, S. Potluri, K. Tomko and D. K. Panda, Designing Scalable Graph500 Benchmark with Hybrid MPI+OpenSHMEM Programming Models, International Supercomputing Conference (ISC’13), June 2013
J. Jose, K. Kandalla, M. Luo and D. K. Panda, Supporting Hybrid MPI and OpenSHMEM over InfiniBand: Design and Performance Evaluation, Int'l Conference on Parallel Processing (ICPP '12), September 2012
[Figure: Graph500 execution time at 4K-16K processes for MPI-Simple, MPI-CSC, MPI-CSR, and Hybrid (MPI+OpenSHMEM); the hybrid design is up to 7.6x-13x faster than MPI-Simple.]
• Performance of the hybrid (MPI+OpenSHMEM) Graph500 design
• 8,192 processes: 2.4x improvement over MPI-CSR, 7.6x improvement over MPI-Simple
• 16,384 processes: 1.5x improvement over MPI-CSR, 13x improvement over MPI-Simple
J. Jose, K. Kandalla, S. Potluri, J. Zhang and D. K. Panda, Optimizing Collective Communication in OpenSHMEM, Int'l Conference on Partitioned Global Address Space Programming Models (PGAS '13), October 2013.
Sort Execution Time
[Figure: Sort execution time for input sizes of 500 GB-4 TB at 512-4K processes, MPI vs. Hybrid; 51% improvement at 4 TB.]
• Performance of the hybrid (MPI+OpenSHMEM) Sort application
• 4,096 processes, 4 TB input size: MPI 2408 sec (0.16 TB/min), Hybrid 1172 sec (0.36 TB/min) - a 51% improvement over the MPI design
MVAPICH User Group Meeting (MUG) 2018 36Network Based Computing Laboratory
• MVAPICH2-X 2.3RC1 will be released soon with the following features
– Support for XPMEM (point-to-point, Reduce, and All-reduce)
– Efficient Asynchronous Progress with Full Subscription
– Contention-aware CMA-based collective support for OpenSHMEM, UPC, and UPC++
• Implicit On-Demand Paging (ODP) Support
• Other collectives with XPMEM-based Support
MVAPICH2-X Upcoming Features
MVAPICH User Group Meeting (MUG) 2018 37Network Based Computing Laboratory
Shared Address Space (XPMEM)-based Collectives Design
[Figures: OSU_Allreduce and OSU_Reduce latency on Broadwell with 256 processes (16 KB-4 MB) for MVAPICH2-2.3b, IMPI-2017v1.132, and MVAPICH2-X; MVAPICH2-X is up to 1.8x better for Allreduce and up to 4x better for Reduce at 4 MB.]
• "Shared Address Space"-based true zero-copy reduction collective designs in MVAPICH2
• Offloads computation/communication to peer ranks in reduction collective operations
• Up to 4x improvement for 4 MB Reduce and up to 1.8x improvement for 4 MB Allreduce
J. Hashmi, S. Chakraborty, M. Bayatpour, H. Subramoni, and D. Panda, Designing Efficient Shared Address Space Reduction Collectives for Multi-/Many-cores, International Parallel & Distributed Processing Symposium (IPDPS '18), May 2018.
MVAPICH User Group Meeting (MUG) 2018 38Network Based Computing Laboratory
[Figures: MPI_Allreduce latency with XPMEM on OpenPOWER (16 KB-2 MB) at one and two nodes (PPN=20) for MVAPICH2-X, SpectrumMPI-10.1.0, and OpenMPI-3.0.0; labeled gains of roughly 2x-4x over the other libraries.]
• Optimized MPI All-Reduce design in MVAPICH2
– Up to 2x performance improvement over Spectrum MPI and 4x over OpenMPI for intra-node
Optimized runtime parameters: MV2_CPU_BINDING_POLICY=hybrid MV2_HYBRID_BINDING_POLICY=bunch
More Details in Poster: Designing Shared Address Space MPI libraries in the Many-core Era - Jahanzeb Hashmi, The Ohio State University
MVAPICH User Group Meeting (MUG) 2018 39Network Based Computing Laboratory
Enhanced Asynchronous Progress Communication
• Achieves overlap without using specialized hardware or software resources
• Can work with MPI tasks in fully-subscribed mode
[Figures: improvements with SPEC MPI on KNL + Omni-Path and Skylake + Omni-Path, and with P3DFFT on Broadwell + InfiniBand - up to 38% improvement for SPEC MPI applications on 384 processes and 44% with the P3DFFT application on 448 processes.]
A. Ruhela, H. Subramoni, S. Chakraborty, M. Bayatpour, P. Kousha, and D. K. Panda, Efficient Asynchronous Communication Progress for MPI without Dedicated Resources, EuroMPI 2018
Efficient Asynchronous Communication Progress for MPI without Dedicated Resources - Amit Ruhela, The Ohio State University
MVAPICH User Group Meeting (MUG) 2018 40Network Based Computing Laboratory
SPEC MPI 2007 Benchmarks: Broadwell + InfiniBand
[Figure: execution time of MILC, Leslie3D, POP2, LAMMPS, WRF2, and LU with Intel MPI 18.1.163 vs. MVAPICH2-X; per-benchmark differences range from -12% to 31%.]
• MVAPICH2-X outperforms Intel MPI by up to 31%
• Configuration: 448 processes on 16 Intel E5-2680v4 (Broadwell) nodes with 28 PPN, interconnected with 100 Gbps Mellanox MT4115 EDR ConnectX-4 HCAs
MVAPICH User Group Meeting (MUG) 2018 41Network Based Computing Laboratory
SPEC MPI 2007 Benchmarks: KNL + Omni-Path
[Figure: execution time of MILC, Leslie3D, POP2, LAMMPS, WRF2, GAP, and LU with Intel MPI 18.0.2 vs. MVAPICH2-X; per-benchmark differences range from -7% to 22%.]
• MVAPICH2-X outperforms Intel MPI by up to 22%
• Configuration: 384 processes on 8 Intel Xeon Phi 7250 (KNL) nodes with 48 processes per node; each KNL has 68 cores on a single socket and a 100 Gb/sec Intel Omni-Path network
MVAPICH User Group Meeting (MUG) 2018 42Network Based Computing Laboratory
• Introduced by Mellanox to avoid pinning the pages of registered memory regions
• ODP-aware runtime could reduce the size of pin-down buffers while maintaining performance
Implicit On-Demand Paging (ODP)
M. Li, X. Lu, H. Subramoni, and D. K. Panda, “Designing Registration Caching Free High-Performance MPI Library with Implicit On-Demand Paging (ODP) of InfiniBand”, HiPC ‘17
[Figure: execution time of NAS benchmarks (CG, EP, FT, IS, MG, LU, SP), AWP-ODC, and Graph500 at 256 processes with pin-down, explicit ODP, and implicit ODP; implicit ODP maintains performance comparable to pin-down.]
MVAPICH User Group Meeting (MUG) 2018 43Network Based Computing Laboratory
XPMEM Collectives: osu_bcast and osu_scatter
• 28 MPI processes on a single dual-socket Broadwell E5-2680v4 (2x14-core) node
[Figures: osu_bcast and osu_scatter latency for Intel MPI 2018, OpenMPI 3.0.1, MV2-2.3Xrc1 (CMA collectives), and MV2-XPMEM; up to 5x better than Open MPI (bcast) and 2.4x better than the MV2 CMA collectives (scatter).]
More Details in Poster: Designing Shared Address Space MPI libraries in the Many-core Era - Jahanzeb Hashmi, The Ohio State University
MVAPICH User Group Meeting (MUG) 2018 44Network Based Computing Laboratory
XPMEM Collectives: osu_gather and osu_alltoall
• 28 MPI processes on a single dual-socket Broadwell E5-2680v4 (2x14-core) node
[Figures: osu_gather and osu_alltoall latency for Intel MPI 2018, OpenMPI 3.0.1, MV2X-2.3rc1 (CMA collectives), and MV2-XPMEM; gather is up to 5x better than Open MPI and up to 6x better than IMPI and OMPI. For alltoall at 28 PPN the data no longer fits in the shared L3, which leads to the same performance as existing designs.]
More Details in Poster: Designing Shared Address Space MPI libraries in the Many-core Era - Jahanzeb Hashmi, The Ohio State University
MVAPICH User Group Meeting (MUG) 2018 45Network Based Computing Laboratory
MVAPICH User Group Meeting (MUG) 2018 46Network Based Computing Laboratory
CUDA-Aware MPI: MVAPICH2-GDR 1.8-2.3 Releases
• Support for MPI communication from NVIDIA GPU device memory
• High-performance RDMA-based inter-node point-to-point communication (GPU-GPU, GPU-Host, and Host-GPU)
• High-performance intra-node point-to-point communication for multi-GPU adapters/node (GPU-GPU, GPU-Host, and Host-GPU)
• Taking advantage of CUDA IPC (available since CUDA 4.1) in intra-node communication for multiple GPU adapters/node
• Optimized and tuned collectives for GPU device buffers
• MPI datatype support for point-to-point and collective communication from GPU device buffers
• Unified memory
(A minimal CUDA-aware send/receive sketch follows below.)
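As a minimal sketch (illustrative only) of what CUDA-aware MPI means for application code, a buffer allocated with cudaMalloc is passed directly to MPI calls and the library handles GPUDirect RDMA or staging internally; at run time this path is enabled with MV2_USE_CUDA=1, as in the runtime parameters shown later for HOOMD-blue:

```c
/* Sketch: CUDA-aware point-to-point exchange of GPU device buffers.
 * Assumes a CUDA-aware MPI library such as MVAPICH2-GDR. */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1 << 20;                        /* 1M floats on the device */
    float *d_buf;
    cudaMalloc((void **)&d_buf, n * sizeof(float));
    cudaMemset(d_buf, 0, n * sizeof(float));

    /* Device pointers go straight into MPI calls - no explicit staging
     * through a host buffer is needed with a CUDA-aware library. */
    if (rank == 0)
        MPI_Send(d_buf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(d_buf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```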
MVAPICH User Group Meeting (MUG) 2018 47Network Based Computing Laboratory
• Released on 11/09/2017
• Major Features and Enhancements
– Based on MVAPICH2 2.2
– Support for CUDA 9.0
– Add support for Volta (V100) GPU
– Support for OpenPOWER with NVLink
– Efficient Multiple CUDA stream-based IPC communication for multi-GPU systems with and without NVLink
– Enhanced performance of GPU-based point-to-point communication
– Leverage Linux Cross Memory Attach (CMA) feature for enhanced host-based communication
– Enhanced performance of MPI_Allreduce for GPU-resident data
– InfiniBand Multicast (IB-MCAST) based designs for GPU-based broadcast and streaming applications
• Basic support for IB-MCAST designs with GPUDirect RDMA
• Advanced support for zero-copy IB-MCAST designs with GPUDirect RDMA
• Advanced reliability support for IB-MCAST designs
– Efficient broadcast designs for Deep Learning applications
– Enhanced collective tuning on Xeon, OpenPOWER, and NVIDIA DGX-1 systems
MVAPICH2-GDR 2.3a
MVAPICH User Group Meeting (MUG) 2018 48Network Based Computing Laboratory
Optimized MVAPICH2-GDR Design
[Figures: GPU-GPU inter-node latency, bandwidth, and bi-bandwidth for MV2 (no GDR) vs. MV2-GDR-2.3a - 1.88 us small-message latency (11x better), roughly 9x higher bandwidth, and roughly 10x higher bi-bandwidth.]
Platform: MVAPICH2-GDR-2.3a on an Intel Haswell (E5-2687W @ 3.10 GHz, 20 cores) node with an NVIDIA Volta V100 GPU and Mellanox ConnectX-4 EDR HCA; CUDA 9.0, Mellanox OFED 4.0 with GPU-Direct RDMA
MVAPICH User Group Meeting (MUG) 2018 49Network Based Computing Laboratory
• Platform: Wilkes (Intel Ivy Bridge + NVIDIA Tesla K20c + Mellanox Connect-IB)
• HOOMD-blue version 1.0.5
• GDRCOPY enabled: MV2_USE_CUDA=1 MV2_IBA_HCA=mlx5_0 MV2_IBA_EAGER_THRESHOLD=32768 MV2_VBUF_TOTAL_SIZE=32768 MV2_USE_GPUDIRECT_LOOPBACK_LIMIT=32768 MV2_USE_GPUDIRECT_GDRCOPY=1 MV2_USE_GPUDIRECT_GDRCOPY_LIMIT=16384
Application-Level Evaluation (HOOMD-blue)
[Figures: average time steps per second (TPS) for 64K- and 256K-particle HOOMD-blue runs at 4-32 processes, MV2 vs. MV2+GDR; roughly 2x improvement in both cases.]
MVAPICH User Group Meeting (MUG) 2018 50Network Based Computing Laboratory
Application-Level Evaluation (Cosmo) and Weather Forecasting in Switzerland
[Figures: normalized execution time on the CSCS GPU cluster (16-96 GPUs) and the Wilkes GPU cluster (4-32 GPUs) for default, callback-based, and event-based designs.]
• 2x improvement on 32 GPU nodes
• 30% improvement on 96 GPU nodes (8 GPUs/node)
C. Chu, K. Hamidouche, A. Venkatesh, D. Banerjee , H. Subramoni, and D. K. Panda, Exploiting Maximal Overlap for Non-Contiguous Data Movement Processing on Modern GPU-enabled Systems, IPDPS’16
On-going collaboration with CSCS and MeteoSwiss (Switzerland) in co-designing MV2-GDR and Cosmo Application
Cosmo model: http://www2.cosmo-model.org/content/tasks/operational/meteoSwiss/
MVAPICH User Group Meeting (MUG) 2018 51Network Based Computing Laboratory
Multi-stream Communication using CUDA IPC on OpenPOWER and DGX-1
• Up to 16% higher Device to Device (D2D) bandwidth on OpenPOWER + NVLink inter-connect
• Up to 30% higher D2D bandwidth on DGX-1 with NVLink
[Figures: point-to-point device-to-device (D2D) bandwidth with 1-stream vs. 4-stream CUDA IPC designs - up to 16% higher on OpenPOWER + NVLink and up to 30% higher on DGX-1 with NVLink.]
Available since MVAPICH2-GDR-2.3a
MVAPICH User Group Meeting (MUG) 2018 52Network Based Computing Laboratory
CMA-based Intra-node Communication Support
[Figures: intra-node host-to-host (H2H) latency and bandwidth for MV2-GDR without and with CMA.]
Platform: MVAPICH2-GDR-2.3a on an Intel Broadwell (E5-2680 v4, 28 cores) node with an NVIDIA Tesla K80 GPU and Mellanox ConnectX-4 EDR HCA; CUDA 8.0, Mellanox OFED 4.0 with GPU-Direct RDMA
• Up to 30% lower host-to-host (H2H) latency and 30% higher H2H bandwidth
MVAPICH User Group Meeting (MUG) 2018 53Network Based Computing Laboratory
MVAPICH2-GDR: Performance on OpenPOWER (NVLink + Pascal)
[Figures: intra-node small/large-message latency and bandwidth (intra-socket via NVLink vs. inter-socket) and inter-node latency and bandwidth.]
• Intra-node bandwidth: 32 GB/sec (NVLink); intra-node latency: 16.8 us (without GPUDirect RDMA)
• Inter-node bandwidth: 6 GB/sec (FDR); inter-node latency: 22 us (without GPUDirect RDMA)
Platform: OpenPOWER (ppc64le) nodes equipped with a dual-socket CPU, 4 Pascal P100-SXM GPUs, and a 4X-FDR InfiniBand interconnect
MVAPICH User Group Meeting (MUG) 2018 54Network Based Computing Laboratory
• Deep Learning frameworks are a different game altogether
– Unusually large message sizes (order of megabytes)
– Most communication based on GPU buffers
• Existing state-of-the-art
– cuDNN, cuBLAS, NCCL --> scale-up performance
– NCCL2, CUDA-Aware MPI --> scale-out performance
• For small and medium message sizes only!
• Proposed: Can we co-design the MPI runtime (MVAPICH2-GDR) and the DL framework (Caffe) to achieve both?
– Efficient Overlap of Computation and Communication
– Efficient Large-Message Communication (Reductions)
– What application co-designs are needed to exploit communication-runtime co-designs?
Deep Learning: New Challenges for MPI Runtimes
[Figure: scale-up vs. scale-out performance landscape - cuDNN, cuBLAS, and NCCL/NCCL2 deliver scale-up performance; MPI, gRPC, and Hadoop deliver scale-out performance; the proposed co-designs target both.]
A. A. Awan, K. Hamidouche, J. M. Hashmi, and D. K. Panda, S-Caffe: Co-designing MPI Runtimes and Caffe for Scalable Deep Learning on Modern GPU Clusters. Proceedings of the 22nd ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP '17)
MVAPICH User Group Meeting (MUG) 2018 55Network Based Computing Laboratory
Large-Message Optimized Collectives for Deep Learning
[Figures: latency of Reduce (2-128 MB on 192 GPUs; 64 MB on 128-192 GPUs), Allreduce (2-128 MB on 64 GPUs; 128 MB on 16-64 GPUs), and Bcast (2-128 MB on 64 GPUs; 128 MB on 16-64 GPUs).]
• MV2-GDR provides optimized collectives for large message sizes
• Optimized Reduce, Allreduce, and Bcast
• Good scaling with large numbers of GPUs
• Available since MVAPICH2-GDR 2.2 GA
MVAPICH User Group Meeting (MUG) 2018 56Network Based Computing Laboratory
• MVAPICH2-GDR offers excellent performance via advanced designs for MPI_Allreduce.
• Up to 22% better performance on Wilkes2 cluster (16 GPUs)
Exploiting CUDA-Aware MPI for TensorFlow (Horovod)
[Figure: TensorFlow (Horovod) images/sec (higher is better) on 1-16 GPUs (4 GPUs/node) with MVAPICH2 vs. MVAPICH2-GDR.]
MVAPICH2-GDR is up to 22% faster than MVAPICH2
MVAPICH User Group Meeting (MUG) 2018 57Network Based Computing Laboratory
MVAPICH2-GDR on Containers with Negligible Overhead
[Figures: GPU-GPU inter-node latency, bandwidth, and bi-bandwidth - Docker vs. native performance is essentially identical.]
Platform: MVAPICH2-GDR-2.3a on an Intel Haswell (E5-2687W @ 3.10 GHz, 20 cores) node with an NVIDIA Volta V100 GPU and Mellanox ConnectX-4 EDR HCA; CUDA 9.0, Mellanox OFED 4.0 with GPU-Direct RDMA
MVAPICH User Group Meeting (MUG) 2018 58Network Based Computing Laboratory
• MVAPICH2-GDR 2.3b will be released soon
– Scalable host-based collectives
– Optimized Support for Deep Learning (Caffe and CNTK)
– Support for Streaming Applications
• Optimized Collectives for Multi-Rails (Sierra)
• Integrated Collective Support with SHArP
MVAPICH2-GDR Upcoming Features
MVAPICH User Group Meeting (MUG) 2018 59Network Based Computing Laboratory
Scalable Host-based Collectives on OpenPOWER (Intra-node Reduce & Alltoall)
[Figures: intra-node Reduce and Alltoall latency (nodes=1, PPN=20) for small (4 bytes-4 KB) and large (8 KB-1 MB) messages, comparing MVAPICH2-GDR, SpectrumMPI-10.1.0.2, and OpenMPI-3.0.0; labeled gains of 3.2x-5.2x for small messages and 1.2x-3.2x for large messages.]
• Up to 5x and 3x performance improvement by MVAPICH2 for small and large messages respectively
MVAPICH User Group Meeting (MUG) 2018 60Network Based Computing Laboratory
MVAPICH2-GDR: Allreduce Comparison with Baidu and OpenMPI
• 16 GPUs (4 nodes): MVAPICH2-GDR vs. Baidu-allreduce and OpenMPI 3.0
[Figures: Allreduce latency across small (4 bytes-128 KB), medium (512 KB-4 MB), and large (8 MB-512 MB) message ranges; MVAPICH2-GDR is roughly 2x better than Baidu-allreduce and up to 30x better than OpenMPI (which is itself about 5x slower than Baidu), with roughly 10x and 4x advantages in the other ranges.]
• Available since MVAPICH2-GDR 2.3a
MVAPICH User Group Meeting (MUG) 2018 61Network Based Computing Laboratory
MVAPICH2-GDR vs. NCCL2 – Broadcast Operation
• Optimized designs in MVAPICH2-GDR 2.3b* offer better or comparable performance for most cases
• MPI_Bcast (MVAPICH2-GDR) vs. ncclBcast (NCCL2) on 16 K-80 GPUs
*Will be available with the upcoming MVAPICH2-GDR 2.3b
[Figures: broadcast latency (4 bytes-64 MB) with 1 GPU/node and 2 GPUs/node; MVAPICH2-GDR is up to ~10x and ~4x better respectively.]
Platform: Intel Xeon (Broadwell) nodes equipped with a dual-socket CPU, 2 K-80 GPUs, and EDR InfiniBand interconnect
MVAPICH User Group Meeting (MUG) 2018 62Network Based Computing Laboratory
MVAPICH2-GDR vs. NCCL2 – Reduce Operation
• Optimized designs in MVAPICH2-GDR 2.3b* offer better or comparable performance for most cases
• MPI_Reduce (MVAPICH2-GDR) vs. ncclReduce (NCCL2) on 16 GPUs
*Will be available with the upcoming MVAPICH2-GDR 2.3b
[Figures: Reduce latency for small messages (4 bytes-64 KB, up to ~2.5x better) and large messages (up to ~5x better).]
Platform: Intel Xeon (Broadwell) nodes equipped with a dual-socket CPU, 1 K-80 GPU, and EDR InfiniBand interconnect
MVAPICH User Group Meeting (MUG) 2018 63Network Based Computing Laboratory
MVAPICH2-GDR vs. NCCL2 – Allreduce Operation
• Optimized designs in MVAPICH2-GDR 2.3b* offer better or comparable performance for most cases
• MPI_Allreduce (MVAPICH2-GDR) vs. ncclAllreduce (NCCL2) on 16 GPUs
*Will be available with the upcoming MVAPICH2-GDR 2.3b
[Figures: Allreduce latency for small messages (4 bytes-64 KB, up to ~3x better) and large messages (128 KB-128 MB, up to ~1.2x better).]
Platform: Intel Xeon (Broadwell) nodes equipped with a dual-socket CPU, 1 K-80 GPU, and EDR InfiniBand interconnect
MVAPICH User Group Meeting (MUG) 2018 64Network Based Computing Laboratory
• Combining MCAST+GDR hardware features for heterogeneous configurations:
– Source on the Host and destinations on the Device
– SL design: scatter at the destination
• Source: data and control on the Host
• Destinations: data on the Device and control on the Host
– Combines IB MCAST and GDR features at the receivers
– CUDA IPC-based topology-aware intra-node broadcast
– Minimizes use of PCIe resources (maximizing availability of PCIe Host-Device resources)
Streaming Support (Combining GDR and IB-Mcast)
[Figure: the source node multicasts control and data from the host through the IB switch to receiver nodes; an IB SL (scatter) step then delivers the data to each node's GPU while control stays on the host.]
C.-H. Chu, K. Hamidouche, H. Subramoni, A. Venkatesh, B. Elton, and D. K. Panda. “Designing High Performance Heterogeneous Broadcast for Streaming Applications on GPU Clusters, “ SBAC-PAD’16, Oct 2016.
MVAPICH User Group Meeting (MUG) 2018 65Network Based Computing Laboratory
• Optimizing the MCAST+GDR broadcast for deep learning:
– Source and destination buffers are on the GPU device
• Typically very large messages (>1 MB)
– Pipelining data from Device to Host
• Avoids the GDR read limit
• Leverages the high-performance SL design
– Combines IB MCAST and GDR features
– Minimizes use of PCIe resources on the receiver side (maximizing availability of PCIe Host-Device resources)
Exploiting GDR+IB-Mcast Design for Deep Learning Applications
[Figure: broadcast pipeline from the source GPU to destination GPUs - (1) pipelined data movement, (2) IB gather, (3) IB hardware multicast, (4) IB scatter + GDR write.]
Ching-Hsiang Chu, Xiaoyi Lu, Ammar A. Awan, Hari Subramoni, Jahanzeb Hashmi, Bracy Elton, and DhabaleswarK. Panda, "Efficient and Scalable Multi-Source Streaming Broadcast on GPU Clusters for Deep Learning , ”ICPP’17.
MVAPICH User Group Meeting (MUG) 2018 66Network Based Computing Laboratory
• On the RI2 cluster: 16 GPUs, 1 GPU/node
– Microsoft Cognitive Toolkit (CNTK) [https://github.com/Microsoft/CNTK]
Application Evaluation: Deep Learning Frameworks
[Figures: training time (lower is better) for the AlexNet and VGG models on 8 and 16 GPU nodes with MV2-GDR-Knomial, MV2-GDR-Ring, and MCAST-GDR-Opt; labeled improvements of 15%/24% (AlexNet) and 6%/15% (VGG).]
Ching-Hsiang Chu, Xiaoyi Lu, Ammar A. Awan, Hari Subramoni, Jahanzeb Hashmi, Bracy Elton, and Dhabaleswar K. Panda, "Efficient and Scalable Multi-Source Streaming Broadcast on GPU Clusters for Deep Learning , " ICPP’17.
• Reduces latency by up to 24% and 15% for the AlexNet and VGG models respectively
• Higher improvements can be observed at larger system sizes
MVAPICH User Group Meeting (MUG) 2018 67Network Based Computing Laboratory
MVAPICH User Group Meeting (MUG) 2018 68Network Based Computing Laboratory
• Virtualization has many benefits
– Fault-tolerance
– Job migration
– Compaction
• It has not been very popular in HPC due to the overhead associated with virtualization
• New SR-IOV (Single Root I/O Virtualization) support available with Mellanox InfiniBand adapters changes the field
• Enhanced MVAPICH2 support for SR-IOV
• MVAPICH2-Virt 2.2 supports:
– Efficient MPI communication over SR-IOV enabled InfiniBand networks
– High-performance and locality-aware MPI communication for VMs and containers
– Automatic communication channel selection for VMs (SR-IOV, IVSHMEM, and CMA/LiMIC2) and containers (IPC-SHM, CMA, and HCA)
– OpenStack, Docker, and Singularity
Can HPC and Virtualization be Combined?
J. Zhang, X. Lu, J. Jose, R. Shi and D. K. Panda, Can Inter-VM Shmem Benefit MPI Applications on SR-IOV based Virtualized InfiniBand Clusters? EuroPar'14
J. Zhang, X. Lu, J. Jose, M. Li, R. Shi and D. K. Panda, High Performance MPI Library over SR-IOV enabled InfiniBand Clusters, HiPC '14
J. Zhang, X .Lu, M. Arnold and D. K. Panda, MVAPICH2 Over OpenStack with SR-IOV: an Efficient Approach to build HPC Clouds, CCGrid’15
MVAPICH User Group Meeting (MUG) 2018 69Network Based Computing Laboratory
Application-Level Performance on Chameleon (SR-IOV Support)
[Figures: Graph500 execution time across problem sizes (scale, edgefactor) and SPEC MPI 2007 execution time (milc, leslie3d, pop2, GAPgeofem, zeusmp2, lu) for MV2-SR-IOV-Def, MV2-SR-IOV-Opt, and MV2-Native.]
• 32 VMs, 6 cores/VM
• Compared to Native, 2-5% overhead for Graph500 with 128 processes
• Compared to Native, 1-9.5% overhead for SPEC MPI 2007 with 128 processes
MVAPICH User Group Meeting (MUG) 2018 70Network Based Computing Laboratory
Application-Level Performance on Chameleon (Containers Support)
[Figures: Graph500 execution time across problem sizes (scale, edgefactor) and NAS (MG.D, FT.D, EP.D, LU.D, CG.D) execution time for Container-Def, Container-Opt, and Native.]
• 64 containers across 16 nodes, pinning 4 cores per container
• Compared to Container-Def, up to 11% and 16% reduction in execution time for NAS and Graph500
• Compared to Native, less than 9% and 4% overhead for NAS and Graph500
MVAPICH User Group Meeting (MUG) 2018 71Network Based Computing Laboratory
Application-Level Performance on Singularity with MVAPICH2
[Figures: Graph500 BFS execution time across problem sizes (scale, edgefactor) and NPB Class D (CG, EP, FT, IS, LU, MG) execution time, Singularity vs. Native.]
• 512 processes across 32 nodes
• Less than 7% and 6% overhead for NPB and Graph500, respectively
• More details were presented in yesterday's tutorial session
MVAPICH User Group Meeting (MUG) 2018 72Network Based Computing Laboratory
• Support for SR-IOV Enabled VM Migration
• SR-IOV and IVSHMEM Support in SLURM
• Support for Microsoft Azure Platform
MVAPICH2-Virt Upcoming Features
MVAPICH User Group Meeting (MUG) 2018 73Network Based Computing Laboratory
High Performance SR-IOV enabled VM Migration Support in MVAPICH2
J. Zhang, X. Lu, D. K. Panda. High-Performance Virtual Machine Migration Framework for MPI Applications on SR-IOV enabled InfiniBand Clusters. IPDPS, 2017
• Migration with SR-IOV device has to handle the challenges of detachment/re-attachment of virtualized IB device and IB connection
• Consist of SR-IOV enabled IB Cluster and External Migration Controller
• Multiple parallel libraries to notify MPI applications during migration (detach/reattach SR-IOV/IVShmem, migrate VMs, migration status)
• Handle the IB connection suspending and reactivating
• Propose Progress engine (PE) and migration thread based (MT) design to optimize VM migration and MPI application performance
• Requirement of managing and isolating the virtualized SR-IOV and IVSHMEM resources
• This kind of management and isolation is hard to achieve with the MPI library alone, but much easier with SLURM
• Running MPI applications efficiently on HPC clouds requires SLURM support for managing SR-IOV and IVSHMEM
– Can critical HPC resources be efficiently shared among users by extending SLURM with support for SR-IOV and IVSHMEM based virtualization?
– Can SR-IOV and IVSHMEM enabled SLURM and MPI library provide bare-metal performance for end applications on HPC clouds?
SR-IOV and IVSHMEM in SLURM
J. Zhang, X. Lu, S. Chakraborty, and D. K. Panda. SLURM-V: Extending SLURM for Building Efficient HPC Cloud with SR-IOV and IVShmem. The 22nd International European Conference on Parallel Processing (Euro-Par), 2016.
• NPB Class B with 16 Processes on 2 Azure A8 instances
• NPB Class C with 32 Processes on 4 Azure A8 instances
• Comparable performance between MVAPICH2 and Intel MPI
Application-Level Performance on Azure
[Figures: NPB Class B (bt, cg, ep, ft, is, lu, mg, sp) and NPB Class C (cg, ep, ft, is, lu, mg) execution time (s); series: MVAPICH2, IntelMPI]
Will be available publicly for Azure soon
MVAPICH2 Software Family
Requirements | Library
MPI with IB, iWARP, Omni-Path, and RoCE | MVAPICH2
Advanced MPI Features/Support, OSU INAM, PGAS and MPI+PGAS with IB, Omni-Path, and RoCE | MVAPICH2-X
MPI with IB, RoCE & GPU and Support for Deep Learning | MVAPICH2-GDR
HPC Cloud with MPI & IB | MVAPICH2-Virt
Energy-aware MPI with IB, iWARP and RoCE | MVAPICH2-EA
MPI Energy Monitoring Tool | OEMT
InfiniBand Network Analysis and Monitoring | OSU INAM
Microbenchmarks for Measuring MPI and PGAS Performance | OMB
• MVAPICH2-EA 2.1 (Energy-Aware)
– A white-box approach
– New energy-efficient communication protocols for pt-pt and collective operations
– Intelligently apply the appropriate energy-saving techniques
– Application-oblivious energy saving
• OEMT
– A library utility to measure energy consumption for MPI applications
– Works with all MPI runtimes
– PRELOAD option for precompiled applications
– Does not require ROOT permission:
• A safe kernel module to read only a subset of MSRs
Energy-Aware MVAPICH2 & OSU Energy Management Tool (OEMT)
• An energy-efficient runtime that provides energy savings without application knowledge
• Automatically and transparently uses the best energy lever
• Provides guarantees on maximum degradation, with 5-41% savings at <= 5% degradation
• Pessimistic MPI applies an energy-reduction lever to each MPI call (sketched below)
MVAPICH2-EA: Application Oblivious Energy-Aware-MPI (EAM)
A. Venkatesh, A. Vishnu, K. Hamidouche, N. Tallent, D. K. Panda, D. Kerbyson, and A. Hoisie, A Case for Application-Oblivious Energy-Efficient MPI Runtime, Supercomputing ‘15, Nov 2015 [Best Student Paper Finalist]
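To make the "pessimistic" policy concrete, here is a minimal, hedged C sketch of bracketing blocking MPI calls with an energy lever through the standard PMPI profiling interface. It is an illustration only, not the MVAPICH2-EA implementation; lower_power() and restore_power() are hypothetical placeholders for a platform-specific lever (e.g., DVFS or core idling), and EAM itself selects levers more intelligently than this per-call bracketing.

```c
/* Illustrative sketch of a "pessimistic" energy policy using the PMPI
 * profiling interface: every blocking call is bracketed by an energy
 * lever. NOT the MVAPICH2-EA implementation; lower_power() and
 * restore_power() are hypothetical stand-ins for a platform-specific
 * lever such as DVFS. Build as a library and link it ahead of the MPI
 * library so these wrappers intercept the application's calls. */
#include <mpi.h>

static void lower_power(void)   { /* hypothetical: reduce core frequency  */ }
static void restore_power(void) { /* hypothetical: restore core frequency */ }

/* Intercept MPI_Wait; the real work is forwarded through PMPI_Wait. */
int MPI_Wait(MPI_Request *request, MPI_Status *status)
{
    lower_power();                        /* lever applied before blocking */
    int rc = PMPI_Wait(request, status);  /* actual blocking wait          */
    restore_power();                      /* lever released afterwards     */
    return rc;
}

/* The same bracketing can wrap other blocking calls, e.g. MPI_Recv. */
int MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source,
             int tag, MPI_Comm comm, MPI_Status *status)
{
    lower_power();
    int rc = PMPI_Recv(buf, count, datatype, source, tag, comm, status);
    restore_power();
    return rc;
}
```

As the slide notes, the point of EAM is to obtain most of these savings while bounding the performance degradation that such naive per-call bracketing can cause.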
MPI-3 RMA Energy Savings with Proxy-Applications
[Figures: Graph500 execution time (s) and energy usage (Joules) at 128, 256, and 512 processes; series: optimistic, pessimistic, EAM-RMA]
• MPI_Win_fence dominates application execution time in Graph500 (the fence-synchronized pattern is sketched below)
• Between 128 and 512 processes, EAM-RMA yields between 31% and 46% energy savings with no degradation in execution time compared with the default optimistic MPI runtime
• MPI-3 RMA Energy-efficient support will be available in upcoming MVAPICH2-EA release
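For readers unfamiliar with the pattern, below is a minimal, illustrative C sketch of a fence-synchronized MPI-3 RMA epoch of the kind that dominates Graph500-style runs; it is not the Graph500 code itself, and the window size and neighbor exchange are arbitrary choices for the example. Time that ranks spend waiting inside MPI_Win_fence is where an energy-aware runtime such as EAM-RMA can apply its levers.

```c
/* Minimal fence-synchronized MPI-3 RMA sketch (illustration only, not
 * the Graph500 code). Compile with an MPI C compiler and run with two
 * or more processes. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int N = 1024;                 /* window size per rank (ints) */
    int *base = NULL;
    MPI_Win win;
    MPI_Win_allocate(N * sizeof(int), sizeof(int), MPI_INFO_NULL,
                     MPI_COMM_WORLD, &base, &win);

    int value  = rank;
    int target = (rank + 1) % size;     /* simple neighbor exchange */

    /* Fences open and close the RMA epoch; ranks often spend much of
     * their time waiting here, which is where an energy-aware runtime
     * can safely lower power. */
    MPI_Win_fence(0, win);
    MPI_Put(&value, 1, MPI_INT, target, 0 /* target displacement */,
            1, MPI_INT, win);
    MPI_Win_fence(0, win);              /* completes the Put everywhere */

    printf("Rank %d received %d\n", rank, base[0]);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```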
MVAPICH2 Software Family
Requirements | Library
MPI with IB, iWARP, Omni-Path, and RoCE | MVAPICH2
Advanced MPI Features/Support, OSU INAM, PGAS and MPI+PGAS with IB, Omni-Path, and RoCE | MVAPICH2-X
MPI with IB, RoCE & GPU and Support for Deep Learning | MVAPICH2-GDR
HPC Cloud with MPI & IB | MVAPICH2-Virt
Energy-aware MPI with IB, iWARP and RoCE | MVAPICH2-EA
MPI Energy Monitoring Tool | OEMT
InfiniBand Network Analysis and Monitoring | OSU INAM
Microbenchmarks for Measuring MPI and PGAS Performance | OMB
Overview of OSU INAM
• A network monitoring and analysis tool that is capable of analyzing traffic on the InfiniBand network with inputs from the MPI runtime
– http://mvapich.cse.ohio-state.edu/tools/osu-inam/
• Monitors IB clusters in real time by querying various subnet management entities and gathering input from the MPI runtimes
• Capability to analyze and profile node-level, job-level and process-level activities for MPI communication
– Point-to-Point, Collectives and RMA
• Ability to filter data based on type of counters using “drop down” list
• Remotely monitor various metrics of MPI processes at user specified granularity
• "Job Page" to display jobs in ascending/descending order of various performance metrics in conjunction with MVAPICH2-X
• Visualize the data transfer happening in a “live” or “historical” fashion for entire network, job or set of nodes
• OSU INAM v0.9.3 released on 03/16/2018
– Enhance INAMD to query end nodes based on command line option
– Add a web page to display size of the database in real time
– Enhance interaction between the web application and SLURM job launcher for increased portability
– Improve packaging of web application and daemon to ease installation
OSU INAM Features
• Show network topology of large clusters
• Visualize traffic pattern on different links
• Quickly identify congested links/links in error state
• See the history unfold – play back historical state of the network
[Figures: Comet@SDSC --- Clustered View (1,879 nodes, 212 switches, 4,377 network links); Finding Routes Between Nodes]
OSU INAM Features (Cont.)
Visualizing a Job (5 Nodes)
• Job level view
– Show different network metrics (load, error, etc.) for any live job
– Play back historical data for completed jobs to identify bottlenecks
• Node level view - details per process or per node
– CPU utilization for each rank/node
– Bytes sent/received for MPI operations (pt-to-pt, collective, RMA)
– Network metrics (e.g. XmitDiscard, RcvError) per rank/node
Estimated Process Level Link Utilization
• Estimated Link Utilization view
• Classify data flowing over a network link at different granularity in conjunction with MVAPICH2-X 2.2rc1
– Job level
– Process level
More Details in Poster: High-Performance and Scalable Fabric Analysis, Monitoring and Introspection Infrastructure for HPC - Hari Subramoni
• Available since 2004
• Suite of microbenchmarks to study communication performance of various programming models
• Benchmarks available for the following programming models
– Message Passing Interface (MPI)
– Partitioned Global Address Space (PGAS)
• Unified Parallel C (UPC)
• Unified Parallel C++ (UPC++)
• OpenSHMEM
• Benchmarks available for multiple accelerator based architectures
– Compute Unified Device Architecture (CUDA)
– OpenACC Application Program Interface
• Part of various national resource procurement suites like NERSC-8 / Trinity Benchmarks
• Continuing to add support for newer primitives and features
• Please visit the following link for more information (a sketch of the ping-pong pattern these benchmarks measure appears below)
– http://mvapich.cse.ohio-state.edu/benchmarks/
OSU Microbenchmarks
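As a hedged illustration of what a point-to-point test such as osu_latency measures, here is a minimal C sketch of the standard ping-pong pattern. It is not the OMB source; the message size and iteration count are arbitrary choices for the example.

```c
/* Minimal ping-pong latency sketch (illustration only, not the OMB
 * source). Run with exactly two processes, e.g.: mpiexec -n 2 ./pingpong */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const int iters = 10000;        /* arbitrary iteration count */
    const int msg_size = 8;         /* arbitrary 8-byte message  */
    char buf[8] = {0};
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double start = MPI_Wtime();

    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, msg_size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, msg_size, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, msg_size, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, msg_size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    if (rank == 0) {
        /* One-way latency: half the round-trip time, averaged over iters. */
        double latency_us = (MPI_Wtime() - start) * 1e6 / (2.0 * iters);
        printf("Average one-way latency: %.2f us\n", latency_us);
    }

    MPI_Finalize();
    return 0;
}
```

The actual OMB benchmarks additionally sweep message sizes and cover collectives, one-sided, PGAS, and GPU-resident buffers, as listed above.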
• The MPI runtime has many parameters
• Tuning a set of parameters can help you extract higher performance (a sketch of enumerating such tunables through the MPI_T interface appears below)
• Compiled a list of such contributions through the MVAPICH Website
– http://mvapich.cse.ohio-state.edu/best_practices/
• Initial list of applications
– Amber
– HoomDBlue
– HPCG
– Lulesh
– MILC
– Neuron
– SMG2000
• Soliciting additional contributions; send your results to mvapich-help at cse.ohio-state.edu
• We will link these results with credits to you
Applications-Level Tuning: Compilation of Best Practices
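One portable way to see how many tunable parameters an MPI runtime exposes is the MPI_T tools interface from MPI-3 (extended MPI_T support also appears on the MVAPICH2 roadmap later in these slides). The following sketch is generic, standard MPI code rather than an MVAPICH2-specific recipe; the number and names of the control variables it prints depend entirely on the MPI library and build, and environment variables remain the usual way to apply tuned values.

```c
/* Generic MPI-3 sketch: enumerate the control variables (tunables)
 * exposed by the MPI runtime through the MPI_T tools interface.
 * The count and names reported depend on the MPI library and build. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, num_cvars;

    /* MPI_T can be initialized independently of MPI_Init. */
    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
    MPI_T_cvar_get_num(&num_cvars);
    printf("This MPI runtime exposes %d control variables\n", num_cvars);

    for (int i = 0; i < num_cvars; i++) {
        char name[256], desc[256];
        int name_len = sizeof(name), desc_len = sizeof(desc);
        int verbosity, binding, scope;
        MPI_Datatype datatype;
        MPI_T_enum enumtype;

        if (MPI_T_cvar_get_info(i, name, &name_len, &verbosity, &datatype,
                                &enumtype, desc, &desc_len, &binding,
                                &scope) == MPI_SUCCESS)
            printf("  [%d] %s\n", i, name);
    }

    MPI_T_finalize();
    return 0;
}
```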
• TACC and MVAPICH partner to win new NSF Tier-1 System
– https://www.hpcwire.com/2018/07/30/tacc-wins-next-nsf-funded-major-supercomputer/
• The MVAPICH team will be an integral part of the effort
• An overview of the recently awarded NSF Tier-1 System at TACC will be presented by Dan Stanzione
– The presentation will also include discussion on the MVAPICH collaboration on past systems and this upcoming system at TACC
MVAPICH Team Part of New NSF Tier-1 System
• Supported through X-ScaleSolutions (http://x-scalesolutions.com)
• Benefits:
– Help and guidance with installation of the library
– Platform-specific optimizations and tuning
– Timely support for operational issues encountered with the library
– Web portal interface to submit issues and tracking their progress
– Advanced debugging techniques
– Application-specific optimizations and tuning
– Obtaining guidelines on best practices
– Periodic information on major fixes and updates
– Information on major releases
– Help with upgrading to the latest release
– Flexible Service Level Agreements
• Support provided to Lawrence Livermore National Laboratory (LLNL) this year
Commercial Support for MVAPICH2 Libraries
MVAPICH2 – Plans for Exascale
• Performance and memory scalability toward 1M-10M cores
• Hybrid programming (MPI + OpenSHMEM, MPI + UPC, MPI + CAF, …)
• MPI + Task*
• Enhanced optimization for GPUs and FPGAs*
• Taking advantage of advanced features of Mellanox InfiniBand
– Tag Matching*
– Adapter Memory*
• Enhanced communication schemes for upcoming architectures
– NVLINK*
– CAPI*
• Extended topology-aware collectives
• Extended energy-aware designs and virtualization support
• Extended support for MPI Tools Interface (as in MPI 3.0)
• Extended FT support
• Support for * features will be available in future MVAPICH2 releases
Thank You!
Network-Based Computing Laboratory: http://nowlab.cse.ohio-state.edu/
The MVAPICH2 Project: http://mvapich.cse.ohio-state.edu/