Paving The Road to Exascale Computing
March 2012, [email protected]
Connectivity Solutions for Efficient Computing
Leading connectivity solution provider for servers and storage, across high-end HPC, enterprise HPC, and HPC clouds
Mellanox interconnect networking solutions: ICs, switches/gateways, adapter cards, cables, and host/fabric software
Mellanox Complete End-to-End Connectivity
Application Accelerations
- Collectives Accelerations (FCA/CORE-Direct) – see the MPI sketch after this list
- GPU Accelerations (GPUDirect)
- MPI/SHMEM/PGAS
Networking Efficiency/Scalability
- RDMA
- Quality of Service
- Adaptive Routing
- Congestion Management
- Fat-Tree, 3D Torus topologies
Host/Fabric Software Management
- UFM, Mellanox OS
- Integration with job schedulers
- Inbox Drivers
Server and Storage High-Speed Connectivity
- Latency
- Bandwidth
- CPU Utilization
- Message rate
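To ground the FCA/CORE-Direct bullet above: these features offload MPI collective operations into the HCA and switch fabric. The sketch below is plain, standard MPI (nothing Mellanox-specific; any MPI library runs it), showing the kind of collective, MPI_Allreduce, that such offloads accelerate.

```c
/* Minimal MPI collective example: the class of operation FCA/CORE-Direct
 * accelerates. Plain MPI; compile with mpicc, run with mpirun -np 4 ./a.out */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank contributes its rank number; the collective sums the
     * contributions across all ranks (offloaded when FCA is enabled). */
    int local = rank, global = 0;
    MPI_Allreduce(&local, &global, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum of ranks 0..%d = %d\n", size - 1, global);

    MPI_Finalize();
    return 0;
}
```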
Paving The Road to Exascale Computing
Mellanox InfiniBand is the interconnect of choice for PetaScale computing
• Accelerating 50% of the sustained PetaScale systems (5 systems out of 10)
• Dawning (China)
• TSUBAME (Japan)
• NASA (USA) – >11K nodes
• CEA (France)
• LANL (USA)
ORNL “Spider” System – Lustre File System
Oak Ridge National Lab central storage system (a quick bandwidth-per-OSS check follows the list):
• 13,400 drives
• 192 Lustre OSS
• 240GB/s bandwidth
• 10PB capacity
• Mellanox InfiniBand interconnect
World-leading high-performance InfiniBand storage system
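As a quick consistency check on these figures (my arithmetic, not from the slide), the aggregate bandwidth spread over the server count gives

\[
\frac{240\ \text{GB/s}}{192\ \text{OSS}} = 1.25\ \text{GB/s per OSS},
\]

comfortably within the data rate of a single InfiniBand host link per server.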
Introducing FDR InfiniBand 56Gb/s Solutions
InfiniBand generations:
• 2002: 10Gb/s (SDR)
• 2005: 20Gb/s (DDR)
• 2008: 40Gb/s (QDR)
• 2011: 56Gb/s (FDR), with PCI Express 3.0
Highest Performance, Reliability, Scalability, Efficiency
Mellanox Virtual Protocol Interconnect Solutions with PCIe 3.0
Adapters (adapter card, mezzanine card, LOM) – industry leader:
• PCI Express 3.0
• Dual-port FDR IB or 40GbE
• Native RDMA
• CORE-Direct
Switches – industry leader:
• 36 x FDR IB / 40GbE or 64 x 10GbE ports
• Integrated routers and bridges
• 4Tbit switching capacity
• Ultra-low latency
• Switch systems from 36 ports to 648 ports
Software: switch OS layer, Unified Fabric Manager, SW acceleration products
One fabric carries networking, storage, clustering, and management traffic for applications
FDR InfiniBand New Features and Capabilities
Reliability / Efficiency:
• Link bit encoding – 64/66
• Forward Error Correction
• Lower power consumption
Performance / Scalability:
• >12GB/s bandwidth, <0.7usec latency (see the arithmetic note after this list)
• PCI Express 3.0
• InfiniBand router and IB-Eth bridges
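A rough back-of-the-envelope for the >12GB/s figure (my arithmetic, assuming a standard 4x FDR link at 14.0625 Gb/s per lane with the 64/66 encoding named above):

\[
4 \times 14.0625\ \text{Gb/s} \times \tfrac{64}{66} \approx 54.5\ \text{Gb/s} \approx 6.8\ \text{GB/s per direction},
\]

i.e. about 13.6 GB/s bidirectional, consistent with the >12GB/s claim.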
FDR 56Gb/s InfiniBand Solutions Portfolio
Modular switches: SX6500 series, up to 648 ports FDR (108p, 216p, 324p, and 648p configurations)
Edge switches: SX60XX family, 1U, 36 ports
• SX6025 – 36 ports, externally managed
• SX6036 – 36 ports, managed
Adapters: single- and dual-port FDR adapter cards with Virtual Protocol Interconnect
Cables
Management: UFM, diagnostics, feature license keys
Support: Mellanox M-1 E-2-E cluster support services
Note: Please check availability with your Mellanox representative
Double the Bandwidth, Half the Latency
FDR InfiniBand PCIe 3.0 vs QDR InfiniBand PCIe 2.0:
• 120% higher application ROI
FDR InfiniBand with PCIe 3.0 In Multiple Deployments
FDR InfiniBand is the new deal!
• 187 Tflops with only 648 nodes – #54 on the TOP500 list
• And more!
Mellanox ScalableHPC Programming Libraries
Mellanox offers the most scalable and complete parallel programming solution:
• ScalableMPI
• ScalableSHMEM
• ScalableUPC
Mellanox Fabric Collective Accelerations (FCA):
• Topology-aware collectives take advantage of optimized message coalescing
• Makes use of the network's powerful multicast capabilities for one-to-many communication
• Runs collectives on a separate service level, so there is no interference with other communications
• Utilizes Mellanox CORE-Direct collective hardware offload to minimize system noise
Mellanox Messaging Accelerations (MXM):
• Delivers the highest-performing and most scalable parallel programming interface in the industry
• Implements reliable messaging optimized for Mellanox HCAs
• Hybrid transport services
• Supports one-sided communications (see the SHMEM sketch after this list)
• Intra-node shared memory
• Efficient memory registration and management
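To make the one-sided model concrete, here is a minimal OpenSHMEM-style sketch. It assumes the standard shmem.h API (shmem_init, shmem_long_p), not anything specific to ScalableSHMEM; 2012-era SHMEM libraries spell initialization start_pes(0) instead.

```c
/* Minimal one-sided SHMEM example: PE 0 writes directly into PE 1's
 * symmetric memory with a put; the target side posts no receive.
 * Standard OpenSHMEM API; compile with oshcc, run with oshrun -np 2. */
#include <shmem.h>
#include <stdio.h>

/* Symmetric variable: allocated at the same address on every PE. */
static long target = -1;

int main(void)
{
    shmem_init();                 /* start_pes(0) in older SHMEM versions */
    int me = shmem_my_pe();

    if (me == 0) {
        long value = 42;
        shmem_long_p(&target, value, 1);   /* one-sided put to PE 1 */
    }

    shmem_barrier_all();          /* ensure the put has completed */

    if (me == 1)
        printf("PE 1 received %ld\n", target);

    shmem_finalize();
    return 0;
}
```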
Mellanox MPI Accelerations
Mellanox MPI Scalability
Mellanox MPI Optimization – Highest Scalability at LLNL
Mellanox MPI optimizations enable linear strong scaling for LLNL applications
World-leading performance and scalability
Mellanox ScalableSHMEM Collective Scalability
Thank You [email protected]