
Page 1

Petascale Challenges and Solutions

Kevin Gildea, [email protected]

IBM High Performance Computing

Page 2

Agenda

• Petascale Landscape and Challenges
• HPCS and PERCS Solution
• PERCS Hardware Innovations
• PERCS Software and Productivity

Page 3

The current architectural landscape

[Diagram: a "Scalable Unit": SMP nodes (PEs plus memory) joined by an interconnect; hundreds of such cluster nodes, plus I/O gateway nodes, attach to a cluster interconnect switch/fabric. Example systems: Road Runner (Cell-accelerated Opteron), multi-core with accelerators (IXP 2850), Blue Gene, POWER6 clusters.]

Page 4

[Chart: TOP500 Performance Trend, Rmax performance (GFlops) on a log scale, June 1993 through June 2008.]

Even though the performance of the #1 system steps up irregularly, the #500 level, the #10 level, and the total aggregate performance all follow nearly straight-line trends when plotted on a log scale (~96% compound growth rate).

Page 5

How are all these systems programmed?

– Automatic parallelization of sequential codes
  • Polaris, xlc -qsmp=auto, etc.
  • Successful for a limited application domain and relatively small scale

– MPI (+ OpenMP or pthreads) (see the sketch after this list)
  • The dominant model today; scales well to large numbers of processors
  • Increasingly considered too complex to program

– Parallel libraries
  • Parallel ESSL, PLAPACK, ScaLAPACK, STAPL, HTA, Intel TBB
  • Composability

– Explicit parallel languages or parallel language extensions
  • OpenMP – small scale (hundreds of threads)
  • PGAS: UPC, Co-Array Fortran, Titanium, X10, Chapel
  • Fortress
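For concreteness, here is a minimal hybrid MPI + OpenMP sketch in C (illustrative, not from the slides; the kernel and variable names are invented):

/* Minimal sketch of the dominant hybrid model: MPI across nodes,
 * OpenMP threads within a node. Build with an MPI wrapper and OpenMP
 * enabled, e.g. `mpicc -fopenmp hybrid.c` (flag names vary by compiler). */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, nranks;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    double local = 0.0;
    /* Threads within one MPI task share memory via OpenMP. */
    #pragma omp parallel reduction(+:local)
    {
        local += 1.0;   /* stand-in for per-thread work */
    }

    double global = 0.0;
    /* Message passing combines the per-task results across the cluster. */
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("total threads across %d tasks: %.0f\n", nranks, global);

    MPI_Finalize();
    return 0;
}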

Page 6

Programmer Productivity

Key Problem: Frequency Improvements Do Not Match App Needs

Increasing Burden On The Application Design

Objective: Provide Tools to allow Scientists to Bridge the Gap

[Chart: performance versus time. Application scaling needs grow faster than single-core performance growth, leaving a widening GAP.]

Page 7

What are the key challenges to advancing Technical Computing?

Productivity: How do we make this massive compute power more consumable and reduce time-to-insight?

Performance: How do we, at the same time, provide sustained growth in application-level performance in the face of a technology discontinuity?

Page 8

Phase III Vendors:

Mission Partners:

Impact:
• Performance (time-to-solution): speedup by 10X to 40X

• Programmability (idea-to-first-solution): dramatically reduce cost and development time

• Portability (transparency): insulate software from system

• Robustness (reliability): continue operating in the presence of localized hardware failure, contain the impact of software defects, and minimize likelihood of operator error

Critical to National Security
• Develop a new generation of economically viable high productivity computing systems for national security and industrial user communities (2011)
• Ensure U.S. lead, dominance, and control in this critical technology

Applications:

Ocean/Wave Forecasting, Weather Prediction, Ship Design, Climate Modeling, Nuclear Stockpile Stewardship, Weapons Integration

High Productivity Computing Systems Overview

PERCS – Productive, Easy-to-use, Reliable Computing System is IBM’s response to DARPA’s HPCS Program

Page 9

PERCS Productivity Domains

Programmer
• Develop Applications
• Debug Applications
• Tune Applications

System Operational Efficiency
• Maximize System Throughput
• Maximize Enterprise Efficiency
• Ensure System Balance

Administrator
• Storage Management
• Network Management
• Installs, Upgrades
• System Monitoring

Reliability and Serviceability
• Continuous Operation
• Problem Isolation
• First Failure Data Capture
• Serviceability

Page 10

PERCS Productivity Solutions

Programmer
• Eclipse IDE
• Compiler Enhancements
• UPC and X10 Languages
• Automated Performance Tuning

System Operational Efficiency
• Resource & Workload Management
• Protocol Optimization and Acceleration
• Co-scheduling
• Dynamic Page Size Assignments

Administrator
• Automated Discovery
• Automated Configuration
• Diskless Boot
• Rolling Updates

Reliability and Serviceability
• Concurrent and Rolling Update
• Checkpoint/Restart
• Server, Network, & Storage Monitoring
• Declustered RAID

Page 11

Compiler Focus
• Performance
  – Automatically exploit POWER7 hardware characteristics
  – Address key memory wall issues
  – Automatically exploit SIMDization (double precision)
  – Effectively handle parallelization and scaling issues
• Productivity
  – Hide system complexity from programmers
  – Automatically fine-tune optimizations for applications using profile feedback information
  – Generate transformation reports to help programmers fine-tune their source code
  – Support legacy applications on new hardware
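As a rough illustration of the kind of loop these features target, a daxpy-style kernel in C (illustrative only; apart from -qsmp=auto, which the deck itself mentions, the exact XL C option set is an assumption and varies by release):

/* Sketch of a loop that auto-SIMDization and auto-parallelization target.
 * With IBM XL C, options such as -O3 -qhot -qsmp=auto are the usual
 * starting point (assumed here; consult the compiler documentation for
 * the release-specific report and profile-feedback options). */
#include <stdlib.h>

/* daxpy-style kernel: y = a*x + y */
void daxpy(size_t n, double a, const double * restrict x, double * restrict y)
{
    for (size_t i = 0; i < n; i++) {
        /* independent iterations: candidate for double-precision
         * SIMDization and for loop parallelization */
        y[i] = a * x[i] + y[i];
    }
}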

Page 12

XL C, C++, Fortran Compilers
• Advanced Memory Optimizations
  – Address memory wall issues; hide system complexity by tuning and improving memory subsystem performance automatically
• XL Compiler Transformation Reports
  – Generate XML-enabled reports to help users fine-tune their applications
• Polyhedral framework for Automatic Parallelization
  – Helps scale to large numbers of threads
  – Exploits multi-level parallelism provided by POWER7 hardware
• Assist Threads
  – Deploy the available SMT threads and cores to increase single-thread performance

Page 13

Parallel Tools Strategy: Eclipse-based Parallel Tools Platform

• Tools to assist a new breed of programmers to develop parallel programs
• Best practice tools for experienced parallel programmers
• Improve parallel tools and the productivity of tool developers
• Leverage the Eclipse ecosystem and community for development and support
• Provide a focal point for parallel tool development for a broad range of architectures

• Bring the richness of commercial IDEs to the HPC programmer
  – Grow an HPC ecosystem around a common IDE
  – Address the needs of HPC users ranging from novice to expert parallel programmers
• Open and extensible to encourage further development by IBM and others

Page 14

Parallel Tools Strategy: Eclipse Parallel Tools Platform (PTP)
www.eclipse.org/ptp

PTP: Unifying Parallel Tools Platform for the Parallel Programmer

[Diagram: PTP architecture]
• Base tool layers (platform provided by Eclipse and CDT): Eclipse IDE, managed build system, launch system, language-sensitive editor, CDT C/C++ Development Tools, Fortran Development Tools
• Tool layer specific to parallelism: Parallel Language Development Tools (PLDT), parallel runtime, parallel monitoring, parallel debugger, performance tools

Page 15

Application Development in PTP


Launching & Monitoring Tools

Debugging Tools

Coding & Analysis Tools

Performance Tuning Tools

Page 16

Parallel Language Development Tools: MPI Assistance Tools (similar tools available for OpenMP and UPC)

• Mouse-hover help
• Context-sensitive help: F1 provides API info
• Content assist: Ctrl-Space suggests completions
• Actions to find MPI artifacts
• MPI artifacts listing
• Source markers for navigation & identification

Page 17

Parallel Language Development Tools: Advanced Static Analysis: MPI Barrier Verification Tool

Verify barrier synchronization in C/MPI programs:
• Synchronization errors lead to deadlocks and stalls.
• Programmers may have to spend hours trying to find the source of a deadlock.
• The MPI Barrier Verification Tool detects potential barrier deadlocks/stalls before the program executes.

(Screenshot callout: action to run the Barrier Verifier.)

Contact: Evelyn Duesterwald, Yuan Zhang
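A minimal C/MPI sketch (not from the tool's documentation) of the kind of mismatch such a verifier flags:

/* Barrier mismatch: only some ranks issue the collective call, so the
 * ranks' barrier sequences never line up and the job stalls. A static
 * barrier verifier can report this before the program is ever run. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank % 2 == 0) {
        /* Only even ranks reach this barrier; odd ranks never issue a
         * matching MPI_Barrier on MPI_COMM_WORLD, so even ranks hang here.
         * The fix is a matching barrier on every control-flow path. */
        MPI_Barrier(MPI_COMM_WORLD);
    } else {
        printf("rank %d doing other work\n", rank);
    }

    MPI_Finalize();
    return 0;
}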

Page 18

Parallel Debugger Architecture

[Diagram: the Eclipse PTP debugger user interface (which may run on a local laptop), an Eclipse debug adaptor, and a parallel job debug manager, connected over TCP/IP.]

Page 19

PTP Performance Tools Framework

Integrated tools: TAU, ParaProf, HPCT

Integration framework:
• Facilitate integration of existing performance tools into PTP
• Provide consistent & uniform user interfaces to simplify tool operation
• Reduce the "Eclipse plumbing" necessary to integrate these tools
• Provide Eclipse integration for instrumentation, measurement, and analysis
• Tools and tool workflows are specified in an XML file
• Tools are selected and configured by users in the launch configuration window
• Output is generated, managed, and analyzed as specified in the workflow

Integration of HPCS Toolkit: automated rules-based performance analysis

Page 20

What is Partitioned Global Address Space (PGAS)?

• Computation is performed in multiple places.

• A place contains data that can be operated on remotely.

• Data lives in the place it was created, for its lifetime.

• A datum in one place may reference a datum in another place.

• Data-structures (e.g. arrays) may be distributed across many places.

• Places may have different computational properties

[Diagram: address space and process/thread layout for three models: shared memory (OpenMP), PGAS (UPC, CAF, X10), and message passing (MPI).]

Page 21

Asynchronous PGAS

• Asynchrony
  – Simple explicitly concurrent model for the user: async (p) S runs statement S "in parallel" at place p
  – Controlled through finish, and local (conditional) atomic
• Used for active messaging (remote asyncs), DMAs, fine-grained concurrency, fork/join concurrency, do-all/do-across parallelism
  – SPMD is a special case

Concurrency is made explicit and programmable.
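The slide shows no code; as a rough shared-memory analogy (not X10), OpenMP tasks in C give the same async/finish, fork/join flavor:

/* Analogy only: "#pragma omp task" plays the role of async S (spawn work)
 * and "#pragma omp taskwait" plays the role of finish (wait for spawned
 * work), all within a single shared-memory place. Build with -fopenmp. */
#include <stdio.h>

static long fib(long n)
{
    if (n < 2)
        return n;
    long a, b;
    #pragma omp task shared(a)      /* like: async { a = fib(n-1); } */
    a = fib(n - 1);
    b = fib(n - 2);
    #pragma omp taskwait            /* like: finish { ... } */
    return a + b;
}

int main(void)
{
    long result;
    #pragma omp parallel
    #pragma omp single
    result = fib(20);
    printf("fib(20) = %ld\n", result);
    return 0;
}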

Page 22

UPC Performance Gaps

• Data distributions

– Express data locality and distribution

• Efficient single thread performance

– Exploit existing, optimized serial libraries

– Compiler optimizations: parallel loop, privatization

• Efficient and scalable communication

– Collective operations

– Compiler optimizations: communication scheduling and aggregation, hardware exploitation

• Fine grain threading for load balancing

• Synchronization

• Parallel I/O

Combination of system, runtime and compiler opts.

Page 23

UPC Compiler Optimizations

[Diagram: UPC optimizer structure]
• Data and Control Flow Optimizer: constant propagation, copy propagation, dead store elimination, dead code elimination, expression simplification, backward and forward store motion, redundant condition elimination
• Loop Optimizer: traditional loop optimizations (subset): loop normalization, loop unrolling, loop unswitching
• UPC Transformations: UPC locality analysis, UPC forall versioning, UPC privatization, UPC remote update, UPC forall loop reshape, thread-local storage transformations
• Optimizer infrastructure applicable to other PGAS languages (Co-Array Fortran)

• Remove overhead
  – Forall loop reshape
  – Strength reduction for shared indexing
• Exploit locality
  – Analysis and privatization
  – Loop versioning
• Exploit hardware assist
  – GSM for remote update
  – Collectives hardware assist
• Reduce communication
  – Communication aggregation and scheduling

Page 24

PERCS Hardware Innovations
• General-purpose POWER7
  – Common with commercial systems
• Integrated storage and networking
  – SAS2 disk enclosures and links
  – 10GigE links for direct connection to IP backbones
• Advanced HPC interconnect
  – Low-diameter fabric with ultra-low latency and high bisection bandwidth
    • Single hop between groups of 1024 cores
    • Three-hop routes between all 512K cores
  – Collective acceleration
  – Global shared memory access and atomics

Page 25

GSM Overview

[Diagram: a job spans nodes 0 through N; each node's shared memory attaches to the fabric, and the interconnect scales up to 512K cores. Global shared memory provides a common name space with get/put/atomics across the job.]
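The PERCS GSM interface itself is not shown in these slides; as an analogy, the same get/put style of access to another node's memory can be sketched with standard MPI-3 one-sided operations:

/* Illustrative only: MPI-3 one-sided get against a window of memory that
 * every rank exposes. This is not the GSM API, just the same access style. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, nranks;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    double *local;                  /* each rank exposes one double */
    MPI_Win win;
    MPI_Win_allocate(sizeof(double), sizeof(double), MPI_INFO_NULL,
                     MPI_COMM_WORLD, &local, &win);
    *local = (double)rank;

    MPI_Win_fence(0, win);
    double neighbor;
    int target = (rank + 1) % nranks;
    /* Read ("get") the value the next rank placed in its window. */
    MPI_Get(&neighbor, 1, MPI_DOUBLE, target, 0, 1, MPI_DOUBLE, win);
    MPI_Win_fence(0, win);

    printf("rank %d read %.0f from rank %d\n", rank, neighbor, target);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}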

Page 26

Protocol Enhancements for Sustained Performance

• Communication latency:
  – Burst MMIO
  – Cache injection
  – Lock overhead reduction (lock-free option)
  – Exploitation of Global Shared Memory
• Collective communication overheads:
  – Collective Acceleration Unit
  – RDMA exploitation
• Memory latency:
  – Drive toward zero-cache-miss execution on the latency-critical paths
• OS jitter minimization:
  – Exploitation of global counters
  – OS hooks for scheduling low-priority threads and interrupts on secondary SMT threads
  – Co-scheduler to synchronize high-priority and low-priority windows

Page 27

Communication Protocol Layers

[Diagram: protocol stack]
• Hardware Abstraction Layer: UD/FIFO, RDMA; Global Shared Memory, Collective Acceleration Unit, and atomics pass through to hardware
• LAPI Active Messages: end-to-end acknowledgements and retransmission, end-to-end flow control, fragmentation and reassembly; thread-safe*, lock-free
• Upper layers: MPI (task grouping, message matching, collectives), UPC + X10 (async PGAS, collectives), SHMEM

* Lock-free and semi-reliable options under investigation

Page 28

OS Enhancements for Sustained Performance
• Dynamic variable page size support:
  – OS support for multiple page sizes
  – Dynamically change page size to match a running application's needs
• APIs to control system resources
  – Control application memory usage
  – Control CPU allocation
• 64-bit i-node:
  – Enable the OS to support trillions of files per file system
• OS jitter minimization:
  – OS hooks to schedule non-critical threads on secondary SMT threads
• Checkpoint/restart support:
  – Creating lightweight container technology
    • Called WPAR in AIX
    • Working on adding virtualization hooks into the Linux kernel
• Help define/configure a lightweight compute OS
  – Provide a list of non-essential daemons/services to turn off on compute nodes
• APIs to hardware counters
  – System
  – Network