
Page 1/21

The influence of system calls and interrupts on the performances of a PC cluster using a Remote DMA communication primitive

Olivier Glück
Jean-Luc Lamotte
Alain Greiner

Univ. Paris 6, France
http://mpc.lip6.fr
Olivier.Gluck@lip6.fr

Page 2/21

Outline

1. Introduction

2. The MPC parallel computer

3. MPI-MPC1: the first implementation of MPICH on MPC

4. MPI-MPC2: user-level communications

5. Comparison of both implementations

6. A realistic application

7. Conclusion

Page 3/21

Introduction

A very low-cost, high-performance parallel computer

A PC cluster built on an optimized interconnection network

A PCI network board (FastHSL) developed at LIP6:

High-speed communication network (HSL, 1 Gbit/s)

RCUBE: router (8x8 crossbar, 8 HSL ports)

PCIDDC: PCI network controller (a specific communication protocol)

Goal: supply efficient software layers

A specific high-performance implementation of MPICH

Page 4/21

The MPC computer architecture

[Figure: each node is a PC whose PCI connector hosts the FastHSL board (RCube router + PCIDDC network controller); the nodes (CPU 1, CPU 2, CPU 3) are interconnected by HSL links and are also connected, together with a front-end, to an Ethernet network.]

The MPC parallel computer

Page 5/21

Our MPC parallel computer

The MPC parallel computer

Page 6/21

The FastHSL PCI board

Hardware performance:

Latency: 2 µs

Maximum throughput on the link: 1 Gbit/s

Maximum useful throughput: 512 Mbits/s

The MPC parallel computer

Page 7/21

The MPC parallel computer

The remote write primitive (RDMA)

[Figure: a remote write between a sender node (CPU A, Memory A, NIC A, LME list) and a receiver node (CPU B, Memory B, NIC B, LMR list) over the HSL link; the transfer proceeds in six numbered steps and raises an interrupt on each side to signal completion.]
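To make the figure easier to read as code, here is a minimal sketch of what the two lists might contain; the structures, field names and the reading of the six steps are assumptions for illustration (the slides only name the lists LME and LMR and show the interrupts), not the actual PCIDDC descriptor format.

#include <stdint.h>

/* Hypothetical entry of the sender-side list (LME). */
struct lme_entry {
    uint16_t remote_node;    /* destination node identifier           */
    uint64_t local_paddr;    /* physical address of the data to send  */
    uint64_t remote_paddr;   /* physical address in the remote memory */
    uint32_t length;         /* number of bytes to transfer           */
    uint32_t status;         /* updated by NIC A when the DMA is done */
};

/* Hypothetical entry of the receiver-side list (LMR). */
struct lmr_entry {
    uint16_t sender_node;    /* node that performed the remote write  */
    uint64_t local_paddr;    /* where the data landed in local memory */
    uint32_t length;
    uint32_t status;         /* updated by NIC B on message arrival   */
};

/*
 * One plausible reading of the six steps in the figure:
 *  1. CPU A posts an lme_entry describing the transfer.
 *  2. NIC A reads the data from Memory A by DMA.
 *  3. The data crosses the HSL link to NIC B.
 *  4. NIC B writes the data into Memory B by DMA.
 *  5. NIC B appends an lmr_entry and raises an interrupt on the receiver.
 *  6. NIC A marks the lme_entry complete and raises an interrupt on the sender.
 */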

Page 8/21

PUT: the lowest-level software API

Unix-based layer: FreeBSD or Linux

Provides a basic kernel API using the PCIDDC remote write

Implemented as a module

Handles interrupts

Zero-copy strategy using physical memory addresses

Parameters of one PUT call (a prototype is sketched below):

remote node identifier,

local physical address,

remote physical address,

data length, …

Performance:

one-way latency: 5 µs

throughput: 494 Mbits/s
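The slide lists the parameters of a PUT call but not its C prototype; the declaration below is a hedged sketch of what such a kernel-level remote-write entry point might look like, with hypothetical names (put_write, put_handle_t, the completion callback) that do not appear in the slides.

#include <stddef.h>
#include <stdint.h>

/* Hypothetical handle obtained when a process opens the PUT kernel module. */
typedef struct put_handle put_handle_t;

/*
 * Hedged sketch of the remote-write call described on the slide: zero-copy,
 * so both addresses are physical addresses; the trailing callback stands in
 * for the unspecified "..." parameters.
 */
int put_write(put_handle_t *h,
              uint16_t remote_node,    /* remote node identifier              */
              uint64_t local_paddr,    /* physical address of the local data  */
              uint64_t remote_paddr,   /* physical address on the remote node */
              size_t   length,         /* data length in bytes                */
              void   (*on_completion)(void *ctx), void *ctx);

In MPI-MPC1 every such call goes through a system call, which is precisely the overhead the later MPI-MPC2 design removes.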

The MPC parallel computer

Page 9/21

MPI-MPC1: the first implementation of MPICH on MPC

MPI-MPC1 architecture

[Figure: the MPI-MPC1 software stack (labels from the slide): MPI API, collective operations, point-to-point operations, ADI, CH_RDMA (MPI on MPC) and MPIDRIVER in user space; the PUT API (RDMA: the remote write primitive) in kernel space; the network interface (NIC) in hardware.]

Page 10/21

MPI-MPC1: the first implementation of MPICH on MPC

MPICH on MPC: 2 main problems

Virtual/physical address translation?

Where to write data in remote physical memory?

[Figure: a sender node and a receiver node, each showing the user buffer in virtual memory and its mapping onto physical memory; the remote write primitive works on physical addresses, so the translation has to be handled on both sides.]

Page 11/21

MPICH requirements

Two kinds of messages:

CTRL messages: control information or limited size user-data

DATA messages: user-data only

Services to supply:

Transmission of CTRL messages

Transmission of DATA messages

Network event signaling

Flow control for CTRL messages

Optimal maximum size of CTRL messages?

Match the Send/Receive semantics of MPICH to the remote write semantics

MPI-MPC1: the first implementation of MPICH on MPC

Page 12/21

MPI-MPC1 implementation (1)

CTRL messages: pre-allocated buffers, contiguous in physical memory, mapped in the virtual process memory

one intermediate copy on both the sender and the receiver

Four types (a sketch of these message types follows below):

SHORT: user data encapsulated in a CTRL message
REQ: request for a DATA message transmission
RSP: reply to a request
CRDT: credits, used for flow control

DATA messages: zero-copy transfer mode

rendezvous protocol using REQ & RSP messages

physical memory description of the remote user buffer carried in the RSP
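To fix ideas, the four CTRL message types can be pictured as a tagged header in C; the enum and struct below are illustrative assumptions (the names ctrl_kind, ctrl_msg and the exact fields are not given by the slides).

#include <stdint.h>

/* Hypothetical tags for the four kinds of CTRL message listed on the slide. */
enum ctrl_kind {
    CTRL_SHORT, /* user data encapsulated directly in the CTRL message      */
    CTRL_REQ,   /* request for a DATA (zero-copy) message transmission      */
    CTRL_RSP,   /* reply to a REQ: carries the physical memory description
                   of the remote user buffer so DATA can be written into it */
    CTRL_CRDT   /* credits returned to the sender for flow control          */
};

/* Hypothetical layout of a CTRL message in a pre-allocated, physically
 * contiguous buffer; the real MPI-MPC1 header is not shown on the slides. */
struct ctrl_msg {
    uint32_t kind;        /* one of enum ctrl_kind                          */
    uint32_t src_rank;    /* MPI rank of the sender                         */
    uint32_t tag;         /* MPI tag, used for message matching             */
    uint32_t length;      /* payload length (SHORT) or DATA message length  */
    uint8_t  payload[];   /* SHORT user data, or REQ/RSP protocol fields    */
};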

MPI-MPC1: the first implementation of MPICH on MPC

Page 13/21

MPI-MPC1 implementation (2)

MPI-MPC1: the first implementation of MPICH on MPC

MPI_Send / MPI_Recv

[Figure:
Case 1, user data size < 16 Kbytes: the user data is copied into a SHORT message on the sender and copied out on the receiver; 2 copies, 1 message.
Case 2, user data size > 16 Kbytes: the sender emits a REQ, the receiver answers with a RSP describing its buffer, and the user data is then transferred as a DATA message; 0 copies, 3 messages.]
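The figure boils down to a threshold test on the send path; the sketch below shows that decision under the assumption of hypothetical helper routines (send_short, send_req, wait_rsp, rdma_write_data, declarations only) that stand in for the real MPI-MPC1 internals.

#include <stddef.h>

#define EAGER_LIMIT (16 * 1024)   /* threshold quoted on the slide */

/* Hypothetical description of the remote receive buffer carried by a RSP. */
struct rsp_desc {
    unsigned long remote_paddr;   /* physical address of the receive buffer */
    size_t        length;
};

/* Hypothetical helpers: send a SHORT or REQ CTRL message, wait for the RSP,
 * and issue the remote writes for the DATA message. */
int send_short(int dest, int tag, const void *buf, size_t len);
int send_req(int dest, int tag, size_t len);
int wait_rsp(int dest, int tag, struct rsp_desc *out);
int rdma_write_data(int dest, const void *buf, size_t len,
                    const struct rsp_desc *remote);

/* Illustrative sketch of the send-path selection in MPI-MPC1. */
int mpc_send(int dest, int tag, const void *buf, size_t len)
{
    if (len < EAGER_LIMIT) {
        /* Case 1: user data travels inside a SHORT CTRL message
         * (one message, one copy on each side). */
        return send_short(dest, tag, buf, len);
    } else {
        /* Case 2: rendezvous. REQ announces the message, RSP returns the
         * physical description of the receive buffer, and DATA is written
         * directly into it (three messages, zero copy). */
        struct rsp_desc remote;
        int rc = send_req(dest, tag, len);
        if (rc == 0) rc = wait_rsp(dest, tag, &remote);
        if (rc == 0) rc = rdma_write_data(dest, buf, len, &remote);
        return rc;
    }
}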

Page 14/21

MPI-MPC1: the first implementation of MPICH on MPC

MPI-MPC1 performance

Each call to the PUT layer = 1 system call

Network event signaling uses hardware interrupts

Performance of MPI-MPC1:

benchmark: MPI ping-pong (a minimal version is sketched below)

platform: two MPC nodes with PII-350 processors

one-way latency: 26 µs

throughput: 419 Mbits/s
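For reference, a minimal MPI ping-pong of the kind used for these measurements can be written as follows; the iteration count and the way the message size is chosen are illustrative, not the authors' exact benchmark.

/* Minimal MPI ping-pong: rank 0 sends a buffer to rank 1, which sends it
 * back; half the round-trip time gives the one-way latency, and the volume
 * of data divided by the elapsed time gives the throughput. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, iters = 1000;
    int size = (argc > 1) ? atoi(argv[1]) : 4;   /* message size in bytes */
    char *buf = calloc(size, 1);
    MPI_Status st;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, size, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &st);
        } else if (rank == 1) {
            MPI_Recv(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &st);
            MPI_Send(buf, size, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t = MPI_Wtime() - t0;

    if (rank == 0) {
        double one_way_us = t / (2.0 * iters) * 1e6;
        double mbits      = (8.0 * (double)size * 2.0 * iters) / t / 1e6;
        printf("size %d bytes: one-way latency %.2f us, throughput %.1f Mbits/s\n",
               size, one_way_us, mbits);
    }
    MPI_Finalize();
    free(buf);
    return 0;
}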

Avoid system calls and interrupts

Page 15/21

MPI-MPC2: user-level communications

MPI-MPC1 & MPI-MPC2

[Figure: in MPI-MPC1, the process in user space reaches the network interface through a driver in kernel space; in MPI-MPC2, the process in user space accesses the network interface directly.]

Post remote write orders in user mode

Replace interrupts by a polling strategy

Page 16/21

MPI-MPC2: user-level communications

MPI-MPC2 implementation

Network interface registers are accessed in user mode

Exclusive access to shared network resources:

shared objects are kept in the kernel and mapped into user space at start-up time

atomic locks are provided to avoid competing accesses

Efficient polling policy (sketched below):

polling on the last modified entries of the LME/LMR lists

all the completed communications are acknowledged at once
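A minimal sketch of such a polling pass is shown below; the ring layout, the last_polled cursor and the acknowledgement step are illustrative assumptions, not the actual MPI-MPC2 data structures.

#include <stdint.h>

#define LIST_SIZE 256   /* illustrative size of the LME/LMR rings */

/* Hypothetical completion entry shared between the NIC and the process
 * (mapped into user space at start-up time). */
struct list_entry {
    volatile uint32_t done;   /* set by the NIC when the entry completes */
    uint32_t          msg_id; /* identifier of the communication         */
};

static unsigned last_polled = LIST_SIZE - 1;  /* last acknowledged entry */

/* Poll only the entries modified since the previous pass, starting just
 * after the last one seen, and acknowledge every completed communication
 * at once. Returns the number of communications handled in this pass. */
int poll_completions(struct list_entry *list,
                     void (*complete)(uint32_t msg_id))
{
    int n = 0;
    unsigned i = (last_polled + 1) % LIST_SIZE;

    while (list[i].done) {            /* stop at the first pending entry */
        complete(list[i].msg_id);     /* hand the event to the MPI layer */
        list[i].done = 0;             /* illustrative acknowledgement    */
        last_polled = i;
        i = (i + 1) % LIST_SIZE;
        n++;
    }
    return n;
}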

Page 17/21

Comparison of both implementations

MPI-MPC1 & MPI-MPC2 performance

[Figure: throughput in Mbits/s (0 to 450) as a function of data size (1 byte to 4096 Kbytes) for MPI-MPC1 and MPI-MPC2.]

MPI-MPC implementation    MPI-MPC1       MPI-MPC2
One-way latency           26 µs          15 µs
Throughput                419 Mbits/s    421 Mbits/s

Page 18/21

Comparison of both implementations

MPI-MPC2 latency speed-up

Page 19/21

A realistic application

The CADNA software

CADNA: Control of Accuracy and Debugging for Numerical Applications

developed in the LIP6 laboratory

controls and estimates round-off error propagation

[Figure: the parallel scheme on three processes P1, P2, P3: each process computes its result Ri, sends it, receives the results computed by the other two processes, then goes on with the next computation step.]

Page 20/21

A realistic application

MPI-MPC performance with CADNA

Application: solving a linear system with the Gauss method

without pivoting: no communication

with pivoting: a lot of short communications (see the sketch below)
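The short messages come from the pivot selection performed at each elimination step; the sketch below shows one plausible form of that exchange in MPI (the row distribution and the use of MPI_Allreduce with MPI_MAXLOC are assumptions, since the slides do not detail the application code).

#include <mpi.h>
#include <math.h>

/* Layout required by MPI_DOUBLE_INT: the candidate pivot and its row index. */
struct pivot { double value; int row; };

/* Sketch of the per-step pivot exchange in a row-distributed Gaussian
 * elimination with partial pivoting: every process proposes its best local
 * pivot and a short collective selects the global one. local_col holds the
 * current column entries owned by this process, local_idx their global row
 * indices (hypothetical layout). */
int select_pivot(const double *local_col, const int *local_idx, int nlocal,
                 MPI_Comm comm)
{
    struct pivot mine = { 0.0, -1 }, best;

    for (int i = 0; i < nlocal; i++)
        if (fabs(local_col[i]) > mine.value) {
            mine.value = fabs(local_col[i]);
            mine.row   = local_idx[i];
        }

    /* One short collective per elimination step: this is the kind of small,
     * frequent communication the pivoting variant introduces. */
    MPI_Allreduce(&mine, &best, 1, MPI_DOUBLE_INT, MPI_MAXLOC, comm);
    return best.row;   /* global index of the selected pivot row */
}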

System    Number of     Communication time (sec.)    One exchange time (µs)
size      exchanges     MPI-MPC1      MPI-MPC2       MPI-MPC1      MPI-MPC2
 800        646682         51            31             79            48
1200       1450450        101            66             70            46
1600       2574140        191           128             74            50
2000       4018285        288           177             72            44

Mean value (µs)                                         74            47

MPI-MPC2 speed-up = 36%

Page 21/21

Conclusion

Conclusions & perspectives

Two implementations of MPICH on top of a remote write primitive

MPI-MPC1:

system calls during communication phases

interrupts for network event signaling

MPI-MPC2:

user-level communications

signaling by polling

latency speed-up greater than 40% for short messages

What about maximum throughput?

Locking user buffers in memory and address translations are very expensive

MPI-MPC3 avoids address translations by mapping the virtual process memory onto a contiguous area of physical memory at application start-up time (the idea is sketched below)
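To illustrate why a contiguous physical mapping removes the translation cost, here is a hedged sketch of the resulting address arithmetic; the structure and the mpc_virt_to_phys helper are hypothetical names for illustration, not the MPI-MPC3 code.

#include <stddef.h>
#include <stdint.h>

/* Hypothetical description of a physically contiguous region reserved at
 * application start-up and mapped into the process address space. */
struct contig_region {
    uintptr_t virt_base;   /* start of the mapping in virtual memory */
    uint64_t  phys_base;   /* start of the reserved physical memory  */
    size_t    length;
};

/* With such a mapping, translating any user-buffer address inside the
 * region is a single addition: no page locking and no page-table lookup
 * on the communication critical path. */
static inline uint64_t mpc_virt_to_phys(const struct contig_region *r,
                                        const void *addr)
{
    uintptr_t v = (uintptr_t)addr;
    if (v < r->virt_base || v >= r->virt_base + r->length)
        return 0;   /* outside the contiguous region: not translatable here */
    return r->phys_base + (v - r->virt_base);
}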