
Page 1:

Swapping to Remote Memory over InfiniBand: An Approach using a High Performance

Network Block Device

Shuang Liang, Ranjit Noronha, D.K. Panda

Department of Computer Science and Engineering, The Ohio State University

Email: {liangs,noronha,panda}@cse.ohio-state.edu

Page 2:

Presentation Outline

• Introduction
• Problem Statement
• Design Issues
• Performance Evaluation
• Conclusions and Future Work

Page 3:

Application Trends

• Applications are becoming increasingly data intensive, with high memory demand
  – Larger working set for a single application, such as data warehouse applications, scientific simulations, etc.
  – The memory of a single node in a cluster may not be able to accommodate the working set, while other nodes may have plenty of unused memory

Page 4:

Utilizing Remote Memory

• Can we utilize this remote memory to improve application performance?

[Figure: two nodes, each with a CPU, memory (MEM), and NIC, connected by a network; a page is transferred from one node's memory to the other's over the NICs]

Page 5:

Motivation

• Emergence of commodity high-performance networks such as InfiniBand
  – Offloaded transport protocol stack
  – User-level communication bypassing the OS
  – Low latency
  – High bandwidth, comparable to local memcpy performance
  – Novel hardware features such as Remote Direct Memory Access (RDMA) with minimal host overhead
• Can we utilize these features to boost sequential data-intensive applications? And how?

Page 6:

Approaches

• Global memory management [Feeley95]
  – Close integration with virtual memory management
  – Implementation complexity and poor portability
• User-level run-time libraries [Koussih95]
  – Application-aware interface
  – Additional management layer in user space
• Remote paging [Markatos96]
  – Flexible
  – Moderate implementation effort

Page 7:

Presentation Outline

• Background
• Problem Statement
• Design Issues
• Performance Evaluation
• Conclusions and Future Work

Page 8:

Problem Statement

• Enable InfiniBand clusters to take advantage of remote memory through remote paging
  – Enhance local memory hierarchy performance
  – Deliver high performance
  – Enable applications to benefit transparently
• Evaluate the impact of network performance
  – Compare remote paging over GigE, IPoIB, and InfiniBand native communication

Page 9:

Presentation Outline

• Background
• Problem Statement
• Design Issues
• Performance Evaluation
• Conclusions and Future Work

Page 10:

Design Choices

• Kernel-level design
  – Pros:
    • Transparent to applications
    • Benefits all processes in the system
    • Takes advantage of the virtual memory system for page management
  – Cons:
    • Dependency on the OS
    • Not easy to debug
• User-level design
  – Pros:
    • Portable across different OSes
    • Easier to debug
  – Cons:
    • Not completely transparent to applications
    • Benefits only applications using the user-level library
    • High overhead with user-level signal handling

Page 11:

Network Block Device

• A software mechanism for utilizing remote block-based resources over a network
  – Examples: NBD, ENBD, DRBD, GNBD, etc.
  – Often used to export remote disk resources for storage, such as RAID devices, mirror devices, etc.
• Use a ramdisk-based network block device as the swap device (see the setup sketch below)
  – Seamless integration with the VM system for remote paging
  – NBD, a TCP implementation of a network block device in the default kernel source tree, can be used for a comparison study
  – An InfiniBand-based network block device needs to be designed
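As an illustration of how a block device plugs into the VM system as swap, here is a minimal user-space sketch (not part of the original slides) that enables a hypothetical HPBD device, /dev/hpbd0, through the swapon(2) system call. It assumes the device has already been formatted with mkswap and that the program runs as root.

```c
/* Minimal sketch: attaching a network block device as a swap device.
 * The device name /dev/hpbd0 is hypothetical; it stands for whatever block
 * device the InfiniBand-based driver exports. Assumes `mkswap /dev/hpbd0`
 * has already been run and that we have root privileges. */
#include <stdio.h>
#include <sys/swap.h>   /* swapon(), SWAP_FLAG_PREFER, SWAP_FLAG_PRIO_SHIFT */

int main(void)
{
    /* Give the remote-memory device a higher priority than any disk-backed
     * swap area so the VM system pages to it first. */
    int prio  = 10;
    int flags = SWAP_FLAG_PREFER | (prio << SWAP_FLAG_PRIO_SHIFT);

    if (swapon("/dev/hpbd0", flags) != 0) {
        perror("swapon");
        return 1;
    }
    printf("remote-memory swap device enabled\n");
    return 0;
}
```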

Page 12:

Architecture of the remote paging system

[Figure: applications run in user space; the kernel's virtual memory management pages to a swap device backed by either the local disk or HPBD; HPBD forwards pages through the HCA over the InfiniBand network to one or more memory servers]

Page 13:

Design Issues

• Memory registration and buffer management
  – Registration is a costly operation compared with memcpy for small buffers (see the timing sketch below)
  – Pre-registration out of the critical path requires registering all memory pages
  – Paging messages are upper-bounded at 128 KB in Linux

[Plot: latency in microseconds versus buffer size in bytes (1 B up to about 128 KB) for memcpy, kernel-level registration, and user-level registration]

Memory copy is more than 12 times faster than memory registration for one page.
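To make that comparison concrete, here is a hedged user-level micro-benchmark sketch (not from the slides, and user-level only, whereas the plot also shows kernel-level registration) that times copying one 4 KB page against registering and deregistering it with the HCA through libibverbs. Absolute numbers depend on the platform.

```c
/* Sketch: memcpy of one 4 KB page vs. registering/deregistering the same
 * page with the HCA. Illustrative only; error handling is minimal.
 * Compile with: gcc reg_vs_memcpy.c -libverbs */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <infiniband/verbs.h>

#define PAGE  4096
#define ITERS 1000

static double elapsed_us(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) * 1e6 + (b.tv_nsec - a.tv_nsec) / 1e3;
}

int main(void)
{
    struct ibv_device **devs = ibv_get_device_list(NULL);
    if (!devs || !devs[0]) { fprintf(stderr, "no IB device\n"); return 1; }
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ibv_alloc_pd(ctx);
    if (!ctx || !pd) { fprintf(stderr, "verbs setup failed\n"); return 1; }

    char *src = malloc(PAGE), *dst = malloc(PAGE);
    memset(src, 1, PAGE);
    struct timespec t0, t1;

    /* memcpy of one page */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ITERS; i++)
        memcpy(dst, src, PAGE);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("memcpy       : %.2f us/page\n", elapsed_us(t0, t1) / ITERS);

    /* register + deregister the same page with the HCA */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ITERS; i++) {
        struct ibv_mr *mr = ibv_reg_mr(pd, src, PAGE,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE);
        if (!mr) { perror("ibv_reg_mr"); return 1; }
        ibv_dereg_mr(mr);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("registration : %.2f us/page\n", elapsed_us(t0, t1) / ITERS);

    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```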

Page 14:

Design Issues (cont’d)

• Dealing with message completions
  – Polling-based synchronous completion wastes CPU cycles
  – Non-preemptive in kernel mode
  – InfiniBand supports event-based completion by registering an asynchronous event handler
• Thread safety
  – There can be multiple instances of the driver running, so mutual exclusion is needed for shared data structures (see the locking sketch below)
• Reliability issues
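As a purely illustrative example of the mutual exclusion mentioned above (the names hpbd_request, hpbd_pending, etc. are hypothetical, not taken from the actual driver), a shared request queue could be protected with a spinlock in kernel-style C:

```c
/* Sketch: a shared request queue protected by a spinlock, since several
 * driver instances and the completion handler may touch it concurrently. */
#include <linux/spinlock.h>
#include <linux/list.h>

struct hpbd_request {
    struct list_head link;      /* queue linkage */
    /* ... block number, buffer, direction (page-in / page-out) ... */
};

static LIST_HEAD(hpbd_pending);            /* requests awaiting completion */
static DEFINE_SPINLOCK(hpbd_pending_lock);

/* Called from the block-device request path. */
static void hpbd_enqueue(struct hpbd_request *req)
{
    unsigned long flags;

    spin_lock_irqsave(&hpbd_pending_lock, flags);
    list_add_tail(&req->link, &hpbd_pending);
    spin_unlock_irqrestore(&hpbd_pending_lock, flags);
}

/* Called from the completion handler; returns NULL if the queue is empty. */
static struct hpbd_request *hpbd_dequeue(void)
{
    struct hpbd_request *req = NULL;
    unsigned long flags;

    spin_lock_irqsave(&hpbd_pending_lock, flags);
    if (!list_empty(&hpbd_pending)) {
        req = list_first_entry(&hpbd_pending, struct hpbd_request, link);
        list_del(&req->link);
    }
    spin_unlock_irqrestore(&hpbd_pending_lock, flags);
    return req;
}
```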

Page 15:

Our Design

• RDMA-based server design (a message-format sketch follows the figure)

[Figure: client/memory-server message exchange. Page-in: the client sends a page-in request, the memory server RDMA-writes the page into the client's buffer and returns a request ack. Page-out: the client sends a page-out request, the server RDMA-reads the page from the client's buffer and returns a request ack.]
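The slides do not give the wire format, so the following layout is only a hypothetical illustration of what the small request and ack messages in this exchange could look like; the struct and field names are invented for the sketch.

```c
/* Hypothetical wire format for the RDMA-based protocol sketched above.
 * The client sends a small descriptor; the memory server moves the page
 * itself with RDMA and answers with an ack. */
#include <stdint.h>

enum hpbd_op {
    HPBD_PAGE_IN  = 1,  /* server RDMA-writes the page into the client buffer */
    HPBD_PAGE_OUT = 2,  /* server RDMA-reads the page from the client buffer  */
};

struct hpbd_request_msg {
    uint32_t op;          /* HPBD_PAGE_IN or HPBD_PAGE_OUT */
    uint32_t length;      /* payload size, at most 128 KB per the VM layer */
    uint64_t block;       /* block number within the exported swap area */
    uint64_t client_addr; /* registered client buffer the server targets */
    uint32_t client_rkey; /* rkey covering client_addr */
    uint32_t cookie;      /* echoed back in the ack to match the request */
};

struct hpbd_ack_msg {
    uint32_t cookie;      /* identifies the completed request */
    uint32_t status;      /* 0 on success, error code otherwise */
};
```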

Page 16:

Our Design (cont'd)

• Registered buffer pool management
  – Pre-register a buffer pool for page copies before communication
• Hybrid completion handling (see the sketch below)
  – Register an event handler with the InfiniBand transport
  – Both client and server block when there is no traffic
  – Use a polling scheme for bursty incoming requests
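The following is a minimal user-level sketch of such a hybrid scheme using the standard libibverbs completion-channel calls (the kernel driver would use the equivalent kernel verbs): it blocks while idle and drains the completion queue by polling when a burst arrives.

```c
/* Sketch of hybrid completion handling: sleep on the completion channel
 * while idle, then poll the CQ until it is drained so bursts are absorbed
 * without further wake-ups. */
#include <stdio.h>
#include <infiniband/verbs.h>

static void hpbd_completion_loop(struct ibv_comp_channel *chan,
                                 struct ibv_cq *cq)
{
    struct ibv_wc wc;

    /* Arm the CQ once before waiting for the first event. */
    ibv_req_notify_cq(cq, 0);

    for (;;) {
        struct ibv_cq *ev_cq;
        void *ev_ctx;

        /* Block here while there is no traffic. */
        if (ibv_get_cq_event(chan, &ev_cq, &ev_ctx))
            break;
        ibv_ack_cq_events(ev_cq, 1);

        /* Re-arm before draining so no completion slips through unnoticed. */
        ibv_req_notify_cq(cq, 0);

        /* Burst of requests: poll until the CQ is empty. */
        while (ibv_poll_cq(cq, 1, &wc) > 0) {
            if (wc.status != IBV_WC_SUCCESS) {
                fprintf(stderr, "work request %llu failed: %d\n",
                        (unsigned long long)wc.wr_id, wc.status);
                continue;
            }
            /* dispatch the completed page-in / page-out by wc.wr_id ... */
        }
    }
}
```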

Page 17:

Our Design (cont'd)

• Reliable communication
  – Use RC (Reliable Connection) service
• Flow control
  – Use credit-based flow control
• Multiple server support
  – Distribute blocks across multiple servers in linear mode (see the mapping sketch below)

[Figure: the swap area is split at block boundaries 1, n, 2n, ...; consecutive ranges are assigned linearly to Server 1, Server 2, Server 3, ...]
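A linear layout like the one in the figure can be expressed as a one-line mapping from an absolute block number to a (server, offset) pair; the sketch below is illustrative only and the names are hypothetical.

```c
/* Linear (concatenated) layout: the first n blocks of the swap area go to
 * server 0, the next n to server 1, and so on. */
#include <stdint.h>

struct hpbd_target {
    unsigned int server;   /* which memory server holds the block */
    uint64_t     offset;   /* block offset within that server's region */
};

/* blocks_per_server is n in the figure; block is the absolute block number
 * within the swap area. */
static struct hpbd_target hpbd_map_block(uint64_t block,
                                         uint64_t blocks_per_server)
{
    struct hpbd_target t;
    t.server = (unsigned int)(block / blocks_per_server);
    t.offset = block % blocks_per_server;
    return t;
}
```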

Page 18:

Presentation Outline

• Background
• Problem Statement
• Design Issues
• Performance Evaluation
• Conclusions and Future Work

Page 19:

Experiment Setup

• Xeon 2.66 GHz cluster with 2 GB DDR memory, 40 GB ST340014A hard disk, and Mellanox MT23108 InfiniBand HCA
• Memory size configuration:
  – 2 GB for the local-memory test scenario
  – 512 MB for the swapping scenario
• Swap area setup:
  – Use a ramdisk on the memory server as the swap area

Page 20:

Latency Comparison

[Plot: latency in microseconds versus message size in bytes (1 B to 64 KB) for memcpy, native InfiniBand (IB), IPoIB, and GigE]

For one page, InfiniBand native communication latency is 4 times lower than IPoIB, 8 times lower than GigE, and 2.4 times higher than memcpy.

Page 21:

Micro-benchmark: Execution Time

[Bar chart: micro-benchmark execution time in seconds for Memory (enough local memory), HPBD, NBD-IPoIB, NBD-GigE, and Local-Disk]

Network overhead is approximately 30% for IPoIB. Using server-based RDMA further improves performance for HPBD.

Page 22:

Quicksort – Execution time

[Bar chart: quicksort execution time in seconds for Memory (enough local memory), HPBD, NBD-IPoIB, NBD-GigE, and Local-Disk]

HPBD is 1.4 times slower than running with enough local memory and 4.7 times faster than swapping to disk.

Page 23:

Barnes – Execution time

[Bar chart: Barnes execution time in seconds for Memory (enough local memory), HPBD, NBD-IPoIB, NBD-GigE, and Local-Disk]

For a slightly oversized working set, HPBD is still 1.4 times faster than swapping to disk.

Page 24:

Two processes of Quicksort

[Bar chart: execution time in seconds of two concurrent quicksort processes (process1, process2) under four configurations: 512 MB memory with disk swapping, 512 MB memory with 3*512 MB HPBD servers, 1 GB memory with 2*512 MB HPBD servers, and 2 GB memory]

Concurrent instances of quicksort run up to 21 times faster than swapping to disk.

Page 25:

Quicksort with multiple servers

[Bar chart: quicksort execution time in seconds with 1 server over GigE, 1 server over IPoIB, and 1, 2, 4, 8, and 16 HPBD servers over InfiniBand]

Maintaining multiple connections does not degrade performance up to 12 servers

Page 26:

Presentation Outline

• Background
• Problem Statement
• Design Issues
• Performance Evaluation
• Conclusions and Future Work

Page 27:

Conclusions

• Remote paging is an efficient way to enable sequential applications to take advantage of remote memory
• Using InfiniBand for remote paging improves performance compared with GigE and IPoIB, and is comparable to a system with enough local memory
• As network speed increases, host overhead becomes more critical for further performance improvement

Page 28:

Future Work

• Achieve zero-copy along the communication path to reduce host overhead on the critical path

• Dynamic management of idle cluster memory for swap area allocation

Page 29:

Acknowledgements

Our research is supported by the following organizations:

• Current funding support by

• Current equipment donations by

Page 30:

{liangs, noronha, panda}@cse.ohio-state.edu

Network-Based Computing Laboratory
http://nowlab.cis.ohio-state.edu/

Thank You!