Date: 2014/8/26

How to deploy GPFS nodes massively using Diskless Remote Boot Linux
Authors: Rock Kuo, Waue Chen, Che-Yuan Tu, Barz Hsu
Collaborators / Supervisor: Steven Shiau, Jazz Wang



Page 1:

Authors: Rock Kuo, Waue Chen, Che-Yuan Tu, Barz Hsu
Collaborators / Supervisor: Steven Shiau, Jazz Wang

How to deploy GPFS nodes massively using Diskless Remote Boot Linux

Page 2:

Outline
- IBM GPFS
- NCHC DRBL
- Testbed Architecture
- Demo Schedule
- Reference
- Q&A

Page 3:

GPFS (General Parallel File System)
IBM GPFS is a high-performance shared-disk file management solution. It provides fast, reliable access to a common set of file data, scaling from two computers to hundreds of systems.

GPFS Architecture (From IBM)

Page 4:

NCHC DRBL
Diskless Remote Boot in Linux (DRBL) provides a diskless or systemless environment for client machines.

DRBL Logo (from DRBL)

Page 5:

Why do we use DRBL to deploy GPFS?
1. Fast and reconfigurable deployment of GPFS nodes: install the software you need on the DRBL server once, and every client boots from the network using the server's image.
2. Effective use of client disks: since DRBL is diskless, the entire client hard disk is available for GPFS.
3. DRBL commands can manage your GPFS-enabled storage cluster.
4. Storage can be added and removed dynamically, with fault-tolerance support.
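Point 3 above can be illustrated with DRBL's client-management tooling. A minimal sketch, assuming DRBL's `drbl-doit` helper (which broadcasts a command to all clients) and the standard GPFS install path; the exact option syntax may differ between DRBL versions, so check your version's manual:

```shell
# Run a command on every DRBL client at once from the server
# (drbl-doit is DRBL's "do it on all clients" helper).
drbl-doit "uname -r"

# Start the GPFS daemon on every node of the cluster; mmstartup -a
# is the standard GPFS way to bring all nodes up at once.
/usr/lpp/mmfs/bin/mmstartup -a
```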

Page 6:

Our Testbed

Each node:
- CPU: Intel(R) Core(TM)2 Quad Q6600 @ 2.40GHz
- RAM: 2GB DDR2 667
- Disk: sda -> 160GB, sdb -> 320GB
- NIC: Intel Corporation 82566DM Gigabit Network Connection

Page 7:

Our Testbed Architecture

Total disks: 2.5 TB (initial setup); 3.1 TB (after adding disks)

Page 8:

Demo Schedule
Install DRBL and GPFS:
- drblsrv -i ; drblpush -i (install and deploy)
- http://trac.nchc.org.tw/grid/wiki/GPFS_DRBL

Demo 1: Run GPFS in the DRBL environment
- Run GPFS on 8 nodes
- Use 11 disks (6*160G + 5*320G = 2.5T)

Demo 2: Add new disks (test scalability)
- Dynamically add 3 disks
- GPFS merges the new and old disks into one (total disk space: 3.1T)
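The demo steps above might look like the following command sequence. The DRBL commands (`drblsrv -i`, `drblpush -i`) are quoted from the deck; the GPFS commands follow standard GPFS administration, but the node names and descriptor file names (`gpfs.nodes`, `disk.desc`, `newdisk.desc`) are illustrative assumptions:

```shell
# --- DRBL side (from the slides) ---
drblsrv  -i            # install the DRBL server
drblpush -i            # interactively deploy the client environment

# --- GPFS side (illustrative; standard GPFS admin commands) ---
# Create a cluster from a node file; primary/secondary configuration
# servers (gpfs01, gpfs02) are example hostnames.
mmcrcluster -N gpfs.nodes -p gpfs01 -s gpfs02

# Describe the 11 client disks (sda/sdb of each client) as NSDs;
# the descriptor file format is version-specific.
mmcrnsd -F disk.desc

# Create and mount a file system spanning all NSDs (~2.5T total).
mmcrfs /gpfs gpfs0 -F disk.desc
mmstartup -a
mmmount gpfs0 -a

# Demo 2: dynamically add 3 more disks, growing the pool to ~3.1T.
mmadddisk gpfs0 -F newdisk.desc
```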

Page 9:

Demo Schedule (cont.)
Demo 3: Test fault tolerance
- Use a DRBL command to shut down gpfs07 (GPFS will assume gpfs07 has crashed)
- If the GPFS data-replication option is enabled, the GPFS disks remain available for further operation
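Demo 3 depends on GPFS replication. A sketch of how the option might be enabled and the failure observed; the `-m 2 -r 2` replication flags and `mmlsdisk -e` follow standard GPFS usage, but treat the exact commands as assumptions rather than the deck's own procedure:

```shell
# Create the file system with two replicas of both metadata (-m 2)
# and data (-r 2), so losing one node's disks does not lose data.
mmcrfs /gpfs gpfs0 -F disk.desc -m 2 -r 2

# Simulate the failure: power off gpfs07 from the DRBL server
# (DRBL's client-switch tooling, e.g. the interactive `dcs` menu,
# can shut clients down; the command name may vary by version).

# After the node drops out, its disks show as "down", but the
# file system stays mounted and usable; -e lists disks whose
# availability is not "up".
mmlsdisk gpfs0 -e
```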

Page 10:

Future Work
- Write boot scripts for DRBL clients to automatically add and remove GPFS nodes
- Integrate the GPFS deployment function into the DRBL management interface
- Use the GPFS testbed to support the 3D Fly Circuit Image Database

Page 11:

Reference
- DRBL: http://drbl.sourceforge.net/
- GPFS: http://www-03.ibm.com/systems/clusters/software/gpfs/index.html

Page 12:

Q&A