GPFS-EASY

Initial GPFS Cluster Setup

This section provides information on setting up an initial GPFS cluster on an AIX system.

Installation
Setting up SSH
Create Cluster
Verification

Steps: Create the GPFS cluster using mmcrcluster.

Before the cluster can be built, GPFS has to be installed on the required nodes. Then enable passwordless login from each of the nodes to the other nodes.

Step 1

Installing the GPFS filesets is fairly easy for an AIX administrator: use the smitty installp tool and select the filesets, or use the installp command (an example is shown after the fileset list below), depending on your preference.

The filesets required are

gpfs.base
gpfs.msg.en_US
gpfs.docs.data
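
For reference, a minimal installp sketch is shown below; the source directory /tmp/gpfs_install is only an assumed example and should be replaced with the directory or device that actually holds the GPFS installation images.

# installp -agXY -d /tmp/gpfs_install gpfs.base gpfs.msg.en_US gpfs.docs.data

Here -a applies the filesets, -g pulls in requisite filesets automatically, -X expands filesystems if needed, -Y accepts the license agreements, and -d points to the installation images.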

Step 2

The next step is to enable passwordless SSH login.

Either rsh or ssh can be used, but ssh is the preferred method as it offers more security.

On one of the nodes (preferably the node you plan to use as the primary node), generate an ssh key pair and copy the private and public keys to the /root/.ssh directory on all the nodes that are part of the GPFS cluster.

Generate keypair

#ssh-keygen -t dsa

Verify that the key pair was generated

There should be two files in /root/.ssh: id_dsa and id_dsa.pub
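
A quick way to confirm (a simple sketch):

# ls -l /root/.ssh/id_dsa /root/.ssh/id_dsa.pub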

Modify the parameter to yes

Look for the PermitRootLogin parameter in the /etc/ssh/sshd_config file.

Before copying the ssh keys to all other nodes, make sure that the PermitRootLogin parameter is set to yes on all nodes.

If the parameter is not set to yes, change it to 'yes'.

Then refresh the ssh daemon.
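
For example, one way to check the parameter and then restart sshd on AIX, where sshd runs under the System Resource Controller (a sketch):

# grep PermitRootLogin /etc/ssh/sshd_config
# stopsrc -s sshd
# startsrc -s sshd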


Copy the key pair id_dsa and id_dsa.pub to the same location, /root/.ssh, on all other nodes, for example with scp as shown below.
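
A minimal sketch, assuming node2 is one of the other cluster nodes (repeat for each node):

# scp /root/.ssh/id_dsa /root/.ssh/id_dsa.pub node2:/root/.ssh/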

Append the ssh public key to /root/.ssh/authorized_keys on all the nodes, including the primary GPFS node.

# cat /root/.ssh/id_dsa.pub >> /root/.ssh/authorized_keys
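
Afterwards, passwordless login can be verified from each node to every other node, for example:

# ssh node2 date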

Step 3

Create a nodelist

Create a nodelist file in root's home directory containing the names (FQDN) of all the nodes that will be part of the GPFS cluster.

For example:

If the node names are node1.test.com (node1), node2.test.com (node2), etc.

Create a file /root/nodelist

Add the FQDNs or short names of these nodes to the file, one per line (remember to put these entries in the /etc/hosts file as well, for example as sketched below).
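
A hedged /etc/hosts sketch, using the addresses that appear later in the mmlscluster output (adjust to your own addresses):

10.10.19.81 node1.test.com node1
10.10.19.82 node2.test.com node2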

/root/nodelist is the input file with a list of node names and designations, in NodeName:Designation format.

Designations are “manager or client” and “quorum or nonquorum”. (Tip: to make a node a quorum node, specify quorum alone; to make it a quorum client node, specify quorum-client.)

node1:quorum-manager

node2:quorum-manager

Syntax: mmcrcluster -N {NodeDesc[,NodeDesc...] | NodeFile}

-p PrimaryServer

[-s SecondaryServer]

[-r RemoteShellCommand]

[-R RemoteFileCopyCommand]

[-C ClusterName] [-U DomainName]

[-A] [-c ConfigFile]

Example:

mmcrcluster -N /root/nodelist -p node1 -s node2 -r /usr/bin/ssh -R /usr/bin/scp -C testcluster -A

where /root/nodelist contains the list of nodes, node1 is the primary configuration server specified by the -p option, node2 is the secondary configuration server specified by the -s option, ssh is the remote shell used for GPFS command execution specified by the -r option, scp is the copy command used by GPFS to copy files between nodes specified by the -R option, testcluster is the cluster name specified by the -C option, and finally the -A option indicates that GPFS will start automatically when the nodes reboot.

Step 4

Verify the status of the cluster using the mmlscluster command.

# mmlscluster

GPFS cluster information

GPFS cluster name: testcluster

GPFS cluster id: 12399838388936568191

GPFS UID domain: testcluster

Remote shell command: /usr/bin/ssh

Remote file copy command: /usr/bin/scp

GPFS cluster configuration servers:

Primary server : node1

Secondary server: node2

Node Daemon node name IP address Admin node name Designation

-----------------------------------------------------------------------------------------------

1 node1 10.10.19.81 node1 quorum-manager

2 node2 10.10.19.82 node2 quorum-manager


Create GPFS Filesystem using Network shared Disk (NSD)

This section deals with creating the GPFS filesystem using a network shared disk (NSD).

Steps: Creating Network shared Disk (NSD)

The mmcrnsd command is used to add disks (NSDs) to the cluster.

First, you need to create a disk descriptor file.

Store this file as /root/disklist (any name and location can be used for this file).

Format for a disk descriptor is as follows:

DiskName:PrimaryServer:BackupServer:DiskUsage:FailureGroup:DesiredName:StoragePool

Example of a descriptor file

# cat disks

hdisk1:node1:node2:dataAndMetadata:0:test_nsd

This means that hdisk1 is the LUN to be used for the NSD, node1 is the primary server for this NSD, node2 is the backup server for this NSD, dataAndMetadata indicates it can contain data as well as metadata, 0 is the failure group, and test_nsd is the name of the NSD.

After the descriptor file is created, use the mmcrnsd command to create the NSD.

Usage: mmcrnsd -F DescFile [-v {yes | no}]

Eg:

#mmcrnsd -F /root/disklist

mmcrnsd: Processing disk hdisk1

mmcrnsd: 6027-1371 Propagating the cluster configuration data to all affected nodes. This is an asynchronous process.

Verify the NSDs were created

Use mmlsnsd to verify the NSDs were properly created

# mmlsnsd


File system Disk name NSD servers

---------------------------------------------------------------------------

(free disk) test_nsd node1

After the NSD has been created, the "disk descriptor" file (/root/disklist in our case) will have been rewritten and now contains the NSD disk names. This newly written "disk descriptor" file is then used as input to the mmcrfs command.

# cat disks

# hdisk1:node1:node2:dataAndMetadata:0:test_nsd

test_nsd:::dataAndMetadata:0::

Creating the GPFS Filesystem

Before creating the gpfs filesystem make sure to create the mount point.

For example, if /gpfsFS1 is the filesystem name to be used, create the mount point with the command:

# mkdir -p /gpfsFS1

Create the GPFS filesystem using the command below.

# mmcrfs /gpfsFS1 /dev/gpfsFS1 -F /root/disklist -B 64k -m 2 -M 2

This will create the filesystem gpfsFS1 with device /dev/gpfsFS1 (whose underlying raw device will be the disk mentioned in /root/disklist), with a block size of 64K and the default and maximum number of metadata replicas (-m and -M) set to 2. If you add an extra NSD as above and give it failure group 1, you can also specify -r 2 and -R 2, which sets the default and maximum number of data replicas to 2 so that two copies of the data are kept (similar to mirroring).
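
As a hedged sketch (assuming the descriptor file lists at least two NSDs placed in different failure groups), such a replicated filesystem could be created as follows:

# mmcrfs /gpfsFS1 /dev/gpfsFS1 -F /root/disklist -B 64k -m 2 -M 2 -r 2 -R 2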

Now the NSD can be viewed with the mmlsnsd command.

# mmlsnsd

File system Disk name NSD servers

---------------------------------------------------------------------------

gpfs_fs1 test_nsd node1
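
Once created, the filesystem can be mounted on all nodes; a brief sketch using the device name from the example above (the mmmount command is covered again later in this document):

# mmmount gpfsFS1 -a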


GPFS startup, shutdown, status and adding a new node

This section explains the following GPFS operations:

To Check Status of GPFS Cluster
To Startup GPFS Cluster
To Shutdown GPFS Cluster
To add a new node to a running GPFS Cluster
To change designation of the running GPFS cluster

Steps: To Check Status of the GPFS Cluster

# mmgetstate -aLs    (To get the status of GPFS)

Node number Node name Quorum Nodes up Total nodes GPFS state Remarks

------------------------------------------------------------------------------------

1 node1 2 2 2 active quorum node

2 node2 2 2 2 active quorum node

Summary information

---------------------

Number of nodes defined in the cluster: 2

Number of local nodes active in the cluster: 2

Number of remote nodes joined in this cluster: 0

Number of quorum nodes defined in the cluster: 2

Number of quorum nodes active in the cluster: 2

Quorum = 2, Quorum achieved

To shutdown a single node

# mmshutdown -N <nodename>

# mmshutdown -N node1

Wed Dec 3 00:11:44 CDT 2010: 6027-1341 mmshutdown: Starting force unmount of GPFS file systems


Wed Dec 3 00:11:49 CDT 2010: 6027-1344 mmshutdown: Shutting down GPFS daemons

node1: Shutting down!

node1: 'shutdown' command about to kill process 5701702

Wed Dec 3 00:11:56 CDT 2010: 6027-1345 mmshutdown: Finished

To shutdown all the nodes together

# mmshutdown -N all

To check status of the nodes

# mmgetstate -a

Node number Node name GPFS state

------------------------------------------

1 node1 arbitrating

2 node2 down

This shows that node2 is either down or GPFS has not been started on it, and hence node1 is arbitrating to establish quorum. The solution is to start GPFS on node2, as described below.

To startup a single node

# mmstartup -N <nodename>

# mmstartup -N node2

Wed Nov 3 00:14:09 CDT 2010: 6027-1642 mmstartup: Starting GPFS ...

# mmgetstate -a

Node number Node name GPFS state


------------------------------------------

1 node1 active

2 node2 active
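
To start GPFS on all the nodes at once, the -a option can be used (a brief sketch):

# mmstartup -a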

To add a new node

mmaddnode -N {NodeDesc[,NodeDesc...] | NodeFile}

– Must have root authority

– May be run from any node in the GPFS cluster

– Ensure proper authentication (.rhosts or ssh key exchanges)

– Install GPFS onto the new node

– Decide designation(s) for the new node, for example, Manager | Quorum

Eg: To add the node node3 as a quorum node and a manager

#mmaddnode -N node3:quorum-manager

Wed Nov 3 01:20:35 CDT 2010: 6027-1664 mmaddnode: Processing node node3

mmaddnode: Command successfully completed

mmaddnode: 6027-1371 Propagating the cluster configuration data to all

affected nodes. This is an asynchronous process.

To Check the Status of the GPFS Cluster

# mmlscluster

GPFS cluster information

========================

GPFS cluster name: testcluster


GPFS cluster id: 12399838388936568191

GPFS UID domain: testcluster

Remote shell command: /usr/bin/ssh

Remote file copy command: /usr/bin/scp

GPFS cluster configuration servers:

-----------------------------------

Primary server: node1

Secondary server: node2

Node Daemon node name IP address Admin node name Designation

-----------------------------------------------------------------------------------------------

1 node1 10.10.19.81 node1 quorum-manager

2 node2 10.10.19.82 node2 quorum

3 node3 10.10.19.83 node3 quorum-manager

To change the designation of a node

mmchnode

In our case node3 was added as a quorum-manager node. To change the designation of node3 to client:

# mmchnode --client -N node3

Wed Nov 3 00:29:01 CDT 2010: 6027-1664 mmchnode: Processing node node3

mmchnode: 6027-1371 Propagating the cluster configuration data to all

affected nodes. This is an asynchronous process.

# mmlscluster


GPFS cluster information

===========================

GPFS cluster name: testcluster

GPFS cluster id: 12399838388936568191

GPFS UID domain: testcluster

Remote shell command: /usr/bin/ssh

Remote file copy command: /usr/bin/scp

GPFS cluster configuration servers:

-----------------------------------

Primary server: node1

Secondary server: node2

Node Daemon node name IP address Admin node name Designation

-----------------------------------------------------------------------------------------------

1 node1 10.10.19.81 node1 quorum-manager

2 node2 10.10.19.82 node2 quorum

3 node3 10.10.19.83 node3 quorum


GPFS NSD and filesystem operations

This section explains the following actions in more detail, with examples.

To list characteristics of the GPFS filesystem
To add a tiebreaker disk
To mount all the GPFS filesystems
To list all the physical disks which are part of a GPFS filesystem
To display the GPFS filesystem
To unmount a GPFS filesystem from one node
To remove a GPFS filesystem
To remove a disk from the filesystem
To remove the NSD
To replace the disk
To add a new disk to the GPFS filesystem
To suspend a disk
To resume the disk

Steps:

To list characteristics of GPFS filesystem

# mmlsfs <GPFS filesystem name>

# mmlsfs gpfs_fs1

flag value description

---- ---------------- -----------------------------------------------------

-f 2048 Minimum fragment size in bytes

-i 512 Inode size in bytes

-I 8192 Indirect block size in bytes

-m 1 Default number of metadata replicas

-M 2 Maximum number of metadata replicas

-r 1 Default number of data replicas

-R 2 Maximum number of data replicas

-j cluster Block allocation type

-D nfs4 File locking semantics in effect

-k all ACL semantics in effect

-a 1048576 Estimated average file size

-n 32 Estimated number of nodes that will mount file system

-B 65536 Block size

-Q none Quotas enforced

none Default quotas enabled

-F 33536 Maximum number of inodes

-V 10.01 (3.2.1.5) File system version

-u yes Support for large LUNs?


-z no Is DMAPI enabled?

-L 2097152 Logfile size

-E yes Exact mtime mount option

-S no Suppress atime mount option

-K whenpossible Strict replica allocation option

-P system Disk storage pools in file system

-d test_nsd Disks in file system

-A yes Automatic mount option

-o none Additional mount options

-T /gpfs_fs1 Default mount point

To add a tiebreaker disk

mmchconfig

Prerequisite: a LUN obtained from the SAN should first be added as an NSD, as described in the section “Creating Network shared Disk (NSD)”, before proceeding.

Eg: To use the NSD test_nsd as a tiebreaker disk, use the following command:

mmchconfig tiebreakerDisks="test_nsd"

Eg: To remove a tiebreaker disk, use the following command:

mmchconfig tiebreakerDisks=no
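
To verify the current setting, one option (a sketch) is to list the cluster configuration and filter for the tiebreaker entry:

# mmlsconfig | grep -i tiebreaker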

To mount all the GPFS filesystems

# mmmount all -a

To list all the physical disks which are part of a GPFS filesystem


mmlsnsd

To show the node names, use the -f and -m options:

# mmlsnsd -f gpfs -m

Disk name NSD volume ID Device Node name Remarks

---------------------------------------------------------------------------------------

nsd1 AC1513514CD152BF /dev/hdisk1 node1

nsd2 AC1513514CD152C0 /dev/hdisk2 node2

nsd3 AC1513514CD152C1 /dev/hdisk3 node3

nsd4 AC1513514CD15352 /dev/hdisk4 node4

To show the failure group info and storage pool info

# mmlsdisk gpfs

disk driver sector failure holds holds storage

name type size group metadata data status availability pool

------------ -------- ------ ------- -------- ----- ------------- ------------ ------------

nsd1 nsd 512 -1 yes yes ready up system

nsd2 nsd 512 -1 yes yes ready up system

nsd3 nsd 512 -1 no yes ready up pool1

nsd4 nsd 512 -1 no yes ready up pool1

To display the GPFS filesystem


# mmdf gpfs

disk disk size failure holds holds free KB free KB

name in KB group metadata data in full blocks in fragments

--------------- ------------- -------- -------- ----- -------------------- -------------------

Disks in storage pool: system (Maximum disk size allowed is 97 GB)

nsd1 10485760 -1 yes yes 10403328 ( 99%) 960 ( 0%)

nsd2 10485760 -1 yes yes 10402304 ( 99%) 960 ( 0%)

------------- -------------------- -------------------

(pool total) 20971520 20805632 ( 99%) 1920 ( 0%)

To Unmount a GPFS filesystem from one node

mmumount <GPFS filesystem> -N <nodename>

mmumount gpfs_fs1 -N node1

where gpfs_fs1 is the GPFS filesystem and node1 is the name of the node from which it needs to be unmounted.

To unmount a GPFS filesystem from all nodes

#mmumount <GPFS filesystem> -a

mmumount gpfs_fs1 -a


To remove a GPFS filesystem

1. # mmumount <GPFS filesystem> -a    (To unmount the GPFS filesystem from all nodes)

2. # mmdelfs gpfs_fs1 -p    (To remove the filesystem)

Steps with example: To remove FS gpfs_fs1

Initial output before removing the filesystem

# mmdf gpfs_fs1

disk disk size failure holds holds free KB free KB

name in KB group metadata data in full blocks in fragments

--------------- ------------- -------- -------- ----- -------------------- -------------------

Disks in storage pool: system (Maximum disk size allowed is 104 GB)

test_nsd 10485760 0 yes yes 10359360 ( 99%) 152 ( 0%)

test_nsd2 10485760 0 yes yes 10483648 (100%) 62 ( 0%)

test_nsd1 10485760 1 yes yes 10359360 ( 99%) 160 ( 0%)

------------- -------------------- -------------------

(pool total) 31457280 31202368 ( 99%) 374 ( 0%)

============= ==================== ===================

(total) 31457280 31202368 ( 99%) 374 ( 0%)


Inode Information

-----------------

Number of used inodes: 4042

Number of free inodes: 29494

Number of allocated inodes: 33536

Maximum number of inodes: 33536

---------------------------------------------------------------------------------------------

1. mmumount gpfs_fs1 -a

2. # mmdelfs gpfs_fs1 -p

GPFS: 6027-573 All data on following disks of gpfs_fs1 will be destroyed:

test_nsd

test_nsd1

test_nsd2

GPFS: 6027-574 Completed deletion of file system /dev/gpfs_fs1.

mmlsnsd output after the filesystem was removed, showing all NSDs as free disks:

# mmlsnsd

File system Disk name NSD servers

---------------------------------------------------------------------------

(free disk) test_nsd1 directly Attached

(free disk) test_nsd2 directly Attached

(free disk) test_nsd directly Attached


To remove a disk from the filesystem

Remember that you should remove a disk only after confirming that adequate space is left on the other disks that are part of this filesystem (you can check this with mmdf <GPFS filesystem name>), so that when the disk is removed the data will be restriped across the other available disks, OR that 2 data replicas are available, which can be checked using mmlsfs <GPFS filesystem>.

Syntax: mmdeldisk <GPFS filesystem> <NSD name> -r

The -r option is very important as it will restripe and rebalance the data across the other available disks in this filesystem.

# mmdeldisk gpfs_fs1 test_nsd2 -r

Deleting disks ...

Scanning system storage pool

GPFS: 6027-589 Scanning file system metadata, phase 1 ...

GPFS: 6027-552 Scan completed successfully.

GPFS: 6027-589 Scanning file system metadata, phase 2 ...

GPFS: 6027-552 Scan completed successfully.

GPFS: 6027-589 Scanning file system metadata, phase 3 ...

GPFS: 6027-552 Scan completed successfully.

GPFS: 6027-589 Scanning file system metadata, phase 4 ...

GPFS: 6027-552 Scan completed successfully.

GPFS: 6027-565 Scanning user file metadata ...

GPFS: 6027-552 Scan completed successfully.

Checking Allocation Map for storage pool 'system'

GPFS: 6027-370 tsdeldisk64 completed.

mmdeldisk: 6027-1371 Propagating the cluster configuration data to all

affected nodes. This is an asynchronous process.


# mmlsnsd

File system Disk name NSD servers

---------------------------------------------------------------------------

gpfs_fs1 test_nsd directly attached

gpfs_fs1 test_nsd1 directly attached

(free disk) test_nsd2 directly attached

To remove the NSD

Remember that you should only remove NSDs that are free. The steps to remove an NSD are:

#mmdelnsd <NSD name>

# mmlsnsd

File system Disk name NSD servers

---------------------------------------------------------------------------

(free disk) test_nsd1 directly attached

gpfs_fs1 test_nsd2 directly attached

gpfs_fs1 test_nsd directly attached

# mmdelnsd test_nsd1

mmdelnsd: Processing disk test_nsd1

mmdelnsd: 6027-1371 Propagating the cluster configuration data to all

affected nodes. This is an asynchronous process.

# mmlsnsd


File system Disk name NSD servers

---------------------------------------------------------------------------

gpfs_fs1 test_nsd2 directly attached

gpfs_fs1 test_nsd directly attached

To replace the disk

Prerequisite for replacing or adding a new disk: the physical disk / LUN should be added as an NSD as described in the “Creating Network shared Disk (NSD)” section.

Syntax: mmrpldisk <GPFS filesystem name> <NSD to be replaced> <new NSD> -v {yes|no}

yes – the new NSD is checked to verify that it does not already contain data from an existing filesystem

no – this check is skipped

In this example there are 3 existing NSDs (nsd2, nsd3 and nsd4) and a newly added NSD (nsd1) which is not yet part of a GPFS filesystem. This procedure explains how to replace nsd4 with nsd1.

# mmlsnsd

File system Disk name NSD servers

---------------------------------------------------------------------------

gpfs_fs1 nsd2 (directly attached)

gpfs_fs1 nsd3 (directly attached)

gpfs_fs1 nsd4 (directly attached)


(free disk) nsd1 (directly attached)

# mmrpldisk gpfs nsd4 nsd1 -v no

Verifying file system configuration information ...

Replacing nsd4 ...

GPFS: 6027-531 The following disks of gpfs will be formatted on node trlpar06_21:

nsd1: size 10485760 KB

Extending Allocation Map

Checking Allocation Map for storage pool 'system'

GPFS: 6027-1503 Completed adding disks to file system gpfs_fs1

Scanning system storage pool

GPFS: 6027-589 Scanning file system metadata, phase 1 ...

GPFS: 6027-552 Scan completed successfully.

GPFS: 6027-589 Scanning file system metadata, phase 2 ...

Scanning file system metadata for pool1 storage pool

GPFS: 6027-552 Scan completed successfully.

GPFS: 6027-589 Scanning file system metadata, phase 3 ...

GPFS: 6027-552 Scan completed successfully.

GPFS: 6027-589 Scanning file system metadata, phase 4 ...

GPFS: 6027-552 Scan completed successfully.

GPFS: 6027-565 Scanning user file metadata ...

100 % complete on Thu Nov 4 02:14:14 2010

GPFS: 6027-552 Scan completed successfully.

Checking Allocation Map for storage pool 'system'

Done

Check the mmlsnsd output after the activity. Notice that nsd1 has now become part of the gpfs_fs1 filesystem and nsd4 has become free.


# mmlsnsd

File system Disk name NSD servers

---------------------------------------------------------------------------

(free disk) nsd4 (directly attached)

gpfs_fs1 nsd1 (directly attached)

gpfs_fs1 nsd2 (directly attached)

gpfs_fs1 nsd3 (directly attached)

To add a new disk to the GPFS filesystem

Prerequisite for adding a new disk: the physical disk / LUN should be added as an NSD as described in the “Creating Network shared Disk (NSD)” section, and the same disk descriptor file used for creating the NSD has to be passed with the -F option.

Step 1: Create a disk descriptor file with any name. Here we use the file name "disks" and the disk hdisk10.

# cat disks

hdisk10:::dataAndMetadata:0:test_nsd10

Step 2: Create the NSD with the new disk hdisk10

# /usr/lpp/mmfs/bin/mmcrnsd -F disks

mmcrnsd: Processing disk hdisk10

mmcrnsd: 6027-1371 Propagating the cluster configuration data to all

affected nodes. This is an asynchronous process.


# mmlsnsd


File system Disk name NSD servers

---------------------------------------------------------------------------

gpfs_fs1 test_nsd2 directly attached

gpfs_fs1 test_nsd directly attached

(free disk) test_nsd10 directly attached

# cat disks

# hdisk10:::dataAndMetadata:0:test_nsd10

test_nsd10:::dataAndMetadata:0::

Step 3: Add the disk to the GPFS filesystem gpfs_fs1

# mmadddisk gpfs_fs1 -F disks

GPFS: 6027-531 The following disks of gpfs_fs1 will be formatted

test_nsd10: size 10485760 KB

Extending Allocation Map

Checking Allocation Map for storage pool 'system'

GPFS: 6027-1503 Completed adding disks to file system gpfs_fs1.

mmadddisk: 6027-1371 Propagating the cluster configuration data to all

affected nodes. This is an asynchronous process.

To suspend a disk

This is useful if you suspect a problem with an existing disk and want to stop further writes to that disk.

Syntax: mmchdisk <gpfs filesystem name> suspend -d <nsd name>


# mmchdisk gpfs_fs1 suspend -d nsd4

# mmlsdisk /dev/gpfs_fs1

disk driver sector failure holds holds storage

name type size group metadata data status availability pool

------------ -------- ------ ------- -------- ----- ------------- ------------ ------------

nsd4 nsd 512 0 yes yes suspended up system

nsd2 nsd 512 1 yes yes ready up system

nsd3 nsd 512 0 no yes ready up pool1

To resume a disk

Syntax: mmchdisk <gpfs filesystem name> resume -d <nsd name>

# mmlsdisk /dev/gpfs_fs1

disk driver sector failure holds holds storage

name type size group metadata data status availability pool

------------ -------- ------ ------- -------- ----- ------------- ------------ ------------

nsd4 nsd 512 0 yes yes suspended up system

nsd2 nsd 512 1 yes yes ready up system

nsd3 nsd 512 0 no yes ready up pool1

# mmchdisk gpfs_fs1 resume -d nsd4


# mmlsdisk /dev/gpfs_fs1

disk driver sector failure holds holds storage

name type size group metadata data status availability pool

------------ -------- ------ ------- -------- ----- ------------- ------------ ------------

nsd4 nsd 512 0 yes yes ready up system

nsd2 nsd 512 1 yes yes ready up system

nsd3 nsd 512 0 no yes ready up pool1