
  • Managing Storage Volume Manager (RAID & SVM)

    Study Purpose Only

    www.networkuser.wordpress.com

  • Advantages of a Storage Volume Manager

    1. The availability of abundant disk space makes it possible to provide data replication, thereby implementing fault tolerance.

    2. Because a volume can span across multiple disks, we can write to multiple disks simultaneously to improve performance.

    • These two features are implemented in a technology called Redundant Array of Inexpensive/Independent Disks (RAID).

    Solaris Volume Manager (SVM) [a Solaris tool]

    • A disk set takes the management of disk space from a single disk to multiple disks.


  • Disk Mirroring

    • Disk mirroring is a process of designating a disk drive, say disk B, as a duplicate (mirror) of another disk drive, say disk A.

    • Mirroring has a single point of failure: if the disk controller that serves both disks fails, both disks become inaccessible. The solution to this problem is disk duplexing.

  • Disk Duplexing

    • Disk duplexing solves this problem by providing each disk drive its own disk controller.

    • Duplexing therefore improves fault tolerance over mirroring.

  • Disk Striping

    • Disk striping breaks the data into small pieces called stripes, and those stripes are written to multiple disks simultaneously.

    • Because the read/write heads of multiple disks work simultaneously, striping improves read/write performance.

  • Data Redundancy by Parity

    • If you lose part of the data, you can reconstruct it from the parity information for that data.

    • When a chunk (say a block) of data is stored on a disk, its parity information is calculated and stored on another disk. If this part of the data is lost, it can be reconstructed from the parity information.

    • Note that parity provides fault tolerance at the cost of performance and disk space, because parity calculations take CPU time, and parity information takes disk space.
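    • A simple illustration (not from the original slides): with XOR-based parity, if disk 1 holds the bits 1010 and disk 2 holds 0110, the parity 1010 XOR 0110 = 1100 is stored on a third disk. If disk 2 fails, its contents can be rebuilt as 1010 XOR 1100 = 0110.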

  • RAID (Redundant Array of Inexpensive/Independent Disks)

    • Uses a set of redundant disks to provide fault tolerance.

    • Disks usually reside together in a cabinet and are therefore referred to as an array.

    RAID levels:

    RAID 0:

    • This level of RAID uses striping on multiple disks.

    • This means that a volume contains stripes on multiple disks, and the data can be written to these stripes simultaneously.

    • RAID 0 improves performance

    • Does not implement fault tolerance even though it uses multiple disks

    • If a disk that contains a stripe of a file fails, you lose the file.

  • RAID 1:

    • This RAID level uses mirroring (or duplexing) on two disks to provide a basic level of disk fault tolerance.

    • Both disks contain the same data—that is, one disk is a mirror image of the other.

    • RAID 1 also improves read performance, because if one disk is busy, the data can be read from the other disk.

    RAID 1+0:

    • Data is striped across mirrors to provide performance improvement in addition to fault tolerance.

  • RAID 2:

    • This RAID level implements striping with parity

    • Needs at least three disks because parity information is written on a disk other than the data disks.

    • The striping is done at bit level—that is, bits are striped across multiple disks.

    RAID 3:

    • Similar to RAID 2 (striping with parity), but the striping is done at the byte level as opposed to the bit level.

    • Offers improved performance over RAID 2, because more data can be read or written in a single read/write operation.

  • RAID 4:

    • Similar to RAID 2 and RAID 3 (striping with parity).

    • The main difference is that the striping is done at the block level as opposed to the bit or byte level.

    • Offers improved performance because more data can be read or written in a single read/write operation.

    RAID 5:

    • Implements striping with distributed parity

    • That is, parity is also striped across multiple disks

    • The data and parity can be interleaved on all disks

    • The parity of a given piece of data must not be written to the same disk that holds the data, because that would defeat the whole purpose of parity.

    • RAID 5 requires three disks at minimum

  • SVM (Solaris Volume Manager)

    • A volume is a named chunk of disk space that can occupy a part of a disk or the whole disk

    • SVM is a software product that helps you manage huge amounts of data spread over a large number of disks. It serves three purposes:

    ■ Increasing storage capacity

    ■ Increasing data availability

    ■ Easing administration of large storage devices

    Important concepts related to SVM:

    Soft partition:

    • A soft partition divides a disk slice or a logical volume into as many divisions as needed.

    • The maximum size of a soft partition is limited to the size of the slice or the logical volume of which it is a part.
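    • For example (an illustrative command; the volume name d20, the slice c0t1d0s2, and the size are assumptions, not from the original slides), a 2 GB soft partition could be created with:

    • # metainit d20 -p c0t1d0s2 2g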

  • Hot spare:

    • A hot spare is a slice that is reserved for automatic substitution in case the corresponding data slice fails.

    • Using SVM, you can dynamically add, delete, replace, and enable hot spares within the hot spare pool.

    Hot spare pool:

    • A hot spare pool is a collection (an ordered list) of hot spares that SVM uses to provide increased data availability and fault tolerance for RAID 1 (mirrored) volumes and RAID 5 (striping with distributed parity) volumes.

    Disk set:

    • A disk set is a set of disk drives that contain logical volumes and hot spare pools.

  • State database:

    • The state database contains the configuration and status information about disk sets, and about the volumes and hot spares in a disk set managed by SVM.

    • SVM also maintains replicas (multiple copies) of a state database to provide fault tolerance.

  • Logical Volumes Supported by SVM

    • A logical volume is a group of physical disk slices that appears to the system as a single logical device.

    Logical volumes are used to increase storage capacity, data availability (and hence fault tolerance), and performance (including I/O performance).

  • RAID 0 Volumes

    • RAID 0 volumes enable you to dynamically expand disk storage capacity.

    • There are three kinds of RAID 0 volumes (example commands follow the list):

    Stripe volume:

    • A stripe volume spreads data across two or more components (slices or soft partitions).

    • The size of each data segment is called the interlace size (16 KB by default).

    Concatenation volumes:

    • A concatenation volume writes data to multiple components sequentially: it writes data to the first component until that component is full, then moves on to write to the next component.

    • No parallel access.

    Concatenated stripe volume:

    • A concatenated stripe volume is a stripe volume that has been expanded by adding components.
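    Example commands (the volume names d10 and d11 and the slice names are assumptions, not from the original slides). The first creates a stripe volume with one stripe of two components and a 32 KB interlace; the second creates a concatenation of the same two components (two stripes of one component each):

    • # metainit d10 1 2 c0t2d0s2 c0t3d0s2 -i 32k

    • # metainit d11 2 1 c0t2d0s2 1 c0t3d0s2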

  • RAID 1 Volumes

    • Each copy of a RAID 0 volume in a RAID 1 volume (mirror) is called a submirror.

    • For a volume to be a mirror volume it must contain at least two submirrors, but SVM supports up to four submirrors in a RAID 1 volume.

    • The same data will be written to more than one submirror

    • This gives rise to multiple read/write policy options for a RAID 1 volume.

  • RAID 5 Volumes

    • A RAID 5 volume is a striped volume that also provides fault tolerance by using parity information distributed across all components.

    • When we lose data, we must not also lose its parity information (data and its parity must not reside on the same component).

    • A component that already contains a file system must not be included in the creation of a RAID 5 volume, because doing so will erase the data during initialization.

    • Use components of equal size.

    • Set the interlace value (the size of the data segments); otherwise, it will be set to a default of 16 KB.

    • Because RAID 5 uses parity, which slows down writes, it may not be a good solution for write-intensive applications.
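    For example (an illustrative command; the volume name d45 and the slice names are assumptions, not from the original slides), a RAID 5 volume on three components with a 32 KB interlace could be created with:

    • # metainit d45 -r c1t0d0s2 c2t0d0s2 c3t0d0s2 -i 32k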

  • Configuration

    In command mode, use the metadb command to create, delete, or check the status of a state database.

    Creating State Databases

    • The default size for a state database replica in SVM is 8192 blocks (4 MB).

    Example:

    • To create an initial (first) state database replica on a new system, on slice c0t1d0s7:

    • # metadb -a -f c0t1d0s7

  • metadb command

    metadb [<flag>] [<option>] <slice>

    Values for <flag>:

    -a. Add a database replica. Used with the -f option when no database replica exists.

    -d. Delete all replicas located on the specified slice and update the /kernel/drv/md.conf and /etc/lvm/mddb.cf files accordingly.

    -f. Use with -a to create the database when no replica exists.

    -i. Inquire about the status of the replicas.

    Values for <option>:

    -c <n>. Specify the number of replicas to be added to the specified slice. The default is 1.

    -l <block-count>. Specify the size in blocks of the replica that is to be created. The default is 8192.
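    For example (the slice name c0t2d0s7 is an assumption, not from the original slides), the following adds two replicas of the default size to a slice and then checks the status of all replicas:

    • # metadb -a -c 2 c0t2d0s7

    • # metadb -i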

  • Performing Mirroring and Unmirroring

    Building a Mirror:

    Step I: Identify the slice that contains the existing file system to be mirrored; in this example, slice c0t0d0s0.

    Step II: Create a new RAID 0 volume on the slice from Step I (a worked example follows this step):

    metainit -f <volume-name> <number-of-stripes> <components-per-stripe> <component-names>

    • Use the -f option when the slice contains a mounted file system.

    ■ <volume-name>. Specify the name of the volume that you are creating.

    ■ <number-of-stripes>. Specify the number of stripes to create in the volume.

    ■ <components-per-stripe>. Specify the number of components each stripe should have.

    ■ <component-names>. Specify the names of the components that will be used to create the volume, c0t0d0s0 in this example.

    Mounted FS -> reboot needed

    Unmounted FS -> reboot not necessary
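    A worked example of Step II (the volume name d1 is an assumption; c0t0d0s0 is the slice from Step I):

    • # metainit -f d1 1 1 c0t0d0s0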

  • Step III: Create a second RAID 0 volume on an unused slice (for example, c1t1d0s0) to act as the second submirror.

    Step IV: Create a one-way mirror:

    metainit <volume-name> -m <submirror-name>

    The -m option means create a mirror.

    <volume-name> specifies the name of the mirror volume that you want to create.

    The <submirror-name> argument specifies the name of the component that will be the first submirror in the mirror.
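    For example (the mirror name d0 is an assumption; d1 is the volume from the Step II example):

    • # metainit d0 -m d1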

    Step V:

    If the file system you are mirroring is not the root (/) file system, edit the /etc/vfstab file to make sure that the file system mount instructions refer to the mirror, not to the block device.

    For example, an entry like the following in the /etc/vfstab file would be wrong:

    • /dev/dsk/<slice> /dev/rdsk/<slice> /var ufs 2 yes -

    Change this entry to read:

    • /dev/md/dsk/<mirror-name> /dev/md/rdsk/<mirror-name> /var ufs 2 yes -

  • Step VI: Remount your newly mirrored file system.

    ■ If you are mirroring the root (/) file system, execute the metaroot command to tell the system to boot from the mirror, and then reboot.

    • # metaroot <mirror-name>

    • # reboot

    ■ If you are mirroring a file system that is not the root (/) file system and that cannot be unmounted, just reboot your system.

    • If you are mirroring a file system that can be unmounted, you do not need to reboot. Just unmount and remount the file system:

    • # umount <mount-point>

    • # mount <mount-point>

    Step VII: Attach the second submirror by issuing the metattach command:

    metattach <mirror-name> <submirror-name>
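    For example (continuing the illustrative names; d2 would be the volume created in Step III):

    • # metattach d0 d2

    This converts the one-way mirror into a two-way mirror, and SVM synchronizes the new submirror with the existing one in the background.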

  • Unmirroring a Mounted File System

    Step 1:

    Become superuser.

    Step 2:

    Issue the following command to verify that at least one submirror is in the Okay state.

    • # metastat <mirror-name>

    Step 3:

    Issue the following command to detach the submirror that you want to continue using for the file system:

    • # metadetach <mirror-name> <submirror-name>

  • Step 4:

    Use one of the following commands, depending on the file system you want to unmirror:

    ■ For the root (/) file system, execute the metaroot command to tell the system where to boot from:

    • # metaroot <root-slice>

    ■ For the /usr, /opt, or swap file systems, change the file system entry in the /etc/vfstab file to use a non-SVM device (slice).

    Step 5:

    Reboot the system:

    • # reboot

    Step 6:

    Clear the remaining mirror and submirrors:

    • # metaclear -r <mirror-name>
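    An illustrative end-to-end run (the names d0, d1, and the slice c0t0d0s0 are assumptions; here d1 is the submirror to keep using, built on c0t0d0s0, and the file system is root (/)):

    • # metastat d0

    • # metadetach d0 d1

    • # metaroot /dev/dsk/c0t0d0s0

    • # reboot

    • # metaclear -r d0

    (The detached submirror d1 can then be removed with # metaclear d1.)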