The Linux Logical Volume Manager (LVM)


27/02/2012, www.softpanorama.org/Commercial_linuxes/logical_volume_manager.shtml

Softpanorama: May the source be with you, but remember the KISS principle ;-)


The general concept of a logical volume manager (LVM) stems from the desire to create filesystems that span several physical disks, as well as to change the size of existing partitions on the fly. Another important feature is the ability to create snapshots. See Snapshots.

The Linux LVM implementation is similar to the HP-UX LVM implementation, although the actual code probably has more in common with the AIX implementation. Both of them are derivatives of VxFS, the Veritas filesystem. The Veritas filesystem and volume manager have their roots in a fault-tolerant proprietary minicomputer built by Veritas in the 1980s. They have been available for Solaris since at least 1993 and were later integrated into HP-UX, AIX and SCO UNIX. Veritas Volume Manager code has also been used (in extensively modified form and without command line tools) in Windows. The quality and architectural integrity of the current implementation of LVM is low. Recovery tools in case of severe malfunction are limited. LVM is the source of many difficult-to-resolve problems, including, but not limited to, situations when a production server becomes unbootable after regular updates.

LVM adds an additional layer of complexity, and my recommendation is to avoid it unless you need some of the functions it provides.

Recovery of an LVM-controlled partition is more complex and time consuming. It helps if the installation DVD rescue mode automatically recognizes the LVM group. This is the case for Suse 10. See Recovering a Lost LVM Volume Disk (Novell User Communities) and Recovery of LVM partitions, a good explanation of the basic mechanism.

Putting the root partition under LVM is a risky decision and you may pay the price unless you are an LVM expert. In a large enterprise environment, if the partition is not on a SAN or NAS, extending a partition usually means adding a new pair of hard drives. In such cases creating a cpio archive of the partition, recreating the partitions and restoring the content is a better deal, as such cases happen once in several years. Another path to avoid LVM is to start with it, optimize the size of the partitions based on actual usage, and then move all content to a second pair of drives without LVM, modify /etc/fstab and replace the original drives with the new ones.

Among the cases when LVM is essential are the following:

You need snapshots.

Some of your partitions are so dynamic that they can span several drives during their lifetime. In such cases a SAN or NAS is a better solution than using LVM.

All in all, the current LVM is a pretty convoluted implementation of a three-tier storage hierarchy (physical volumes, logical volumes, partitions). Such an implementation, both from an architectural and from an efficiency standpoint, is somewhat inferior to integrated solutions like ZFS.

For a regular sysadmin who does not have much LVM experience, the sense of desperation and cold down the spine when an LVM-based partition goes south dampens all the advantages that LVM provides. You can find pretty interesting and opinionated tidbits about such situations on the Net. For example, this emotional statement in the discussion thread dev-dm-0?:

I only use those for mounting flash drives, and mapping encrypted partitions. Sorry, I don't do LVM anymore, after a small problem lost me 300GB of data. It's much easier to back up.

Putting the root filesystem under LVM often happens if the first partition is a service partition (for example, a Dell service partition). In this case the swap partition and boot partition take another two primary partitions, the extended partition is the last one, and it is usually put completely under LVM. In this case it is better to allocate the swap partition on a different volume or to a file, so that the root partition is a primary partition.

If you have the root system on an LVM volume, you need to train yourself to use a recovery disk and mount those partitions. It also helps to have a separate backup on CD or other media of /etc/lvm. Among other things it contains the file with the structure of your LVM volume, for example /etc/lvm/backup/vg01.


LVM was originally written (adapted from IBM code?) in 1998 by Heinz Mauelshagen. A good introduction to the basic concepts can be found in the Wikipedia articles Logical volume management and Logical Volume Manager (Linux). Some code was donated by IBM [IBM pitches its open source side]. It is unclear if it is still used. See Enterprise Volume Management System - Wikipedia:

IBM has donated technology, code and skills to the Linux community, Kloeckner said, citing the company's donation of the Logical Volume Manager and its Journaling File System.

Matthew O'Keefe, who from 1990 to May 2000 taught and performed research in storage systems and parallel simulation software as a professor of electrical and computer engineering at the University of Minnesota, founded Sistina Software in May of 2000 to develop storage infrastructure software for Linux, including the Linux Logical Volume Manager (LVM). They created LVM2. Sistina was acquired by Red Hat in December 2003.

LVM2 is identical in Red Hat and Suse, although each has a different GUI interface for managing volumes. The installers for both Red Hat and Suse are LVM-aware.

Although the Linux volume manager works OK and is pretty reliable, the documentation sucks badly for a commercial product. The most readable documentation that I have found is the article by Klaus Heinrich Kiwi, Logical volume management, published at IBM DeveloperWorks on September 11, 2007, although it is now outdated. A good cheatsheet is available from Red Hat: LVM cheatsheet.

Moreover, in RHEL 4 the GUI interface is almost unusable, as the left pane cannot be enlarged. YaST in Suse 10 was a much better deal.

Terminology

The LVM hierarchy includes the Physical Volume (PV), typically a hard disk or partition, though it may well just be a device that 'looks' like a hard disk, e.g. a RAID device; the Volume Group (VG), the new virtual disk that can contain several physical disks; and Logical Volumes (LV), the equivalent of a disk partition in a non-LVM system. The Volume Group is the highest level abstraction used within the LVM.

  hda1   hdc1          (PVs on partitions or whole disks)
     \   /
      \ /
     diskvg            (VG)
     / |  \
    /  |   \
usrlv rootlv varlv     (LVs)
  |     |     |
 ext2 reiserfs xfs     (filesystems)

The lowest level in the LVM storage hierarchy is the Physical Volume (PV). A PV is a single device or partition and is created with the command pvcreate device. This step initializes a partition for later use. During this step each physical volume is divided into chunks of data, known as physical extents; these extents have the same size as the logical extents for the volume group.

Multiple Physical Volumes (initialized partitions) are merged into a Volume Group (VG). This is done with the command vgcreate volume_name device {device}. This step also registers volume_name in the LVM kernel module and thereby makes it accessible to the kernel I/O layer. For example:

vgcreate test-volume /dev/hda2 /dev/hda10

A Volume Group is a pool from which Logical Volumes (LV) can be allocated. An LV is the equivalent of a disk partition in a non-LVM system. The LV is visible as a standard block device; as such, the LV can contain a file system (e.g. /home). Creating an LV is done with the lvcreate command.
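Putting the pvcreate, vgcreate and lvcreate steps together, here is a minimal sketch of the whole pipeline. The device names (/dev/sdb1, /dev/sdc1) and the vg01/lv_data names are assumptions for illustration; because the real commands need root and real block devices, the run wrapper only prints each command instead of executing it:

```shell
#!/bin/sh
# Dry-run sketch of the LVM pipeline described above.
# All device and volume names are hypothetical examples.
run() { echo "$@"; }  # change 'echo "$@"' to '"$@"' to actually execute

run pvcreate /dev/sdb1 /dev/sdc1       # initialize partitions as physical volumes
run vgcreate vg01 /dev/sdb1 /dev/sdc1  # pool the PVs into a volume group
run lvcreate -L 10G -n lv_data vg01    # allocate a 10 GB logical volume from the pool
run mkfs -t ext3 /dev/vg01/lv_data     # the LV is a normal block device: put a filesystem on it
run mount /dev/vg01/lv_data /data      # and mount it
```
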

Here is a summary of the terminology used:

Partition: a portion of physical hard disk space. A hard disk may contain one or more partitions. Partitions are defined by the BIOS and described by partition tables stored on the hard drive.

Volume: a logical concept which hides the physical organization of the storage space. A compatibility volume directly corresponds to a partition, while an LVM volume may span more than one partition on one or more physical disks. A volume is seen by users as a single drive letter.

Physical Volume (PV): a single physical hard drive or a partition of one.

Volume Group (VG): a set of one or more PVs which form a single storage pool. You can define multiple VGs on each system.

Logical Volume (LV): a usable unit of disk space within a VG. LVs are used analogously to partitions on PCs or slices under Solaris: they usually contain filesystems or paging spaces ("swap"). Unlike a physical partition, an LV can span multiple physical volumes that constitute the VG.

Root partition: the physical or logical partition that holds the root filesystem and mount points for all other partitions. It can be a physical partition or a logical volume.

LVM Tools

pvcreate  - Create a physical volume from a hard drive or partition
vgcreate  - Create a volume group from one or more physical volumes
vgextend  - Add a physical volume to an existing volume group
vgreduce  - Remove a physical volume from a volume group
lvcreate  - Create a logical volume from available space in the volume group
lvextend  - Extend the size of a logical volume using free physical extents in the volume group
lvremove  - Remove a logical volume from a volume group, after unmounting it
vgdisplay - Show properties of existing volume groups
lvdisplay - Show properties of existing logical volumes
pvscan    - Show properties of existing physical volumes

Getting the map of the LVM environment

The commands vgdisplay, lvdisplay, and pvscan have man pages that provide a wealth of information on how to navigate the maze of volumes on a particular server.

The first command to use is pvscan, which provides you with information about the physical volumes available:

pvscan [-d|--debug] [-e|--exported] [-h|--help] [--ignorelockingfailure] [-n|--novolumegroup] [-s|--short] [-u|--uuid] [-v[v]|--verbose [--verbose]]

pvscan scans all supported LVM block devices in the system for physical volumes. See lvm for common options.

-e, --exported        Only show physical volumes belonging to exported volume groups.
-n, --novolumegroup   Only show physical volumes not belonging to any volume group.
-s, --short           Short listing format.
-u, --uuid            Show UUIDs (Uniform Unique Identifiers) in addition to device special names.

vgdisplay shows volume groups one by one and provides information about free disk space in each (the "Free  PE / Size" line; "Total PE" gives the overall size in extents):

vgdisplay vg0 | grep "Total PE"
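As a sketch of pulling the free-extent count out of vgdisplay output with awk. Since vgdisplay itself needs a real volume group, the example below runs against a canned sample, and the numbers in it are made up:

```shell
#!/bin/sh
# Hypothetical excerpt of 'vgdisplay vg0' output (values are invented):
sample='  VG Name              vg0
  Total PE              9953
  Alloc PE / Size       2800 / 10.94 GB
  Free  PE / Size       7153 / 27.94 GB'

# Extract the number of free physical extents (5th field of the "Free  PE / Size" line):
free_pe=$(echo "$sample" | awk '/Free  PE/ {print $5}')
echo "$free_pe"   # prints 7153 for this sample
```
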

Operations on Logical Volumes

Among typical operations (adapted from A Walkthrough of the LVM for Linux):

Adding a disk to the Volume Group. To add /dev/hda6 to the Volume Group, just type vgextend vg01 /dev/hda6 and you're done! You can check this out by using vgdisplay -v vg01. Note that there are now a lot more PEs available!

Creating a striped Logical Volume. Note that LVM created your whole Logical Volume on one Physical Volume within the Volume Group. You can also stripe an LV across two Physical Volumes with the -i flag in lvcreate. We'll create a new LV, lv02, striped across hda5 and hda6. Type lvcreate -l4 -nlv02 -i2 vg01 /dev/hda5 /dev/hda6. Specifying the PVs on the command line tells LVM which PEs to use, while the -i2 option tells it to stripe the LV across the two. You now have an LV striped across two PVs!

Moving data within a Volume Group. Up to now, PEs and LEs were pretty much interchangeable. They are the same size and are mapped automatically by LVM. This does not have to be the case, though. In fact, you can move an entire LV from one PV to another, even while the disk is mounted and in use! This will impact your performance, but it can prove useful. Let's move lv01 from hda5 to hda6. Type pvmove -n/dev/vg01/lv01 /dev/hda5 /dev/hda6. This will move all LEs used by lv01 mapped to PEs on /dev/hda5 to new PEs on /dev/hda6. Effectively, this migrates data from hda5 to hda6. It takes a while, but when it's done, take a look with lvdisplay -v /dev/vg01/lv01 and notice that it now resides entirely on /dev/hda6!

Removing a Logical Volume from a Volume Group. Let's say we no longer need lv02. We can remove it and place its PEs back in the empty pool for the Volume Group. First, unmount its filesystem. Next, deactivate it with lvchange -a n /dev/vg01/lv02. Finally, delete it by typing lvremove /dev/vg01/lv02. Look at the Volume Group and notice that the PEs are now unused.

Removing a disk from the Volume Group. You can also remove a disk from a volume group. We aren't using hda5 anymore, so we can remove it from the Volume Group. Just type vgreduce vg01 /dev/hda5 and it's gone!

A file system on a logical volume may be extended. Also, more space may be added to a VG by adding new partitions or devices with the command vgextend. For example:

lvextend -L +4G /dev/VolGroup00/LogVol04

The command pvmove can be used in several ways to move any LV elsewhere. There are also many more commands to rename, remove, split, merge, activate, deactivate and get extended information about current PVs, VGs and LVs.

Here is a typical df map of a server with the volume manager installed. As you can see, all partitions except the /boot partition are referred to via the path /dev/mapper/VolGroup00-LogVolxx, where xx is a two-digit number:

Filesystem            1K-blocks     Used  Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                        4128448   316304    3602432   9% /
/dev/sda3                194449    22382     162027  13% /boot
none                    2020484        0    2020484   0% /dev/shm
/dev/mapper/VolGroup00-LogVol05
                        4128448    42012    3876724   2% /home
/dev/mapper/VolGroup00-LogVol03
                        4128448    41640    3877096   2% /tmp
/dev/mapper/VolGroup00-LogVol02
                        8256952  3189944    4647580  41% /usr
/dev/mapper/VolGroup00-LogVol04
                        8256952   174232    7663344   3% /var
/dev/hde                 594366   594366          0 100% /media/cdrecorder

Resiliency to renumbering of physical hard disks

LVM identifies PVs by UUID, not by device name. Each disk (PV) is labeled with a UUID, which uniquely identifies it to the system. vgscan identifies this after a new disk is added that changes your drive numbering. Most distros run vgscan in the LVM startup scripts to cope with this on reboot after a hardware addition. If you're doing a hot-add, you'll have to run this by hand. On the other hand, if your VG is activated and being used, the renumbering should not affect it at all. It's only the activation that needs the identifier, and the worst-case scenario is that, without a vgscan, the activation will fail with a complaint about a missing PV.

The failure or removal of a drive that LVM is currently using will cause problems with current use and future activations of the VG that was using it.
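You can inspect the UUIDs LVM uses with pvs -o +pv_uuid (an LVM2 reporting option). As a sketch of extracting the device-to-UUID mapping with awk; the example parses a canned, made-up sample so it runs anywhere, since the real command needs root and an actual PV:

```shell
#!/bin/sh
# Hypothetical sample of 'pvs -o +pv_uuid' output; the UUID is invented:
sample='  PV         VG   Fmt  Attr PSize  PFree PV UUID
  /dev/sdb1  vg01 lvm2 a--  68.00g    0  4vZ0FI-ab12-cd34-ef56-gh78-ij90-klmnop'

# Print device name and UUID (first and last fields) for each PV line:
out=$(echo "$sample" | awk '/\/dev\//{print $1, $NF}')
echo "$out"   # prints: /dev/sdb1 4vZ0FI-ab12-cd34-ef56-gh78-ij90-klmnop
```
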

How to get information about free space

vgdisplay shows volume groups one by one and provides information about free disk space in each:

vgdisplay volume_group_one | grep "Total PE"


How to create a new volume

# vgcreate vg01 /dev/hda2 /dev/hda10
  Volume group "vg01" successfully created

How to create and mount a partition

1. Create the partition with lvcreate:

# lvcreate -L 5G -n data vg02
  Logical volume "data" created

2. Format the partition:

# mkfs -t ext3 /dev/vg02/data

3. Make a mount point and mount it:

# mkdir /data
# mount /dev/vg02/data /data/

4. Check the results:

# df -h /data
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg02-data
                      5.0G   33M  4.7G   1% /data

5. Add it to /etc/fstab.
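The /etc/fstab entry for step 5 might look like the line below; the defaults mount options and the fsck pass number 2 are common choices for a data filesystem, not requirements:

```
/dev/vg02/data   /data   ext3   defaults   1 2
```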

You can create a shell function to simplify this task if you need to create many similar partitions, as is often the case with Oracle databases. For example:

# Create an Oracle archive filesystem
# Parameters:
#   1 - name of the database instance
#   2 - size in gigabytes
#   3 - name of the volume group (default vg0)
function make_archive
{
   vg=${3:-vg0}
   mkdir -p /oracle/$1/archive
   chown oracle:dba /oracle/$1/archive
   lvcreate -L ${2}G -n ${1}_archive $vg
   mkfs -t ext3 /dev/$vg/${1}_archive
   echo "/dev/$vg/${1}_archive /oracle/$1/archive ext3 defaults 1 2" >> /etc/fstab
   mount /oracle/$1/archive # this also checks the mount point against fstab
   df -k
}

How to extend the partition

If you wish to use the free physical extents in the volume group, you can extend a logical volume using the lvm lvextend command:

lvm lvextend -L +4G /dev/VolGroup00/LogVol04 # extend /var
ext2online /dev/VolGroup00/LogVol04

The option -l operates with free extents. This adds the 7153 free extents to the logical volume:

# lvm lvextend -l +7153 /dev/TestVG/TestLV
  Extending logical volume TestLV to 30.28 GB
  Logical volume TestLV successfully resized

The command lvextend -L +54 /dev/vg01/lvol10 /dev/sdk3 tries to extend the size of that logical volume by 54MB on physical volume /dev/sdk3. This is only possible if /dev/sdk3 is a member of volume group vg01.

If the volume group itself is out of space, the pvcreate command is used to create a new physical volume from a new partition, and pvs to verify the new physical volume; the PV can then be added to the VG with vgextend. See the redhat.com Knowledgebase.

After extending the volume group and the logical volume, it is possible to resize the file system on the fly. This is done using ext2online. First I verify the file system size, perform the resize, and then verify the size again:

# df -h /mnt/test
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/TestVG-TestLV
                      2.3G   36M  2.2G   2% /mnt/test
# ext2online /dev/TestVG/TestLV
ext2online v1.1.18 - 2001/03/18 for EXT2FS 0.5b
# df -h /mnt/test
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/TestVG-TestLV
                       30G   39M   29G   1% /mnt/test

For more information see Resizing the file system.

How to remove an LVM partition

Use lvremove to remove a logical volume from a volume group, after unmounting it.

Syntax:

lvremove [-A|--autobackup y|n] [-d|--debug] [-f|--force] [-h|-?|--help] [-t|--test] [-v|--verbose] LogicalVolumePath [LogicalVolumePath...]

lvremove removes one or more logical volumes. Confirmation will be requested before deactivating any active logical volume prior to removal. Logical volumes cannot be deactivated or removed while they are open (e.g. if they contain a mounted filesystem).

Options:

-f, --force   Remove active logical volumes without confirmation.

For example, to remove the active logical volume lvol1 in volume group vg00 without asking for confirmation:

lvremove -f vg00/lvol1

To remove all logical volumes in volume group vg00:

lvremove vg00


Old News ;-)

[Sep 14, 2011] Using the Multipathed storage with LVM

Now that multipath is configured, you need to perform disk management to make the disks available for use. If your original install was on LVM, you may want to add the new disks to the existing volume group and create some new logical volumes for use. If your original install was on regular disk partitions, you may want to create new volume groups and logical volumes. In both cases, you might want to partition the volume groups and automate the mounting of these new partitions to certain mount points.

About this task

The following example illustrates how the above can be achieved. For detailed information about LVM administration, please consult the Red Hat LVM Administrator's Guide at http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.2/html/Cluster_Logical_Volume_Manager/ or the SLES10 SP2 Storage Administration Guide at http://www.novell.com/documentation/sles10/stor_evms/index.html?page=/documentation/sles10/stor_evms/data/mpiousing.html

Starting with an existing Linux environment on a blade, and a multipath zone configuration that will allow the blade to access some storage, here is a set of generic steps to make use of the new storage:

Procedure

1. Determine which disks are not multipathed disks. The output of df and fdisk -l from the previous section should indicate which disks were already in use before multipath was set up. In this example, sda is the only disk that existed before multipath was set up.

2. Create and/or open /etc/multipath.conf and blacklist the local disk.

For a SLES10 machine:

cp /usr/share/doc/packages/multipath-tools/multipath.conf.synthetic /etc/multipath.conf

The /usr/share/doc/packages/multipath-tools/multipath.conf.annotated file can be used as a reference to further determine how to configure your multipathing environment.

For a RHEL5 machine: edit the /etc/multipath.conf that has already been created by default. Related documentation can be found in the /usr/share/doc/device-mapper-multipath-0.4.7/ directory.

3. Open the /etc/multipath.conf file, and edit it to blacklist disks that are not meant to be multipathed. In this example, sda is blacklisted.

blacklist {

devnode "^sda"

}

4. Enable and activate the multipath daemon(s).

On both RHEL and SLES, the command is:

chkconfig multipathd on

Additionally, on a RHEL system, this command is required:

chkconfig mdmpd on

5. Reboot the blade.

Note: if the machine is not rebooted, the latest configuration may not be detected.

6. Check if the multipathd daemon is running by issuing:

service multipathd status

Additionally, if you are running a RHEL system, check if the mdmpd daemon is running:

service mdmpd status

7. Run the command multipath -ll to verify that the disk(s) are now properly recognized as multipath devices.

multipath -ll

mpath2 (350010b900004b868) dm-3 IBM-ESXS,GNA073C3ESTT0Z

[size=68G][features=0][hwhandler=0]

\_ round-robin 0 [prio=1][active]

\_ 1:0:1:0 sdc 8:32 [active][ready]

\_ round-robin 0 [prio=1][enabled]

\_ 1:0:3:0 sde 8:64 [active][ready]

mpath1 (35000cca0071acd29) dm-2 IBM-ESXS,VPA073C3-ETS10

[size=68G][features=0][hwhandler=0]

\_ round-robin 0 [prio=1][active]

\_ 1:0:0:0 sdb 8:16 [active][ready]

\_ round-robin 0 [prio=1][enabled]

\_ 1:0:2:0 sdd 8:48 [active][ready]

As expected, two sets of paths are detected, with two paths in each set. From examining the above output, notice that sdc and sde are actually the same physical disk, accessible from the blade via two different devices. Similarly for sdb and sdd. Note that the device names are dm-2 and dm-3.

8. If your disks are new, skip this step. Optionally, if you have previous data or a partition table, use the following command to erase the partition table:

dd if=/dev/zero of=/dev/dm-X bs=8k count=100

X is the device number as shown in step 7. Be very careful when doing this step, as it is destructive: your data will not be recoverable if you erase the partition table. In the test environment, both disks will be used in the existing volume group.

dd if=/dev/zero of=/dev/dm-2 bs=8k count=100
dd if=/dev/zero of=/dev/dm-3 bs=8k count=100

Note: this step will erase your disks.

9. Create a new physical volume on each disk by entering the following command:

pvcreate /dev/dm-X

In our environment:

pvcreate /dev/dm-2

pvcreate /dev/dm-3

10. Run lvm pvdisplay to see if the physical volumes are displayed correctly. If at any time in this LV management process you would like to view the status of existing related entities like physical volumes (pv), volume groups (vg), and logical volumes (lv), issue the corresponding command:

lvm pvdisplay

lvm vgdisplay

lvm lvdisplay

11. If the new entity you just created or changed cannot be found, you may want to issue the corresponding command to scan for the device:

pvscan

vgscan

lvscan

12. Run the vgscan command to show any existing volume groups. On a RHEL system, the installer creates VolGroup00 by default if another partitioning scheme is not chosen. On a SLES system, no volume groups exist. The following shows the output for an existing volume group VolGroup00.

vgscan

Reading all physical volumes. This may take a while...

Found volume group "VolGroup00" using metadata type lvm2

13. Add the physical volume(s) to an existing volume group using the vgextend command. In our environment, add /dev/dm-2 and /dev/dm-3, created in step 9, to the existing volume group VolGroup00 found in step 12, using the command:

vgextend VolGroup00 /dev/dm-2 /dev/dm-3
  Volume Group "VolGroup00" successfully extended

14. If there is no existing volume group, create a new volume group using the vgcreate command. For example, to create a new volume group VolGroup00 with the physical volumes /dev/dm-2 and /dev/dm-3, run this command:

vgcreate VolGroup00 /dev/dm-2 /dev/dm-3
  Volume group "VolGroup00" successfully created

Creating a new logical volume and setting up automounting: now that more storage is available in the VolGroup00 volume group, you can use the extra storage to save your data.

[May 14, 2010] Restoring LVM Volumes with Acronis True Image (Knowledge Base)

You need to back up logical volumes of LVM and ordinary (non-LVM) partitions. There is no need to back up physical volumes of LVM, as they are backed up sector-by-sector and there is no guarantee that they will work after the restore.

The listed Acronis products recognize logical LVM volumes as Dynamic or GPT volumes.

Logical LVM volumes can be restored as non-LVM (regular) partitions in Acronis Rescue Mode. Logical LVM volumes can be restored on top of existing LVM volumes. See LVM Volumes Acronis True Image 9.1 Server for Linux Supports or LVM Volumes Supported by Acronis True Image Echo.


Solution

Restoring LVM volumes as non-LVMs

1. Restore partitions. Restore logical LVM volumes and non-LVM partitions one by one with Acronis backup software. Do not forget to make the boot partition active (/ or /boot, if available).

2. Make the system bootable:

1. Boot from a Linux distribution rescue CD.
2. Enter rescue mode.
3. Mount the restored root (/) partition. If the rescue CD mounted partitions automatically, skip to the next step.

Most distributions will try to mount the system partitions as designated in /etc/fstab of the restored system. Since there are no LVMs available, this process is likely to fail. This is why you might need to mount the restored partitions manually:

Enter the following command:

#cat /proc/partitions

You will get the list of recognized partitions:

major minor #blocks name

8 0 8388608 sda

8 1 104391 sda1

8 2 8281507 sda2

Mount the root (/) partition:

#mount -t [fs_type] [device] [system_mount_point]

In the example below, /dev/sda2 is root, because it was restored as the second primary partition on a SATA disk:

#mount -t ext3 /dev/sda2 /mnt/sysimage

4. Mount /boot if it was not mounted automatically:

#mount -t [fs_type] /dev/[device] /[system_mount_point]/boot

Example:

#mount -t ext3 /dev/sda1 /mnt/sysimage/boot

5. chroot to the mounted / of the restored partition:

#chroot [mount_point]

6. Mount /proc in the chroot:

#mount -t proc proc /proc

7. Create hard disk devices in /dev if it was not populated automatically.

Check existing partitions with cat /proc/partitions and create appropriate devices for them:

#/sbin/MAKEDEV [device]

8. Edit /etc/fstab on the restored partition:

Replace all entries of /dev/VolGroupXX/LogVolXX with the appropriate /dev/[device]. You can find which device you need to mount in cat /proc/partitions.
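As a sketch, step 8 can be automated with sed. The names below (VolGroup00/LogVol00, /dev/sda2, /dev/sda1) are illustrative assumptions; substitute the devices you actually found in /proc/partitions. The edit here is done on a temporary copy rather than the real /etc/fstab:

```shell
# Build a temporary copy of the restored fstab (example contents only).
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/VolGroup00/LogVol00 /     ext3 defaults 1 1
/dev/sda1                /boot ext3 defaults 1 2
EOF

# Replace the LVM device path with the plain partition device.
sed -i 's|/dev/VolGroup00/LogVol00|/dev/sda2|g' "$fstab"

cat "$fstab"
```

For the real system, point sed at /etc/fstab on the mounted restored root instead of the temp file.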

9. Edit grub.conf

Open /boot/grub/grub.conf and edit it to replace /dev/VolGroupXX/LogVolXX with the appropriate /dev/[device].

10. Reactivate GRUB

Run the following command to re-activate GRUB automatically:

#grub-install /dev/[device]

11. Make sure the system boots fine.

Restoring LVM volumes on prepared LVMs


1. Prepare the LVM volumes.

1. Boot from Acronis Bootable Media.
2. Press F11 after the Starting Acronis Loader... message appears and you get to the selection screen of the program.
3. After you get the Linux Kernel Settings prompt, remove the word quiet and click OK.
4. Select the Full version menu item to boot. Wait for the # prompt to appear.
5. List the partitions you have on the hard disk:

#fdisk -l

This will give not only the list of partitions on the hard drive, but also the name of the device associated with the hard disk.

Start creating partitions using fdisk :

#fdisk [device]

where [device] is the name of the device associated with the hard disk

Create physical volumes for LVMs:

#lvm pvcreate [partition]

for example, #lvm pvcreate /dev/sda2

Create LVM group

#lvm vgcreate [name] [device]

where [name] is a name of the Volume Group you create; and [device] is the name of the device associated with the partition you want to add to the Volume Group

for example, #lvm vgcreate VolGroup00 /dev/sda2

Create LVM volumes inside the group:

#lvm lvcreate -L[size] -n[name] [VolumeGroup]

where [size] is the size of the volume being created (e.g. 4G); [name] is the name of the volume being created; [VolumeGroup] is the name of the Volume Group where we want to place the volume

For example, #lvm lvcreate -L6G -nLogVol00 VolGroup00

Activate the created LVM:

#lvm vgchange -ay

Start Acronis product:

#/bin/product

2. Restore partitions. Restore partitions from your backup archive to the created LVM volumes.

[Feb 9, 2009] USB Hard Drive in RAID1

January 31, 2008 | www.bgevolution.com

This concept works just as for an internal hard drive, although USB drives seem not to remain part of the array after a reboot. Therefore, to use a USB device in a RAID1 setup, you will have to leave the drive connected and the computer running. Another tactic is to occasionally sync your USB drive to the array and shut down the USB drive after synchronization. Either tactic is effective.

You can create a quick script to add the USB partitions to the RAID1.

The first thing to do when synchronizing is to add the partition:

sudo mdadm --add /dev/md0 /dev/sdb1

I have 4 partitions therefore my script contains 4 add commands.

Then grow the arrays to fit the number of devices:

sudo mdadm --grow /dev/md0 --raid-devices=3
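Putting the two steps together, the sync script might look like the sketch below. The device names (/dev/md0 through /dev/md3, /dev/sdb1 through /dev/sdb4) are assumptions based on the four-partition layout described above. The run helper only prints each command so the sequence can be inspected; replace its body with "$@" (and add sudo) to execute for real:

```shell
# Dry-run helper: prints each command instead of executing it.
run() { printf '%s\n' "$*"; }

cmds=$(
    for n in 1 2 3 4; do
        # Re-add the USB partition to its array, then grow the array.
        run mdadm --add /dev/md$((n-1)) /dev/sdb$n
        run mdadm --grow /dev/md$((n-1)) --raid-devices=3
    done
)
printf '%s\n' "$cmds"
```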

After growing the array your USB drive will magically sync. USB is substantially slower than SATA or PATA; anything over 100 Gigabytes will take some time. My 149 Gigabyte /home partition takes about an hour and a half to synchronize. Once it's synced I do not experience any apparent difference in system performance.

[Jan 9, 2009] Linux lvm

15.12.2008 | Linuxconfig.org

This article describes the basic logic behind the Linux logical volume manager by showing real examples of configuration and usage. Despite the fact that Debian Linux is used for this tutorial, you can also apply the same command line syntax to other Linux distributions such as Red Hat, Mandriva, SuSE Linux and others.

[Nov 12, 2008] /dev/dm-0

If you are using LVM, fdisk -l output contains many messages like Disk /dev/dm-0 doesn't contain a valid partition table

LinuxQuestions.org

This has been very helpful to me. I found this thread via Google on dm-0 because I also got the no partition table error message.

Here is what I think:

When the programs fdisk and sfdisk are run with the option -l and no argument, e.g. # /sbin/fdisk -l

they look for all devices that can have cylinders, heads, sectors, etc. If they find such a device, they output that information to standard output along with the partition table. If there is no partition table, they print an error message (also to standard output).

One can see this by piping to less, e.g.:

# /sbin/fdisk -l | less

/dev/dm-0 ... /dev/dm-3 on my Fedora Core 5 system seem to be device mappers associated with LVM.

RAID might also require device mappers.

[Aug 26, 2008] Moving LVM volumes to a different volume group by Sander Marechal

2008-08-25 | www.jejik.com

I went with SystemRescueCD which comes with both mdadm and LVM out-of-the-box.

The system layout is quite simple. /dev/sda1 and /dev/sdb1 make up a 500 GB mdadm RAID1 volume. This RAID volume contains an LVM volume group called “3ware”, named so because in my old server it was connected to my 3ware RAID card. It contains a single logical volume called “media”. The original 80 GB disk is on /dev/sdc1, which contains an LVM volume group called “linuxvg”. Inside that volume group are three volumes: “boot”, “root” and “swap”. Goal: move linuxvg-root and linuxvg-boot to the 3ware volume group. Additional goal: rename 3ware to linuxvg. The latter is more for aesthetic reasons, but as a bonus it also means that there is no need to fiddle with grub or fstab settings after the move.

Before booting into SystemRescueCD and moving things around, there are a few things that need to be done first. Start by making a copy of /etc/mdadm/mdadm.conf because you will need it later. Also, because the machine will be booting from the RAID array, I need to install grub on those two disks.

# grub-install /dev/sda

# grub-install /dev/sdb

Now it’s time to boot into SystemRescueCD. I start off by copying /etc/mdadm/mdadm.conf back and starting the RAID1 array. This command scans for all the arrays defined in mdadm.conf and tries to start them.

# mdadm --assemble --scan

Next I need to make a couple of changes to /etc/lvm/lvm.conf. If I were to scan for LVM volume groups at this point, it would find the 3ware group three times: once each in /dev/md0, /dev/sda1 and /dev/sdb1. So I adjust the filter setting in lvm.conf so it will not scan /dev/sda1 and /dev/sdb1.

filter = [ "r|/dev/cdrom|", "r|/dev/sd[ab]1|" ]


LVM can now scan the hard drives and find all the volume groups.

# vgscan

I disable the volume groups so that I can rename them. linuxvg becomes linuxold and 3ware becomes the new linuxvg. Then I re-enable the volume groups.

# vgchange -a n

# vgrename linuxvg linuxold

# vgrename 3ware linuxvg

# vgchange -a y

Now I can create a new logical volume in the 500 GB volume group for my boot partition and create an ext3 filesystem in it.

# lvcreate --name boot --size 512MB linuxvg

# mkfs.ext3 /dev/mapper/linuxvg-boot

I create mount points to mount the original boot partition and the new boot partition, and then use rsync to copy all the data. Don't use cp for this! Rsync with the -a option preserves soft links and file permissions, while cp does not (add -H if you also need hard links preserved). If you do not want to use rsync you could also use the dd command to transfer the data directly from block device to block device.

# mkdir /mnt/src /mnt/dst

# mount -t ext3 /dev/mapper/linuxold-boot /mnt/src

# mount -t ext3 /dev/mapper/linuxvg-boot /mnt/dst

# rsync -avH /mnt/src/ /mnt/dst/

# umount /mnt/src /mnt/dst

Rinse and repeat to copy over the root filesystem.

# lvcreate --name root --size 40960MB linuxvg

# mkfs.ext3 /dev/mapper/linuxvg-root

# mount -t ext3 /dev/mapper/linuxold-root /mnt/src

# mount -t ext3 /dev/mapper/linuxvg-root /mnt/dst

# rsync -avH /mnt/src/ /mnt/dst/

# umount /mnt/src /mnt/dst

There's no sense in copying the swap volume. Simply create a new one.

# lvcreate --name swap --size 1024MB linuxvg

# mkswap /dev/mapper/linuxvg-swap

And that's it. I rebooted into Debian Lenny to make sure that everything worked, and then removed the 80 GB disk from my server. While this wasn't particularly hard, I do hope that the maintainers of LVM create an lvmove command to make this even easier.

[Aug 15, 2008] Linux RAID Smackdown Crush RAID 5 with RAID 10

LinuxPlanet

Creating RAID 10

No Linux installer that I know of supports RAID 10, so we have to jump through some extra hoops to set it up in a fresh installation. This is my favorite layout for RAID systems:

/dev/md0 is a RAID 1 array containing the root filesystem.
/dev/md1 is a RAID 10 array containing a single LVM group divided into logical volumes for /home, /var, and /tmp, and anything else I feel like stuffing in there.
Each disk has its own identical swap partition that is not part of RAID or LVM, just plain old ordinary swap.

One way is to use your Linux installer to create the RAID 1 array and the swap partitions, then boot into the new filesystem and create the RAID 10 array. This works, but then you have to move /home, /var, /tmp, and whatever else you want there, which means copying files and editing /etc/fstab. I get tired thinking about it.

Another way is to prepare your arrays and logical volumes in advance and then install your new system over them, and that is what we are going to do. You need a bootable live Linux that includes mdadm, LVM2 and GParted, unless you're a crusty old command-line commando that doesn't need any sissy GUIs and is happy with fdisk. Two that I know have all of these are Knoppix and SystemRescueCD; I used SystemRescueCD.

Step one is to partition all of your drives identically. The partition sizes in my example system are small for faster testing; on a production system the 2nd primary partition would be as large as possible:

1st primary partition, 5GB
2nd primary partition, 7GB
swap partition, 1GB

The first partition on each drive must be marked as bootable, and the first two partitions must be marked as "fd Linux raid auto" in fdisk. In GParted, use Partition -> Manage Flags.

Now you can create your RAID arrays with the mdadm command. This command creates the RAID1 array for the root filesystem:

# mdadm -v --create /dev/md0 --level=raid1 --raid-devices=2 /dev/hda1 /dev/sda1

mdadm: layout defaults to n1

mdadm: chunk size defaults to 64K

mdadm: size set to 3076352K

mdadm: array /dev/md0 started.

This will take some time, as cat /proc/mdstat will tell you:

Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] [multipath] [raid10]
md0 : active raid10 sda1[1] hda1[0]

      3076352 blocks 2 near-copies [2/2] [UU]

      [====>................]  resync = 21.8% (673152/3076352) finish=3.2min speed=12471K/sec

This command creates the RAID 10 array:

# mdadm -v --create /dev/md1 --level=raid10 --raid-devices=2 /dev/hda2 /dev/sda2

Naturally you want to be very careful with your drive names, and give mdadm time to finish. It will tell you when it's done:

RAID10 conf printout:

--- wd:2 rd:2

disk 0, wo:0, o:1, dev:hda2

disk 1, wo:0, o:1, dev:sda2

mdadm --detail /dev/md0 displays detailed information on your arrays.

Create LVM Group and Volumes

Now we'll put an LVM group and volumes on /dev/md1. I use vg- for volume group names and lv- for the logical volumes in the volume groups. Using descriptive names, like lv-home, will save your sanity later when you're creating filesystems and mountpoints. The -L option specifies the size of the volume:

# pvcreate /dev/md1

# vgcreate vg-server1 /dev/md1

# lvcreate -L4g -nlv-home vg-server1

# lvcreate -L2g -nlv-var vg-server1

# lvcreate -L1g -nlv-tmp vg-server1

You'll get confirmations for every command, and you can use vgdisplay and lvdisplay to see the fruits of your labors. Use vgdisplay to see how much space is left.

Getting E-mail notifications when MD devices fail

I use the MD (multiple device) driver to mirror the boot devices on the Linux servers I support. When I first started using MD, the mdadm utility was not available to manage and monitor MD devices. Since disk failures are relatively common in large shops, I used the shell script from my SysAdmin article Monitoring and Managing Linux Software RAID to send E-mail when a device entered the failed state. While reading through the mdadm(8) manual page, I came across the "--monitor" and "--mail" options. These options can be used to monitor the operational state of the MD devices in a server and generate E-mail notifications if a problem is detected. E-mail notification support can be enabled by running mdadm with the "--monitor" option to monitor devices, the "--daemonise" option to create a daemon process, and the "--mail" option to generate E-mail:

$ /sbin/mdadm --monitor --scan --daemonise --mail=root@localhost
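The notification address can also be set persistently in mdadm's configuration file (/etc/mdadm.conf, or /etc/mdadm/mdadm.conf on Debian-style systems), so that the distribution's bundled monitor service picks it up at boot. A minimal sketch; the ARRAY line is an assumed example and should match your actual arrays:

```
# /etc/mdadm.conf (sketch)
DEVICE partitions
ARRAY /dev/md1 devices=/dev/hda2,/dev/sda2
MAILADDR root@localhost
```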

Once mdadm is daemonized, an E-mail similar to the following will be sent each time a failure is detected:


From: mdadm monitoring

To: [email protected]

Subject: Fail event on /dev/md1:biscuit

This is an automatically generated mail message from mdadm

running on biscuit

A Fail event had been detected on md device /dev/md1.

Faithfully yours, etc.

I digs me some mdadm!

Linux LVM silliness

While attempting to create a 2-way LVM mirror this weekend on my Fedora Core 5 workstation, I received the following error:

$ lvcreate -L1024 -m 1 vgdata

Not enough PVs with free space available for parallel allocation.

Consider --alloc anywhere if desperate.

Since the two devices were initialized specifically for this purpose and contained no other data, I was confused by this error message. After scouring Google for answers, I found a post that indicated that I needed a log LV for this to work, and the log LV had to be on its own disk. I am not sure about most people, but who on earth orders a box with three disks? Ugh!

Posted by matty, filed under Linux LVM. Date: May 3, 2006, 9:50 pm | 2 Comments

[linux-lvm] Raid 0+1

From: "Wayne Pascoe" <lists-june2004 penguinpowered org>
To: linux-lvm redhat com
Subject: [linux-lvm] Raid 0+1
Date: Wed, 21 Jul 2004 13:22:53 +0100 (BST)

Hi all,

I am working on a project to evaluate LVM2 against Veritas Volume Manager for a new Linux deployment. I am trying to get a Raid 0+1 solution working and I'm struggling.

So far, this is where I am:

1. I created 8GB partitions on 4 disks, sdb, sdc, sdd and sde, and set their partition types to 8e with fdisk.

2. I then ran vgscan, followed by pvcreate on /dev/sdb1, /dev/sdc1, /dev/sdd1 and /dev/sde1.

3. Next, I created 2 volume groups as follows:

vgcreate StripedData1 /dev/sdb1 /dev/sdc1
vgcreate StripedData2 /dev/sdd1 /dev/sde1

4. Next, I created 2 volumes, one in each group, as follows:

lvcreate -i 2 -I 64 -n Data1 -L 6G StripedData1
lvcreate -i 2 -I 64 -n Data2 -L 6G StripedData2

Now I have 2 striped volumes, but no redundancy. This is where I think things start to go wrong.


5. I now create a raid device, /dev/md0, consisting of these two volumes. I run mkraid on this, create a file system, and mount it on /Data1. This all works fine, and I have a 6GB filesystem on /Data1.

Now I need to be able to resize this whole solution, and I'm not sure if the way I've built it caters for what I need to do...

I unmount /Data1 and use lvextend to extend the 2 volumes from 6GB to 7.5GB. This succeeds. Now even though both of the volumes that make up /dev/md0 are extended, I cannot resize /dev/md0 using resize2fs /dev/md0.

Can anyone advise me how I can achieve what I'm looking for here? I'm guessing maybe I did things the wrong way around, but I can't find a solution that will give me both striping and mirroring :(

Thanks in advance,

--

Wayne Pascoe

LVM HOWTO

Introduction

1. Latest Version
2. Disclaimer
3. Contributors

1. What is LVM?
2. What is Logical Volume Management?

2.1. Why would I want it?
2.2. Benefits of Logical Volume Management on a Small System
2.3. Benefits of Logical Volume Management on a Large System

3. Anatomy of LVM

3.1. volume group (VG)
3.2. physical volume (PV)
3.3. logical volume (LV)
3.4. physical extent (PE)
3.5. logical extent (LE)
3.6. Tying it all together
3.7. mapping modes (linear/striped)
3.8. Snapshots

4. Frequently Asked Questions

4.1. LVM 2 FAQ
4.2. LVM 1 FAQ

5. Acquiring LVM

5.1. Download the source
5.2. Download the development source via CVS
5.3. Before You Begin
5.4. Initial Setup
5.5. Checking Out Source Code
5.6. Code Updates
5.7. Starting a Project
5.8. Hacking the Code
5.9. Conflicts

6. Building the kernel modules

6.1. Building the device-mapper module
6.2. Build the LVM 1 kernel module


7. LVM 1 Boot time scripts

7.1. Caldera
7.2. Debian
7.3. Mandrake
7.4. Redhat
7.5. Slackware
7.6. SuSE

8. LVM 2 Boot Time Scripts
9. Building LVM from the Source

9.1. Make LVM library and tools
9.2. Install LVM library and tools
9.3. Removing LVM library and tools

10. Transitioning from previous versions of LVM to LVM 1.0.8

10.1. Upgrading to LVM 1.0.8 with a non-LVM root partition
10.2. Upgrading to LVM 1.0.8 with an LVM root partition and initrd

11. Common Tasks

11.1. Initializing disks or disk partitions
11.2. Creating a volume group
11.3. Activating a volume group
11.4. Removing a volume group
11.5. Adding physical volumes to a volume group
11.6. Removing physical volumes from a volume group
11.7. Creating a logical volume
11.8. Removing a logical volume
11.9. Extending a logical volume
11.10. Reducing a logical volume
11.11. Migrating data off of a physical volume

12. Disk partitioning

12.1. Multiple partitions on the same disk
12.2. Sun disk labels

13. Recipes

13.1. Setting up LVM on three SCSI disks
13.2. Setting up LVM on three SCSI disks with striping
13.3. Add a new disk to a multi-disk SCSI system
13.4. Taking a Backup Using Snapshots
13.5. Removing an Old Disk
13.6. Moving a volume group to another system
13.7. Splitting a volume group
13.8. Converting a root filesystem to LVM 1
13.9. Recover physical volume metadata

A. Dangerous Operations

A.1. Restoring the VG UUIDs using uuid_fixer
A.2. Sharing LVM volumes

B. Reporting Errors and Bugs
C. Contact and Links

C.1. Mail lists
C.2. Links

D. GNU Free Documentation License

D.1. PREAMBLE
D.2. APPLICABILITY AND DEFINITIONS
D.3. VERBATIM COPYING
D.4. COPYING IN QUANTITY
D.5. MODIFICATIONS
D.6. COMBINING DOCUMENTS
D.7. COLLECTIONS OF DOCUMENTS
D.8. AGGREGATION WITH INDEPENDENT WORKS
D.9. TRANSLATION
D.10. TERMINATION


D.11. FUTURE REVISIONS OF THIS LICENSE
D.12. ADDENDUM: How to use this License for your documents

The Linux and Unix Menagerie LVM Quick Command Reference For Linux And Unix

1. LVM Basic relationships. A quick run-down on how the different parts are related

Physical volume - This consists of one, or many, partitions (or physical extent groups) on a physical drive.
Volume group - This is composed of one or more physical volumes and contains one or more logical volumes.
Logical volume - This is contained within a volume group.

2. LVM creation commands (These commands are used to initialize, or create, new logical objects) - Note that we have yet toexplore these fully, as they can be used to do much more than we've demonstrated so far in our simple setup.

pvcreate - Used to create physical volumes.
vgcreate - Used to create volume groups.
lvcreate - Used to create logical volumes.

3. LVM monitoring and display commands (These commands are used to discover, and display the properties of, existing logicalobjects). Note that some of these commands include cross-referenced information. For instance, pvdisplay includes information aboutvolume groups associated with the physical volume.

pvscan - Used to scan the OS for physical volumes.
vgscan - Used to scan the OS for volume groups.
lvscan - Used to scan the OS for logical volumes.
pvdisplay - Used to display information about physical volumes.
vgdisplay - Used to display information about volume groups.
lvdisplay - Used to display information about logical volumes.

4. LVM destruction or removal commands (These commands are used to ensure that logical objects are not allocable anymoreand/or remove them entirely) Note, again, that we haven't fully explored the possibilities with these commands either. The "change"commands in particular are good for a lot more than just prepping a logical object for destruction.

pvchange - Used to change the status of a physical volume.
vgchange - Used to change the status of a volume group.
lvchange - Used to change the status of a logical volume.
pvremove - Used to wipe the disk label of a physical drive so that LVM does not recognize it as a physical volume.
vgremove - Used to remove a volume group.
lvremove - Used to remove a logical volume.

5. Manipulation commands (These commands allow you to play around with your existing logical objects. We haven't posted on"any" of these commands yet - Some of them can be extremely dangerous to goof with for no reason)

pvextend - Used to add physical devices (or partition(s) of same) to a physical volume.
pvreduce - Used to remove physical devices (or partition(s) of same) from a physical volume.
vgextend - Used to add a new physical disk (or partition(s) of same) to a volume group.
vgreduce - Used to remove a physical disk (or partition(s) of same) from a volume group.
lvextend - Used to increase the size of a logical volume.
lvreduce - Used to decrease the size of a logical volume.

[Jun 19, 2011] Monitoring and Display Commands For LVM On Linux And Unix

The Linux and Unix Menagerie

Physical Volumes:

The two commands we'll be using here are pvscan and pvdisplay.

pvscan, as with all of the following commands, pretty much does what the name implies. It scans your system for LVM physical volumes. When used straight-up, it will list out all the physical volumes it can find on the system, including those "not" associated with volume groups (output truncated to save on space):

host # pvscan
pvscan -- reading all physical volumes (this may take a while...)
...
pvscan -- ACTIVE PV "/dev/hda1" is in no VG [512 MB]
...
pvscan -- ACTIVE PV "/dev/hdd1" of VG "vg01" [512 MB / 266 MB free]
...


Next, we'll use pvdisplay to display our only physical volume:

host # pvdisplay /dev/hdd1 <-- Note that you can leave the /dev/hdd1, or any specification, off of the command line if you want to display all of your physical volumes. We just happen to know we only have one and are being particular ;)

...
PV Name /dev/hdd1
VG Name vg01
PV Size 512 MB
...

Other output should include whether or not the physical volume is allocatable (or "can be used" ;), total physical extents (see our post on getting started with LVM for a little more information on PEs), free physical extents, allocated physical extents and the physical volume's UUID (identifier).

Volume Groups:

The two commands we'll be using here are vgscan and vgdisplay.

vgscan will report on all existing volume groups, as well as create a file (generally) called /etc/lvmtab (some versions will create an /etc/lvmtab.d directory as well):

host # vgscan
vgscan -- reading all physical volumes (this may take a while...)
vgscan -- found active volume group "vg01"
...

vgdisplay can be used to check on the state and condition of our volume group(s). Again, we're specifying our volume group on the command line, but this is not necessary:

host # vgdisplay vg01
...
VG Name vg01
...
VG Size 246 MB
...

This command gives even more effusive output: everything from the maximum logical volumes the volume group can contain (including how many it currently does and how many of those are open), separate (yet similar) information with regard to the physical volumes it can encompass, all of the information you've come to expect about the physical extents and, of course, each volume's UUID.

redhat.com The Linux Logical Volume Manager by Heinz Mauelshagen and Matthew O'Keefe

Introduction
Basic LVM commands
Differences between LVM1 and LVM2
Summary
About the authors

Storage technology plays a critical role in increasing the performance, availability, and manageability of Linux servers. One of the most important new developments in the Linux 2.6 kernel—on which the Red Hat® Enterprise Linux® 4 kernel is based—is the Linux Logical Volume Manager, version 2 (or LVM 2). It combines a more consistent and robust internal design with important new features including volume mirroring and clustering, yet it is upwardly compatible with the original Logical Volume Manager 1 (LVM 1) commands and metadata. This article summarizes the basic principles behind the LVM and provides examples of basic operations to be performed with it.

Introduction

Logical volume management is a widely-used technique for deploying logical rather than physical storage. With LVM, "logical" partitions can span across physical hard drives and can be resized (unlike traditional ext3 "raw" partitions). A physical disk is divided into one or more physical volumes (PVs), and volume groups (VGs) are created by combining PVs, as shown in Figure 1 (LVM internal organization). Notice that the VGs can be an aggregate of PVs from multiple physical disks.

Figure 2 (Mapping logical extents to physical extents) shows how the logical volumes are mapped onto physical volumes. Each PV consists of a number of fixed-size physical extents (PEs); similarly, each LV consists of a number of fixed-size logical extents (LEs). (LEs and PEs are always the same size; the default in LVM 2 is 4 MB.) An LV is created by mapping logical extents to physical extents, so that references to logical block numbers are resolved to physical block numbers. These mappings can be constructed to achieve particular performance, scalability, or availability goals.
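Since LEs and PEs are always the same fixed size, sizing a volume is simple integer arithmetic. As a quick sanity check, with the default 4 MB extent size a 1500 MB logical volume needs:

```shell
# Number of 4 MB extents needed for a 1500 MB logical volume.
extent_mb=4
lv_mb=1500
extents=$(( lv_mb / extent_mb ))
echo "$extents"   # 375
```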

For example, multiple PVs can be connected together to create a single large logical volume as shown in Figure 3 (LVM linear mapping). This approach, known as a linear mapping, allows a file system or database larger than a single volume to be created using two physical disks. An alternative approach is a striped mapping, in which stripes (groups of contiguous physical extents) from alternate PVs are mapped to a single LV, as shown in Figure 4 (LVM striped mapping). The striped mapping allows a single logical volume to nearly achieve the combined performance of two PVs and is used quite often to achieve high-bandwidth disk transfers.

Figure 4. LVM striped mapping (4 physical extents per stripe)

Through these different types of logical-to-physical mappings, LVM can achieve four important advantages over raw physical partitions:

1. Logical volumes can be resized while they are mounted and accessible by the database or file system, removing the downtime associated with adding or deleting storage from a Linux server

2. Data from one (potentially faulty or damaged) physical device may be relocated to another device that is newer, faster or more resilient, while the original volume remains online and accessible

3. Logical volumes can be constructed by aggregating physical devices to increase performance (via disk striping) or redundancy (via disk mirroring and I/O multipathing)

4. Logical volume snapshots can be created to represent the exact state of the volume at a certain point in time, allowing accurate backups to proceed simultaneously with regular system operation

Basic LVM commands

Initializing disks or disk partitions

To use LVM, partitions and whole disks must first be converted into physical volumes (PVs) using the pvcreate command. For example, to convert /dev/hda and /dev/hdb into PVs use the following commands:

pvcreate /dev/hda
pvcreate /dev/hdb

If a Linux partition is to be converted, make sure that it is given partition type 0x8E using fdisk; then use pvcreate:

pvcreate /dev/hda1

Creating a volume group

Once you have one or more physical volumes created, you can create a volume group from these PVs using the vgcreate command. The following command:

vgcreate volume_group_one /dev/hda /dev/hdb

creates a new VG called volume_group_one with two disks, /dev/hda and /dev/hdb, and 4 MB PEs. If both /dev/hda and /dev/hdb are 128 GB in size, then the VG volume_group_one will have a total of 2**16 physical extents that can be allocated to logical volumes.
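The 2**16 figure can be verified with shell arithmetic: two 128 GB PVs at 4 MB per extent give

```shell
# Total PEs in volume_group_one: 2 disks of 128 GB each, 4 MB extents.
total_pe=$(( 2 * 128 * 1024 / 4 ))
echo "$total_pe"   # 65536, i.e. 2**16
```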

Additional PVs can be added to this volume group using the vgextend command. The following commands convert /dev/hdc into a PV and then add that PV to volume_group_one:

pvcreate /dev/hdc
vgextend volume_group_one /dev/hdc

This same PV can be removed from volume_group_one by the vgreduce command:

vgreduce volume_group_one /dev/hdc

Note that any logical volumes using physical extents from PV /dev/hdc will be removed as well. This raises the issue of how we create an LV within a volume group in the first place.

Creating a logical volume

We use the lvcreate command to create a new logical volume using the free physical extents in the VG pool. Continuing our example using VG volume_group_one (with two PVs /dev/hda and /dev/hdb and a total capacity of 256 GB), we could allocate nearly all the PEs in the volume group to a single linear LV called logical_volume_one with the following LVM command:

lvcreate -n logical_volume_one --size 255G volume_group_one

Instead of specifying the LV size in GB we could also specify it in terms of logical extents. First we use vgdisplay to determine the number of PEs in volume_group_one:

vgdisplay volume_group_one | grep "Total PE"

which returns

Total PE 65536


Then the following lvcreate command will create a logical volume with 65536 logical extents and fill the volume group completely:

lvcreate -n logical_volume_one -l 65536 volume_group_one
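The two steps can be chained in a script by parsing the PE count out of the vgdisplay line with awk. In this sketch a sample line stands in for the real command (shown as a comment), since volume_group_one is hypothetical here:

```shell
# In real use:
#   total_pe=$(vgdisplay volume_group_one | awk '/Total PE/ {print $3}')
sample='  Total PE              65536'
total_pe=$(printf '%s\n' "$sample" | awk '/Total PE/ {print $3}')
echo "$total_pe"   # 65536

# Then fill the volume group completely:
#   lvcreate -n logical_volume_one -l "$total_pe" volume_group_one
```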

To create a 1500MB linear LV named logical_volume_one and its block device special file /dev/volume_group_one/logical_volume_one, use the following command:

lvcreate -L1500 -n logical_volume_one volume_group_one

The lvcreate command uses linear mappings by default.

Striped mappings can also be created with lvcreate. For example, to create a 255 GB logical volume with two stripes and a stripe size of 4 KB, the following command can be used:

lvcreate -i2 -I4 --size 255G -n logical_volume_one_striped volume_group_one

It is possible to allocate a logical volume from a specific physical volume in the VG by specifying the PV or PVs at the end of the

lvcreate command. If you want the logical volume to be allocated from a specific physical volume in the volume group, specify the PV

or PVs at the end of the lvcreate command line. For example, this command:

lvcreate -i2 -I4 -L128G -n logical_volume_one_striped volume_group_one /dev/hda /dev/hdb

creates a 128 GB striped LV named logical_volume_one_striped that is striped across the two PVs /dev/hda and /dev/hdb with a stripe size of 4 KB.

An LV can be removed from a VG through the lvremove command, but first the LV must be unmounted:

umount /dev/volume_group_one/logical_volume_one
lvremove /dev/volume_group_one/logical_volume_one

Note that LVM volume groups and underlying logical volumes are included in the device special file directory tree in the /dev directory with the following layout:

/dev/<volume_group_name>/<logical_volume_name>

so that if we had two volume groups myvg1 and myvg2, each containing three logical volumes lv01, lv02, and lv03, six device special files would be created:

/dev/myvg1/lv01
/dev/myvg1/lv02
/dev/myvg1/lv03
/dev/myvg2/lv01
/dev/myvg2/lv02
/dev/myvg2/lv03
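These /dev/<vg>/<lv> nodes are symlinks; device-mapper also exposes each LV under /dev/mapper as <vg>-<lv>, with any hyphen inside a VG or LV name doubled so the two parts stay unambiguous. A small sketch of that naming rule, using the example names above:

```shell
# Build the /dev/mapper name for an LV: hyphens inside either name are
# doubled, then the two names are joined with a single hyphen.
vg=myvg1
lv=lv01
mapper_path="/dev/mapper/$(printf '%s' "$vg" | sed 's/-/--/g')-$(printf '%s' "$lv" | sed 's/-/--/g')"
echo "$mapper_path"
```

So a VG actually named my-vg with LV lv01 would appear as /dev/mapper/my--vg-lv01.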

Extending a logical volume

An LV can be extended by using the lvextend command. You can specify either an absolute size for the extended LV or how much additional storage you want to add to the LV. For example:

lvextend -L120G /dev/myvg/homevol

will extend LV /dev/myvg/homevol to 120 GB, while

lvextend -L+10G /dev/myvg/homevol

will extend LV /dev/myvg/homevol by an additional 10 GB. Once a logical volume has been extended, the underlying file system can be expanded to exploit the additional storage now available on the LV. With Red Hat Enterprise Linux 4, it is possible to expand both the ext3 and GFS file systems online, without bringing the system down. (The ext3 file system can also be shrunk or expanded offline using the ext2resize command.) To resize ext3 online, the following command

ext2online /dev/myvg/homevol

will extend the ext3 file system to completely fill the LV, /dev/myvg/homevol, on which it resides.

The file system specified by device (partition, loop device, or logical volume) or mount point must currently be mounted, and by default it will be enlarged to fill the device. If an optional size parameter is specified, that size will be used instead.
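Putting the two steps together, the online-grow sequence looks like the following sketch. The run wrapper only echoes each command so the sequence can be read safely; drop the echo (and run as root) to execute it for real. Note that on current distributions the ext2online functionality has been absorbed into resize2fs:

```shell
# Dry-run wrapper: print each command instead of executing it.
run() { echo "+ $*"; }

run lvextend -L+10G /dev/myvg/homevol   # 1. grow the logical volume
run ext2online /dev/myvg/homevol        # 2. grow ext3 to fill it (RHEL 4 era)
# On modern systems step 2 would be:
#   run resize2fs /dev/myvg/homevol
```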

Recommended Links

In case of broken links, please try a Google search. If you find the page at a new location, please notify us.



Wikipedia

Logical volume management - Wikipedia, the free encyclopedia

Logical Volume Manager (Linux) - Wikipedia, the free encyclopedia

Enterprise Volume Management System - Wikipedia, the free encyclopedia

Red Hat

Linux Logical Volume Manager overview paper

Red Hat Magazine | Tips and tricks- What is the procedure to resize..

LVM Administrator's Guide for RHEL 4.6

LVM Administrator's Guide

Linux Logical Volume Management (LVM) Guide by A J Lewis

Red Hat Enterprise Linux 5 deployment guide

Novell

Using Logical Volume Management (LVM) to Organize Your Disks on SLES 10 | Novell User Communities

Linux LVM (Logical Volume Management)

How to Mount a Specific Partition of a Xen File-backed Virtual Disk

Partitioning Your Hard Disk Before Installing SUSE | Novell User Communities

InformIT Managing Storage in Red Hat Enterprise Linux 5 Understanding LVM

A Beginner's Guide To LVM HowtoForge - Linux Howtos and Tutorials

How To Resize ext3 Partitions Without Losing Data HowtoForge - Linux Howtos and Tutorials

Expanding Linux Partitions with LVM - FedoraNEWS.ORG

Linux Logical Volume Manager (LVM) on Software RAID

Linux lvm - Logical Volume Manager - Linuxconfig.org

LVM HOWTO Outdated and incomplete.

RHEL Logical Volume Manager (LVM)

LVM Configuration

LVM2 Resource Page provides links to tarballs, mailing lists, source code, documentation, and chat channels for LVM2.

An Introduction to Disk Partitions


Linux man pages on the LVM2 tools, for more details.

"Linux on System z: Volume management recommendations" (developerWorks, October 2005) discusses LVM2 schemes for kernel 2.6, as well as the Enterprise Volume Management System (EVMS) as an alternative.

"Common threads: Learning Linux LVM, Part 1" (developerWorks, March 2001) and "Common threads: Learning Linux

LVM, Part 2" (developerWorks, April 2001) outdated articles by Daniel Robbins ([email protected]), President/CEO,

Gentoo Technologies, Inc.

Linux Documentation Project has a variety of useful documents, especially its HOWTOs.

LVM HOWTO

A Beginner's Guide To LVM | HowtoForge - Linux Howtos and Tutorials

Managing RAID and LVM with Linux

LinuxDevCenter.com -- Managing Disk Space with LVM

LVM2 Resource Page

Linux Logical Volume Manager (LVM) on Software RAID

Expanding Linux Partitions with LVM - FedoraNEWS.ORG

HOWTO

LVM Howto

CentOS/Red Hat Deployment Guide has a RAID/LVM howto

MythTV's RAID howto

Recommended Papers

[Aug 11, 2007] Logical volume management by Klaus Heinrich Kiwi

Sep 11, 2007 | IBM developerWorks

Volume management is not new in the -ix world (UNIX®, AIX, and so forth), and logical volume management (LVM) has been around since Linux® kernel 2.4 (LVM1) and 2.6.9 (LVM2). This article reveals the most useful features of LVM2, a relatively new userspace toolset that provides logical volume management facilities, and suggests several ways to simplify your system administration tasks.

Volume Managers in Linux

Barriers and journaling filesystems

Linux Logical Volume Manager (LVM) on Software RAID


Copyright © 1996-2011 by Dr. Nikolai Bezroukov. www.softpanorama.org was created as a service to the UN Sustainable Development Networking Programme (SDNP) in the author's free time. This document is an industrial compilation designed and created exclusively for educational use and is distributed under the Softpanorama Content License. The site uses AdSense, so you need to be aware of Google's privacy policy. Original materials copyright belongs to respective owners. Quotes are made for educational purposes only in compliance with the fair use doctrine. This is a Spartan WHYFF (We Help You For Free) site written by people for whom English is not a native language. Grammar and spelling errors should be expected. The site contains some broken links as it develops like a living tree...

Disclaimer:

The statements, views and opinions presented on this web page are those of the author and are not endorsed by, nor do they necessarily reflect, the opinions of the author's present and former employers, SDNP, or any other organization the author may be associated with. We do not warrant the correctness of the information provided or its fitness for any purpose.

Last modified: January 27, 2012