ALUA - Asymmetric Logical Unit Access

Asymmetric Logical Unit Access: I/O for Logical Unit 'A' goes directly to Node 'A', the node owning the Logical Unit.

In this acronym, the "Logical Unit Access" part is well understood, but the "Asymmetric" at the beginning does make it sound a little complicated. So, what exactly is "asymmetric"? If you google the word "asymmetric" you will find many definitions, but in general most of them convey the same message. In a nutshell it means "when the two halves are not equal"; from a storage multipathing perspective it means that not all paths available to a LUN necessarily provide the same access. In this document we shall try to cover all aspects of ALUA and multipathing.

Ashwin Pawar
Sep, 2014
www.simplysan.com



Why multipathing & ALUA are needed

When you have a device [LUN] presented to the host using multiple ports [multiple paths], it does add complexity at the OS level. In other words, getting the device to show up properly as a single [pseudo] device is one thing, and then having the OS understand its port characteristics is virtually impossible without breaking into the operating system code and writing a module to sit in the storage stack to tap into these features. This led to the development of multipathing, which basically provides high availability, performance and fault tolerance at the front-end/host side.

On the back-end/storage side, the same characteristics are provided by Active/Active storage arrays. An A/A storage array exposes multiple target ports to the host; in other words, the host can access the unit of storage from any of the ports available on the A/A storage array. Sounds great, but determining which ports are optimized (direct path to the node owning the LUN) and which ports are non-optimized (indirect path to the node owning the LUN) is the most important decision the host has to make in order to ensure optimum storage access paths to the LUN. As a result, this led to the development of hardware-based device specific modules, and subsequently to the standardization of the SCSI feature called 'ALUA'.

Overview of Mainstream Multipathing Software 


Following is my own illustration of ALUA, for learning purposes only.

Figure 1

I/O for Logical Unit 'A' going directly to the Node 'A' owning the LUN.

TPG: Target Port Group

ALUA allows you to see any given LUN via both storage processors as active, but only one of these storage processors "owns" the LUN, and because of that there will be optimized and non-optimized paths. The optimized paths are the ones with a direct path to the storage processor [Node-A] that owns the LUN. The non-optimized paths have a connection with the storage processor that does not own the LUN but have an indirect path to the storage processor that does own it, via an interconnect bus.
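On a Linux host running dm-multipath, this behaviour is typically expressed in /etc/multipath.conf by grouping paths by their ALUA priority, so that I/O stays on the optimized group and only falls over to the non-optimized group on failure. The device stanza below is only a generic sketch: the NETAPP/LUN strings match the array used later in this document, and the exact keywords vary between multipath-tools releases (older versions such as the v0.4.7 shown later used a prio_callout instead of the prio keyword):

devices {
    device {
        vendor                "NETAPP"
        product               "LUN"
        path_grouping_policy  group_by_prio   # one path group per ALUA state
        prio                  alua            # ask the array for AO/ANO priorities
        hardware_handler      "1 alua"        # let the kernel issue ALUA commands during failover
        failback              immediate       # return to the optimized group as soon as it recovers
    }
}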


Which storage array types are candidates for ALUA?

Active/Active storage systems are the ideal candidates for ALUA. That means ALUA does not apply to Active/Passive storage systems. You might ask, why is that?

Let's understand the difference between Active/Active and Active/Passive storage systems.

Active/Passive Storage Systems:

With Active/Passive storage systems, one controller is assigned to a LUN as the primary controller (owner of the LUN) and handles all the I/O requests to it. The other controller, or multiple other controllers if available, acts as a standby controller. The standby controller of a LUN only issues I/O requests to it if the primary controller has failed. The key phrase here is 'primary controller failed', which means that at any given time only one controller is serving the LUN, and hence the question of a preferred controller does not come into the picture.

Active/Active Storage Systems:

With Active/Active storage systems, multiple controllers can issue I/O requests to an individual LUN concurrently. Both controllers are active; there is no standby concept. In an Active/Active setup, each controller can have multiple ports. I/O requests reaching the storage system through ports of the preferred controller of a LUN will be sent directly to the LUN. I/O requests arriving at the non-preferred controller of a LUN will first be forwarded to the preferred controller of the LUN.

Therefore, for storage devices with the ALUA feature implemented, there must be at least two target port groups: the first would be the direct/optimized TPG [Controller A] and the second the indirect/non-optimized TPG [Controller B].

Active/Active Storage Systems further divide into two categories:

Asymmetrical Active/Active storage systems:

With these types of arrays, one controller is assigned to each LUN as a preferred controller. Each controller can have multiple ports; our example above showed 2 ports per controller. I/O requests reaching the storage system through ports of the preferred controller of a LUN will be sent directly to the LUN. I/O requests arriving at the non-preferred controller of a LUN will first be forwarded to the preferred controller of the LUN. These arrays are also called Asymmetric Logical Unit Access (ALUA) compliant devices.

Multipathing software can query ALUA-compliant arrays to load balance only between paths connected to the preferred controller, and use the paths to the non-preferred controller for automatic path failover if all of the paths to the primary controller fail.


Symmetrical Active/Active (SAA) storage systems:

These types of arrays do not have a primary or preferred controller per LUN. I/O requests can be issued over all paths mapped to a LUN. Some models of the HP StorageWorks XP Disk Array family are symmetrical Active/Active arrays.

Device-specific multipathing solutions available from different vendors:
1. LVM PVLinks
2. PVLinks
3. Symantec DMP
4. HP StorageWorks SecurePath®
5. EMC PowerPath®

Microsoft Windows 2008 introduced native MPIO with a feature that utilizes ALUA for path selection. Hence, if you are running Windows 2008, you don't have to worry about installing a vendor-specific DSM for Active/Active storage systems that support ALUA; native MPIO can handle this for you. This has been made possible through the standardization of ALUA in the SCSI-3 specification.

Similarly, many OS vendors are now providing the ALUA feature in their native multipathing software. Following is a rough estimate of the timeline of when various vendors adopted ALUA.

ALUA Adoption Timeline

Note: With ALUA standardization, mainstream operating systems [with built-in native multipathing] now support ALUA natively on Active/Active arrays without having to install hardware-vendor-provided device specific plug-ins.


Storage Vendor Plug-ins No Longer Required

Asymmetric Logical Unit Access (ALUA) does not need a plug-in to work with the native multipathing that comes out of the box with mainstream standard operating systems. As ALUA support has been widely adopted and delivered in the host-side OS, no special storage plug-ins are required, which means volume manager and array-based plug-ins are becoming less dominant or unneeded for Active/Active storage arrays.

ALUA devices can operate in two modes: implicit and/or explicit.

Implicit ALUA: With the implicit ALUA style, the host multipathing software can monitor the path states but cannot change them, either automatically or manually. Of the active paths, a path may be specified as preferred (optimized) or as non-preferred (non-optimized). If there are active preferred paths, then only these paths will receive commands, and commands will be load balanced evenly across them. If there are no active preferred paths, then the active non-preferred paths are used in a round-robin fashion. If there are no active non-preferred paths, then the LUN cannot be accessed until the controller activates its standby paths.

Explicit ALUA: Devices allow the host to use the SET TARGET PORT GROUPS command to set a Target Port Group's state. In implicit ALUA, the target device itself manages a device's Target Port Group states.
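For illustration, on a Linux host the explicit style can be exercised with the sg_stpg utility from the sg3_utils package, which sends the SET TARGET PORT GROUPS command. The device name and the target port group id below are only assumptions for this sketch, and whether the request is honoured depends on the array actually supporting explicit ALUA:

[root@redhat /]# sg_stpg --optimized --tp=1 /dev/sdc   # ask the target to move TPG 1 to the active/optimized state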

1st Stage: Discovery

TARGET PORT GROUPS SUPPORT [TPGS]:
SCSI logical units with asymmetric logical unit access may be identified using the INQUIRY command. The value in the target port group support (TPGS) field indicates whether or not the logical unit supports asymmetric logical unit access and, if so, whether implicit or explicit management is supported. The asymmetric access states supported by a logical unit may be determined by the REPORT TARGET PORT GROUPS command parameter data.

2nd Stage: Report Access States

REPORT TARGET PORT GROUPS [RTPG]:
The REPORT TARGET PORT GROUPS command requests that the device server send target port group information to the application client. This command shall be supported by logical units that report in the standard INQUIRY data that they support asymmetric logical unit access (i.e., return a non-zero value in the TPGS field).
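As a rough sketch of how these two stages can be observed from a Linux host, the sg3_utils package provides sg_inq (standard INQUIRY, whose output includes the TPGS field) and sg_rtpg (REPORT TARGET PORT GROUPS). The device name /dev/sdc is only an example:

[root@redhat /]# sg_inq /dev/sdc            # standard INQUIRY; TPGS=1 means implicit, 2 explicit, 3 both, 0 no ALUA
[root@redhat /]# sg_rtpg --decode /dev/sdc  # lists each target port group and its asymmetric access state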


TARGET PORT GROUPS Asymmetric access states

Target Port Groups (TPGs) allow path grouping and dynamic load balancing. Each port in the same TPG has the same port state, which can be one of these:

1. Active/Optimized
2. Active/Non-Optimized
3. Standby
4. Unavailable
5. In-transition

These map to the following existing Data ONTAP terms:

Active/Optimized = Local/Fast/Primary
Active/Non-Optimized = Partner/Proxy/Slow/Secondary
Unavailable = Cluster IC is down, path is not functional
Transitioning = Path is transitioning to another state

ALUA allows SCSI Initiator ports to make intelligent path decisions

"Asymmetric Logical Unit Access" (ALUA) was included in SPC-2 and updated in the SPC-3 specification. This interface allows an initiator to discover Target Port Groups: groups of ports expected to provide common failover behaviour for specific logical units. The Target Port Group Support (TPGS) field in the standard INQUIRY response describes the logical unit's adherence to the standard, whether the logical unit provides symmetric or asymmetric access, and whether the logical unit uses explicit or implicit failover. The standard provides an explicit failover command, as well as commands to determine which ports are members of a target port group and other information about the multipath configuration.

In short - ALUA enables support for SCSI-3 target port group commands.


What is DM-Multipath [Redhat]?

Device mapper multipathing (DM-Multipath) allows you to configure multiple I/O paths between server nodes and storage arrays into a single device. Without DM-Multipath, each path from a server node to a storage controller is treated by the system as a separate device, even when the I/O path connects the same server node to the same storage controller. DM-Multipath provides a way of organizing the I/O paths logically, by creating a single multipath device on top of the underlying devices.

What do you mean by I/O paths?

I/O paths are physical SAN connections that can include separate cables, switches, and controllers. Multipathing aggregates the I/O paths, creating a new device that consists of the aggregated paths.

If any element of an I/O path (the cable, switch, or controller) fails, DM-Multipath switches to an alternate path.

Let's list all the possible components that could fail in a typical I/O path between server and storage.

Points of possible failure:
1. FC-HBA/iSCSI-HBA/NIC
2. FC/Ethernet cable
3. SAN switch
4. Array controller / array controller port

With DM-Multipath configured, a failure at any of these points will cause DM-Multipath to switch to the alternate I/O path.

Can I use LVM on top of a multipath device?

Yes. After creating multipath devices, you can use the multipath device names just as you would use a physical device name when creating an LVM physical volume. For example, if /dev/mapper/mpathn [n = device number] is the name of a multipath device, the following command will mark /dev/mapper/mpathn as a physical volume:


pvcreate /dev/mapper/mpathn

For example, in our case we have one multipath device available, named 'mpath53':

[root@redhat /]# cd /dev/mapper/
[root@redhat mapper]# ll
total 0
crw------- 1 root root 10, 60 Sep 19 14:25 control
brw-rw---- 1 root disk 253,  0 Sep 20 13:30 mpath53

7/21/2019 Alua - Asymmetric LoALUA_-_ASYMMETRIC_LOGICAL_UNIT_ACCESSgical Unit Access

http://slidepdf.com/reader/full/alua-asymmetric-loalua-asymmetriclogicalunitaccessgical-unit-access 10/20

Now, using pvcreate, we can create a physical volume to be used for LVM:

[root@redhat mapper]# pvcreate /dev/mapper/mpath53
Writing physical volume data to disk "/dev/mpath/mpath53"
Physical volume "/dev/mpath/mpath53" successfully created

[root@redhat mapper]#

Next step: Once we have one or more physical volumes created, we can create a volume group on top of the PVs using the vgcreate command. In our case, we have created just one physical volume using the pvcreate command. The following command creates a volume group on top of the physical volume, and on top of the volume group we can create a logical volume to be used for the filesystem.

Volume group create command:

[root@redhat mapper]# vgcreate volume_53 /dev/mapper/mpath53
Volume group "volume_53" successfully created

[root@redhat mapper]#

Logical volume create command: [For example purposes, we will create a 1 GB volume]

[root@redhat mapper]# lvcreate -n lv_53 -L 1G volume_53
Logical volume "lv_53" created

[root@redhat mapper]#

Next, let's lay down the filesystem on top of the logical volume we just created.

First, we will see how our logical volume looks using the 'lvdisplay' command:

[root@redhat mapper]# lvdisplay
--- Logical volume ---
LV Name                /dev/volume_53/lv_53
VG Name                volume_53
LV UUID                2bQkgR-vXAQ-tmwZ-Afs1-Knxv-0mUG-NepgLB
LV Write Access        read/write
LV Status              available
# open                 0
LV Size                1.00 GB
Current LE             256
Segments               1
Allocation             inherit
Read ahead sectors     auto
- currently set to     256
Block device           253:1


Now, using the LV name, let's lay down the filesystem; we are going to put an ext3 filesystem on it in the following example:

[root@redhat mapper]# mkfs.ext3 /dev/volume_53/lv_53
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
131072 inodes, 262144 blocks
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 36 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.

  Finally, make it available to user-space by mounting it.

[root@redhat mapper]# mount /dev/volume_53/lv_53 /mnt/netapp/

  Using the 'mount' command we should see the new filesystem:

[root@redhat mapper]# mount
/dev/sda2 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
/dev/mapper/volume_53-lv_53 on /mnt/netapp type ext3 (rw)
[root@redhat mapper]#

Important: If you are using LVM on top of multipath, then make sure multipath is loaded before LVM to ensure that the multipath maps are built correctly. Loading multipath after LVM can result in incomplete device maps for a multipath device, because LVM locks the device and MPIO cannot create the maps properly.
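A common complementary precaution is to restrict LVM scanning so that it only sees the multipath maps and not the underlying sd devices. This is done with a filter in /etc/lvm/lvm.conf; the line below is only a generic sketch, and the exact regular expressions depend on your device naming and on where your boot disks live:

# in the devices { } section of /etc/lvm/lvm.conf
filter = [ "a|/dev/mapper/mpath.*|", "r|/dev/sd.*|" ]   # accept multipath maps, reject raw sd paths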

Overview of Storage Stack and Multipath Positioning


  How to troubleshoot dm-multipath failure issues 

On Linux [Redhat]

Querying the multipath I/O status outputs the current status of the multipath maps. This is perhaps the first thing you would do to find out whether the paths show up.

The two key switches of the 'multipath' tool are:

1. multipath -l
2. multipath -ll

The multipath -l option displays the current path status as of the last time that the path checker was run. It does not run the path checker.

The multipath -ll option runs the path checker, updates the path information, then displays the current status information. This option always displays the latest information about the path status.

At a terminal console prompt, enter

[root@redhat ]# multipath -ll

This displays information for each multipathed device. For example:

3600601607cf30e00184589a37a31d911
[size=127 GB][features="0"][hwhandler="1 alua"]
\_ round-robin 0 [active][first]
 \_ 1:0:1:2 sdav 66:240 [ready][active]
 \_ 0:0:1:2 sdr  65:16  [ready][active]
\_ round-robin 0 [enabled]
 \_ 1:0:0:2 sdag 66:0   [ready][active]
 \_ 0:0:0:2 sdc  8:32   [ready][active]

You can also use the 'dmsetup' command to see the number of paths:

[root@redhat ~]# dmsetup ls --tree
volume_53-lv_53 (253:1)
 └─mpath53 (253:0)
    ├─ (8:80)
    ├─ (8:64)
    ├─ (8:48)
    └─ (8:32)
[root@redhat ~]#


 

Interactive Tool for Multipath Troubleshooting

The multipathd -k command is an interactive interface to the multipathd daemon. Entering this command brings up an interactive multipath console. After entering this command, you can enter help to get a list of available commands, you can enter an interactive command, or you can enter CTRL-D to quit.

[root@redhat /]# multipathd -k

multipathd> help
multipath-tools v0.4.7 (03/12, 2006)
CLI commands reference:
 list|show paths
 list|show maps|multipaths
 list|show maps|multipaths status
 list|show maps|multipaths stats
 list|show maps|multipaths topology
 list|show topology
 list|show map|multipath $map topology
 list|show config
 list|show blacklist
 list|show devices
 add path $path
 remove|del path $path
 add map|multipath $map
 remove|del map|multipath $map
 switch|switchgroup map|multipath $map group $group
 reconfigure
 suspend map|multipath $map
 resume map|multipath $map
 reinstate path $path
 fail path $path
 disablequeueing map|multipath $map
 restorequeueing map|multipath $map
 disablequeueing maps|multipaths
 restorequeueing maps|multipaths
 resize map|multipath $map

multipathd>

In this example, I am using the 'show paths' command to see the multipath paths on my host:

[root@redhat /]# multipathd -k
multipathd> show paths
hcil    dev dev_t pri dm_st   chk_st  next_check
3:0:0:0 sdc 8:32  2   [active][ready] XX........ 4/20
5:0:0:0 sdf 8:80  2   [active][ready] XX........ 4/20
2:0:0:0 sdd 8:48  2   [active][ready] XX........ 4/20
4:0:0:0 sde 8:64  2   [active][ready] XX........ 4/20

multipathd>

7/21/2019 Alua - Asymmetric LoALUA_-_ASYMMETRIC_LOGICAL_UNIT_ACCESSgical Unit Access

http://slidepdf.com/reader/full/alua-asymmetric-loalua-asymmetriclogicalunitaccessgical-unit-access 14/20

multipathd> show topology
reload: mpath53 (360a98000427045777a24463968533155) dm-0 NETAPP,LUN
[size=2.0G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=8][enabled]
 \_ 3:0:0:0 sdc 8:32 [active][ready]
 \_ 5:0:0:0 sdf 8:80 [active][ready]
 \_ 2:0:0:0 sdd 8:48 [active][ready]
 \_ 4:0:0:0 sde 8:64 [active][ready]

multipathd>

multipathd> show multipaths stats
name    path_faults switch_grp map_loads total_q_time q_timeouts
mpath53 0           0          4         0            0
multipathd>

multipathd> show paths

hcil    dev dev_t pri dm_st   chk_st  next_check
3:0:0:0 sdc 8:32  2   [active][ready] XXX....... 7/20
5:0:0:0 sdf 8:80  2   [active][ready] XXX....... 7/20
2:0:0:0 sdd 8:48  2   [active][ready] XXX....... 7/20
4:0:0:0 sde 8:64  2   [active][ready] XXX....... 7/20

Now, let's simulate a path failure. We will fail the path 'sdc':

multipathd> fail path sdc
ok

multipathd> show paths
hcil    dev dev_t pri dm_st    chk_st   next_check
3:0:0:0 sdc 8:32  2   [failed][faulty] X......... 3/20
5:0:0:0 sdf 8:80  2   [active][ready]  XXX....... 7/20
2:0:0:0 sdd 8:48  2   [active][ready]  XXX....... 7/20
4:0:0:0 sde 8:64  2   [active][ready]  XXX....... 7/20
multipathd>

As you can see, 'sdc' is now marked 'faulty', but due to constant polling (the default polling interval is 5 seconds) the path should come back up as active almost immediately.

multipathd> show paths
hcil    dev dev_t pri dm_st   chk_st  next_check
3:0:0:0 sdc 8:32  2   [active][ready] XXX....... 7/20
5:0:0:0 sdf 8:80  2   [active][ready] XXX....... 7/20
2:0:0:0 sdd 8:48  2   [active][ready] XXX....... 7/20
4:0:0:0 sde 8:64  2   [active][ready] XXX....... 7/20

The goal of multipath I/O is to provide connectivity fault tolerance between the storage system and the server. When you configure multipath I/O for a stand-alone server, the retry setting protects the server operating system from receiving I/O errors as long as possible. It queues messages until a multipath failover occurs and provides a healthy connection.


However, when connectivity errors occur for a cluster node, you want to report the I/O failure in order to trigger the resource failover, instead of waiting for a multipath failover to be resolved.

In cluster environments, you must modify the retry setting so that the cluster node receives an I/O error in relation to the cluster verification process. Please read the OEM document for recommended retry settings.
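In dm-multipath, this retry behaviour is typically controlled by the no_path_retry setting (related to the queue_if_no_path feature seen in the earlier show topology output) in /etc/multipath.conf. The values below are only an illustrative sketch; the actual numbers should come from the OEM document mentioned above:

defaults {
    # stand-alone server: keep queueing I/O while a multipath failover is in progress
    no_path_retry    queue
}

# For a cluster node, a bounded value is typically used instead, for example:
#     no_path_retry    5    # return I/O errors after 5 path-checker intervals so cluster failover can trigger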

Enabling ALUA on NetApp Storage & Host 

Where and how to enable ALUA on the host when using NetApp Active/Active storage systems:

Where:

On NetApp storage: ALUA is enabled or disabled on the igroup mapped to a NetApp LUN on the NetApp controller.

Only FCP igroups support ALUA in ONTAP 7-Mode or a simple ONTAP HA pair.

You might ask why only FCP and not iSCSI in 7-Mode: because there is no proxy path in 7-Mode, as the two controllers have different IP addresses and the IP addresses are tied to the physical NICs/ports.

However, with ONTAP cMode (cluster mode), the access characteristics change, as the physical adapters are virtualised in cMode and clients access the data via a LIF [virtual adapter/logical interface]. In this case, the IP addresses are tied to the LIF and not to the physical NICs/ports, which means that if a port failure is detected, a LIF can be migrated to another working port on another node within the cluster; hence ALUA is supported with iSCSI in cMode setups.

This table only shows Windows OS, but ALUA is supported with iSCSI and ONTAP cMode on non-Windows OSes as well. Please check the NetApp Interoperability Matrix.


How to enable ALUA:
If ALUA is not enabled for your igroup, you can manually enable it by setting the alua option to yes. If you map multiple igroups to a LUN and you enable one of the igroups for ALUA, you must enable all the igroups for ALUA.

Steps:
1. Run the following command to check if ALUA is enabled:
   filer> igroup show -v igroup_name
2. If ALUA is not enabled, manually enable it on the igroup with this command:
   filer> igroup set igroup_name alua yes

On the Host:

1. Validate that the host OS and the multipathing software, as well as the storage controller software, support ALUA. If yes, then proceed.
   For example, ALUA is not supported for VMware ESX until vSphere 4.0. Check with the host OS vendor for supportability.
2. Check the host system for any script that might be managing the paths automatically and, if so, disable it.
3. If using SnapDrive, verify that there are no settings disabling ALUA in the configuration file.

Note: The output of igroup show -v displays the FCP initiators logged in on physical ports, as well as a port called "vtic". VTIC is an abbreviation for "virtual target interconnect". VTIC provides a connection between the two nodes in an HA pair, enabling LUNs to be served through target ports on both nodes. It is normal to see VTIC as one of the ports in the output of igroup show -v.

Note: It is recommended to first enable ALUA on the igroups on a storage system prior to discovering it on the host; if you are enabling it after discovering the LUN, then make sure you reboot the host to detect ALUA.


  ALUA support on VMware 

[Courtesy: VMware Knowledgebase ID 1022030]

When an ESX/ESXi 4.1 or ESXi 5.x host supports Asymmetric Logical Unit Access (ALUA), the output of storage commands has some new parameters.

If you run the command:

In ESX/ESXi 4.x: # esxcli nmp device list -d naa.60060160455025000aa724285e1ddf11
In ESX/ESXi 5.x: # esxcli storage nmp device list -d naa.60060160455025000aa724285e1ddf11

You see output similar to:

naa.60060160455025000aa724285e1ddf11
   Device Display Name: DGC Fibre Channel Disk (naa.60060160455025000aa724285e1ddf11)
   Storage Array Type: VMW_SATP_ALUA_XX
   Storage Array Type Device Config: {navireg=on, ipfilter=on} {implicit_support=on; explicit_support=on; explicit_allow=on; alua_followover=on; {TPG_id=1,TPG_state=AO}{TPG_id=2,TPG_state=ANO}}
   Path Selection Policy: VMW_PSP_FIXED_AP
   Path Selection Policy Device Config: {preferred=vmhba1:C0:T0:L0; current=vmhba1:C0:T0:L0}
   Working Paths: vmhba1:C0:T0:L0

The output may contain these new device configuration parameters:

• implicit_support
This parameter shows whether or not the device supports implicit ALUA. You cannot set this option as it is a property of the LUN.

• explicit_support
This parameter shows whether or not the device supports explicit ALUA. You cannot set this option as it is a property of the LUN.


• explicit_allow
This parameter shows whether or not the user allows the SATP to exercise its explicit ALUA capability if the need arises during path failure. This only matters if the device actually supports explicit ALUA (that is, explicit_support is on). This option is turned on using the esxcli command enable_explicit_alua and turned off using the esxcli command disable_explicit_alua.

• alua_followover
This parameter shows whether or not the user allows the SATP to exercise the follow-over policy, which prevents path thrashing in multi-host setups. This option is turned on using the esxcli command enable_alua_followover and turned off using the esxcli command disable_alua_followover.

If you run the command:

In ESX/ESXi 4.x: # esxcli nmp path list -d naa.60060160455025000aa724285e1ddf11
In ESXi 5.x:     # esxcli storage nmp path list -d naa.60060160455025000aa724285e1ddf11

You see output similar to:

fc.20000000c987f8c5:10000000c987f8c5-fc.50060160bce0383c:5006016e3ce0383c-naa.60060160455025000aa724285e1ddf11
   Runtime Name: vmhba2:C0:T1:L0
   Device: naa.60060160455025000aa724285e1ddf11
   Device Display Name: DGC Fibre Channel Disk (naa.60060160455025000aa724285e1ddf11)
   Group State: active unoptimized
   Array Priority: 0
   Storage Array Type Path Config: {TPG_id=2,TPG_state=ANO,RTP_id=18,RTP_health=UP}
   Path Selection Policy Path Config: {current: no; preferred: no}

fc.20000000c987f8c5:10000000c987f8c5-fc.50060160bce0383c:500601663ce0383c-naa.60060160455025000aa724285e1ddf11
   Runtime Name: vmhba2:C0:T0:L0
   Device: naa.60060160455025000aa724285e1ddf11
   Device Display Name: DGC Fibre Channel Disk (naa.60060160455025000aa724285e1ddf11)
   Group State: active
   Array Priority: 1
   Storage Array Type Path Config: {TPG_id=1,TPG_state=AO,RTP_id=7,RTP_health=UP}
   Path Selection Policy Path Config: {current: no; preferred: no}

In the output:

TPG_state = ANO means Active/Non-Optimized
TPG_state = AO means Active/Optimized


  Frequently Used Terms

Command Descriptor Block (CDB): The standard format for SCSI commands. CDBs are commonly 6, 10, or 12 bytes long, though they can be 16 bytes or of variable length.

  Multipath I/O (MPIO): A method by which data can take multiple redundant paths between a server and storage.

SCSI Target: The receiving end of a SCSI session, typically a device such as a disk drive, solid state drive, tape drive, or scanner.

Target Portal Group (TPG): A list of IP addresses and TCP port numbers that determines which interfaces a specific iSCSI target will listen to.

Array: An array is a group of disks that is housed in one or more disk enclosures. The disks are connected to two controllers running software that presents the disk storage capacity as one or more virtual disks. The term "array" is synonymous with storage array, storage system, and virtual array.

The logical unit (LU) is a SCSI convention used to identify elements of a storage system; for example, hosts see a virtual disk as an LU. An LU is also referred to as a Vdisk. The logical unit number (LUN) assigned by the user to a Vdisk for a particular host is the LUN at which that host will see the virtual disk.

LUN: The logical unit number (LUN) is a SCSI convention used to enumerate LU elements; for example, the host recognizes a particular Vdisk by its assigned LUN.

  VTIC is an abbreviation for "virtual target interconnect".


 

List of Abbreviations

Ashwin Pawar
Sep, 2014

www.simplysan.com 

[email protected]