
© 2012 IBM Corporation

Best Practices for IBM Storage with VMware: SVC / Storwize V7000 / XIV

Accelerate with ATS

Pete Kisich
Storage Solutions Engineering, VM Platforms

Accelerate with ATS Webinars
https://www.ibm.com/developerworks/mydeveloperworks/blogs/accelerate/?lang=en


VMware IBM Storage Best Practices

VMware Native Multi-pathing and Pluggable Storage Architecture

FC Connectivity Best Practices

iSCSI Connectivity Best Practices

General VMware Storage Best Practices

Using and Verifying VAAI Performance

Using the vCenter GUI Plug-in

Thin Provisioning on VMware

Additional Resources


Pluggable Storage Architecture (PSA)

•Introduced in vSphere 4.0

•Modular storage architecture consisting of…

► Default native multipath driver (NMP) supplied by VMware

► Optional third-party plug-ins (e.g., EMC PowerPath)


VMware NMP

•VMware NMP is an extensible module that manages “sub” plug-ins

► Storage Array Type Plug-Ins (SATPs)

► Path Selection Plug-Ins (PSPs)

•Decision framework for how VMkernel will access data

NMP Layer > SATP Layer > PSP Layer


VMware NMP

•Storage Array Type Plug-In (SATP)

► VMware recognizes SAN and loads appropriate SATP for all supported arrays

► Responsible for array specific functions

● Monitor path state

● Perform array specific functions for fail-over

► IBM SAN Volume Controller/Storwize V7000 = VMW_SATP_SVC

► IBM XIV = VMW_SATP_ALUA

•Path Selection Plug-In (PSP)

► SATP determines which PSP is assigned

► Chooses path for I/O Requests


VMware PSPs – Most Recently Used (MRU)

•VMW_PSP_MRU

•Selects first working path discovered at boot

•If this path becomes unavailable, the host switches to an alternative path

► The host continues to use the new path until it becomes unavailable (no automatic failback to the original path)


VMware PSPs - Fixed

•VMW_PSP_FIXED

•Default (but not the recommendation) for IBM SVC/Storwize V7000

•Selects first working path at boot as Preferred Path

•If Preferred Path is unavailable, random alternative path is used

•Automatically reverts to Preferred Path when it is restored


VMware PSPs – Round Robin

•VMW_PSP_RR

•Default of IBM XIV

•Load is balanced across all paths

► 1,000 I/O requests sent before next path is selected

► This is configurable and can be lowered

•Failed paths are excluded from selection until restored


PSP Considerations for SVC/V7000

•SVC/V7000 supports MRU, FIXED and RR

•ESX still uses default FIXED

► Goal is to have this default changed for next major certified release

•Issue/impact

► The first identified path (preferred path) is the same for all LUNs, so I/O concentrates on one path

•Recommended to change the default:

► ESX/ESXi 4.x: # esxcli nmp satp setdefaultpsp --psp <policy> --satp <your SATP name>

► ESXi 5.0: # esxcli storage nmp satp set --default-psp <policy> --satp <your SATP name>

•Where <policy> is:

► VMW_PSP_MRU for Most Recently Used mode

► VMW_PSP_FIXED for Fixed mode

► VMW_PSP_RR for Round Robin mode

•Existing (after a reboot) and new LUNs will be assigned VMW_PSP_RR
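After changing the default, the NMP configuration can be checked from the ESXi 5.0 shell; a sketch — the output and device identifiers will vary by environment:

```shell
# List each SATP with its currently configured default PSP (ESXi 5.0 syntax);
# VMW_SATP_SVC should now show VMW_PSP_RR as its default
esxcli storage nmp satp list

# Confirm the PSP actually assigned to each attached device
# (new LUNs and existing LUNs after a reboot pick up the new default)
esxcli storage nmp device list
```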


Considerations for XIV

•Multipathing

► Round Robin policy *always*, unless the version is below vSphere 4.0 (then use Fixed path)

► Do not use MRU

•Zoning

► Balance load across all IO controllers

•LUN Size and Datastore

► Use large LUNs – 1.5TB to 3TB – XIV caches large LUNs better

► LVM extents are supported but not recommended. In vSphere 5.0 with XIV, you can increase the size of a datastore and LUN (up to 64TB) online.

For all best practices

http://www.redbooks.ibm.com/abstracts/sg247904.html


Tuning for XIV

•Queue Depth

► Set both the queue_depth and the Disk.SchedNumReqOutstanding VMware kernel parameter to 128 on an ESX host that has exclusive access to its LUNs.

► Set both the queue_depth and the Disk.SchedNumReqOutstanding VMware kernel parameter to 64 when a few ESX hosts share access to a common group of LUNs.
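One way to apply these values on an ESX/ESXi 4.x or 5.0 host is sketched below. The qla2xxx module name is only an example for QLogic FC HBAs — substitute the driver module for your installed HBA:

```shell
# Set the VMkernel parameter governing outstanding requests per device
esxcfg-advcfg -s 64 /Disk/SchedNumReqOutstanding
esxcfg-advcfg -g /Disk/SchedNumReqOutstanding   # read back to confirm

# Set the HBA LUN queue depth (QLogic example; takes effect after reboot)
esxcli system module parameters set -m qla2xxx -p ql2xmaxqdepth=64
```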

•PSP Tuning – Reducing Path Switching

► Enable use of non-optimal paths with the Round Robin PSP

● # esxcli nmp roundrobin setconfig --device eui.0017380000691cb1 --useANO=1

► Tune the amount of I/O to send per path

● # esxcli nmp roundrobin setconfig --device eui.0017380000691cb1 --iops=10 --type "iops"


New feature in IBM Storage Management Console for VMware vCenter 3.0

VMware Storage Agnostic Features

• Enforce preferred multipathing

►Can be set for each storage device attached

►Supports 4.x and 5.x vCenter



SVC / Storwize V7000 FC Connectivity Best Practices

•Limit total paths to four per volume

•Use consistent LUN-IDs for volumes

► Not required but recommended

► No “host group” in GUI

● Place all ports for all hosts in single group

● Or – be diligent with LUN-ID assignment

•Follow standard zoning best practices

► e.g. single initiator zones


XIV FC Connectivity Best Practices

•Always balance with RR over all active modules (6 in a full system)

•Max performance is 12 paths per initiator, but this may not be suitable for large-scale implementations

•Utilize XIV host cluster groups for LUN assignment

•Single initiator zones should always be used



VMware iSCSI Initiator

•VMware includes a software iSCSI initiator

► Commonly used, and the only supported way to connect to SVC/V7000

► Uses the Static/Dynamic Discovery method to learn the target IPs configured on the Target IQN (each SVC/V7000 node)

► 1 source IQN, but can have multiple initiator IPs (port binding)


General VMware iSCSI Best Practices

•One VMkernel port group per physical NIC

► VMkernel port is bound to physical NIC port in vSwitch creating a “path”

► Creates 1-to-1 “path” for VMware NMP

► Utilize same PSP as for FC connectivity
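The 1-to-1 VMkernel-port-to-NIC binding described above is created from the CLI; a sketch, assuming the software iSCSI adapter is vmhba33 and the VMkernel ports are vmk1/vmk2 (both names are placeholders for your environment):

```shell
# ESX/ESXi 4.x syntax: bind each VMkernel port to the software iSCSI adapter
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33

# ESXi 5.0 syntax for the same binding
esxcli iscsi networkportal add --nic vmk1 --adapter vmhba33
esxcli iscsi networkportal add --nic vmk2 --adapter vmhba33
```

Each bound vmk port then shows up to NMP as an independent path, which is what allows the FC-style PSP policies to apply.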


VMware iSCSI Initiator – Static Discovery Example

V7000 iSCSI Targets:

Node-1_Port-0 – 1.1.1.10

Node-1_Port-1 – 1.1.1.11

Node-2_Port-0 – 1.1.1.12

Node-2_Port-1 – 1.1.1.13

iSCSI Sessions Created:

Node-1

Vmk-0 (1.1.1.1) to Port-0 (1.1.1.10)

Vmk-0 (1.1.1.1) to Port-1 (1.1.1.11)

Vmk-1 (1.1.1.2) to Port-0 (1.1.1.10)

Vmk-1 (1.1.1.2) to Port-1 (1.1.1.11)

Node-2

Vmk-0 (1.1.1.1) to Port-0 (1.1.1.12)

Vmk-0 (1.1.1.1) to Port-1 (1.1.1.13)

Vmk-1 (1.1.1.2) to Port-0 (1.1.1.12)

Vmk-1 (1.1.1.2) to Port-1 (1.1.1.13)

•Configuration results in 4 sessions per V7000 node


SVC/V7000 iSCSI Considerations

•Refer to:

► Guidelines for the Attachment of VMware iSCSI Hosts to SAN Volume Controller and Storwize V7000 – Errata (v6.2.x and higher), Feb 8th 2012

• Prior to 6.3.0.1 VMware iSCSI Multi-session is NOT supported

► Maximum of 1 VMware iSCSI initiator session per V7000 IQN (node)

► Static Discovery Only, Dynamic Discovery is not supported

•With 6.3.0.1 VMware iSCSI Multi-session IS supported

► Maximum of 4 VMware iSCSI initiator sessions per V7000 IQN (node)

Static Discovery is strongly suggested at this time


SVC/V7000 iSCSI Considerations (2 of 3)

V7000 iSCSI Static Targets:

Node-1_Port-0 – 1.1.1.10

Node-1_Port-1 – 1.1.1.11

Node-2_Port-0 – 1.1.1.12

Node-2_Port-1 – 1.1.1.13

iSCSI Sessions Created:

Node-1

Vmk-0 (1.1.1.1) to Port-0 (1.1.1.10)

Vmk-0 (1.1.1.1) to Port-1 (1.1.1.11)

Vmk-1 (1.1.1.2) to Port-0 (1.1.1.10)

Vmk-1 (1.1.1.2) to Port-1 (1.1.1.11)

Node-2

Vmk-0 (1.1.1.1) to Port-0 (1.1.1.12)

Vmk-0 (1.1.1.1) to Port-1 (1.1.1.13)

Vmk-1 (1.1.1.2) to Port-0 (1.1.1.12)

Vmk-1 (1.1.1.2) to Port-1 (1.1.1.13)

•Configuration results in 1 session per V7000 node

•Guidelines prior to 6.3.0.1 with Static Discovery


SVC/V7000 iSCSI Considerations (3 of 3)

V7000 iSCSI Static Targets:

Node-1_Port-0 – 1.1.1.10

Node-1_Port-1 – 1.1.1.11

Node-2_Port-0 – 1.1.1.12

Node-2_Port-1 – 1.1.1.13

iSCSI Sessions Created:

Node-1

Vmk-0 (1.1.1.1) to Port-0 (1.1.1.10)

Vmk-0 (1.1.1.1) to Port-1 (1.1.1.11)

Vmk-1 (1.1.1.2) to Port-0 (1.1.1.10)

Vmk-1 (1.1.1.2) to Port-1 (1.1.1.11)

Node-2

Vmk-0 (1.1.1.1) to Port-0 (1.1.1.12)

Vmk-0 (1.1.1.1) to Port-1 (1.1.1.13)

Vmk-1 (1.1.1.2) to Port-0 (1.1.1.12)

Vmk-1 (1.1.1.2) to Port-1 (1.1.1.13)

•Configuration results in 4 sessions per V7000 node

•Guidelines for 6.3.0.1 and newer with Static Discovery


SVC/V7000 iSCSI Recommendations

•See VMware KB: 1002598 – Disable “Delayed ACK” on vSphere Hosts

► Has shown large increases in virtual machine and VMware operation performance

•On vSphere Host utilize:

► VMware Software initiator

► Maximum of 2 physical NICs bonded to 2 VMkernel ports

•Enable jumbo frames for throughput-intensive workloads (must be done at all layers)

•For best performance keep maximum paths to 4 per LUN

► Requires static discovery and manual load balancing of V7000 ports if more than one host is used

•For performance (slightly less than above) utilize maximum of 8 paths

► Either static or dynamic discovery can be used and all V7000 ports can be used by all hosts
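Enabling jumbo frames "at all layers" means the physical switch ports, the vSwitch, and the VMkernel ports must all carry MTU 9000. On ESXi 5.0 the host side can be set roughly as follows (vSwitch1 and vmk1 are placeholder names):

```shell
# Raise the MTU on the standard vSwitch carrying iSCSI traffic
esxcli network vswitch standard set -v vSwitch1 -m 9000

# Raise the MTU on each iSCSI VMkernel interface bound to that vSwitch
esxcli network ip interface set -i vmk1 -m 9000
```

If any layer (switch port, vSwitch, or vmk interface) is left at 1500, large frames will be fragmented or dropped and throughput will suffer rather than improve.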


XIV iSCSI Recommendations

•Enable jumbo frames for throughput-intensive workloads (must be done at all layers)

•Use round robin to all IO Controllers

► Each initiator should see a target port on each

•Queue depth can also be changed on the iSCSI software initiator

► If more bandwidth is needed, LUN queue depth can be modified



Notable Storage Maximums in vSphere 5

• http://www.vmware.com/pdf/vsphere5/r50/vsphere-50-configuration-maximums.pdf

Storage Element                                   Maximum
Concurrent Storage vMotion per datastore          8
Concurrent Storage vMotion per vSphere host       2
Total number of paths per vSphere host            1024
Number of paths to a LUN for a vSphere server     32
Maximum virtual disks per host                    2048
LUNs per vSphere host                             256
LUN size supported on a vSphere host              64TB
Maximum virtual disk                              2TB minus 512 bytes


Datastore Volume Sizes

•vSphere 5.0 drastically increased the supported datastore size (to 64TB)

•As long as the storage pool has the capacity and performance, large datastores can be used

► Concerns with SCSI-2 locking have been mitigated with VAAI

► Most use cases call for a minimum of 1TB to a maximum of 4TB size volumes

● Keep in mind time required to migrate volumes if needed.

● Also snapshots are a consideration here

● Less “orphaned” space

► Avoid using very few LARGE volumes; instead, balance workloads

● Take advantage of all ports on system

● Better utilization of queues

● Try to balance workload across at least 8 LUNs


VM Snapshots

• Best Practices

► VMware snapshots are not backups, although backup software typically uses VMware snapshots

● Regular monitoring

– Configure vCenter Snapshot alarms – KB 1018029

• Limit to 2-3 snapshots in a chain to prevent performance degradation

• Delete all snapshots before Virtual Machine disk changes

• Confirm via command line if uncertainty of snapshot state exists

• As a general rule, use Snapshots for no more than 24-72 hours

• Use Hardware snapshots for longer retention (FCM)

• Improvements

► ESX(i) 4.0 U2 – Snapshot deletion takes up less space on disk

► ESXi 5.0 – New functionality to monitor snapshots and provide warning if snapshots need consolidation.

• Snapshot Best Practices – KB 1025279

• Understanding Virtual Machine Snapshots in ESX – KB 1015180


Flash Copy Manager Integration

•Common solution for VMware:

► Shares a common UI using VMware vCenter

► Uses the vStorage API for Data Protection

► Can configure two possible targets:

1. Tivoli Storage Manager (TSM) Storage Pool

2. Hardware snapshot (via FlashCopy Manager)


Storage DRS – Manual Mode

•Recommend running Storage DRS in manual mode initially

•When SDRS is in manual mode, recommendations are displayed in the Storage DRS tab:

► Recommendations display space utilization (before & after) for the source & destination datastores, as well as the current latency values of the source and destination.



Performance Advantage

•Expected better performance on VAAI-enabled storage

Full Copy Primitive



Hardware Assisted Locking – Atomic Test and Set (ATS)

•More granular locking

•Improved performance

•Better cluster scalability

•Sample lock events:

► VM migrate, Storage vMotion

► Power on/off

► New VM or VMDK


Block Zeroing

•Reduces IOPS and CPU overhead for creation of EZT (Eager Zeroed Thick) VMDKs

•Why EZT?

► Smallest IO overhead (after creation)

► Somewhat better performance

► VMware Fault Tolerance for vSphere 4 requires EZT


UNMAP (Introduced in 5.0, Changing in 5.1)

•Without the API:

► When VMFS deletes a file, the allocations are returned for use within VMFS but not at the array – only VMFS metadata changes, and the data remains consumed on disk.

► Manual space reclamation is often required to reclaim pool space at the array level.

•With the API:

► SCSI UNMAP releases the deleted blocks back to the storage pool's free blocks, so space is immediately reclaimed.

► Used any time VMFS deletes (svMotion, Delete VM, Delete Snapshot, Delete)
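In ESXi 5.0 Update 1 the automatic UNMAP-on-delete behavior was disabled and reclamation became a manual step. A sketch of the commonly documented procedure (the datastore path and percentage are examples for your environment):

```shell
# Check whether each device reports UNMAP ("Delete Status") support
esxcli storage core device vaai status get

# Manually reclaim dead space on a VMFS-5 volume (ESXi 5.0 U1 and later);
# vmkfstools -y creates a temporary balloon file over 90% of the free space
# and issues UNMAP for the blocks it covers, then deletes the file
cd /vmfs/volumes/my_datastore   # placeholder datastore name
vmkfstools -y 90
```

Run reclamation during low-I/O windows; the balloon file temporarily consumes free space on the datastore while the operation runs.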


Monitoring VAAI Performance with resxtop/esxtop

•Find the Disk ID assigned by VMware:

► Left graphic – In the vSphere Client, select Configuration > Storage, select the datastore name, choose Properties, and hover the mouse over the device name

► Right graphic – Under the IBM Storage tab in vCenter, select the datastore and view the ID


Monitoring VAAI Performance with resxtop/esxtop

•Viewing VAAI performance statistics:

► SSH to the ESX server and type esxtop (or use resxtop in the PowerCLI), then go to the LUN performance view by pressing “u”.

► To view VAAI information, add the VAAI columns and remove others: press “f” and enable the VAAI statistics fields (disable the others so the output fits on screen).
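For capturing these counters over time rather than watching interactively, esxtop's batch mode can redirect samples to a CSV for later analysis; the delay, iteration count, and server name below are examples:

```shell
# Sample all counters every 10 seconds for 60 iterations into a CSV
esxtop -b -d 10 -n 60 > vaai_stats.csv

# The same capture run from a remote workstation via resxtop
resxtop --server esx01.example.com -b -d 10 -n 60 > vaai_stats.csv
```

The batch-mode CSV can then be opened in a spreadsheet or replayed in Windows perfmon to chart the VAAI counters alongside latency and throughput.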



IBM Storage Plugin

•Supports XIV, SVC, V7000, and SONAS

•Simply create LUNs or shares

•Present LUNs or shares to ESX servers (NAS)

•Features:

► Provisioning

► Mapping and monitoring IBM storage in vSphere

► FC and NAS support

► Manage ESX multipathing for XIV storage

► Provision LUNs in bulk


IBM Storage Plugin – Volume Views

•Select a datastore

•Select a LUN in that datastore

•View XIV LUN info


IBM Storage Plugin – Adding Storage


Tasks in vCenter

•ESX servers rescan to discover new LUNs

•Updates the IBM plugin


LUNs in vCenter

•New LUN

•Can be used for a Datastore or RDM



OS File System/Database/Application Considerations

•Successful thin provisioning requires a “thin-friendly” environment:

► Filesystem (for VMware, consider VMFS and/or the Guest OS filesystem)

► Database

► Application

•What does “thin-friendly” mean?

1. Localized data placement

2. Reuse of freed space (write to previously used and deleted space before writing to never-used space)

3. Communication of filesystem deleted space to the storage system for reclamation

•User options and actions may affect the success of thin provisioning:

► Format options

► Defrag (may defeat thin provisioning by touching space)

► “Zero file” utilities can enable space reclamation for storage systems with zero detect or scrubbing
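As an illustration of the “zero file” technique above, a guest OS can overwrite its free space with zeros so a zero-detecting array can reclaim it. A minimal sketch for a Linux guest — the path and size are placeholders; in practice the file should be sized close to the filesystem's free space:

```shell
# Fill free space with zero blocks, then delete the file; the array's
# zero-detect/scrubbing can then reclaim the zeroed capacity
dd if=/dev/zero of=/tmp/zerofill.bin bs=1M count=16
sync
rm -f /tmp/zerofill.bin
```

Run this during a quiet period: the zero-fill temporarily drives the filesystem near 100% full and generates a burst of sequential writes.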


Thin-Friendly Evaluation (unofficial – based on documentation & observations)

File System                  OS                    Thin-Friendly?
ZFS                          Solaris               yes
IBM GPFS                     AIX/Linux             no
Veritas File System (VxFS)   HP, Sun, Linux, AIX   yes
UFS                          Solaris               no
XFS                          SGI                   yes
ReiserFS                     Linux                 yes
Ext2                         Linux                 no
Ext3                         Linux                 no
NTFS                         Windows               yes (quick format)
VMFS                         VMware                yes
Oracle OCFS2                 Linux                 no
HFS                          HP/UX                 no
ASM*                         Oracle                yes
JFS (VxFS)                   HP/UX                 yes
JFS2                         AIX/Linux             yes

UNIX platforms not on VMware are included for completeness.


Virtual Disk (VMDK) Thin Provisioning

•Three format options for creating a virtual disk in VMware:

1. Eager Zero Thick (EZT) – required for best performance and for Fault Tolerance

– Space is reserved in the datastore – unused space in the VMDK may NOT be used for other VMDKs in the same datastore

– VMDK is not available until formatted with 0s

– VAAI can help

2. Lazy Zero Thick (LZT) – “flat” (the default)

– Unused space in the VMDK may NOT be used for other VMDKs in the same datastore

– VMDK is immediately available – formatted with 0s on first write

3. Thin

– Unused space in the VMDK MAY be used for other VMDKs in the same datastore, so the issue is the datastore running out of space

– VMDK is immediately available – formatted with 0s on first write

– The specified size will be the “provisioned” size; the “used” size will be based on block size and data written
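The three formats map directly onto vmkfstools disk-creation options; a sketch, with the size, datastore, and VMDK names as placeholders:

```shell
# Eager Zero Thick: all blocks allocated and zeroed at creation time
vmkfstools -c 10g -d eagerzeroedthick /vmfs/volumes/datastore1/vm1/vm1-ezt.vmdk

# Lazy Zero Thick (the default): space allocated now, zeroed on first write
vmkfstools -c 10g -d zeroedthick /vmfs/volumes/datastore1/vm1/vm1-lzt.vmdk

# Thin: space consumed only as data is written
vmkfstools -c 10g -d thin /vmfs/volumes/datastore1/vm1/vm1-thin.vmdk
```

On a VAAI-capable array, creating the EZT disk offloads the zeroing to the Block Zeroing primitive described earlier, so the creation delay is much smaller.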


Virtual Disk (VMDK) Thin Provisioning

•Advantages

► More space savings over time than array-based thin provisioning (assuming UNMAP is not available)

● More aware of data that has been moved or deleted

► Over-provisioned conditions are typically easier to resolve

● Growing a datastore volume is typically easier than adding more physical storage

•Disadvantages

► When a thin-provisioned disk grows, the ESX host must make a SCSI reservation

● Can affect scalability

► More objects to manage compared to pool-based thin provisioning on the array

● SVC uses pool management with ‘autoexpand’

● XIV manages by pool


SAN Volume Controller (SVC) / Storwize V7000 Thin Provisioning Considerations

•‘Grain’ size for a volume is set at 256K

● Cannot be changed after the thin-provisioned volume has been created

● 256K is strongly recommended and is the default for 6.3.0.3 and above

● Specify the same grain size for FlashCopy

•Use autoexpand unless very closely monitored

● Storage pool reserve can protect against over-allocation of volumes

•Do not thin provision external virtualized disk behind SVC or V7000

•Thin provisioning exposes volume space utilization (usage is not displayed for thick volumes)

•Thin provisioning speeds up copying (only actual data is copied from a thin source volume)

•A volume brought in as image mode cannot be thin provisioned


XIV Thin Provisioning Considerations

•Thin provisioning choice is at the pool level (regular or thin)

► For a thin pool, the user specifies:

● Pool soft size (aggregate volume sizes presented to hosts)

● Pool hard size (aggregate physical space available for data)

► All volumes in a pool inherit the pool type (regular or thin)

•Changing from thick <-> thin

● Volume type may be changed thick <-> thin by moving the volume between thick/thin pools (dynamic, immediate change)

● Pool type may be changed thick <-> thin (dynamic, immediate)

•Other changes to thin provisioning (dynamic, immediate)

● Change pool soft size

● Change pool hard size

● Move volume into/out of pool

•ZERO performance impact, as XIV volumes are always written ‘thin’


General Guidelines for using thin provisioning

•Determine whether the software is thin-friendly (based on experience or vendor information)

● OS/Filesystem

● Database

● Application

•Do not thin provision:

● Apps that are not thin-friendly

● Apps that are extremely risk-averse

● Highest-transaction apps (except for XIV thin provisioning)

•Automate monitoring, reporting, and notifications

► Set thresholds according to how your business can respond

•Plan procedures for adding space (decide whether to automate)

•Use VAAI

● Limits the impact of SCSI reservations if thin provisioning is used

● Improves performance

● Check with your IBM storage expert for the code levels required for VAAI


Compression Feature Notes

•Common workloads suitable for compression

► Databases – DB2, Oracle, MS-SQL, etc.

► Applications based on databases – SAP, Oracle Applications, etc.

► Server virtualization – KVM, VMware, Hyper-V, etc.

► Other compressible workloads – engineering, seismic, collaboration, etc.

•Common workloads NOT suitable for compression

► Workloads using pre-compressed data types such as video, images, audio, etc.

► Workloads using encrypted data

► Heavy sequential-write-oriented workloads

► Other workloads using incompressible data or data with a low compression rate

•Recommended hardware configurations for compression:

► 4-core systems (V7000, CF8, older CG8) with less than 25% CPU utilization (before enabling compression)

► 6-core systems (newer CG8) with less than 50% CPU utilization (before enabling compression)

•Compressed volumes are not yet supported with Easy Tier



VMware and IBM Storage Integration Points

•IBM Storage Management Console for VMware vCenter (“VMware plug-in”)

● XIV storage administration from the vCenter console

•vStorage API for Array Integration (VAAI)

● Performance and scalability enhancements

•VMware Site Recovery Manager (SRM)

● Clustering solution

•vStorage APIs for Data Protection (VADP)

● FlashCopy Manager for VMware and Tivoli Storage Manager for Virtual Environments

•VMware APIs for Storage Awareness (VASA)

● Alerts, events, base for future profile-driven storage

•vSphere Metro Storage Cluster (vMSC)

● SVC Split IO Groups

•Virtual Volume (vVol) (future)

● IBM is 1 of 5 VMware co-development partners

ATS Masters - Storage


Performance Monitoring and Troubleshooting

•vCenter (high level)

► Historical performance data (check statistics levels)

► Consolidated metrics for all hosts / datastores in the environment

•esxtop / resxtop (tactical)

► Single ESX/ESXi host

► Detailed performance data in real time

•vscsiStats (storage guts)

► Detailed virtual SCSI device latency metrics

► Seek distance, IO size, latency

► Displays histograms

•Array tools (resolving a pin-pointed problem)

► TPC, Management GUI (SVC/V7000, XIV), XIVTop


Monitoring Disk Throughput with resxtop/esxtop

•Measure throughput (IOps) with:

– READs/s and WRITEs/s

– READs/s + WRITEs/s = IOPS

•Measure bandwidth (MBps) with:

– MBREAD/s and MBWRTN/s

•IO size is what makes a workload IOps-gated or MBps-gated


Disk Throughput Example

Virtual machine view: Type v.

Device view: Type u.

Adapter view: Type d.


Monitoring Disk Latency with resxtop/esxtop

•Host bus adapters (HBAs) include SCSI, iSCSI, RAID, and FC-HBA adapters.

•Latency stats come from the device, the kernel, and the guest:

► DAVG/cmd: average latency (ms) of the device (LUN)

► KAVG/cmd: average latency (ms) in the VMkernel, also known as “queuing time”

► GAVG/cmd: average latency (ms) in the guest; GAVG = DAVG + KAVG


Disk Latency and Queuing Example

(Screenshot annotations: normal VMkernel latency; queuing at the device)


•Accelerate with ATS Education:

https://www.ibm.com/developerworks/mydeveloperworks/blogs/accelerate/?order=desc&maxresults=100&sortby=0&lang=en

ATS Customer Webinars – Previous XIV Sessions:

•XIV Asynchronous Mirror

•SONAS Gateway for XIV

•Introducing XIV Gen3

•XIV 3.1 Update

Recordings:

https://www.ibm.com/developerworks/mydeveloperworks/blogs/accelerate/?order=desc&maxresults=100&sortby=0&lang=en