TRANSCRIPT
© 2013 IBM Corporation
SAN Volume Controller Best Practices for Storage Administrators
Chuck Laing- Senior Technical Staff Member
23 Oct 2013
2
Agenda
Max Limitations and BP recommendations
How to rack and cable
How to zone hosts and storage/testing
How to configure disk controllers
Architecting SVC MDGs
Data Placement and Host Vdisk mapping
How to utilize copy services
Additional reference material
3
SAN Volume Controller – Terminology/Limitations
[Diagram: an SVC cluster of I/O Groups (node pairs) in front of storage pools (MDGs) built from backend MDisks.]
Cluster:
•Max 4 Node-pairs (8 Nodes total) or IO Groups
•Large environments may have multiple clusters
Managed Disks (MDisks):
•Select LUNs (MDisks) from up to 64 physical disk subsystems
•Max 128 Storage Pools
•Max 128 MDisks per Pool
•Max 4096 MDisks per Cluster
•Can add or remove from Pool
Volumes:
•Max 8192 Volumes total (2048 per IO Group), up to 1PB in size
•Each VDisk is assigned to:
 • a specific Node-pair (IOGroup)
 • a specific Storage Pool
4
Max Limitations and BP recommendations
Always check the Max Limit configuration URL for the most current updates - 7.1
– http://www-01.ibm.com/support/docview.wss?uid=ssg1S1004368
– Some limitations have changed (for example, Generic Host Properties)
5
Most common questions / max limits versus BP recommendations
What is the max number of WWNN/WWPN devices?
– Recommendation - The more WWPNs per WWNN, the more throughput, up to 16
What is the max amount of virtualized disk capacity?
– Recommendation – Start at 125TB/IOgrp if performance requirements are unknown
How many MDGs per cluster should I create versus the max number of MDGs?
– Recommendation – try to limit it to 5 or fewer for better SVC cache utilization
How many MDisks should I place in an MDG versus the max number of MDisks?
– Recommendation – >= 8 per MDG; more is better, to a point! (See the planning sketch below.)
• Max size of MDisks <= 2TB, or > 2TB where the device type allows, for better qdepth
• The fewer the backend MDisks, the better the qdepth
• Spread equally across all SVC node ports (load balancing)
• Make them the size of an array if not using ET
• Make multiple striped vdisks, in a number that can be spread equally across the paths
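A minimal Python sketch of the planning rules of thumb above (>= 8 MDisks per MDG, 5 or fewer MDGs per cluster, MDisk size <= 2TB unless the device type allows larger). The function and variable names are illustrative only, not an IBM tool.

# Illustrative check of an MDG plan against the rules of thumb above.
def check_mdg_plan(mdgs):
    # mdgs: list of dicts like {"name": "MDG1", "mdisk_count": 8, "mdisk_size_tb": 2}
    warnings = []
    if len(mdgs) > 5:
        warnings.append("More than 5 MDGs per cluster - may reduce SVC cache utilization")
    for mdg in mdgs:
        if mdg["mdisk_count"] < 8:
            warnings.append(f"{mdg['name']}: fewer than 8 MDisks limits port spread in the IO group")
        if mdg["mdisk_size_tb"] > 2:
            warnings.append(f"{mdg['name']}: MDisk > 2TB - only where the device type/code level allows it")
    return warnings

for warning in check_mdg_plan([{"name": "MDG1", "mdisk_count": 6, "mdisk_size_tb": 2}]):
    print(warning)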
6
Should I mask host connections through zoning or SVC configurations?
– Best practice is to use the default value of 1111 (all ports enabled) and control
masking through zoning
– Use host type TPGS (Target Port Group Support) for Solaris hosts, HPUX for HP-UX, and
Generic for everything else other than OpenVMS
– Separate disk and tape IO on host HBAs
– Host Vdisk sizes of 2TB largest to 5GB smallest
Should I format the Vdisks upon creation or not?
– No need to! More explanation in backup slides at the end of this deck.
Should I use Easy Tier at the Back-end Storage or Front-end SVC level?
– Recommendation – don’t use both, either one or the other
– Depends on objective, advantages in each
Most common questions / max limits versus BP recommendations
7
What size Vdisks should I create - as large as the host will allow?
– Recommendation is to limit MM/GM vdisks to <= 500GB and spread across more ports
– Balance vdisks across (see the balancing sketch at the end of this slide)
• Preferred nodes
• Host servers
• IOgrps
What works better, 4 or 8 paths per Vdisk?
– Recommendation is 4 paths per Vdisk
How many IOgrps should I map to a host? 4?
– Recommendation is to size per throughput and the number of hosts per cluster
How many IO connections/zones per storage device to the SVC should I zone?
– 16 from any "one" storage device unit, zoned with all SVC node ports
Can a host have only one connection to the SVC? – Use dual host HBA connections
What is the best extent size to use?
– Recommendation – generally make all MDGs the same extent size – see the additional reference section
• 1024MB for DS8K
• 1024MB for XIV
• 256/512MB for internal or external SSD
• Capacity versus performance requirements/thresholds
Most common questions / max limits versus BP recommendations
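A small sketch of the "balance vdisks across preferred nodes, host servers and IOgrps" point above, using a simple round-robin; the IO group and node names are hypothetical.

# Round-robin planned vdisks across IO groups and preferred nodes so each
# node-pair and each preferred node carries a similar share of the volumes.
from itertools import cycle

iogrps = [0, 1]                    # IO groups in the cluster (example)
nodes_per_iogrp = ["A", "B"]       # preferred node alternates within each IO group

def balance(vdisk_names):
    iogrp_cycle = cycle(iogrps)
    node_cycles = {g: cycle(nodes_per_iogrp) for g in iogrps}
    for name in vdisk_names:
        g = next(iogrp_cycle)
        yield name, g, next(node_cycles[g])

for vdisk, iogrp, node in balance([f"vdisk{i}" for i in range(1, 9)]):
    print(f"{vdisk}: iogrp {iogrp}, preferred node {node}")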
How much capacity can an SVC CG8 Cluster handle?
– Global SSA recommendation is specific to the starting/entry point, when initial ordering takes
place where specific performance requirements are unknown
– It is strongly recommended to limit the total capacity on the SVC to 500 TB as a
starting/entry point for the CG8 models.
– The breakdown per SVC IOgrp is 125TB per IOgrp
• Note: For CG8 or newer models the entry point is 500TB regardless of Copy Services
or not
– For CF8 and older models - the recommended limit of 500 TB is only valid under the
following conditions, otherwise 300TB is the max limit:
• 4Gbs SAN Fabric min (Speed on the Fabric Switches)
• No FLASHCOPY
• No REPLICATION
The bottom line is to start at 500TB, gauge growth on real performance measurements, adjust
so as not to exceed SLA requirements, and add capacity that exceeds requirements to a different
(new) SVC Cluster
8
BP -Threshold of capacity limits- SVC cluster
9
Rack and Stack
SAN Volume Controller Best Practices for
Storage Administrators
10
SVC – Single rack elevation or optional split rack design
Remember this!!
The right way
This example is with the RPS
– 2145 SVCs at the top
– UPS units in the middle
– RPS units at the bottom
– 1U fillers between the nodes
(The elevation shows I/O groups 0-3.)
11
SVC Cluster Power Cabling Physical Diagrams
IBM SAN Volume Controller Physical Installation Racking Guidelines
Power cord from the UPS gets plugged into Power Out 1 on the front side of this RPS unit.
Power cord from the primary ePDU (left side for odd nodes / right side for even nodes) gets plugged into the Main power inlet on the back side of this RPS unit.
Power cord from the secondary ePDU (right side for odd nodes / left side for even nodes) gets plugged into the Main power inlet on the back side of the RPS unit.
The picture below is of the RPS (Redundant ac-Power Switch),
IBM Feature Code 8300, that is included as part of newer SVC
clusters. In more general terms this RPS unit is actually an
Automatic Transfer Switch (ATS).
The picture orientation is looking at an actual installed RPS
from the front of the cabinet.
You can see that the RPS is attached to its brackets and
mounted on the cabinet back rails to ensure the RPS ‘main’ &
‘backup’ receptacles are facing the back of the cabinet. The
‘power out’ receptacles that are for connection to the UPS unit
should be facing the front of the cabinet.
IBM RPS Unit Info
[RPS front-panel detail: two circuit breakers per unit (12A max, Branch A / Branch B), AC/DC indicators, and the label "ATTENTION: CONNECT ONLY IBM SAN VOLUME CONTROLLERS TO THESE OUTLETS. SEE SAN VOLUME CONTROLLER INSTALLATION GUIDE."]
[Rack elevation, bottom to top: SVC #1-#8 (IOGroup 0 Node A/B through IOGroup 3 Node A/B) with 1U filler panels between the nodes, a 1U filler panel or optional SSPC server, 8U reserved for data wiring patch panels or filler panels, and a 1U filler panel or optional 1U monitor; left-side and right-side ePDUs provide the Main and Backup inputs.]
12
Racking and Stacking: SVC Best Practice: A Right Way Example
Remember this!!
This example is with the
mandatory ac-power switch
– 2145 SVCs at the top
– UPS units in the middle
– RPS units at the bottom
Rear View Front View
13
How to zone hosts and storage/testing
SAN Volume Controller Best Practices for
Storage Administrators
14
Best Practice for Storage zones
– Create two cluster zones (consisting only of node ports that are not used for MM/GM) for node-to-node traffic
– Each Backend Storage device should be separated into its own zone with SVC
– Zone Backend Storage ports and SVC ports together
– Never put Host OS ports, SVC ports and Backend Storage ports together in the
same zone
• Instead - Create zones with Host ports and SVC ports
- Create zones with Backend and SVC ports
• Never use the same DS8K ports or any native back-end port for connectivity to
SVC and an attached host
• If SVC is attached to the DS8K or other native back-end devices and the DS8K
or other back-end device is using native GM (not SVC GM) then dedicate
appropriate back-end ports specifically for GM, not to be used for attaching any
other device, whether Host Server, SVC or other connectivity relationships.
– Note - Never span zones to include more than one Backend storage device!
• Never make zoning changes on redundant Fabrics at the same time
• Make changes on one fabric and wait 30 min in-between
IBM Implementation Services for storage software - SVC
15
Zoning Best Practices - Continued
[Two diagrams, "SVC Correct Example" and "SVC Incorrect Example": each shows the DS8K left and right I/O enclosures (Bays 0-7, host adapter ports C0/C1), a 4-node SVC (iogrp 0 and iogrp 1, 2048 LUNs max each, 4-node max of 4096 Vdisks, WWPN prefix 5005076801) with node ports spread across the DIR1 and DIR2 SAN fabrics, and an Application Host Server (ports A2/B2). In the incorrect example the same port on the back-end is being used for both the host and the SVC.]
The same native backend ports should not be shared for both direct host connectivity and SVC connectivity. The correct way to bypass the SVC is to use other backend ports not connected to the SVC.
16
Back panel of SVC node:
Fabric A Fabric B
IBM_2145:admin> svcinfo lsnode 1
port_id 5005076801100135 (port 3)
port_status active
port_id 5005076801200135 (port 4)
port_status active
port_id 5005076801300135 (port 2)
port_status active
port_id 5005076801400135 (port 1)
port_id 5005076801500135 (port 5)
port_status active
port_id 5005076801600135 (port 6)
port_status active
port_id 5005076801700135 (port 7)
port_status active
port_id 5005076801800135 (port 8)
port_status active
[Back-panel port layout: the first HBA's ports are physically numbered 1-4; the second HBA's ports, numbered 5-8, run left to right. The digits highlighted in yellow on the slide are what make each port's WWPN unique.]
Do not add ports 5-8 into the cluster zone.
Use these ports for MM/GM only with the 6.4.1 and 7.1.x families.
Zoning BP – understanding the physical and logical
17
Zoning Best Practices - XIV
18
Too many paths to a Vdisk - what could happen?
– If the recommended number of paths to a vdisk is exceeded, a path failure may not be
recovered in the required amount of time
• Causes excessive I/O waits, resulting in application failures
• Under certain circumstances, it can reduce performance
• Note: 8 paths are supported but 4 are optimum for SDD/SDDDSM/SDDPCM (see the path-count sketch below)
SVC host zones
– There must be a single zone for each host port. This zone must contain the host port
and one port from each SVC node that the host will need to access
Hosts with four (or more) Host Bus Adapters (HBAs):
– Configure your SVC host definitions (and zoning) as though the single host is two or
more separate hosts
– During Vdisk assignment, alternate which Vdisk is assigned to each of the "pseudo-
hosts" in a round-robin fashion (a pseudo-host is nothing more than another regular
host definition in the SVC host config; each pseudo-host will contain 2 unique host
WWPNs, 1 WWPN mapped to each fabric)
All SVC nodes must see the same set of LUNs from a disk controller
– Otherwise the controller and/or MDisks go into degraded mode
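A sketch of the path arithmetic behind the 4-path recommendation: the multipath driver sees one path for every host-port / SVC-node-port pairing in the host zones, across both nodes of the IO group. The figures reproduce the correct (4-path) and oversubscribed (16-path) examples shown later in this deck; the function itself is illustrative.

# Paths a host multipath driver (SDD/SDDPCM/SDDDSM) sees to one vdisk.
def paths_per_vdisk(host_ports, svc_ports_per_node_in_zone, nodes_in_iogrp=2):
    return host_ports * svc_ports_per_node_in_zone * nodes_in_iogrp

print(paths_per_vdisk(2, 1))   # 4  - recommended: 2 host HBA ports, 1 SVC port per node each
print(paths_per_vdisk(2, 2))   # 8  - supported, but not optimum
print(paths_per_vdisk(4, 2))   # 16 - the oversubscribed example in the backup slides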
Zoning Best Practices - SVC Preferred Node Scheme
SAN Fabric 1
19
vdisk1 vdisk2
Preferred path for vdisk1 is SVC
N1P1 & N1P5
Non Preferred path for vdisk1 is
SVC N2P1 &N2P5
Preferred path for vdisk2 is SVC
N2P1 & N2P5
Non Preferred path for vdisk2 is
SVC N1P1 &N1P5
DEV#: 5 DEVICE NAME: hdisk5 TYPE: 2145 ALGORITHM: Load Balance
SERIAL: 600507680181059C4000000000000007
==============================================================
Path# Adapter/Path Name State Mode Select Errors
0 fscsi0/path0 OPEN NORMAL 1996022 0
1* fscsi0/path1 OPEN NORMAL 29 0
2 fscsi2/path2 OPEN NORMAL 1902495 0
3* fscsi2/path3 OPEN NORMAL 29 0
Server/Host view of the datapaths
How do you get too many paths?
Examples of correct Host to SVC Cluster zoning with 8 ports
Host
B1 A1
SAN Fabric 2
20
Examples of correct zoning prior to 8 ports - is this correct?
SAN Fabric 1
21
Correct Storage to SVC zoning with 8 ports in an “existing” environment
SAN Fabric 2
SAN Fabric 1
22
Correct Storage to SVC zoning with 8 ports in a “new” environment
SAN Fabric 2
23
Correct way to make MM/GM zone, new implementation of CG8, code 7.1.x
With 7.1 you can use port masking to tell the cluster what ports to use for heartbeat/MM/GM
Ports 7-8 cannot be used for storage connections. Ports 7-8 can only be used for SVC-to-host
connections, heartbeat, and/or GM/MM traffic.
– Exclude the targeted GM/MM and heartbeat ports from your normal host-to-SVC and
storage-to-SVC traffic
For heartbeat on existing environments – put/dedicate all port 7's (:70:) in one zone on one
fabric and all port 8's (:80:) in one zone on the other fabric
– In scenarios where replication (GM/MM) is not part of the environment, ports 7 and 8
would be used for heartbeat and 5 and 6 would be used for disk-storage and host
connections/traffic
For heartbeat on new builds, take care to use physical port 2 (:30:) on HBA1 and port 8 (:80:)
on HBA2 for more resiliency across HBA cards, or physical ports 3 (:10:) and 7 (:70:)
respectively, to avoid a total failure in the event of one card failing
24
New zoning BP with 7.1.0.x and 8 port nodes
Think of 4 types of zones per fabric (Use case - existing environment, adding 2 FA HBA)
Fabric A:
– FA-ST1 Storage Zone: N1P1, N1P3, N2P1, N2P3, SP1, SP3
– FA-Host 1 Host Zone/IOgrp: N1P1 & N2P1 or N1P3 & N2P3, Host HBA 1
– FA-SVC Zone (Node to Node/Heartbeat): N1P8, N2P8
– FA-MM/GM Mirror Zone: N1P6, N2P6, RN1P6, RN2P6
Fabric B:
– FB-ST1 Storage Zone: N1P2, N1P4, N2P2, N2P4, SP2, SP4
– FB-Host 1 Host Zone/IOgrp: N1P2 & N2P2 or N1P4 & N2P4, Host HBA 2
– FB-SVC Zone (Node to Node/Heartbeat): N1P7, N2P7
– FB-MM/GM Mirror Zone: N1P5, N2P5, RN1P5, RN2P5
25
New zoning BP with 7.1.0.x and 8 port nodes
Think of 4 types of zones per fabric-Use case – new build, with resiliency across FA HBAs
Fabric A:
– FA-ST1 Storage Zone: N1P1, N1P6, N2P1, N2P6, SP1, SP3
– FA-Host 1 Host Zone/IOgrp: N1P1-N2P1 or N1P6-N2P6, Host HBA 1
– FA-SVC Zone (Node to Node/Heartbeat): N1P3, N2P3…
– FA-MM/GM Mirror Zone: N1P8, N2P8, RN1P8, RN2P8
Fabric B:
– FB-ST1 Storage Zone: N1P2, N1P5, N2P2, N2P5, SP2, SP4
– FB-Host 1 Host Zone/IOgrp: N1P5-N2P5 or N1P4-N2P4, Host HBA 2
– FB-SVC Zone (Node to Node/Heartbeat): N1P7, N2P7…
– FB-MM/GM Mirror Zone: N1P2, N2P2, RN1P2, RN2P2
26
[Diagram: GM/MM zone A and GM/MM zone B - WAN direct or FCIP connection to the SVC ports.]
Adding 2nd HBA - MM/GM zone, existing environment - 7.1.0.x
Make the local_fc_port_mask (Node to Node/heartbeat) = 11000000
Make the partner_fc_port_mask (MM/GM) = 00110000
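A hedged sketch of how these masks map to ports, reading the mask right to left (rightmost character = port 1), which matches the heartbeat (ports 7-8) and MM/GM (ports 5-6) zones on the previous slides. The chsystem parameter names in the comment are assumptions based on the mask names above; verify the exact syntax in the 7.1 CLI reference.

# Decode an SVC FC port mask string; the rightmost character represents port 1.
def ports_enabled(mask):
    return [i + 1 for i, bit in enumerate(reversed(mask)) if bit == "1"]

print(ports_enabled("11000000"))   # [7, 8] -> node-to-node / heartbeat traffic
print(ports_enabled("00110000"))   # [5, 6] -> MM/GM replication traffic
# Applied on the cluster with something like (parameter names assumed):
#   svctask chsystem -localfcportmask 11000000 -partnerfcportmask 00110000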
27
[Diagram: GM/MM zone A and GM/MM zone B - WAN direct or FCIP connection to the SVC ports.]
Correct way to make MM/GM zone, new implementation of CG8, code 7.1.x
Make the local_fc_port_mask (Node to Node/heartbeat) = 01000100
Make the partner_fc_port_mask (MM/GM) = 10000010
28
How to configure Disk Controllers
SAN Volume Controller Best Practices for
Storage Administrators
29
Disk Controller Best Practices - Continued
SVC Managed Disk Group – All Volumes in MDG are created on Arrays that are the same technology unless ET
• Same disk size (146 GB)
• Same disk speed (15K RPM SAS)
• Same RAID type (RAID-5)
• No exceptions
– How many Arrays in an SVC MDG
• Enough to satisfy the application workload requirements use Disk Magic
• Enough to physically hold the application
• Minimum of 8 (allows greatest port workload spread in an IO Group)
4 is too few (an old message - let's get away from this)
• Balance across DA Pairs
• Balance across CECs
– Typically an application would reside in a single MDG
• OK for multiple applications to be in the same MDG
• Applications in multiple MDGs have availability considerations
30
Architecting SVC MDGs/Disk Controller Best Practices
Spread the MDGs across as many hardware resources as possible.
If FlashCopy is used, then make at minimum 2 MDGs.
31
All MDisks in an MDG should have similar performance
– Create minimum of one MDG per disk controller
– Add MDisks with same drive size, speed, RAID type from same Backend Storage device
Preferred path for VDisks in an I/O Group can be tailored
– Default algorithm will usually provide good results
• SVC alternates VDisks across nodes to attempt to balance workload
– Use TPC to monitor node workload balance
• Override (GUI) defaults to assign new VDisks to least used nodes (Control through
CLI)
Use striped mode VDisk layout in most cases
– Consider using sequential VDisks with applications/DBs that do their own striping
– Balance need for copy services with need for a few large VDisks
• Remember limit of 64MB/s per VDisk for FC (background copy rate can be changed)
• Remember limit of 25MB/s per VDisk for MM/GM sync/resync (background copy rate is pre-
defined)
• More, smaller Vdisks spread across more ports are less susceptible to creating IO
bottlenecks than fewer, larger LUNs across fewer node ports
– Default balances load across MDisks and randomizes starting MDisk per VDisk
Disk Controller Best Practices
36
SVC Cabling and Zoning Best Practice
[Diagram: a 2-I/O-group SVC cluster (I/O G-0: Nodes 1-2, I/O G-1: Nodes 3-4, ports 1-4 per node) cabled to SAN Fabric A and SAN Fabric B; Server 1 and Server 2 (ports A and B); and DS5K_1 (Cntrl A and Cntrl B, daughter card channels 1 and 3 / 2 and 4) presenting MDisk 1-13 / Array 1-13 as MDisk Group 1 / DS5K_1, from which VDisk 1-4 are built.]
SVC ZONING
Create one zone in the RED fabric with all the SVC node ports cabled to Fabric A and create one zone in the BLUE fabric with all the SVC node ports cabled to Fabric B.
Example:
All odd (RED) SVC node ports in one zone and all even (BLUE) SVC node ports in one zone.
Note: For a cluster to be created and to operate correctly, all node ports must be zoned together.
HOST ZONING
Create an SVC/Host zone for each server that receives storage from the SVC cluster.
Example:
Zone Server 1 port A (RED) with all SVC node port 3's.
Zone Server 1 port B (BLUE) with all SVC node port 2's.
Zone Server 2 port A (RED) with all SVC node port 1's.
Zone Server 2 port B (BLUE) with all SVC node port 4's.
*** NOTE *** SVC supports a maximum of 256 host objects per I/O group, thus a maximum of 1024 per cluster. The above host zoning results in each server being seen by every I/O group, and the default host object creation behavior results in each host object counting as one towards this 256 maximum.
To create more than 256 host objects in the cluster you must zone a host to a subset of the I/O groups, assign the host object at host creation time to that same subset of I/O groups, and then assign that host's VDisks to one of the I/O groups in that same subset.
STORAGE ZONING
Create an SVC/Storage zone for each storage subsystem virtualized by the SVC cluster.
Example:
Zone DS5K_1 controller A and B daughter card channel ports 1 and 3 with all SVC node ports 1 and 3 in the RED fabric.
Zone DS5K_1 controller A and B daughter card channel ports 2 and 4 with all SVC node ports 2 and 4 in the BLUE fabric.
33
Best practices are to spread across drive enclosures / loops
– Ensure Channel Protection is configured
Making DS3/4/5K Arrays
When should I use SVC ET versus DS8000 ET?
Functional differences between SVC and DS8000 ET - consider the following
– SVC is still at ET1 vs DS8000 at ET3
• First - ET3 gives SSD/Enterprise/Nearline (SATA), i.e. three levels of tiering, vs two tiers only on SVC, unless using IILM
• Secondly - DS8000 ET can take advantage of rank and DA utilization knowledge in moving extents up/down tiers whereas SVC can't.
34
ET for DS8000 Behind SVC
What MDisk size should be created from DS8800 and presented to SVC?
– SVC MDisk is ≤ 2 TB until V6.2
– Key to choose an MDisk size that balances the number of LUNs across the 16 paths and maximizes storage utilization
– Example: two HDD Extent Pools of 38452GB each (see the sizing arithmetic below)
• Using 48 LUNs per Extent Pool provides 38448GB usable capacity
• MDisk size of 801GB
• 3 MDisks per SVC path (6 in total for both Extent Pools)
– For MDisks from SSD, OK to have a smaller MDisk size
• Spread SSD MDisks across paths – use the same approach
35
DS8800 Volume Configuration (2)
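The arithmetic behind the example above, as a small sketch using only the figures from the slide:

# DS8800 example: pick a LUN count that divides evenly across the 16 SVC
# back-end paths and nearly fills the extent pool.
extent_pool_gb = 38452
paths = 16
luns_per_pool = 48                      # divisible by 16 -> even spread across paths
mdisk_size_gb = extent_pool_gb // luns_per_pool
print(mdisk_size_gb)                    # 801 GB per MDisk
print(mdisk_size_gb * luns_per_pool)    # 38448 GB usable of the 38452 GB pool
print(luns_per_pool // paths)           # 3 MDisks per path (6 in total for both pools)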
Recommendation
General recommendation –Use Storage Pool Striping / Multi-rank Extent Pools rather than
traditional Single rank Extent Pools on the DS8000 with SVC
Only use Single rank Extent Pools when there are valid reasons in the environment
–Shared nothing environment such as Oracle ASM, DB2 Warehouse and GPFS
–Easy Tier Manual Mode allows the merge of pools so easier to change later on
When in doubt, engage ATS – send email to ([email protected])
36
Advantages – MDisks all the same size
– Easier administration for SVC Admin
– DS8000 Easy Tier – it's free, so order and enable it; unless SVC ET is used, but do not use both SVC ET and DS8K ET together
• SSD mixed with FC or SATA, or FC mixed with SATA
• DS8K ET and Encryption work together (supported): 6.1 or higher SVC ... 76.10.x or higher for DS8700
• The auto-rebalance capability of ET on the DS8000 will ensure performance skew is flattened, with no need to run the manual SVC rebalance Perl scripts to do so
• ET builds a performance profile for each rank in the pool (i.e. RAID type, speed, capacity, 6+P+S/7+P etc.) and monitors performance of the rank.
• If a rank becomes overloaded vs its profile capability, then extents are moved to other, less busy ranks.
– Required for EasyTier extent level relocations and the automatic exploitation of SSDs
– See EasyTier Redpaper
“Best Practice” does not replace thoughtful design – Evolution of storage capabilities changing longstanding practices
– Think about the workloads…Think about the configuration
– Monitor both SVC and the DS8000
– SVC and DS8000 virtualization make changes easier, less disruptive
– Understand workload isolation and availability requirements vs ease of use
Pools in DS8000 using ET Behind SVC
38
Disk Controller Best Practices
If there is no plan/strategy for MDG configuration use only one MDG per backend disk
system
– MDG RAS Considerations:
• May make sense to create multiple MDGs if you ensure a host only gets its
VDisks built from one of the MDGs
• If an MDG goes offline then it impacts only a subset of all the hosts using SVC
• If not going to isolate hosts to MDGs then create one large MDG per disk
controller
• Assumes physical disks are all the same size and speed and the same RAID level
– MDG Performance Considerations:
• May make sense to create multiple MDGs if attempting to isolate workloads to
different disk spindles
• With SATA disk drives avoid more than one MDG if possible
• We tend to see too many MDGs with too few MDisks and thus over-driven MDisks
• We tend to see too few large physical disk spindles and I/O suffers
• Focus on spindle counts to meet workload requirements
In Large environments, remember availability
VDisk / Volume creation, round robin in IO group by default
Number of MDisks in the MDG – Most important attribute influencing performance
– Minimum of 8 MDisks in the MDG to use all 8 ports in an IO group
– More MDisks in the MDG is better for transactional workloads
– Note: Increasing performance “potential” adversely increases impact boundary
• Cannot be avoided up to minimum performance requirements
Rotational speed of the disks – More significant than access density
SSD considerations – Measurement of benefit can be analyzed without SSD using SVC STAT tool
– Can use SSD from backend storage
39
SVC Performance Considerations
40
V4.2.1 adds dynamic cache partitioning while maintaining the overall cache LRU algorithm
– Enterprise controllers provide similar cache feature
– Pre-V4.2.1, if cache is 100% full due to not being able to destage writes to slow disk subsystem, then other applications on faster disk subsystems may be affected
Partitioning done by Managed Disk Group (MDG)
– This was chosen as most users configure MDGs to match disk subsystem performance characteristics
– This means a maximum of 128 partitions (MDG limit)
– Maximum occupancy percent is for writes only
• Read I/O requests continue normally
No one partition may occupy more than its allocated limit
The read/write nature of the cache remains unchanged.
– That is the cache can be 100% read, write or some ratio thereof
Number of MDGs (= # of partitions)    Maximum occupancy allowance as % of cache
1                                     100
2                                     75
3                                     40
4                                     30
5 or more                             25
SVC Performance Considerations- SVC Cache Partitioning
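A sketch of the partition table above as a lookup; it shows why keeping to 5 or fewer MDGs (as recommended earlier) leaves each partition no less than the 25% floor.

# Maximum write-cache occupancy a single MDG partition may use (table above).
def max_cache_occupancy_pct(num_mdgs):
    table = {1: 100, 2: 75, 3: 40, 4: 30}
    return table.get(num_mdgs, 25)   # 5 or more MDGs -> 25%

for n in (1, 2, 3, 4, 5, 10):
    print(n, "MDGs ->", max_cache_occupancy_pct(n), "% of cache per partition")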
41
Architecting SVC MDGs – Extent size considerations
Managed Disk Group Extent Size
– Maximum cluster capacity is related to extent size
• 16MB extent = 64TB and doubles for each increment in extent size e.g. 32MB =
128TB
– Strongly recommend a minimum of 128/256MB
• SPC benchmarks used 256MB extents
– Does not really affect performance, supposedly
– Pick one extent size and use it for all MDGs (see the capacity sketch below)
• Can't migrate VDisks between MDGs with different extent sizes
• V4.3.x helps here with the introduction of VDisk Mirroring (now the preferred migration
method)
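A sketch of the extent-size/capacity relationship above (16MB extent = 64TB, doubling with each increment); the linear scaling used here is an assumption consistent with those two figures.

# Maximum virtualized capacity per cluster as a function of pool extent size.
def max_cluster_capacity_tb(extent_mb):
    return 64 * (extent_mb / 16)    # anchored on 16MB -> 64TB from the slide

for ext_mb in (16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192):
    print(f"{ext_mb} MB extent -> {int(max_cluster_capacity_tb(ext_mb))} TB max cluster capacity")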
42
Disk Controller Best Practices Architecting SVC MDGs with V6.4.x and above
Architecting SVC MDGs – Extent size considerations
XIV Volume (MDisk) Size for SVC
Considerations for determining what size volume to create on XIV and present to SVC are:
– Maximize available space
– For pre-SVC R6.3 environments, ensure number of volumes being created is divisible
by the number of ports on XIV zoned to SVC
• Provides the best balance of the volumes across all the ports
– With SVC R6.3 software and round-robin multipathing, number of volumes is less
important
• But still have at least 3-4 volumes per path
– Largest XIV volume that SVC can detect is 2048 GB
• Current as of SVC R6.4 firmware
– Maximum of 511 LUNs from one XIV system can be mapped to a SVC cluster
See the tables on the next two slides for LUN sizing (and the balance-check sketch after them):
XIV Gen2 A14 Recommended LUN Sizes

Number of LUNs using 1632 GB (decimal) LUNs with an XIV Gen2 A14 system using 1TB drives:
XIV Modules Installed   LUNs (MDisks) at 1632 GB each   XIV System TB used   XIV System TB Capacity Available
6                       16                              26.1                 27
9                       26                              42.4                 43
10                      30                              48.9                 50
11                      33                              53.9                 54
12                      37                              60.4                 61
13                      40                              65.3                 66
14                      44                              71.8                 73
15                      48                              78.3                 79

Number of LUNs using 1666 GB (decimal) LUNs with an XIV Gen2 A14 system using 2TB drives:
XIV Modules Installed   LUNs (MDisks) at 1666 GB each   XIV System TB used   XIV System TB Capacity Available
6                       33                              54.9                 55.7
9                       52                              86.6                 87.8
10                      61                              101.6                102.4
11                      66                              109.9                111.3
12                      75                              124.9                125.7
13                      80                              133.2                134.7
14                      88                              146.6                148.1
15                      96                              159.9                161
XIV Gen3 114 Recommended LUN Sizes

Number of LUNs using 1669 GB (decimal) LUNs with an XIV Gen3 114 system using 2TB drives:
XIV Modules Installed   LUNs (MDisks) at 1669 GB each   XIV System TB used   XIV System TB Capacity Available
6                       33                              55.1                 55.7
9                       52                              86.8                 88.0
10                      61                              101.8                102.6
11                      66                              110.1                111.5
12                      75                              125.2                125.9
13                      80                              133.5                134.9
14                      89                              148.5                149.3
15                      96                              160.2                161.3

Number of LUNs using 2185 GB (decimal) LUNs with an XIV Gen3 114 system using 3TB drives:
XIV Modules Installed   LUNs (MDisks) at 2185 GB each   XIV System TB used   XIV System TB Capacity Available
6                       38                              83.0                 84.1
9                       60                              131.1                132.8
10                      70                              152.9                154.9
11                      77                              168.2                168.3
12                      86                              187.9                190.0
13                      93                              203.2                203.6
14                      103                             225.0                225.3
15                      111                             242.5                243.3
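A sketch of the balance check described before these tables: keep the LUN count divisible by the number of XIV ports zoned to the SVC (12 when ports 1 and 3 of interface modules 4-9 are used) and keep at least 3-4 volumes per path. The example row is the 15-module Gen3 / 2TB-drive entry above; the function is illustrative.

# Check an XIV -> SVC LUN layout against the guidance in this deck.
def check_xiv_layout(num_luns, lun_size_gb, xiv_ports_zoned=12):
    per_path = num_luns / xiv_ports_zoned
    print(f"{num_luns} LUNs over {xiv_ports_zoned} ports -> {per_path:.1f} MDisks per path")
    if num_luns % xiv_ports_zoned:
        print("  warning: not evenly divisible across the zoned ports")
    if per_path < 3:
        print("  warning: fewer than 3-4 volumes per path")
    if lun_size_gb > 2048:
        print("  warning: larger than the 2048 GB LUN the SVC can detect (as of R6.4)")

check_xiv_layout(96, 1669)   # 15-module Gen3 row above -> 8 MDisks per path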
SVC and XIV Configuration Best Practices
XIV Recommendations
– Use 2 interface host ports from each of the active Interface Modules
• Use ports 1 and 3 from each interface module (modules 4-9)
• Change the port 4 setting on each of the XIV Interface Modules from Initiator to Target in
order to optimize HBA buffer allocations
– Zone all available ports with all SVC node ports
Create one SVC storage pool per XIV using 1GB or larger extent size
– Large extent size ensures effective use of XIV distributed cache
Create striped SVC volumes using all MDisks in SVC storage pool
Do not share XIV LUNs with multiple devices such as more than 1 SVC, NAS or any other
virtualization platform, as this will compromise physical disk spindle failure boundaries and
issues will be widespread. You cannot isolate XIV physical spindles to multiple devices.
XIV Configuration Considerations
XIV Snapshot, thin provisioning, synchronous and asynchronous replication, LUN
expansion on XIV Mdisks are not supported
– RPQ available to allow use of XIV Thin Provisioning
No need to reserve Snapshot space on XIV
– XIV GUI will default to 10% of the Storage Pool size. Remember to zero it out
Use XIV host ports 1 and 3 on active interface modules (maximum of 12 paths)
On XIV Gen2, change fibre channel port-4 personality to target
– Optimizes buffer credits
– Gen3 - no need
Configuring XIV host connectivity for the SVC cluster – Method 1
– Create a cluster
– Add each node as a host in the cluster
– Allows for easier logical administration if multiple SVCs
– Use 'default' host type for SVC
– Map all volumes to the cluster
Configuring XIV host connectivity for the SVC cluster – Method 2
– Create one host definition on XIV and include all SVC node WWPNs
– Simpler than method 1, but makes performance investigation more difficult
– Use 'default' host type for SVC
– Map all volumes to the SVC host
48
Data Placement and Host Vdisk mapping
SAN Volume Controller Best Practices for
Storage Administrators
49
Data Placement and Host Vdisk mapping – things to consider
Don’t spread file systems across multiple frames
Spreading versus Isolation
Spreading
– Spreading the I/O across MDGs exploits the aggregate throughput offered by more
physical resources working together
– Spreading I/O across the hardware resources will also render more throughput than
isolating the I/O to only a subset of the hardware resources
– You may reason that the more hardware resources you can spread across, the better the
throughput - but:
• It makes it more difficult to manage code upgrades, etc.
• Impact boundaries are compromised (data availability is weakened)
Isolation
– In some cases more isolation on dedicated resources may produce better I/O throughput
by eliminating I/O contention
• More robust for data availability
• Applications are less susceptible to slow draining devices
50
Data Placement and Host Vdisk mapping Spreading
– Spreading the I/O across MDGs exploits the aggregate throughput offered by more
physical resources working together
51
Data Placement and Host Vdisk mapping Spreading versus isolation
[Diagram: arrays grouped into MDG0 through MDG3.]
52
Data Placement and Host Vdisk mapping
53
Spreading versus isolation
Each MDG could represent a different back-end storage frame as well (DS8K, DS5K, XIV, etc.)
Data Placement and Host Vdisk mapping
54
How to utilize copy services
SAN Volume Controller Best Practices for
Storage Administrators
55
Copy Services – Replication Best Practices
Metro/Global Mirror background copy rate is pre-defined
– Per VDisk limit is 25MB/s
– Maximum per I/O Group is roughly 250MB/s
Bandwidth configurable based on pipe between clusters
– Only affects initial sync and any resyncs
– Set bandwidth setting to 30-50% of pipe capacity initially
– Size pipe for peak loads or expect write performance delays
Be careful using a slower disk subsystem for the secondary VDisks of high-performance primary VDisks
– SVC cache may not be able to buffer all the writes
• Flushing cache writes to SATA may slow I/O at the production site
Be careful using space-efficient secondary VDisks at the DR site
– Use of SE VDisks as FlashCopy targets of GM secondaries may trigger 1920 errors
(See the bandwidth sketch below.)
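A sketch of the 30-50% rule above for the partnership bandwidth (background copy) setting; the link size is an illustrative value, and the 250MB/s per-IO-group ceiling is the rough figure given at the top of this slide.

# Pick a starting partnership bandwidth setting for initial sync / resync.
def initial_bandwidth_setting(link_mb_per_s, fraction=0.4):
    # 30-50% of the pipe, capped by the rough per-IO-group background copy limit
    return min(link_mb_per_s * fraction, 250)

link_mb_per_s = 100     # usable inter-cluster bandwidth in MB/s (example value)
print(initial_bandwidth_setting(link_mb_per_s))   # 40 MB/s to start, then tune on measurements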
56
Copy Services – Replication Best Practices
Strategy is....
SVC
– A copy at primary site and B copy at target site (this is a crash consistent copy). C copy needed at target site for DR testing (as well as periodic point in time flashes where data could be used as gold copy (non-DR times))
– See the following slide as an example set-up
Native Back-end Storage
– A copy at primary site and B & C copy at target site (B is not crash consistent rather C is automatically made and is crash consistent). D copy needed at target site for DR testing (as well as periodic point in time flashes where data could be used as gold copy (non-DR times)).
57
Example SVC Global Mirror Solution Overview (Option/Scenario 1)
Source DISK (A copy)
Any given volume can only be in one MM/GM relationship.
– 1 source to 1 target.
– For multiple targets, use MM/GM on volume or FCM to suspend and resync source to
target relationships, from one source.
The C copy will be created from the B copy using FLASHCOPY NOCOPY and could use a
SE strg pool.
D/R testing will do minimal checkout with no heavy testing. Mostly a read check-out. It will
use C copy.
In the event of an actual Disaster, the B copy will need to be used.
B copy is never used for any type of testing (it will remain a gold copy).
– In the event we fall out of sync far enough that we cannot immediately flip over to
Global Mirror, a C copy will need to be created prior to resuming data replication (in
order to create a gold copy).
Target DISK (B copy) - can mount the B copy. Target DISK (C copy) - can mount the C copy; after changes are made, a reverse from C to B overwrites the gold copy.
Flashed DISK pool - this is a Full/Incremental copy (no SE) as it will be dumped to tape daily; this is for in-house backup.
Source Location / Target Location
C copy is created via FLASH NOCOPY.
Physical storage will be a space efficient
pool of storage that is 25% capacity of B copy
•GM works
•Reverse GM works
•Reflecting change on B copy
•SE FlashCopy works
•Reverse SE FlashCopy works
•“0” copy rate FlashCopy works
•Reverse “0” copy rate FlashCopy works
58
Copy Services – FlashCopy Best Practices
FlashCopy
– Source and Target SVC Clusters in a MM/GM relation should be at the same Firmware
levels.
• Some mismatched levels are supported, but best practice is to run the same level
• Try to use different spindles for source and target
– V4.3 introduces SE VDisks so you can do snaps (FC w/nocopy)
• Can have 256 different point-in-time copies to recover from - a CDP-like function
• Unfortunately, no flash back without changing to full copy and waiting for it to
complete
– Background copy rate is configurable; the default is 50, meaning 2MB/s per VDisk
• Customers sometimes complain it is taking a long time to do a full copy
• Can dynamically change the copy rate up to a maximum of 64MB/s per VDisk
• Balance speed of copy versus impact on production applications (see the copy-rate sketch below)
– Be careful with SATA for use as targets
• If you use them then upgrade to V4.2.1.x or later to avoid cache saturation
• Consider using sequential VDisks for SATA
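A hedged sketch of the copy-rate scale implied above (rate 50 = 2MB/s per VDisk, maximum 64MB/s); the "doubles every 10 points" rule used here is an assumption consistent with those two anchor values, so treat the intermediate figures as approximate.

# Approximate background copy throughput per VDisk for a FlashCopy copy rate value.
def copyrate_to_mb_per_s(rate):
    if rate == 0:
        return 0.0                       # 0 = nocopy
    decile = (rate - 1) // 10            # 0..9 for rates 1..100
    return 2.0 * 2 ** (decile - 4)       # 41-50 -> 2 MB/s, doubling per decile (assumed)

for rate in (50, 60, 80, 100):
    print(rate, "->", copyrate_to_mb_per_s(rate), "MB/s per VDisk")   # 2, 4, 16, 64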
59
Copy Services – Replication Best Practices
Bandwidth Considerations
– If the network bandwidth is too small to handle the traffic then application write I/O
response times may be elongated
– For SVC GM must support short term ‘Peak Write’ bandwidth requirements
• SVC GM much more sensitive to lack of bandwidth than DS8000
Bursts should exceed bandwidth for a ‘few’ minutes only
Default link tolerance in GM is 5 minutes
• Pipe must be able to handle normal and peak write I/O
Even after loss of redundant link where no interruption to operation is required
– Need to consider initial sync and re-sync workload as well
60
Useful links and Max Limitations and BP recommendations
Always check the Max Limit configuration URL for the most current updates - 7.1
– http://www-01.ibm.com/support/docview.wss?uid=ssg1S1004368
– Some limitations have changed, (Generic host properties)
6.4 - http://www-01.ibm.com/support/docview.wss?uid=ssg1S1004115
6.3 - https://www-304.ibm.com/support/docview.wss?uid=ssg1S1003903
5.1 - http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003555#Max
IBM System Storage SAN Volume Controller Best Practices and Performance
Guidelines
– http://www.redbooks.ibm.com/abstracts/sg247521.html?Open
Questions-
61
Thank you!
For your interest
and attendance
63
Extras
SAN Volume Controller Best Practices for
Storage Administrators
64
Cluster Zoning Continued
Cisco Example
– CLI View of one Fabric / 4-node SVC
• zone name SVC_ALL_PORTS vsan 10
• fcalias name RUBSTLSVC01_N1_P3 vsan 10
•   pwwn 50:05:07:68:01:10:b3:88
• fcalias name RUBSTLSVC01_N2_P3 vsan 10
•   pwwn 50:05:07:68:01:10:b3:5e
• fcalias name RUBSTLSVC01_N3_P3 vsan 10
•   pwwn 50:05:07:68:01:10:b3:65
• fcalias name RUBSTLSVC01_N4_P3 vsan 10
•   pwwn 50:05:07:68:01:10:b3:7a
• fcalias name RUBSTLSVC01_N4_P1 vsan 10
•   pwwn 50:05:07:68:01:40:b3:7a
• fcalias name RUBSTLSVC01_N3_P1 vsan 10
•   pwwn 50:05:07:68:01:40:b3:65
• fcalias name RUBSTLSVC01_N1_P1 vsan 10
•   pwwn 50:05:07:68:01:40:b3:88
• fcalias name RUBSTLSVC01_N2_P1 vsan 10
•   pwwn 50:05:07:68:01:40:b3:5e
– GUI View of one Fabric / 4-node SVC
SVC Node Ports
IBM Implementation Services for Storage Software – SVC
65
Example Back-end Storage to SVC Zoning
[Diagram: DS8K left and right I/O enclosures (Bays 0-7, host adapter ports C0/C1) zoned to a 4-node SVC (iogrp 0 and iogrp 1, 2048 LUNs max each, 4-node max of 4096 Vdisks, WWPN prefix 5005076801) across the DIR1 and DIR2 SAN fabrics. Newer SVC nodes may contain one HBA card with 4 ports.]
Supported DS8K to SVC Zoning: either ports 1&4 and 2&3 should be zoned to a fabric, or ports as shown on the previous page; both configs are supported.
Zoning BP – Spread across all available resources prior to 8 ports
66
One SVC port from each node on each fabric should be zoned for GM traffic, taking care not to include both ports that a server might use. This means ports that would also be used for replication would be either ports 1 & 2, or ports 3 & 4
For each node in a cluster, exactly two fibre channel ports should be zoned to exactly two fibre channel ports from each node in the partner cluster.
If dual-redundant ISLs are available, then the two ports from each node should be split evenly between the two ISLs, i.e. exactly one port from each node should be zoned across each ISL.
Local cluster zoning should continue to follow the standard requirement for all ports on all nodes in a cluster to be zoned to one another.
This is discussed more verbosely on the Flash published for this issue on the IBM website:
–http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003634 and https://www-304.ibm.com/support/docview.wss?uid=ssg1S1003634
Previously we used to do this - Global Mirror zone with 4-port nodes
67
[Diagram: vdisk1, vdisk2, vdisk3 and vdisk4 assigned within one I/O group.]
Preferred path for vdisk1 is SVC N1P2 & N1P3
Non-preferred path for vdisk1 is SVC N2P2 & N2P3
Preferred path for vdisk2 is SVC N2P2 & N2P3
Non-preferred path for vdisk2 is SVC N1P2 & N1P3
How it works
Examples of correct Host to SVC Cluster zoning
68
[Diagram: 4-node SVC (CF8, WWPN prefix 5005076801, max 4096 Vdisks), iogrp 0 (Nodes 1-2) and iogrp 1 (Nodes 3-4) each supporting 2048 LUNs max; each node has HBA 1 and HBA 2 with ports P1-P4, cabled to the DIR1 and DIR2 SAN fabrics. Host b03vio101 / NRPOKVIO1A attaches with adapters d1-d4.]
SVC Host Definitions: id:1 name:P770_1_vio1A
10000000C9C0B3DB
10000000C9C0DC7F
10000000C9C0E0E0
10000000C9C0A984
fscsi0=10000000C9C0A984
fscsi2=10000000C9C0E0E0
fscsi5=10000000C9C0DC7F
fscsi7=10000000C9C0B3DB
SVC node port WWPN suffixes (per node, per fabric):
port1=10B374 port1=10B363 port1=10B371 port1=10B335
port2=20B374 port2=20B363 port2=20B371 port2=20B335
port3=30B374 port3=30B363 port3=30B371 port3=30B335
port4=40B374 port4=40B363 port4=40B371 port4=40B335
Zone for p770_1_vio1a_d1
10000000c9779a4a
500507680110B374
500507680130B374
500507680110B363
500507680130B363
500507680110B371
500507680130B371
500507680110B335
500507680130B335
Zone for p770_1_vio1a_d3_SVC
10000000C9C0DC7F
500507680120B374
500507680140B374
500507680120B363
500507680140B363
500507680120B371
500507680140B371
500507680120B335
500507680140B335
d2 d4
Zone for p770_1_vio1a_d4_SVC
10000000C9C0B3DB
500507680120B374
500507680140B374
500507680120B363
500507680140B363
500507680120B371
500507680140B371
500507680120B335
500507680140B335
Zone for p770_1_vio1a_d2
10000000C9C0E0E0
500507680110B374
500507680130B374
500507680110B363
500507680130B363
500507680110B371
500507680130B371
500507680110B335
500507680130B335
Oversubscribed SVC to Host HBA zoning causing too many datapaths
DEV#: 3 DEVICE NAME: hdisk3 TYPE: 2145 ALGORITHM: Load Balance
SERIAL: 600507680181059BA000000000000005
==========================================================
Path# Adapter/Path Name State Mode Select Errors
0 fscsi0/path0 OPEN NORMAL 558254 0
1* fscsi0/path1 OPEN NORMAL 197 0
2* fscsi0/path2 OPEN NORMAL 197 0
3 fscsi0/path3 OPEN NORMAL 493559 0
4 fscsi2/path4 OPEN NORMAL 493330 0
5* fscsi2/path5 OPEN NORMAL 197 0
6* fscsi2/path6 OPEN NORMAL 197 0
7 fscsi2/path7 OPEN NORMAL 493451 0
8 fscsi5/path8 OPEN NORMAL 492225 0
9* fscsi5/path9 OPEN NORMAL 197 0
10* fscsi5/path10 OPEN NORMAL 197 0
11 fscsi5/path11 OPEN NORMAL 492660 0
12 fscsi7/path12 OPEN NORMAL 491988 0
13* fscsi7/path13 OPEN NORMAL 197 0
14* fscsi7/path14 OPEN NORMAL 197 0
15 fscsi7/path15 OPEN NORMAL 492943 0
How do you get too many paths? Answer!
Server/Host datapath - oversubscribed
69
Host Vdisk creation
Should I format the Vdisks upon creation or not?
Consider the following:
– When data is deleted at the host the data is still on the physical disk drives, just the
pointers to it are gone
– When a new volume is created, from previously used extents the data is still physically on
the drives that make up the capacity for that new volume
– SVC will overwrite that data with new data written to it which is fine but for a read to
unallocated LBAs, SVC will return all zeros to the host to make sure host doesn't get
whatever is actually on that LBA range on the spindles.
– No need to format at the storage level, but it is an option if desired (e.g. SVC format writes all
zeros to the volume before it can be used) or at the host level (e.g. Windows 2008 full format with
zeros to the volume before use)
– If format is used it will take time and for large SVC volumes, e.g. 2TB, up to a day
– Many Vdisk creations = a lot of write IO and potential performance impact
No need to!
70
Architecting SVC MDGs – Extent size considerations
Age old question – what extent size to use as it can range from 16MB – 8GB
– The extent size is set by the storage administrator at pool creation time and all MDisks
in the pool will have the same extent size
– The maximum storage capacity SVC supports is determined by the extent size utilized
– Recommend that clients use the same extent size for all storage pools in a cluster
If SSDs are being used in SVC, use 256MB/512MB to optimize space utilization
If no SSDs in SVC, use 1GB to align with DS8000
Large SVC implementations may utilize larger extent size to maximize capacity to be
virtualized if required (ie. 512MB and above)
– Refer to the table on the next page
71
vdisk1 vdisk2
Preferred path for vdisk1 is SVC
N1P2 & N1P3
Non Preferred path for vdisk1 is
SVC N2P2 &N2P3
Preferred path for vdisk2 is SVC
N2P2 & N2P3
Non Preferred path for vdisk2 is
SVC N1P2 &N1P5
DEV#: 5 DEVICE NAME: hdisk5 TYPE: 2145 ALGORITHM: Load Balance
SERIAL: 600507680181059C4000000000000007
==============================================================
Path# Adapter/Path Name State Mode Select Errors
0 fscsi0/path0 OPEN NORMAL 1996022 0
1* fscsi0/path1 OPEN NORMAL 29 0
2 fscsi2/path2 OPEN NORMAL 1902495 0
3* fscsi2/path3 OPEN NORMAL 29 0
Server/Host view of the datapaths
How do you get too many paths?
Examples of correct Host to SVC Cluster zoning Prior to 8 ports
72
Disk Controller Best Practices – create larger Mdisks
SVC 6.1 supports up to 256 TB for an Mdisk, whereas 5.1 is limited to 2 TB
– See the following web links for information
• SVC 5.1: http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003555
• SVC 6.1: http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003704
• 6.1 or higher SVC ... 76.10.x or higher for DS8700
– Fewer Mdisks mean better back-end queue depth
• Fewer Mdisks presented to the SVC Cluster make the SVC Qdepth value per Mdisk greater
• R6.3 introduced round-robin backend support, so this becomes less of an issue once
you get to R6.3. Prior to R6.3, SVC would assign MDisks to paths and only use that
path under normal circumstances.
73
In general configure disk systems as you would without SVC
– Disk drives
• Be careful with large disk drives, you may end up with too few spindles to handle load
• RAID-5 suggested, but RAID-10 is viable and useful, SVC does not negate this
• Exceptions:
– RAID Array sizes
• 8+P or 4+P recommended for DS4K/5K family if possible
• Use DS5K segment size of 128KB or larger to help sequential performance
• Upgrade to EXP810 drawers if possible
• Create LUN size equal to RAID array/rank if it doesn’t exceed 2TB unless microcode
supports the larger drives, see the next slide for details on DS8K specifics
• Create minimum of one LUN per Fibre port on disk controller zoned with SVC
When adding more disks to a subsystem consider adding the new MDisks to existing MDGs
versus creating additional small MDGs
– Use the Perl script to restripe VDisk extents evenly across all MDisks in the MDG; if using Easy
Tier there is no need to tune
• Go to: www.ibm.com/alphaworks and search on ‘svctools’
Disk Controller Best Practices - Traditional
74
Maximum of 64 WWNNs
– EMC DMX/SYMM, All HDS and SUN/HP HDS clones use one WWNN per port; each
appears as a separate controller to SVC
• Map LUNs thru up to 16 FA ports
Results in 16 WWNNs/WWPNs used out of the max of 64
– IBM, EMC Clariion, HP, etc. use one WWNN per subsystem; each appears as a single
controller with multiple ports/WWPNs
• Maximum of 16 ports/WWPNs per WWNN using 1 out of the max of 64
DS8K use eight 4 port HA cards
– Use port 1 and 3 or 2 and 4 on each card
• Provides 16 ports for SVC use
• Use 8 ports minimum up to 40 ranks
• Use 16 ports, the maximum, for 40+ ranks
Greater Qdepth is achieved with fewer Mdisks
Disk Controller Best Practices - Continued
75
DS4K/5K – EMC Clariion/CX
– Both have preferred controller architecture
• SVC honors this configuration
– Use minimum of 4 and preferably 8 ports or more up to maximum of 16
– More ports equate to more concurrent I/O driven by SVC
– Support for mapping controller A ports to Fabric A and controller B ports to Fabric B, or cross-connecting ports to both fabrics from both controllers
• The latter is preferred, to avoid AVT/Trespass occurring if a fabric or all paths to a fabric fail
– SVC supports SVC queue depth change for CX models
• Drives more I/O per port per Mdisk
Disk Controller Best Practices - Continued
76
Copy Services – Replication Best Practices
Primary Server Recovery Server
Secondary VDisk Primary VDisk
1. Current relationships run fine over the 90 Mb/sec (about 9 MB/sec)
2. Customer starts 10 new relationships
• Initial sync activity floods the link due to bandwidth setting
3. Primary can’t replicate in a timely fashion possibly causing
colliding writes
• Get Metro-Mirror like performance
4. After 5 minutes of poor performance, GM suspends the busiest relationships
• Can happen if the WAN or the secondary site is overloaded
SAN Volume Controller SAN Volume Controller
SAN Connection(s)
Bandwidth=50MB/sec
Link tolerance=300 seconds
WAN Bandwidth – two T3s (90 Mb/sec)
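The scenario above boils down to unit conversion plus an over-generous bandwidth setting; a small sketch using the values from the slide:

# Two T3s give roughly 90 Mb/s, which the slide treats as about 9 MB/s usable,
# while the partnership bandwidth (background copy) setting is 50 MB/s.
wan_mbit_per_s = 90
wan_mb_per_s = wan_mbit_per_s / 10      # rough bits-to-bytes plus overhead, as on the slide
bandwidth_setting_mb_per_s = 50         # configured on the SVC partnership

if bandwidth_setting_mb_per_s > wan_mb_per_s:
    print(f"background copy allowed {bandwidth_setting_mb_per_s} MB/s on a ~{wan_mb_per_s:.0f} MB/s link: "
          "new relationships' initial sync floods the pipe until GM suspends (link tolerance 300s)")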