
Advanced Technical Skills (ATS) North America

Configuration Best Practices for SVC / XIV

© 2013 IBM Corporation

Jim Sedgwick ATS [email protected]

Brian Sherman ATS [email protected]

March 20th, 2013

Agenda

• Introduction of SVC and XIV terminology
• SVC logical configuration recommendations
• Configuration zoning considerations
• SSD and Easy Tier deployment considerations
• XIV logical configuration recommendations
• Sample configurations – XIV Gen2/Gen3 configurations

SAN Volume Controller and XIV

• SAN Volume Controller (SVC)
  – IBM's premier enterprise storage virtualization product
  – Complete set of advanced functions, highest scale-out performance, enterprise-class reliability
• XIV
  – IBM enterprise disk storage product
  – Ease of use, high performance, enterprise-class reliability
• SVC in front of XIV
  – Several advantages, each with its own considerations

SVC Value in XIV Environments

• Extend the value of your XIV system with XIV/SVC
  – Scalable performance and capacity
  – Seamless and non-disruptive data movement within the SVC cluster
  – High availability and multi-site mirroring configurations
    • VDisk mirroring
    • Stretched Cluster
  – Self-tuning and self-healing
  – Multi-box consistency group support
  – Simple and rapid deployment of LUNs for SVC usage
  – Common heterogeneous storage management
  – Broader OS support with SVC
  – Well documented, tested, and proven best practices to simplify deployment

SVC Provides Flexibility Across Entire Storage Infrastructure

[Diagram: hosts attach through the SAN to the SAN Volume Controller, which virtualizes back-end SAN storage from multiple vendors (XIV, DS3/4/5000, EMC, HP, HDS).]

• Combine the capacity from multiple storage boxes into a single pool of storage
• Manage the storage pool from a central point
• Make changes to the storage without disrupting host applications
• Apply Advanced Copy Services across the storage pool
• Automated use of SSDs through sub-LUN tiering with Easy Tier

SVC Virtualization Provides Value to Clients

• Improved TCO
  – Real-time Compression reduces $/GB
  – Save on advanced function licenses
  – Save on multipath drivers
  – Increase use of lower-cost disk
  – Transparency for multi-vendor disk
  – Save time/cost on future migrations
• Improved application availability
  – Migrate data during 'normal' work hours, application independent
  – Maintain copies of critical data on multiple disk arrays to protect against an array outage
• Improved storage utilization
  – Opens up the use of capacity from multiple disk arrays on the same server
  – Thin Provisioning, Zero Detect on Write, Storage Pools, etc., used to maximize storage usage
  – Easy Tier to provide 'sub-LUN' optimization when/where/if needed
• Maximized storage investment
  – Leverage existing storage as long as it 'makes good business sense'
  – Flexible, policy-driven tiered storage architecture
• Increased productivity and consistency
  – Eliminate weekend work and overtime when migrating or adding storage
  – Easy to administer, saves time
  – Improves ability to react quickly (e.g., relocate data to address a problem)
  – Single point of administration; less skill needed to support heterogeneous storage platforms
• Improved performance and connectivity
  – Improves performance by striping the host LUN across multiple physical 'back-end' LUNs, using more physical disk drives
  – Simplifies storage connectivity to hosts; hosts only need to connect to the SVC

SVC Logical Configuration View

[Diagram: the SVC logical layers. Back-end LUNs (four 1000 GB EMC LUNs and three 2000 GB IBM LUNs) are presented to SVC as managed disks (mdisk0–mdisk6) and grouped into storage pools: mdiskgrp0 [EMC Group] at 4000 GB and mdiskgrp1 [IBM Group] at 6000 GB. Each pool (MDG) is carved into extents of 16 MB – 8 GB, with stripes of up to 512 KB, and volumes (VDisks) of various sizes (vdisk0 125 GB, vdisk1 10 GB, vdisk2 525 GB, vdisk3 1500 GB, vdisk4 275 GB, vdisk5 5 GB), including mirrored and thin provisioned volumes, are created from the pools and mapped to hosts with SDD or another supported multipath driver.]

SVC Terminology

• Cluster
  – Maximum of 4 node-pairs (8 nodes total), also called I/O Groups
  – Large environments may have multiple clusters
• Volumes/LUNs (Virtual Disks)
  – Maximum of 8192 volumes total (2048 per I/O Group), up to 256 TB in size
  – Each volume is assigned to a specific node-pair (I/O Group) and a specific storage pool
• Managed Disks (MDisks)
  – Select LUNs (MDisks) from up to 64 physical disk subsystems
  – Maximum of 128 storage pools (MDGs)
  – Maximum of 128 MDisks per pool
  – Maximum of 4096 MDisks per cluster
  – MDisks can be added to or removed from a pool

[Diagram: SAN Volume Controller nodes arranged as I/O Group A and I/O Group B, in front of storage pools (MDG1–MDG3 / Pool 1–Pool 3).]

SVC Software Licensing with XIV

• Base license
  – SVC Standard: priced per usable TB, tiered
  – SVC for XIV Edition: priced per XIV module
  – Both include the base functions: state-of-the-art user interface (GUI/CLI/API), Thin Provisioning, Easy Tier, Volume Mirroring, Migration, and External Virtualization
• FlashCopy – Option 1: per TB of FlashCopy source
• Remote Copy (Metro Mirror / Global Mirror) – Option 2: per TB involved at each location
• Real-time Compression – Option 3 (SVC Standard) or a separate option (SVC for XIV Edition): per TB of selected volumes

SVC Considerations

• Typically recommend creating only one SVC storage pool (managed disk group) per XIV system
  – With a large number of SVC storage pools (MDGs), examine how SVC implements write cache partitioning (read I/O is not affected) and potentially create multiple XIV storage pools, one for each SVC storage pool (MDG); see the table and sketch below

Number of pools (= number of cache partitions)    Maximum occupancy allowance as % of cache
1                                                 100
2                                                 75
3                                                 40
4                                                 30
5 or more                                         25
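The cache-partition limits above can be expressed as a small lookup. The following is a minimal, illustrative Python sketch (not an SVC interface) that returns the write-cache occupancy allowance for a given number of storage pools, matching the table.

def write_cache_allowance(num_pools: int) -> int:
    """Return the maximum write-cache occupancy (percent) that a single
    storage pool partition may consume, per the table above."""
    if num_pools < 1:
        raise ValueError("at least one storage pool is required")
    table = {1: 100, 2: 75, 3: 40, 4: 30}
    # Five or more pools are all capped at 25% of the write cache.
    return table.get(num_pools, 25)

# Example: with one pool per XIV (the recommendation), a single pool may use
# the whole write cache; with five pools, each is capped at 25%.
for pools in (1, 2, 5):
    print(pools, "pool(s) ->", write_cache_allowance(pools), "% of cache")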

SVC Considerations (cont'd)

• Extent size guideline
  – SVC supports extent sizes of 16, 32, 64, 128, 256, 512, 1024, 2048, and 8192 MB
    • Extent size is a property of the storage pool definition
    • It affects the capacity SVC can manage vs. performance (see the sketch below)
  – Recommend a 1 GB extent size; 256 MB (the default) and 512 MB are also acceptable
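To make the capacity-versus-extent-size trade-off concrete, here is a short illustrative Python calculation. It assumes the commonly documented SVC limit of 4,194,304 (2^22) extents per cluster; treat that constant as an assumption, not a figure quoted from this presentation.

MAX_EXTENTS_PER_CLUSTER = 4 * 1024 * 1024  # assumption: 2**22 extents per cluster

def max_manageable_tib(extent_size_mib: int) -> float:
    """Maximum capacity (TiB) an SVC cluster can manage at a given extent size."""
    return extent_size_mib * MAX_EXTENTS_PER_CLUSTER / (1024 * 1024)

for size_mib in (256, 512, 1024):
    print(f"{size_mib} MiB extents -> {max_manageable_tib(size_mib):,.0f} TiB manageable")
# 256 MiB (default) -> 1,024 TiB; 512 MiB -> 2,048 TiB; 1 GiB -> 4,096 TiB,
# which is why a 1 GB extent is a comfortable choice for large XIV configurations.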

• Striped, Sequential, or Image mode volume (VDisk) guideline
  – Use Striped mode volumes
• Identifying an XIV volume in the SVC
  – When an XIV volume is mapped to SVC, it is mapped to a LUN ID as shown in the XIV GUI LUN mapping table display; the LUN ID displayed there is decimal
  – When SVC discovers the volume, it brings it in as the corresponding hex value, which is placed into the "Controller LUN Number" field
  – For example, if the XIV GUI shows LUN 18, the SVC "Controller LUN Number" becomes 0000000000000012 (see the sketch below)
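The decimal-to-hexadecimal mapping is straightforward to reproduce. A minimal Python sketch follows; the 16-digit zero padding matches the example above, while the letter case of the SVC field is an assumption here.

def controller_lun_number(xiv_lun_id: int) -> str:
    """Convert the decimal LUN ID shown in the XIV GUI to the 16-digit
    hexadecimal string reported as the SVC "Controller LUN Number"."""
    return f"{xiv_lun_id:016X}"   # letter case of the real field is an assumption

print(controller_lun_number(18))   # -> 0000000000000012, as in the example above
print(controller_lun_number(255))  # -> 00000000000000FF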


SVC/XIV Zone Configuration

• SVC cluster zone – for intra-cluster (node-to-node) communication
• SVC-to-storage zones – all SVC nodes zoned to all storage ports
• SVC-to-host zones – each host zoned only to its specific I/O group

A sketch of these membership rules follows.
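As a minimal illustration of the three zone types, the Python sketch below builds example member lists. All alias names (svc_n1_p1, xiv_im1_p1, host_a_hba0, and so on) are hypothetical and not taken from this presentation; the sketch only demonstrates the membership rules, not a real fabric configuration.

# Hypothetical port aliases for a 2-node SVC cluster and a full XIV
# (ports 1 and 3 on each of the six Interface Modules).
svc_node_ports = [f"svc_n{n}_p{p}" for n in (1, 2) for p in (1, 2, 3, 4)]
xiv_ports = [f"xiv_im{m}_p{p}" for m in range(1, 7) for p in (1, 3)]

zones = {
    # Cluster zone: all SVC node ports, for node-to-node communication.
    "svc_cluster_zone": svc_node_ports,
    # Storage zone: every SVC node port zoned to every XIV port.
    "svc_to_xiv_zone": svc_node_ports + xiv_ports,
    # Host zone: a host zoned only to ports of its own I/O group.
    "host_a_zone": ["host_a_hba0", "svc_n1_p1", "svc_n2_p1"],
}

for name, members in zones.items():
    print(f"{name}: {len(members)} members")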

SVC Multi-pathing Configuration

• SVC (pre-R6.3) uses simple multi-pathing from each node in the cluster to the XIV; this allows SVC to communicate with many disk controllers
  – Upon LUN discovery, each SVC node sees all paths and picks a preferred port and path to use for a given LUN/MDisk
  – Each subsequent LUN found in discovery is assigned the next port and path; SVC picks the port with the least number of MDisk paths assigned to it
  – For example, SVC Node A port 0 will see XIV LUN0 on HA0 port 0 and will use this path (it is not the only one); the pattern repeats and overlaps for each LUN/MDisk assigned to the SVC: SVC node port 1 -> LUN1 on HA0 port 1 ... SVC node port 3 -> LUN15 on HA3 port 4
• This simple multi-pathing algorithm spreads workload among as many resources as possible (see the sketch below)
• "Round-robin" port selection for virtualized disk in R6.3
  – In SVC 6.3, I/Os are submitted using one path per target port per managed disk per node
    • Enables I/O to a managed disk to progress in a "round-robin" fashion
    • Spreads I/O across multiple storage system ports
    • Paths are chosen according to port groups presented by the storage system
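The "port with the fewest assigned MDisks" behavior described for pre-R6.3 discovery can be sketched in a few lines of Python. This is illustrative only; port and MDisk names are hypothetical, and the real selection also considers per-node paths and the port groups the controller presents.

def assign_preferred_ports(mdisks, node_ports):
    """Assign each discovered MDisk a preferred node port, always choosing
    the port that currently has the fewest MDisks (ties go to list order)."""
    load = {port: 0 for port in node_ports}
    assignment = {}
    for mdisk in mdisks:
        port = min(node_ports, key=lambda p: load[p])
        load[port] += 1
        assignment[mdisk] = port
    return assignment

ports = ["NodeA-P0", "NodeA-P1", "NodeA-P2", "NodeA-P3"]
mdisks = [f"MDisk{i}" for i in range(16)]
for mdisk, port in assign_preferred_ports(mdisks, ports).items():
    print(mdisk, "->", port)
# With 16 MDisks and 4 ports, each port ends up preferred for 4 MDisks:
# the even spread the slides describe.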

SVC Port to XIV Port Assignment

[Diagram: MDisk preferred path and port are distributed across per-node resources at MDisk discovery. Each SVC node port (NodeA ports 0–3) fans out across the XIV Interface Module ports IM1P1/IM1P3 through IM6P1/IM6P3; LUNs are presented through 12 ports on all Interface Modules.]

SVC MDisk to XIV LUN Assignment (Pre-R6.3)

[Diagram: individual MDisks (MDisk0, MDisk1, MDisk2, MDisk3, MDisk9, MDisk12, ...) correspond one-to-one to XIV LUNs (LUN0, LUN1, LUN3, LUN9, LUN12, ...), each reached through a specific SVC node port (NodeA ports 0–3) and XIV Interface Module port (IM1P1–IM6P3).]

Easy Tier and SSD Deployment Considerations – SVC and XIV

• SVC allows for scalable SSD capacity
  – 1 to 4 SSDs per node; maximum capacity of 32 SSDs with 8 nodes
  – 200/400 GB SSD options
  – Can 'pin' data to SSDs at the volume level
  – RAID configuration must be defined by the storage administrator
• Thin provisioned volumes in storage pools using Easy Tier must use a grain size of 64 KB or greater (see the sketch after this list)
  – If the grain size is the default of 32 KB, then all I/Os to thin provisioned volumes will be considered by the Easy Tier algorithms, since even large sequential I/Os from the host are broken up into 32 KB I/Os, resulting in odd Easy Tier behaviour and performance issues
  – See the flash for more details: http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003982
  – R6.4 changed the default grain size of a thin provisioned volume to 256 KB rather than 32 KB
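A simple inventory check can catch the grain-size pitfall above before Easy Tier is enabled. The Python sketch below is illustrative only; the dictionary fields are a hypothetical inventory format, not SVC command output.

MIN_GRAIN_KB_FOR_EASYTIER = 64   # recommendation from the slide above

def volumes_needing_regrain(volumes):
    """Return names of thin provisioned volumes in Easy Tier pools whose
    grain size is below the recommended minimum."""
    return [v["name"] for v in volumes
            if v["thin"] and v["easy_tier_pool"]
            and v["grain_kb"] < MIN_GRAIN_KB_FOR_EASYTIER]

inventory = [
    {"name": "vd_app1", "thin": True,  "easy_tier_pool": True, "grain_kb": 32},
    {"name": "vd_app2", "thin": True,  "easy_tier_pool": True, "grain_kb": 256},
    {"name": "vd_logs", "thin": False, "easy_tier_pool": True, "grain_kb": 32},
]
print(volumes_needing_regrain(inventory))   # ['vd_app1']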


SVC and XIV – SSD/Easy Tier Deployment

• Use SSD and Easy Tier at the SVC level in the following scenarios:
  – When XIV Gen2 (A14) is deployed behind SVC
  – When SSD capacity requirements are significantly less than the 7.2 TB offered by XIV
  – When volume-level pinning onto or out of SSD capacity is required
• Use SSD on XIV Gen3 (114/214) for all other implementations
  – Utilize the real-time performance improvements the SSDs on XIV provide
• Potential performance benefits when combining SVC SSD and Easy Tier with XIV SSD
  – XIV SSD usage provides real-time performance improvements
  – SVC SSD + Easy Tier keeps historically hot data on SSD
  – The XIV SSD and SVC SSD capacities are additive, optimizing more of the workload

XIV Configuration Considerations

• XIV snapshots, thin provisioning, synchronous and asynchronous replication, and LUN expansion on XIV MDisks are not supported
  – An RPQ is available to allow use of XIV thin provisioning (overprovisioning)
• No need to reserve snapshot space on XIV
  – The XIV GUI defaults to 10% of the storage pool size; remember to zero it out
• Use XIV host ports 1 and 3 on active Interface Modules (maximum of 12 paths)
• On XIV Gen2, change the Fibre Channel port 4 personality to target
  – Optimizes buffer credits
  – Not needed on Gen3
• Configuring XIV host connectivity for the SVC cluster – Method 1
  – Create a cluster
  – Add each node as a host in the cluster
  – Allows for easier logical administration if there are multiple SVCs
  – Use the 'default' host type for SVC
  – Map all volumes to the cluster
• Configuring XIV host connectivity for the SVC cluster – Method 2
  – Create one host definition on XIV and include all SVC node WWPNs
  – Simpler than Method 1, but makes performance investigation more difficult
  – Use the 'default' host type for SVC
  – Map all volumes to the SVC host

XIV Volume (MDisk) Size for SVC

• Considerations for determining what size volume to create on XIV and present to SVC (a sizing check follows this list):
  – Maximize available space
  – For pre-SVC R6.3 environments, ensure the number of volumes created is divisible by the number of XIV ports zoned to SVC
    • Provides the best balance of the volumes across all the ports
  – With SVC R6.3 software and round-robin multipathing, the number of volumes is less important
    • But still provide at least 3–4 volumes per path
  – The largest XIV volume that SVC can detect is 2048 GiB (current as of SVC R6.4 firmware)
  – A maximum of 511 LUNs from one XIV system can be mapped to an SVC cluster
• For XIV Gen2 A14 systems using 1 TB drives, recommend creating a LUN size of 1632 GB
• For XIV Gen2 A14 systems using 2 TB drives, recommend creating a LUN size of 1666 GB
• For XIV Gen3 114/214 systems using 2 TB drives, recommend creating a LUN size of 1669 GB
• For XIV Gen3 114/214 systems using 3 TB drives, recommend creating a LUN size of 2185 GB


XIV Gen2 A14 Recommended LUN Sizes

Number of LUNs using 1632 GB (decimal) LUNs with an XIV Gen2 A14 system using 1 TB drives:

Modules installed   LUNs (MDisks) at 1632 GB each   XIV TB used   XIV TB capacity available
 6                  16                              26.1          27
 9                  26                              42.4          43
10                  30                              48.9          50
11                  33                              53.9          54
12                  37                              60.4          61
13                  40                              65.3          66
14                  44                              71.8          73
15                  48                              78.3          79

Number of LUNs using 1666 GB (decimal) LUNs with an XIV Gen2 A14 system using 2 TB drives:

Modules installed   LUNs (MDisks) at 1666 GB each   XIV TB used   XIV TB capacity available
 6                  33                              54.9          55.7
 9                  52                              86.6          87.8
10                  61                              101.6         102.4
11                  66                              109.9         111.3
12                  75                              124.9         125.7
13                  80                              133.2         134.7
14                  88                              146.6         148.1
15                  96                              159.9         161

XIV Gen3 114/214 Recommended LUN Sizes

Number of LUNs using 1669 GB (decimal) LUNs with an XIV Gen3 114/214 system using 2 TB drives:

Modules installed   LUNs (MDisks) at 1669 GB each   XIV TB used   XIV TB capacity available
 6                  33                              55.1          55.7
 9                  52                              86.8          88.0
10                  61                              101.8         102.6
11                  66                              110.1         111.5
12                  75                              125.2         125.9
13                  80                              133.5         134.9
14                  89                              148.5         149.3
15                  96                              160.2         161.3

Number of LUNs using 2185 GB (decimal) LUNs with an XIV Gen3 114/214 system using 3 TB drives:

Modules installed   LUNs (MDisks) at 2185 GB each   XIV TB used   XIV TB capacity available
 6                  38                              83.0          84.1
 9                  60                              131.1         132.8
10                  70                              152.9         154.9
11                  77                              168.2         168.3
12                  86                              187.9         190.0
13                  93                              203.2         203.6
14                  103                             225.0         225.3
15                  111                             242.5         243.3

XIV Gen3 214 – Rack Config Specifications

Gen3 (Model 214) rack configuration. Values are listed per configuration, in the order given in the "Total number of modules" row ("n/a" indicates a module not present in that configuration):

Total number of modules (configuration type): 6 (partial), 9 (partial), 10 (partial), 11 (partial), 12 (partial), 13 (partial), 14 (partial), 15 (full)
Number of data modules: 4, 5, 6, 6, 7, 7, 8, 9
Number of active interface modules: 2, 4, 4, 5, 5, 6, 6, 6
Module 9 state: n/a, Disabled, Disabled, Enabled, Enabled, Enabled, Enabled, Enabled
Module 8 state: n/a, Enabled, Enabled, Enabled, Enabled, Enabled, Enabled, Enabled
Module 7 state: n/a, Enabled, Enabled, Enabled, Enabled, Enabled, Enabled, Enabled
Module 6 state: Disabled, Disabled, Disabled, Disabled, Disabled, Enabled, Enabled, Enabled
Module 5 state: Enabled, Enabled, Enabled, Enabled, Enabled, Enabled, Enabled, Enabled
Module 4 state: Enabled, Enabled, Enabled, Enabled, Enabled, Enabled, Enabled, Enabled
FC ports, 8 Gbps: 8, 16, 16, 20, 20, 24, 24, 24
iSCSI ports, 1 Gbps: 6, 14, 14, 18, 18, 22, 22, 22
iSCSI ports, 10 Gbps: 4, 8, 8, 10, 10, 12, 12, 12
Number of disks: 72, 108, 120, 132, 144, 156, 168, 180
Net capacity with 1 TB disk drives (rounded down to full TB): 28 TB, 44 TB, 51 TB, 56 TB, 63 TB, 67 TB, 75 TB, 81 TB
Net capacity with 2 TB disk drives (rounded down to full TB): 55 TB, 88 TB, 102 TB, 111 TB, 125 TB, 134 TB, 149 TB, 161 TB
Net capacity with 3 TB disk drives (rounded down to full TB): 84 TB, 132 TB, 154 TB, 168 TB, 190 TB, 203 TB, 225 TB, 243 TB
Memory (24 GB per module), GB: 144, 216, 240, 264, 288, 312, 336, 360

XIV Gen3 114 – Rack Config Specifications

Gen3 (Model 114) rack configuration. Values are listed per configuration, in the order given in the "Total number of modules" row ("n/a" indicates a module not present in that configuration):

Total number of modules (configuration type): 6 (partial), 9 (partial), 10 (partial), 11 (partial), 12 (partial), 13 (partial), 14 (partial), 15 (full)
Number of data modules: 4, 5, 6, 6, 7, 7, 8, 9
Number of active interface modules: 2, 4, 4, 5, 5, 6, 6, 6
Module 9 state: n/a, Disabled, Disabled, Enabled, Enabled, Enabled, Enabled, Enabled
Module 8 state: n/a, Enabled, Enabled, Enabled, Enabled, Enabled, Enabled, Enabled
Module 7 state: n/a, Enabled, Enabled, Enabled, Enabled, Enabled, Enabled, Enabled
Module 6 state: Disabled, Disabled, Disabled, Disabled, Disabled, Enabled, Enabled, Enabled
Module 5 state: Enabled, Enabled, Enabled, Enabled, Enabled, Enabled, Enabled, Enabled
Module 4 state: Enabled, Enabled, Enabled, Enabled, Enabled, Enabled, Enabled, Enabled
FC ports, 8 Gbps: 8, 16, 16, 20, 20, 24, 24, 24
iSCSI ports, 1 Gbps: 6, 14, 14, 18, 18, 22, 22, 22
Number of disks: 72, 108, 120, 132, 144, 156, 168, 180
Net capacity with 1 TB disk drives (rounded down to full TB): 28 TB, 44 TB, 51 TB, 56 TB, 63 TB, 67 TB, 75 TB, 81 TB
Net capacity with 2 TB disk drives (rounded down to full TB): 55 TB, 88 TB, 102 TB, 111 TB, 125 TB, 134 TB, 149 TB, 161 TB
Net capacity with 3 TB disk drives (rounded down to full TB): 84 TB, 132 TB, 154 TB, 168 TB, 190 TB, 203 TB, 225 TB, 243 TB
Memory (24 GB per module), GB: 144, 216, 240, 264, 288, 312, 336, 360

SVC and XIV Configuration Best Practices – XIV Gen2 A14

• Full 15-module XIV Gen2 (Model A14) recommendations – 79/161 TB usable
  – Use 2 host ports from each of the 6 Interface Modules
    • Use ports 1 and 3 from each Interface Module
    • Change the port 4 setting on each of the XIV Interface Modules from Initiator to Target in order to optimize HBA buffer allocations
  – Zone these 12 ports with all SVC node ports
  – Create 48/96 LUNs of equal size, each a multiple of 17 GB
    • 48 LUNs on an XIV system with 1 TB drives, using 1632 GB LUNs
    • 96 LUNs on an XIV system with 2 TB drives, using 1666 GB LUNs
  – Map the LUNs to SVC as 48/96 MDisks and add all of them to the one SVC storage pool
    • With 48 MDisks, SVC will drive I/O to 4 MDisks/LUNs per each of the 12 XIV fibre ports
    • With 96 MDisks, SVC will drive I/O to 8 MDisks/LUNs per each of the 12 XIV fibre ports
    • This provides good queue depth on SVC to drive the XIV adequately
• Create one SVC storage pool per XIV using a 1 GB or larger extent size
  – A large extent size ensures effective use of the XIV distributed cache
• Create SVC striped volumes using all MDisks in the storage pool (a planning sketch follows)
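The full-rack Gen2 recommendation can be captured as a small planning helper. The Python sketch below is illustrative only (the function and dictionary are assumptions, not IBM tooling); it checks the 17 GB multiple and shows the resulting MDisks-per-port spread.

XIV_FIBRE_PORTS = 12   # ports 1 and 3 on each of the six Interface Modules

GEN2_FULL_RACK = {     # from the recommendation above
    "1TB": {"lun_gb": 1632, "lun_count": 48},
    "2TB": {"lun_gb": 1666, "lun_count": 96},
}

def plan_gen2_full_rack(drive_type):
    plan = GEN2_FULL_RACK[drive_type]
    assert plan["lun_gb"] % 17 == 0, "LUN size should be a multiple of 17 GB"
    return {
        "lun_size_gb": plan["lun_gb"],
        "mdisk_count": plan["lun_count"],
        "mdisks_per_xiv_port": plan["lun_count"] // XIV_FIBRE_PORTS,
        "svc_storage_pools": 1,    # one pool per XIV
        "extent_size_mb": 1024,    # 1 GB or larger
    }

print(plan_gen2_full_rack("1TB"))   # 48 MDisks -> 4 per XIV fibre port
print(plan_gen2_full_rack("2TB"))   # 96 MDisks -> 8 per XIV fibre port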


SVC and XIV Configuration Best Practices – XIV Gen2 A14

• Six-module XIV Gen2 (Model A14) recommendations – 27 TB / 55 TB usable
  – Use 2 host ports from each of the 2 active Interface Modules
    • Use ports 1 and 3 from Interface Modules 4 and 5 (Module 6 is inactive)
    • Change the port 4 setting on each of the XIV Interface Modules from Initiator to Target in order to optimize HBA buffer allocations
  – Zone these 4 ports with all SVC node ports
  – Create LUNs of 1632/1666 GB each
    • 16 LUNs on an XIV system with 1 TB drives
    • 34 LUNs on an XIV system with 2 TB drives
  – Map the LUNs to SVC as 16/34 MDisks and add all of them to the one SVC storage pool
    • With 16 MDisks, SVC will drive I/O to 4 MDisks/LUNs per each of the 4 XIV fibre ports
    • With 34 MDisks, SVC will drive I/O to 8 MDisks/LUNs per each of the 4 XIV fibre ports, with 1 additional MDisk on 2 of the XIV fibre ports
    • This provides good queue depth on SVC to drive the XIV adequately
• Create one SVC storage pool per XIV using a 1 GB or larger extent size
  – A large extent size ensures effective use of the XIV distributed cache
• Create striped SVC volumes using all 16/34 MDisks in the SVC storage pool

SVC and XIV Configuration Best Practices – XIV Gen2 A14

• Nine-module XIV Gen2 (Model A14) recommendations – 43 TB / 87 TB usable
  – Use 2 host ports from each of the 4 active Interface Modules
    • Use ports 1 and 3 from Interface Modules 4, 5, 7, and 8 (Modules 6 and 9 are inactive)
    • Change the port 4 setting on each of the XIV Interface Modules from Initiator to Target in order to optimize HBA buffer allocations
  – Zone these 8 ports with all SVC node ports
  – Create LUNs of 1632/1666 GB each
    • 26 LUNs on an XIV system with 1 TB drives
    • 53 LUNs on an XIV system with 2 TB drives
  – Map the LUNs to SVC as 26/53 MDisks and add all of them to the one XIV MDG
    • With 26 MDisks, SVC will drive I/O to 3 MDisks/LUNs per each of the 8 XIV fibre ports, with 1 additional MDisk on 2 of the XIV fibre ports
    • With 53 MDisks, SVC will drive I/O to 6 MDisks/LUNs per each of the 8 XIV fibre ports, with 1 additional MDisk on 5 of the XIV fibre ports
    • This provides good queue depth on SVC to drive the XIV adequately
• Create one SVC storage pool per XIV using a 1 GB or larger extent size
  – A large extent size ensures effective use of the XIV distributed cache
• Create striped SVC volumes using all MDisks in the SVC storage pool

SVC and XIV Configuration Best Practices – XIV Gen3 114 / 214

• Full 15-module XIV Gen3 (Model 114/214) recommendations – 161 TB / 243 TB usable
  – Use 2 host ports from each of the 6 Interface Modules
    • Use ports 1 and 3 from each Interface Module
  – Zone these 12 ports with all SVC node ports
  – Create LUNs of equal size
    • 96 LUNs on an XIV Gen3 system with 2 TB drives, using 1669 GB LUNs
    • 111 LUNs on an XIV Gen3 system with 3 TB drives, using 2185 GB LUNs
  – Map the LUNs to SVC as 96/111 MDisks and add all of them to the one SVC storage pool
    • With 96 MDisks, SVC will drive I/O to 8 MDisks/LUNs per each of the 12 XIV fibre ports
    • With 111 MDisks, SVC will drive I/O to 9 MDisks/LUNs per each of the 12 XIV fibre ports
    • This provides good queue depth on SVC to drive the XIV adequately
• Create one Managed Disk Group per XIV using a 1 GB or larger extent size
  – A large extent size ensures effective use of the XIV distributed cache
• Create striped SVC volumes using all MDisks in the SVC storage pool

SVC and XIV Configuration Best Practices

• Growing from nine to fifteen modules
  – Create additional 1632 GB LUNs and add them to the existing SVC storage pool on the XIV
    • Use the SVC rebalancing script to restripe VDisk extents across the new MDisks (a conceptual sketch follows); search the web for "SVC rebalance script"
  – If going directly from 9 to 15 modules, you may want to consider creating a second SVC storage pool (in a new XIV storage pool) and adding the 22 additional 1632 GB MDisks to it
  – As Interface Modules 6 and 9 are activated, zone their ports 1 and 3 with the SVC cluster
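The published rebalance script works against the SVC CLI; the Python sketch below only illustrates the arithmetic it performs, namely spreading the extents already in use evenly across the old and new MDisks. The example numbers (26 existing MDisks, 22 new, 40,000 allocated extents) are assumptions for illustration.

def rebalance_targets(total_extents_in_use, old_mdisks, new_mdisks):
    """Return (target extents per MDisk, MDisks carrying one extra extent,
    approximate extents to migrate onto the new MDisks)."""
    all_mdisks = old_mdisks + new_mdisks
    target, remainder = divmod(total_extents_in_use, all_mdisks)
    extents_to_move = round(total_extents_in_use * new_mdisks / all_mdisks)
    return target, remainder, extents_to_move

# Example: a 9-module pool with 26 MDisks grows to 15 modules by adding 22
# MDisks, with 40,000 x 1 GB extents already allocated in the pool.
target, extra, to_move = rebalance_targets(40_000, 26, 22)
print(f"target ~{target} extents per MDisk ({extra} MDisks hold one extra); "
      f"about {to_move} extents migrate to the new MDisks")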


Reference Section

Reference Information

• SVC InfoCenter
  – http://publib.boulder.ibm.com/infocenter/svc/ic/index.jsp
• XIV InfoCenter
  – http://publib.boulder.ibm.com/infocenter/ibmxiv/r2/index.jsp
• System Storage Interoperability Center (SSIC)
  – http://www-03.ibm.com/systems/support/storage/ssic/interoperability.wss

Reference Information – Documentation

• SVC Best Practices and Performance Guidelines
  – http://www.redbooks.ibm.com/redpieces/abstracts/sg247521.html?Open
• Implementing SVC
  – http://www.redbooks.ibm.com/abstracts/sg247933.html?Open
• Data Migration to IBM Storage
  – http://www.redbooks.ibm.com/abstracts/sg247432.html?Open
• SSD Caching on XIV
  – http://www.redbooks.ibm.com/abstracts/redp4842.html?Open
• XIV Gen3 Architecture
  – http://www.redbooks.ibm.com/redpieces/abstracts/sg247659.html?Open
• XIV Host Attachment Guide
  – http://www.redbooks.ibm.com/abstracts/sg247904.html?Open

Thank You

Trademarks

The following are trademarks of the International Business Machines Corporation in the United States, other countries, or both. For a complete list of IBM Trademarks, see www.ibm.com/legal/copytrade.shtml:

*, AS/400®, e business(logo)®, DBE, ESCO, eServer, FICON, IBM®, IBM (logo)®, iSeries®, MVS, OS/390®, pSeries®, RS/6000®, S/30, VM/ESA®, VSE/ESA, WebSphere®, xSeries®, z/OS®, zSeries®, z/VM®, System i, System i5, System p, System p5, System x, System z, System z9®, BladeCenter®

Not all common law marks used by IBM are listed on this page. Failure of a mark to appear does not mean that IBM does not use the mark nor does it mean that the product is not actively marketed or is not significant within its relevant market. Those trademarks followed by ® are registered trademarks of IBM in the United States; all others are trademarks or common law marks of IBM in the United States.

The following are trademarks or registered trademarks of other companies:

Adobe, the Adobe logo, PostScript, and the PostScript logo are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States, and/or other countries. Cell Broadband Engine is a trademark of Sony Computer Entertainment, Inc. in the United States, other countries, or both and is used under license therefrom. Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. UNIX is a registered trademark of The Open Group in the United States and other countries. Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both. ITIL is a registered trademark, and a registered community trademark of the Office of Government Commerce, and is registered in the U.S. Patent and Trademark Office. IT Infrastructure Library is a registered trademark of the Central Computer and Telecommunications Agency, which is now part of the Office of Government Commerce.

* All other products may be trademarks or registered trademarks of their respective companies.

Notes: Performance is in Internal Throughput Rate (ITR) ratio based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput improvements equivalent to the performance ratios stated here. IBM hardware products are manufactured from new parts, or new and serviceable used parts. Regardless, our warranty terms apply. All customer examples cited or described in this presentation are presented as illustrations of the manner in which some customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics will vary depending on individual customer configurations and conditions. This publication was produced in the United States. IBM may not offer the products, services or features discussed in this document in other countries, and the information may be subject to change without notice. Consult your local IBM business contact for information on the product or services available in your area. All statements regarding IBM's future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Information about non-IBM products is obtained from the manufacturers of those products or their published announcements. IBM has not tested those products and cannot confirm the performance, compatibility, or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. Prices subject to change without notice. Contact your IBM representative or Business Partner for the most current pricing in your geography.