
Managing storage requirements in VMware Environments

October 2009

Best Practices for Configuring Virtual Storage

1. Optimal I/O performance first

2. Storage capacity second

3. Base your storage choices on your I/O workload

4. Pooling storage resources can lead to contention

Source: http://www.vmware.com/technology/virtual-storage/best-practices.html

#1 Optimal I/O Performance First

Server Virtualization = Legacy Storage Pain Points

• Application sprawl

• Dynamic performance allocation

• Capacity over-allocation

• Page sharing, ballooning, and swapping

• Caching

Scalable Performance: SPC-1 Comparison

[Chart: SPC-1 benchmark results plotting response time (ms) against SPC-1 IOPS™ for midrange, high-end, and utility arrays: IBM DS8300 Turbo (Dec 2006), HDS USP V (Oct 2007), EMC CLARiiON CX3-40 (Jan 2008), NetApp FAS3170 (Jun 2008), IBM DS5300 (Sep 2008), 3PAR InServ T800 (Sep 2008), HDS AMS 2500 (Mar 2009), and 3PAR InServ F400 (May 2009). Transaction-intensive applications typically demand response times below 10 ms.]

#2 Storage Capacity Second

Get Thin & Stay Thin

• Average 60% capacity savings with 3PAR Thin Provisioning
• Additional 10% savings with 3PAR Thin Persistence
• Compared against a legacy volume with a poor utilization rate


VMware and 3PAR Thin Provisioning Options

[Diagram: two thin virtual disks (VMDKs), one 100 GB provisioned with 10 GB written and one 150 GB provisioned with 30 GB written, sit on a VMware VMFS volume/datastore backed by a 200 GB volume provisioned at the storage array. The same layout is shown twice: once on a 200 GB thick LUN and once on a 200 GB thin LUN on a 3PAR array.]

                         Thin VM on Thick Storage   Thin VM on Thin Storage
Over-provisioned VMs     250 GB                     250 GB
Physically allocated     200 GB                     40 GB
Capacity savings         50 GB                      210 GB
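The table's figures follow directly from the VMDK sizes in the diagram. A minimal sketch of that arithmetic in Python (illustrative only; the sizes are taken from the diagram above):

# Illustrative model of the thick-vs-thin comparison above.
vmdks = [(100, 10), (150, 30)]  # (provisioned_gb, written_gb) per VMDK
lun_size_gb = 200

provisioned = sum(p for p, _ in vmdks)  # 250 GB presented to the VMs
written = sum(w for _, w in vmdks)      # 40 GB actually written

thick_allocated = lun_size_gb  # a thick LUN consumes its full size up front
thin_allocated = written       # a thin LUN consumes only what is written

print(f"Over-provisioned to VMs: {provisioned} GB")
print(f"Thick LUN: {thick_allocated} GB allocated, {provisioned - thick_allocated} GB saved")
print(f"Thin LUN:  {thin_allocated} GB allocated, {provisioned - thin_allocated} GB saved")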


#3 Base your storage choices on your I/O workload


Traditional Modular Architecture vs. 3PAR InSpire® Architecture

[Diagram: a traditional modular architecture compared with the 3PAR InSpire® Architecture; the legend distinguishes host connectivity, data cache, and disk connectivity. The InSpire® Architecture is described as a finely, massively, and automatically load-balanced cluster.]


3PAR Dynamic Optimization = Balanced Disk Layout

[Charts: percentage used and IOPS per drive, plotted across all drives, shown for the data layout after hardware upgrades and for the data layout after a non-disruptive rebalance.]

• VMFS workloads rebalanced non-disruptively after capacity upgrades
• Better drive utilization
• Better performance
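As a rough model of why rebalancing helps (illustrative arithmetic with assumed drive counts, not 3PAR's actual algorithm): if the pre-upgrade drives are saturated while the newly added drives sit idle, spreading the same workload evenly cuts the per-drive load:

# Illustrative arithmetic: per-drive load before and after a rebalance.
old_drives = 32           # drives carrying the workload before the upgrade (assumed)
new_drives = 16           # drives added by the hardware upgrade (assumed)
iops_per_old_drive = 150  # load on each old drive before rebalancing

total_iops = old_drives * iops_per_old_drive       # 4800 IOPS overall
balanced = total_iops / (old_drives + new_drives)  # spread over all 48 drives

print(f"Before: {iops_per_old_drive} IOPS/drive on {old_drives} drives, 0 on {new_drives} new drives")
print(f"After:  {balanced:.0f} IOPS/drive on all {old_drives + new_drives} drives")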


Dynamic Optimization: No Complexity or Disruption

[Diagram: performance versus cost per usable TB for Fibre Channel and Nearline drives across RAID 1, RAID 5 (2+1), RAID 5 (3+1), and RAID 5 (8+1) configurations.]

Transition non-disruptively, with one command, from one configuration to another until performance and storage efficiency are appropriately balanced. No migrations!
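The cost-per-usable-TB axis follows directly from each layout's redundancy overhead. A minimal sketch of that arithmetic (standard RAID math, not 3PAR-specific; the 100 TB raw figure is assumed for illustration):

# Usable fraction of raw capacity for the RAID layouts named above.
# RAID 1 mirrors every chunk; RAID 5 (d+1) spends one chunk in d+1 on parity.
layouts = {
    "RAID 1":       1 / 2,
    "RAID 5 (2+1)": 2 / 3,
    "RAID 5 (3+1)": 3 / 4,
    "RAID 5 (8+1)": 8 / 9,
}

raw_tb = 100  # assumed raw capacity
for name, usable_fraction in layouts.items():
    usable = raw_tb * usable_fraction
    # The less usable capacity per raw TB, the more raw capacity
    # (and cost) each usable TB requires.
    print(f"{name}: {usable:.1f} usable TB, relative cost/usable TB x{raw_tb / usable:.2f}")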


#4 Pooling storage resources can lead to contention


I/O Processing: Traditional Storage vs. the 3PAR Controller Node

[Diagram: in a traditional array, hosts drive both control information (metadata) and data through a unified processor and/or memory to the disk interfaces, so small IOPs wait for large IOPs to be processed; when a heavy throughput workload is applied, a heavy transaction workload suffers. In a 3PAR Controller Node, control information is handled by a control processor and memory while data moves through the 3PAR ASIC and memory, so control information and data are pathed and processed separately; heavy throughput and heavy transaction workloads are sustained simultaneously.]
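A toy queueing model (purely illustrative, not a model of any specific array) shows why small requests suffer when they share one processing path with large transfers, and why separate control and data paths help:

# Toy FIFO model: service time proportional to request size.
requests = [("data", 64), ("data", 64), ("ctrl", 1), ("data", 64), ("ctrl", 1)]

def avg_wait(queue):
    """Average time each request waits before its own service starts."""
    clock, waits = 0, []
    for _, size in queue:
        waits.append(clock)
        clock += size
    return sum(waits) / len(waits)

shared = avg_wait(requests)  # small control IOs stuck behind large data IOs
ctrl_only = avg_wait([r for r in requests if r[0] == "ctrl"])
data_only = avg_wait([r for r in requests if r[0] == "data"])

print(f"Shared path, average wait: {shared:.1f}")
print(f"Separate paths, ctrl wait: {ctrl_only:.1f}, data wait: {data_only:.1f}")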


Server Clusters: A Whole New Set of Storage Issues

• Oracle, and now VMware, clusters create new storage management challenges
• One cluster of 20 hosts and 10 volumes, once set up, requires 200 provisioning actions on most arrays (one export per host-volume pair; see the sketch below)
– This can take a day to complete
– Error-prone
• VMware clusters, in particular, are dynamic resources subject to a lot of growth and change

[Diagram: a grid of volumes, Vol1 through Vol16, exported to the cluster, labeled "200 Provisioning Actions".]
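The 200-action figure is just the host-volume cross product. A minimal sketch (hypothetical host and volume names, not a real array CLI) contrasting per-pair provisioning with a single group-level export:

# Per-pair provisioning: every host must be given every volume explicitly.
hosts = [f"esx{i:02d}" for i in range(1, 21)]  # 20 cluster hosts (hypothetical names)
volumes = [f"vol{i}" for i in range(1, 11)]    # 10 shared volumes

actions = [(host, vol) for host in hosts for vol in volumes]
print(len(actions), "provisioning actions")    # 200

# Group-based provisioning: one action exports the volume group to the
# host group; later membership changes are handled by the array.
export_actions = [("cluster_host_group", "storage_volume_group")]
print(len(export_actions), "provisioning action")  # 1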


Autonomic Storage is the Answer

• 3PAR Autonomic Groups
– Simplify and automate volume provisioning with a single command
– Export a group of volumes to a host or a cluster of hosts
– Automatically preserve the same LUN ID for each volume across hosts
• Autonomic Host Groups: when a new host is added to the host group:
– All volumes are autonomically exported to the new host
– When a host is deleted, its volume exports are autonomically removed
• Autonomic Volume Groups: when a new volume is added to the volume group:
– The new volume is autonomically exported to all hosts in the host group
– Volume deletions are applied to all hosts autonomically

[Diagram: a single command exports the LUNs in a storage volume group, Vol1 through Vol16, to a cluster host group.]
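A small sketch of the group semantics described above, as an illustrative Python model (not 3PAR's actual CLI or API): the volume group is exported to the host group once, after which membership changes propagate automatically and each volume keeps the same LUN ID on every host:

class AutonomicGroups:
    """Illustrative model: one export links a volume group to a host group;
    membership changes propagate to the exports automatically."""

    def __init__(self, hosts, volumes):
        self.hosts = set(hosts)
        self.volumes = list(volumes)  # order fixes each volume's LUN ID

    def exports(self):
        # Each volume keeps the same LUN ID across all hosts in the group.
        return {(h, v, lun) for h in self.hosts
                for lun, v in enumerate(self.volumes)}

    def add_host(self, host):     # all volumes auto-export to the new host
        self.hosts.add(host)

    def remove_host(self, host):  # its exports disappear with it
        self.hosts.discard(host)

    def add_volume(self, volume): # auto-exports to every host in the group
        self.volumes.append(volume)

group = AutonomicGroups(["esx01", "esx02"], ["vol1", "vol2"])
group.add_host("esx03")    # one action; the new host sees every volume
group.add_volume("vol3")   # one action; every host sees the new volume
print(len(group.exports()))  # 9 host-volume exports, maintained automatically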


Challenge                                          Solution
Optimal I/O performance first                      Size and plan at aggregate demand
Storage capacity second                            Control costs with Thin Technologies
Base your storage choices on your I/O workload     Build a platform that scales with VM workload
Pooling storage resources can lead to contention   Rely on technology, not tuning

Thank you

Serving Information®. Simply.