VMworld 2013: Storage DRS: Deep Dive and Best Practices to Suit Your Storage Environments
Sachin Manpathak, VMware; Mustafa Uysal, VMware; Sunil Muralidhar, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
Storage DRS: Deep Dive and Best Practices to Suit
Your Storage Environments
Sachin Manpathak, VMware
Mustafa Uysal, VMware
Sunil Muralidhar, VMware
STO5636
#STO5636
Disclaimer
This session may contain product features that are
currently under development.
This session/overview of the new technology represents
no commitment from VMware to deliver these features in
any generally available product.
Features are subject to change, and must not be included in
contracts, purchase orders, or sales agreements of any kind.
Technical feasibility and market demand will affect final delivery.
Pricing and packaging for any new technologies or features
discussed or presented have not been determined.
VMware Vision: Software-Defined Storage
Enable new storage tiers
• Enable DAS & server flash for shared storage along with enterprise SAN/NAS
Enable tight integration with storage ecosystem
• Tighter integrations with broad storage ecosystem through APIs
Deliver policy-based automated storage management
• Automatically enforce per-VM SLAs for all apps across different types of storage
[Figure: "Gold" array(s), "Silver" array(s), and distributed storage (hard disks + SSD) backing Web Server, Database Server, and App Server VMs with per-VM SLAs: reduce storage cost and complexity]
• "Gold" SLA: Availability = 99%, Throughput = 1000 R/s, 20 W/s, Latency = 95% under 5 ms, DR RPO = 1', RTO = 10', Backup = hourly, Capacity reservation = 100%
• "Silver" SLA: Availability = 99%, Throughput = 100 R/s, 10 W/s, Latency = 90% under 10 ms, DR RPO = 60', RTO = 360', Backup = weekly, Security = encryption
• Per-VM callouts: Availability = 99.99%, DR RTO = 1 hour, Max Latency = …
Software-Defined Storage: Summary Roadmap
Today:
• vSphere storage features: Storage IO Control, Storage vMotion, Storage DRS, Profile Driven Storage (policy-based storage management for external storage)
• vSphere Storage Appliance: low cost, simple shared storage for small deployments
H2 2013 / H1 2014 Roadmap:
• Virtual Volumes: VM-aware data management with enterprise storage arrays (tight integration with storage systems)
• Virtual SAN: policy-driven storage for cloud-scale deployments; data services (policy-based storage management for local storage)
• Virtual Flash: write-back caching (enable new storage tiers)
Outline
Introduction
Anatomy of Storage DRS
Best Practices and Deployment Scenarios
Preview from Storage DRS Labs
Summary
Survey: http://bit.ly/siocsdrs
Brief Introduction to Storage DRS
Ease of Storage Management:
• Initial Placement
• Out of Space Avoidance
• IO Load Balancing
• Virtual Disk Affinity (Anti-Affinity)
• Datastore Maintenance Mode
• Add Datastore
[Figure: a datastore cluster rebalanced via Storage vMotion]
Storage DRS Details
• VMworld talks
• Storage DRS whitepapers
• VMware Technical Journal (2012): "Storage DRS: Automated Management of Storage Devices in a Virtualized Datacenter"
Outline
Introduction
Anatomy of Storage DRS
Best Practices and Deployment Scenarios
Preview from Storage DRS Labs
Summary
Storage DRS Recommendations
Recommendation: the best datastore for each virtual disk in a VM
• VM requirements: virtual disk type, capacity, IO load, rules
• Datastore capabilities: capacity, performance, connectivity
• Predicted resource usage
What Really Happened?
Simulated placement of virtual disks to datastores
• Space utilization, IO latency, CPU and memory
Rank is based on cluster-wide metrics after placement
• All resources contribute to the balance metric
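The simulation step can be sketched as follows. This is an illustrative model, not the actual Storage DRS algorithm: the balance metric (standard deviation of utilization and latency), the equal weighting, the 30 ms latency scale, and the additive latency model are all assumptions.

```python
# Illustrative sketch: rank candidate placements of a virtual disk by the
# cluster-wide imbalance that would remain after the simulated placement.
from statistics import pstdev

def imbalance(datastores):
    """Cluster-wide balance metric: std. deviation of space utilization
    plus std. deviation of normalized IO latency (assumed weights)."""
    space = [ds["used"] / ds["capacity"] for ds in datastores]
    latency = [ds["latency_ms"] / 30.0 for ds in datastores]  # 30 ms: assumed scale
    return pstdev(space) + pstdev(latency)

def rank_placements(datastores, disk_size, disk_latency_ms):
    """Simulate placing the disk on each datastore; rank by the resulting
    cluster-wide imbalance (lower is better)."""
    ranked = []
    for i, ds in enumerate(datastores):
        if ds["capacity"] - ds["used"] < disk_size:
            continue  # out-of-space avoidance: skip datastores that can't fit it
        trial = [dict(d) for d in datastores]
        trial[i]["used"] += disk_size
        trial[i]["latency_ms"] += disk_latency_ms  # crude additive latency model
        ranked.append((imbalance(trial), ds["name"]))
    return sorted(ranked)

dss = [{"name": "A", "capacity": 1000, "used": 800, "latency_ms": 8},
       {"name": "B", "capacity": 1000, "used": 300, "latency_ms": 3}]
rank_placements(dss, 100, 2)  # B ranks first: more headroom, lower latency
```

Because every resource feeds one metric, a datastore that is space-rich but latency-poor can still lose to a more balanced alternative.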
Thin Provisioned VMDKs
• Space entitlement = Allocated + ƒ(Idle)
• Explicit control for the degree of space over-commitment (initial placement also uses the same controls)
• Online model to predict space usage growth over time
[Figure: Datastore A vs. Datastore B; a thin "Big VMDK" with 10 GB allocated out of 100 GB provisioned, with the "idle" space contributing a 30 GB headroom to its entitlement]
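As a sketch, the entitlement of a thin disk can be modeled as allocated space plus a tunable fraction of the idle (provisioned but unallocated) space. The `percent_idle` knob and its 25% default stand in for the over-commitment control the slide describes; treat the exact name and default as assumptions.

```python
def space_entitlement(allocated_gb, provisioned_gb, percent_idle=25):
    """Entitlement = Allocated + f(Idle): count a tunable fraction of the
    idle space as headroom for future growth. Raising percent_idle makes
    placement more conservative; lowering it allows more over-commitment.
    (25 is an assumed default for illustration.)"""
    idle = max(provisioned_gb - allocated_gb, 0)
    return allocated_gb + idle * percent_idle / 100.0

# A 100 GB thin disk with 10 GB written:
space_entitlement(10, 100)       # 32.5 GB counted against the datastore
space_entitlement(10, 100, 100)  # 100.0 -> behaves like thick provisioning
```

At `percent_idle=0` the disk is charged only for what it has written; at 100 it is charged its full provisioned size.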
Datastore Cluster Fragmentation
• Enough room at the cluster level, but the big VMDK does not fit on any single datastore
• Prerequisite migrations make room for the big VMDK
• All dependent actions are executed before the placement
[Figure: Datastores A, B, and C each hold a VMDK; smaller disks are migrated aside so the Big VMDK can be placed]
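The prerequisite-migration idea can be sketched greedily: try each datastore as the target, evict its smallest disks to peers until the big VMDK fits, and emit those moves as dependent actions. The real planner is more sophisticated; this is an illustrative first-fit model.

```python
# Illustrative sketch (not the actual Storage DRS planner): when a big
# VMDK fits in the cluster but not on any single datastore, compute
# prerequisite migrations that free room on one datastore first.
from copy import deepcopy

def free_gb(ds):
    return ds["capacity"] - sum(ds["vmdks"].values())

def make_room(datastores, big_vmdk_gb):
    """Return (target_name, moves), where moves lists dependent actions
    (vmdk, source, destination) to execute before the placement."""
    for i in range(len(datastores)):
        state = deepcopy(datastores)  # plan on a copy; don't mutate input
        target, moves = state[i], []
        # greedily evict the smallest disks from the candidate target
        for name, size in sorted(target["vmdks"].items(), key=lambda kv: kv[1]):
            if free_gb(target) >= big_vmdk_gb:
                break
            dest = next((d for d in state
                         if d is not target and free_gb(d) >= size), None)
            if dest is None:
                continue  # this disk has nowhere to go; try the next one
            dest["vmdks"][name] = target["vmdks"].pop(name)
            moves.append((name, target["name"], dest["name"]))
        if free_gb(target) >= big_vmdk_gb:
            return target["name"], moves
    return None, []  # the cluster is genuinely out of room

cluster = [{"name": "A", "capacity": 200, "vmdks": {"v1": 120}},
           {"name": "B", "capacity": 200, "vmdks": {"v2": 40}}]
make_room(cluster, 180)  # ('A', [('v1', 'A', 'B')])
```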
Linked Clones
• Storage DRS supports linked clones: initial placement and load balancing
• vSphere vCloud Director (VCD)
[Figure: a 1 TB datastore holding linked clones VM1, VM2, and VM3 (deltas of 20, 40, and 70 GB) sharing a 10 GB base disk]
Linked Clones
[Figure: linked clones VM1-VM6 (deltas of 10-70 GB) and their shared base disks placed across Datastore A (1 TB), Datastore B (1 TB), and Datastore C (500 GB)]
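The key space-accounting wrinkle in the figure can be sketched as follows: placing a clone on a datastore that already holds its base disk costs only the clone's delta, while placing it anywhere else also costs a copy of the base. The function and the example sizes are illustrative assumptions, not the actual Storage DRS model.

```python
# Sketch of linked-clone space accounting (illustrative):
def placement_cost_gb(clone_delta_gb, base_gb, target, bases_on):
    """Space needed to place a linked clone on `target`.
    bases_on: set of datastore names that already hold the base disk."""
    return clone_delta_gb + (0 if target in bases_on else base_gb)

# A clone with a 30 GB delta whose 10 GB base lives on Datastore "B":
placement_cost_gb(30, 10, "B", {"B"})  # 30: delta only, base is already there
placement_cost_gb(30, 10, "C", {"B"})  # 40: delta plus a copy of the base
```

This is why linked-clone-aware load balancing tends to keep clones near their base: moves that would replicate the base are charged accordingly.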
Recommendations, continued…
Why Is a Recommendation Generated?
Storage DRS runs periodically for resource management
A Storage DRS threshold is violated in a datastore
• Not enough free space
• I/O latency was high for an extended period of time
One of the affinity rules is broken
• A rule changed or a new rule was added
Storage DRS estimates that the benefits exceed the costs
• Cluster resources are balanced across multiple metrics
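The triggers above can be sketched as a simple gate. The 80% space-utilization and 15 ms latency defaults are assumptions standing in for the configurable thresholds; the real cost/benefit estimation is far richer than a single comparison.

```python
# Illustrative sketch of when Storage DRS emits a recommendation.
def needs_recommendation(ds, rules_broken, est_benefit, est_cost,
                         space_threshold=0.80, latency_threshold_ms=15.0):
    if ds["used"] / ds["capacity"] > space_threshold:
        return True   # not enough free space on the datastore
    if ds["sustained_latency_ms"] > latency_threshold_ms:
        return True   # I/O latency high for an extended period
    if rules_broken:
        return True   # an affinity rule changed or is violated
    return est_benefit > est_cost  # periodic balancing: benefit must exceed cost

ds = {"used": 650, "capacity": 1000, "sustained_latency_ms": 22.0}
needs_recommendation(ds, rules_broken=False, est_benefit=0, est_cost=1)  # True
```

Note that threshold violations force a recommendation, while routine balancing moves must still pass the cost/benefit check.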
Outline
Introduction
Anatomy of Storage DRS
Best Practices and Deployment Scenarios
Preview from Storage DRS Labs
Summary
Datastore Cluster Best Practices
• Identical storage profiles
• Similar datastore performance (need not be identical)
• Similar capabilities: data management, backup
[Figure: Cluster-A (Tier2 VMs) built from Datastore1 and Datastore2 on a Silver disk pool; Cluster-B (Tier1 VMs) built from Datastore3 and Datastore4 on a Gold disk pool]
Stay tuned for the Labs section: Cluster1 (wide performance variation) vs. Cluster2 (similar datastores)
Datastore and Host Connectivity
• Maximize host and datastore connectivity: improves DRS and Storage DRS performance
• More datastores in a cluster: better space and I/O balance
• Larger datastore size: better space balance
[Figure: a partially connected vs. a fully connected datastore cluster across a DRS cluster]
Deployment with Shared Disk Pools
• Common scenario: recommended by vendors, improves IO performance
[Figure: logical LUNs sharing a common disk pool]
• Storage DRS discovers correlations via VASA or automatic detection
• Storage DRS respects correlations: IO load balancing, rule enforcement
⤬ Problem: VM IO performance is correlated even when VMs reside on different LUNs (high I/O on one LUN drives up latency on another)
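Automatic detection can be sketched as correlating the latency histories of two LUNs: if latency rises and falls together, they likely share backing spindles, and moving load between them buys nothing. The Pearson-correlation approach and the 0.9 threshold are illustrative assumptions; the actual detector is not described here.

```python
# Sketch of performance-correlation detection between two datastores.
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def share_backing_spindles(lat_a, lat_b, threshold=0.9):
    """If latency on two LUNs moves together, assume a shared disk pool
    and avoid 'balancing' IO between them (threshold is assumed)."""
    return pearson(lat_a, lat_b) > threshold
```

For example, `share_backing_spindles([1, 2, 3, 4], [2, 4, 6, 8])` flags two perfectly co-moving LUNs, while independent latency traces fall below the threshold.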
Deployment with Thin Provisioned LUNs
• Storage array feature: add capacity on demand
[Figure: Lun-1 configured at 9 TB; backing grows from 3 TB (08/29/13) to 6 TB (10/29/13) as data is written, until eventually fully backed by disks]
⤬ Problem: backing space can run out while the LUN still reports spare capacity!
• The storage array signals the condition using VASA
• Storage DRS stops placing VMs on such a LUN
Stay tuned for the Labs section
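The resulting placement behavior can be sketched as a filter: datastores whose array has raised a thin-provisioning out-of-space alarm through VASA are simply excluded from the candidate set. The field name `thin_provisioning_alarm` is an assumption for illustration, not a real API property.

```python
# Sketch: exclude thin-provisioned LUNs whose backing pool is exhausting,
# as signaled by the array via VASA (field name is an assumption).
def placement_candidates(datastores):
    return [ds for ds in datastores
            if not ds.get("thin_provisioning_alarm", False)]

dss = [{"name": "Lun-1", "thin_provisioning_alarm": True},
       {"name": "Lun-2", "thin_provisioning_alarm": False}]
[ds["name"] for ds in placement_candidates(dss)]  # ['Lun-2']
```

The LUN's own reported free space is ignored here on purpose: as the slide notes, a thin LUN can show spare capacity while its backing pool is out of room.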
Deployment with Auto-Tiered Arrays
• Multiple storage tiers; VM data spans tiers; tier usage changes with the workload
[Figure: a logical LUN of an auto-tier array spanning a performance tier and a capacity tier]
• Storage DRS IOPS prediction may be inaccurate
• Storage DRS is still valuable in auto-tier array deployments: automatic initial placement, space load balancing, rule enforcement, maintenance mode
• Storage IO Control: IO priority
Deployment with Deduplication
• Provides space efficiency; a dedupe pool can span multiple LUNs
• Storage DRS uses the free space reported by the LUN
⤬ Problem: the LUN appears to store more data than its capacity! (total virtual disk size: 4 TB; LUN capacity: 1 TB)
Stay tuned for the Labs section
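The slide's numbers can be made concrete with a little arithmetic: 4 TB of virtual disks resident on a 1 TB LUN implies roughly a 4x dedupe ratio, and any dedupe-aware free-space estimate has to scale by that ratio. The ratio input is an assumption that an administrator or an array report would supply; this is illustration, not the Storage DRS model.

```python
# Illustrative dedupe arithmetic for the scenario on the slide.
def apparent_dedupe_ratio(total_virtual_disk_tb, lun_capacity_tb):
    """How much more logical data the LUN holds than its raw capacity."""
    return total_virtual_disk_tb / lun_capacity_tb

def logical_free_estimate_tb(physical_free_tb, dedupe_ratio):
    """How much more logical (virtual disk) data may still fit, assuming
    new data dedupes at the pool's observed ratio."""
    return physical_free_tb * dedupe_ratio

apparent_dedupe_ratio(4, 1)          # 4.0x: LUN "stores" 4x its capacity
logical_free_estimate_tb(0.25, 4.0)  # 1.0 TB of logical headroom
```

The catch, of course, is that new data may not dedupe as well as existing data, which is why the raw free space in the LUN is the conservative signal.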
Summary: Storage Array Feature Compatibility
Feature | Compatibility
Shared diskpool backing LUNs | YES
LUN Thin Provisioning | YES
LUN Auto-tiering | YES
Space Deduplication | YES
Outline
Introduction
Anatomy of Storage DRS
Best Practices and Deployment Scenarios
Preview from Storage DRS Labs
Summary
Preview from the Storage DRS Labs
Evolve Storage DRS with vSphere storage solutions
Evolve Storage DRS with storage innovations
I/O reservation support
Fine grain controls
vSphere SRM: Array-based Replication
Storage DRS identifies replicated datastores
All recommendations are in sync with replication policies:
• Automated moves within the same consistency group
• Manual moves for all VMs residing on replicated datastores
Accounting of replication overhead due to Storage vMotion
vSphere Replication (VR)
Storage DRS discovers VR replicas in datastores
Storage DRS understands the space usage of replica disks
Storage DRS coordinates moves with VR
• Space balancing
• Maintenance mode
vSphere Storage Policy Based Management
• Current: a datastore cluster contains datastores with the same storage profile
[Figure: Cluster-1 (Tier2 VMs) on a Silver disk pool (Datastore1, Datastore2); Cluster-2 (Tier1 VMs) on a Gold disk pool (Datastore3, Datastore4)]
• Future: a datastore cluster may contain datastores with any storage profile
[Figure: a single Cluster-1 (Tier1 + Tier2 VMs) spanning both the Silver and Gold disk pools]
Support for IO Reservations
• Per-VM resource controls: Reservation, Limit, Shares
• Enforced at datastores and at datastore clusters
• Storage DRS initial placement and load balancing honor reservations
• IO capacity estimation using a reference workload
[Figure: Storage DRS places VMs with reservations of 100, 150, and 300 IOPS across datastores with estimated capacities of 400 and 1500 IOPS, each enforced by SIOC]
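Reservation-aware placement can be sketched as an admission check plus a headroom-greedy choice, using the slide's numbers. The capacity `C` comes from the reference-workload estimation the slide mentions and is modeled here as a given; the greedy policy itself is an illustrative assumption.

```python
# Sketch of reservation-aware IO placement (illustrative).
def admit(datastore, new_reservation_iops):
    """Admission check: total reserved IOPS must fit the estimated capacity."""
    reserved = sum(datastore["reservations"])
    return reserved + new_reservation_iops <= datastore["capacity_iops"]

def place_with_reservation(datastores, r_iops):
    """Pick the admitting datastore with the most unreserved IOPS headroom."""
    fits = [ds for ds in datastores if admit(ds, r_iops)]
    return max(fits,
               key=lambda ds: ds["capacity_iops"] - sum(ds["reservations"]),
               default=None)

dss = [{"name": "DS1", "capacity_iops": 400, "reservations": [100, 150]},
       {"name": "DS2", "capacity_iops": 1500, "reservations": [300]}]
place_with_reservation(dss, 300)["name"]  # 'DS2': 1200 IOPS headroom vs. 150
```

With these numbers, a new VM reserving 300 IOPS cannot be admitted to DS1 at all (only 150 IOPS remain unreserved), so the placement and the SIOC enforcement point both end up on DS2.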
Tighter Integration with Storage Arrays
1. Discover storage capabilities using VASA
• E.g., LUNs with auto-tiering, dedupe, or thin provisioning
• Indicate LUNs that share a common disk pool
2. Make intelligent decisions in Storage DRS
• Proactively manage backing capacity for thin provisioning
• Keep deduplicated VMs together
• Don't interfere with auto-tier I/O optimizations
• Storage DRS fixes I/O overload conditions
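The capability-to-behavior mapping above can be sketched as a small policy table. Both the capability names and the policy entries are invented for illustration; they mirror the slide's bullets, not any real VASA schema or Storage DRS setting.

```python
# Sketch: map VASA-discovered LUN capabilities to Storage DRS behavior
# (capability names and policy table are illustrative assumptions).
POLICY = {
    "thin_provisioned": {"monitor_backing_space": True},
    "auto_tiered":      {"io_load_balancing": False},  # don't fight the array
    "deduplicated":     {"keep_dedupe_peers_together": True},
    "shared_diskpool":  {"respect_correlation": True},
}

def sdrs_settings(capabilities):
    """Derive per-datastore Storage DRS settings from discovered capabilities."""
    settings = {"io_load_balancing": True}  # default behavior
    for cap in capabilities:
        settings.update(POLICY.get(cap, {}))
    return settings

sdrs_settings({"auto_tiered", "deduplicated"})
# -> IO load balancing disabled; dedupe peers kept together
```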
Fine Grain Controls
All aspects of Storage DRS
can be controlled to suit your
environments
Summary
Easier Storage Management
Effective initial placement
Out-of-space avoidance and load balancing
Many exciting features in the pipeline!
THANK YOU
Survey: http://bit.ly/siocsdrs