TRANSCRIPT
Slide 1. © Copyright 2011 EMC Corporation. All rights reserved.
Introducing VNX Series: Technical Presentation
Givonn Jones, Consultant
February 2011
Slide 2
IT Challenges: Tougher than Ever
Four central themes facing every decision maker today:
• Overcome flat budgets
• Manage escalating complexity
• Cope with relentless data growth
• Meet increased business demands
Slide 3
IT Challenges: Tougher than Ever
Four central themes facing every decision maker today:
• Overcome flat budgets
• Manage escalating complexity
• Cope with relentless data growth
• Meet increased business demands
Affordable. Simple. Efficient. Powerful.
Slide 4
Unisphere™
Next Generation Unified Storage: Optimized for today's virtualized IT
Affordable. Simple. Efficient. Powerful.
Product line: VNXe3100, VNXe3300, VNX5100, VNX5300, VNX5500, VNX5700, VNX7500
Slide 5
Storage Connectivity Profile is Changing
• Increasing emphasis on Ethernet-based connectivity options
• EMC offers all major storage connectivity options today
2010-2014 revenue CAGR by connectivity type:
• NAS + iSCSI + FCoE: 13.9%
• Fibre Channel SAN: 1.3%
• Network-attached storage (NAS): 5.4%
• iSCSI SAN: 18.2%
• External DAS: -8.8%
• Fibre Channel over Ethernet: 104.6%
• Switched SAS: 31.9%
[Chart: external storage revenue ($M) by connectivity type, 2008-2014]
Source: IDC (7/10) and EMC
Slide 6
Click. Automate. Done.
Slide 7
Powerful, Flexible Modular Architecture
More processing power. Self-optimizing pools. Any network.
HARDWARE
Multi-controller scale*
• Add X-Blades for the right amount of file sharing power
• Add storage processors for more storage pool scale
• Scales to 96 CPU cores and 4,000 drives
Self-optimizing storage pools
• Active data is automatically moved to Flash for the fastest performance
• Inactive data is automatically moved out of Flash to large disks for the lowest capacity cost
• Fully automated, always on; no management intervention needed. Set it and forget it.
• Lowest transaction cost and lowest capacity cost, simultaneously!
Unified multi-protocol
• Full support for any network
• Unified block, file, and object support
• Share volumes and files
• Fully provisioned LUNs
Connectivity (over SSD and HDD storage):
• SAN / BLOCK: iSCSI, FC, FCoE
• NAS / FILE: CIFS, NFS, pNFS, MPFS
• CLOUD / OBJECT: REST, SOAP
* Scaling beyond two SPs requires a Gateway with multiple back ends
Slide 8
Hardware Architectural Overview
• Storage tiers: support a mix of ultra-performance, performance, and capacity drives for optimal economics
• Modular design: scale as business needs grow
• Optimized for Flash: high-performance architecture designed for the Flash drive age
• Latest technology: newest Intel multi-core CPUs, large memory, SAS architecture
• Flexible I/O modules: FC, FCoE, 1 Gb and 10 Gb IP; future-proofed with a plug-in architecture for next-generation connectivity
• Advanced failover: always on, no-compromise availability while maintaining application service levels
• Optimized packaging: efficient packaging with dense disk options and built-in energy efficiency
Powerful. Flexible. Available. Economical.
Slide 9
Modular Architecture: Designed for optimal flexibility
• Proven architecture that extends existing Celerra and CLARiiON investment
• Scalable capacity and performance
  – Dedicated system elements
  – Flash, SAS, and Near-Line SAS for an optimal capacity/performance balance
  – Optimized standalone gateway option
• Native NAS and SAN implementation
• Object delivered via an integrated solution
• Simple, powerful, intuitive consolidated management
• Future-proofed: flexible I/O options, plug and play
Stack layers:
• Management (single UI, third-party management integration)
• File protocol: native file services (NFS, CIFS, FTP, HA IP networking, IPv6, AVM, deduplication and compression)
• Object technology (Atmos VE)
• Block protocol: base native block services (storage pooling, automation, security, RAID, SAN, etc.)
• Disk expansion (Flash, SAS, Near-Line SAS)
Slide 10
VNX: Modular Unified and Gateway Implementation Models
Unified storage (servers connect to a VNX series array for file, object, and block):
• Easy to deploy, simple to manage
• Scale capacity
• Multi-protocol
  – File: NFS (including pNFS), CIFS, MPFS
  – Block: iSCSI, Fibre Channel, FCoE
  – Object: REST, SOAP
Gateway (a VNX series gateway in front of Symmetrix or CLARiiON over a Fibre Channel SAN):
• Leverage existing storage investment
• Scale performance and capacity
• Shared storage
• Add to an existing block implementation
  – File: NFS (including pNFS), CIFS, MPFS
  – Object: REST, SOAP
Slide 11
VNX System Architecture
[Diagram: VNX Unified Storage. Clients, Oracle servers, Exchange servers, application servers, and virtual servers connect over the LAN (file via 10 GbE) and SAN (FC, iSCSI, FCoE); object access is provided by Atmos VE. Up to eight VNX X-Blades run VNX OE for File with X-Blade failover; dual VNX SPs run VNX OE for Block with SP failover. The back end comprises Flash, SAS, and Near-Line SAS drives with redundant LCCs, standby power supplies (SPS), and power supplies.]
Slide 12
VNX Series
MODULAR UNIFIED
• VNX5100: min. 4U; up to 75 drives; block only (no X-Blades); 0 configurable I/O slots per SP; embedded I/O ports: 4 FC + 2 back-end SAS; 4 GB memory per SP; FC only
• VNX5300: min. 7U; up to 125 drives; 1 or 2 X-Blades (3 I/O slots, 6 GB, 200 TB capacity per X-Blade); 2 I/O slots per SP; embedded I/O ports: 4 FC + 2 back-end SAS; 8 GB per SP
• VNX5500: min. 7U-9U; up to 250 drives; 1 to 3 X-Blades (4 I/O slots, 12 GB, 256 TB capacity per X-Blade); 2 I/O slots per SP; embedded I/O ports: 4 FC + 2 back-end SAS; 12 GB per SP
• VNX5700: min. 8U-11U; up to 500 drives; 2 to 4 X-Blades (4 I/O slots, 12 GB per X-Blade); 5 I/O slots per SP; 18 GB per SP
• VNX7500: min. 8U-15U; up to 1,000 drives; 2 to 8 X-Blades (5 I/O slots, 24 GB per X-Blade); 5 I/O slots per SP; 24 GB per SP
Drive types: 3.5" Flash, SAS, and NL-SAS, plus 2.5" 10K SAS
File protocols: NFS, CIFS, MPFS, pNFS. Block protocols: FC, iSCSI, FCoE (the VNX5100 is FC only)
GATEWAY
• VG2: min. 2U; up to 4,000 drives (back-end dependent); 1 or 2 X-Blades (3 I/O slots, 6 GB per X-Blade)
• VG8: min. 2U; up to 4,000 drives (back-end dependent); 2 to 8 X-Blades (5 I/O slots, 24 GB per X-Blade)
Slide 13
VNX Unified Storage: Maximum CPU cores of unified data processing power
VNX supports all protocols, today and in the future
• CLOUD / OBJECT: build a single cloud with "N" number of systems
• FILE AND OBJECT: 48 cores dedicated to object, networked file system management, and data sharing (up to eight X-Blades; MPFS/pNFS hosts supported)
• BLOCK: 12 cores dedicated to storage pool management and high-performance block serving (dual storage processors on the SAN)
• Automatic server optimization, and automatic data optimization across Flash (highest performance), SAS (good performance), and NL-SAS (highest capacity)
Slide 14
VNX Form Factors
Building blocks:
• DAE (Disk Array Enclosure) = drives only: DAE-15 (15x 3.5" & 2.5" drives, 3U) or DAE-25 (25x 2.5" drives, 2U)
• DPE (Disk Processor Enclosure, 3U) = storage processors + drives
• SPE (Storage Processor Enclosure, 2U) = storage processors only
• CS (Control Station, 1U); DME (Data Mover Enclosure, i.e., X-Blade enclosure, 2U); SPS (Standby Power Supply, 1U)
Configurations:
• VNX5100 (block only*): DPE + SPS + DAEs
• VNX5300 and VNX5500: DPE + SPS + CS + DME + DAEs
• VNX5700 and VNX7500: SPE (C500, C1000) + SPS + CS + DME + DAEs
Notes:
• A block-only base needs a DPE or SPE; a file-only or unified base needs block hardware plus file hardware
• Add DAEs up to the maximum capacity allowed
• Drive types can be mixed in the same DAE (e.g., 7.2K rpm + 15K rpm)
• Different DAEs can be mixed in a system (e.g., 15-drive and 25-drive DAEs)
* File and Unified options are not available on the VNX5100
Slide 15
VNX Unified Storage Components: Architecture and packaging
File implementation: X-Blade enclosure (Data Mover Enclosure)
– VNX Operating Environment for File
– 2 to 8 X-Blades supported, with configurable failover options
– Flexible I/O connectivity options
Block implementation: disk processor or storage processor enclosure*
– VNX Operating Environment for Block
– Dual active storage processors with automatic failover
– Flexible I/O connectivity options
Additional components: standby power supplies (battery backup), control stations (1 or 2), and disk array enclosures (25x 2.5"; 15x 3.5" & 2.5")
Upgradeable to add Fibre Channel ports and/or native 1 or 10 Gigabit Ethernet iSCSI: start with FC, iSCSI, or NAS, then add other protocols seamlessly as needed
Flexible I/O options: 4-port 8 Gb FC, 4-port 6 Gb SAS, 2-port 10 GbE, 4-port copper 1 GbE, 2-port 10 Gb FCoE
* The DPE contains disks; the SPE does not
Slide 16
Simplified Upgradeability to Unified
• Unified is the default configuration
• Block only: SPE/DPE with a "File Ready Option"
  – Reserve 4U-6U of rack space, and FC ports or an I/O slot
  – To upgrade to Unified: add Data Movers (1 to 8), add Control Stations (1 or 2), add an SFP or FC I/O module, and add the Unisphere File enabler
• File only: same hardware as Unified; add an FE SLIC to the array and the Unisphere Block enabler
Components: storage processors (SPE/DPE), power supply, control stations, data movers, and disk (25x 2.5" and 15x 3.5" & 2.5" drives)
Slide 17
VNX Storage Processor: Dedicated processing for block services
• Block services
  – Flexible RAID options (1/0, 3, 5, 6) provide optimal performance AND protection
  – RAID Groups and Virtual Pools
• Administration/management
  – Through storage processor Ethernet ports
  – Aggregated into a single file/block view in Unisphere
• Single point of management/control
• High availability: active/active controller failover option
• Connects to hosts via FC, FCoE, or iSCSI for flexibility of connection
• Connects to storage via 4-lane 6 Gb/s SAS for up to 24 Gb/s per SAS bus (6x the prior generation)
[Diagram: the administrator manages the storage processors with Unisphere over a private network; hosts connect over the FC/iSCSI SAN]
Slide 18
Flexible Storage Tiers: Optimize TCO with tiered service levels
SAS back-end connect for performance and reliability
• Up to 24 Gb/s (4x 6 Gb/s) per SAS bus
• Point-to-point, robust interconnect
Flash (SSD) options: highest performance
• Highest-performing drives
• 3.5": 100 GB and 200 GB; ~3,000 IOPS per drive
SAS (HDD) options: good performance
• 3.5" drives (195 drives per rack): 300 GB and 600 GB; 10K and 15K RPM; ~140 (10K) to ~180 (15K) IOPS per drive
• 2.5" drives (500 drives per rack): 300 GB and 600 GB; 10K RPM
Near-Line SAS (HDD) options: highest capacity
• 3.5" drives (195 drives per rack): 2 TB; 7.2K RPM; ~90 IOPS per drive
Automatic data optimization across the tiers: Flash (highest performance), SAS 10K/15K (good performance), NL-SAS 7.2K (highest capacity)
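The per-drive figures above support a quick back-of-the-envelope sizing check. The sketch below simply sums per-drive IOPS for a hypothetical drive mix; it deliberately ignores RAID write penalties and cache effects, so treat it as a rough upper bound rather than a real sizing method.

```python
# Back-of-the-envelope IOPS sizing using the per-drive estimates above.
# Drive counts are hypothetical; real sizing must account for RAID and cache.
PER_DRIVE_IOPS = {"flash": 3000, "sas_15k": 180, "sas_10k": 140, "nl_sas": 90}

def raw_pool_iops(drive_counts):
    """Sum the rough per-drive IOPS for each tier in a pool."""
    return sum(PER_DRIVE_IOPS[tier] * n for tier, n in drive_counts.items())

# Example: 4 Flash + 20 15K SAS + 15 NL-SAS drives
pool = {"flash": 4, "sas_15k": 20, "nl_sas": 15}
print(raw_pool_iops(pool))  # 4*3000 + 20*180 + 15*90 = 16950
```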
Slide 19
Flash Drives for Higher Service Levels
Flash drives introduce a paradigm shift in the storage industry
Key benefits
• Faster performance: up to 30 times the IOPS and less than 1 millisecond response time
• More energy efficient: uses 38% less energy per terabyte and 95% less energy per I/O
• Better reliability: no moving parts, faster RAID rebuilds
Ideal for:
• Oracle (NFS), and Microsoft SQL Server and Exchange (iSCSI)
• VMware iSCSI and NFS (particularly VMware View)
• File sharing, software engineering, and CAD/CAM environments
Complements high-capacity, cost-effective, energy-efficient NL-SAS drives
[Chart: response time vs. IOPS, comparing 1 Flash drive against 1, 10, and 30 15K Fibre Channel drives]
Slide 20
VNX: Designed for Flash. Optimized for all of your virtual applications
• Flash fully leverages the power of the VNX system
• End-to-end throughput improvements enable 2-3x performance improvements
All claims are subject to validation testing
[Charts comparing CX4 to VNX series across platforms (VNX5100/CX4-120 through VNX7500/CX4-960): better bandwidth performance (MB/s, typical DSS); better performance for mixed workloads (IOPS, rotating vs. Flash drives); better performance for file serving (NFS file-type workload in KOPS, SAS vs. Flash drives)]
Slide 21
Massively Scalable Performance with VNX
Multi-controller scale
• Add VG8 X-Blades for the right amount of file sharing power
• Add storage processors for more storage pool scale
• Scales to 96 CPU cores, 4,000 drives, and 384 GB of DRAM
Independently scale file and block infrastructure: a VG8 gateway in front of 4x VNX series arrays over an FC SAN
Slide 22
VNX X-Blades: Run the world's most mature NAS operating system
• Up to eight independent file servers contained in a single system
  – Scale by adding enclosures: 2 X-Blades per X-Blade enclosure
• Managed as one high-performance, high-availability server
• Connects data to the network
• VNX Operating Environment for File
  – No performance impact after failover
  – Concurrent Network File System (NFS), Common Internet File System (CIFS), and File Transfer Protocol (FTP) file access, plus MPFS/pNFS
• Hot-pluggable
• Flexible N-to-M failover options
• Continues to operate even if the control station fails
• No internal disks in the Gateway
[Diagram: an X-Blade enclosure with X-Blades and dual control stations, connecting the network to VNX, Symmetrix, or CLARiiON back ends]
Slide 23
VNX Control Station: Secure management and control for VNX for File
• Installation
• Administration/management through X-Blade and storage processor Ethernet ports
• Configuration changes
• Monitoring and diagnostics: heartbeat pulse of each X-Blade
• Monitors and manages X-Blade failover
• Enterprise Linux-based (RHEL 5)
• Initiates communications with X-Blades for greater security
• Single point of management/control
• Failover redundancy option (second control station)
[Diagram: the administrator manages the control stations with Unisphere over a private network]
Slide 24
VNX X-Blade Failover: High-availability architecture with no performance impact
• Configurable X-Blade failover options: N-to-M, automatic, manual, or none
• Failover triggers:
  – Software panic or hang
  – Internal network failure
  – Power failure
  – Memory error
  – Non-responsive X-Blade
• The failed X-Blade is shut down to avoid "split-brain syndrome"
• IP, Media Access Control (MAC), and Virtual LAN (VLAN) addresses are transferred
• Automatic call-home of the event
• No performance impact after failover
• Automatic control station failover
• Configuration-dependent failover times of ~15 to ~100 seconds
[Diagram: the data path is transferred to a standby X-Blade; data remains accessible with no client-performance impact]
Slide 25
VNX High Availability: Designed to deliver five-nines (99.999%) availability
• Platform: no single point of failure; N+1 power and battery backup; redundant, hot-pluggable components
• Function: RAID protection; N+M X-Blades with advanced failover; automatic control station failover; quick X-Blade reboots
• Service: simple, customer-driven VNX OE updates; secure remote maintenance, call-home, and automatic diagnostics
Slide 26
VNX Object Support: A single cloud built with VNX systems
Object technology (Atmos VE), with Unisphere GUI integration:
• Automated location, protection, and efficiency services
• Multi-tenancy securely isolates data
• No limits on namespace or location
• REST, SOAP, HTTP, or Web access
• Spans sites (e.g., New York, Tokyo, London)
Atmos Virtual Edition: software on VMware
• Runs on VMware vSphere over NFS/FC/iSCSI storage
• Certified with EMC unified storage (NFS, FC, iSCSI), VMware-supported servers, and third-party storage
• Starts at 10 TB; up to 960 TB per site
Slide 27
Atmos VE on VNX
• Storage
  – vSphere-supported FC/iSCSI/NFS
  – Storage can come from more than one array
• vSphere
  – vSphere HCL-supported servers
  – Minimum of 2 servers required
• Virtual machines
  – Atmos software is installed on the VMs (each ESX server hosts Atmos node VMs)
  – Access methods are configured on the VMs
• Atmos access/integration layer
  – Custom web application using the Atmos REST/SOAP API
  – Pre-integrated ISV application (e.g., Documentum)
Key properties: policies automate data services; multi-tenancy securely segregates data; a global-scale namespace spans locations; REST and SOAP access methods; file system access over IP/FC
Runs on the full VNX family (VNX5100 through VNX7500): scales large and fast, and self-optimizes for performance and capacity efficiency
Slide 28
Atmos VE on VNX Requirements
Base configuration and options by component:
• VM configuration (base): minimum of two VMs; 8-12 GB RAM per VM; 2 vCPUs per VM; 2 vNICs per VM; DRS/vMotion; VMware HCL-approved hardware; supports shared infrastructure
• VM configuration (options): 12-24 GB RAM per VM; dual quad-core CPUs per ESX; 4 GigE ports per ESX with 2 vNICs per VM; 10 GbE
• Unified storage: Fibre Channel, NFS, iSCSI
• Access methods (base): REST and SOAP; MDS:SS ratio of 1:14 (metadata drives to data drives). Options: any Atmos-supported method
• Number of vDisks: up to 30 RDMs (raw device mappings)
• vDisk size (base): 100 GB to 2 TB. Options: 200 GB, 500 GB, 1 TB, and 2 TB
• Number of sites (base): 1. Options: 1 or 2; more than 2 is supported via RPQ*
* Build a single cloud with "N" number of VNX systems
Notes:
• The guest OS should support a 64-bit configuration
• Each VM corresponds to an Atmos node; all VMs should share the same configuration
• Maximum of 32 Atmos virtual nodes per site
• Maximum capacity per virtual node: 30 TB
• Maximum capacity per location: 960 TB
• Hybrid configurations are supported via RPQ only
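The sizing limits in the notes above (nodes per site, capacity per node, capacity per site) lend themselves to a small validation helper. This is an illustrative sketch, not an EMC tool; the function name and return shape are assumptions.

```python
# Illustrative check of an Atmos-VE-on-VNX layout against the limits above:
# at most 32 virtual nodes per site, 30 TB per node, 960 TB per site.
MAX_NODES_PER_SITE = 32
MAX_TB_PER_NODE = 30
MAX_TB_PER_SITE = 960

def validate_site(node_capacities_tb):
    """Return a list of limit violations for one site (empty list = OK)."""
    problems = []
    if len(node_capacities_tb) > MAX_NODES_PER_SITE:
        problems.append("too many virtual nodes")
    if any(cap > MAX_TB_PER_NODE for cap in node_capacities_tb):
        problems.append("a node exceeds 30 TB")
    if sum(node_capacities_tb) > MAX_TB_PER_SITE:
        problems.append("site exceeds 960 TB")
    return problems

print(validate_site([30] * 32))  # 32 nodes x 30 TB = exactly 960 TB -> []
print(validate_site([35, 30]))   # ['a node exceeds 30 TB']
```

Note that the stated maxima are self-consistent: 32 nodes at 30 TB each is exactly the 960 TB per-location limit.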
Slide 29
3-Times More Cost Effective: Gain 3x more storage with VNX capacity efficiency
Versus classic provisioning, the combination of thin provisioning, file deduplication and compression, and the FAST Suite makes VNX 3x more efficient
Slide 30
Enhanced Virtual Provisioning: The storage pool virtualizes the provisioning model
Traditional RAID Groups (drives > RAID Group > Classic LUN / MetaLUN):
• Included features: LUN Migration, MetaLUNs
• Optional features: replication features, Unisphere QoS Manager, Analyzer, FAST Cache
Flexible Pools (Flash, SAS, NL-SAS > storage pool > Thick LUN / Thin LUN):
• Included features: LUN Migration, LUN expansion, LUN shrink (Windows Server 2008 only)
• Optional features: replication features, Unisphere QoS Manager, Analyzer, Virtual Provisioning, Compression, FAST VP, FAST Cache
Slide 31
VNX Modular Architecture: Pools
• Block services are provided by the VNX storage processor
• Tiered storage
  – Choose storage tiers: Extreme Performance (Flash), Performance (SAS), Capacity (NL-SAS)
  – Choose the protection type (RAID 1, 5, or 6); a pool has a single RAID type
  – The system builds the underlying RAID groups
• The tiers are aggregated into a virtual blended storage pool (Flash, 15K SAS, and Near-Line SAS RAID groups)
Slide 32
VNX Modular Architecture: LUNs and File Systems
• Thin or thick LUNs are simply built from the virtual blended storage pool
• LUNs can be shared to block-connected hosts (FC, iSCSI, FCoE) over the SAN
• File services added: LUNs are configured for file and consumed by the X-Blade as its file storage pool
• File systems are optimally built via Automated Volume Management and shared via NFS or CIFS
Slide 33
VNX Thin Provisioning: Only allocate the actual capacity required by the application
• Capacity oversubscription allows intelligent use of resources
  – File systems, and FC and iSCSI LUNs
  – Logical size greater than physical size
• VNX Thin Provisioning safeguards to avoid running out of space
  – Monitoring and alerting
• Automatic and dynamic extension past the logical size
  – Automatic NAS file system extension
  – FC and iSCSI dynamic LUN extension
Example: users A, B, and C each see a logical 10 GB, while the physically consumed storage is only 4 GB, 2 GB, and 2 GB respectively; physical capacity is allocated on demand
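The oversubscription safeguard above amounts to comparing subscribed logical capacity against physical pool capacity and alerting at a fill threshold. A minimal sketch follows; the function name and the 85% threshold are illustrative assumptions, not VNX defaults.

```python
# Illustrative oversubscription monitor for a thin-provisioned pool:
# report the subscription ratio and flag the pool past a fill threshold.
# The 85% alert threshold is hypothetical, not a VNX default.
def pool_report(physical_gb, consumed_gb, logical_gb, alert_pct=85):
    subscription = logical_gb / physical_gb        # > 1.0 means oversubscribed
    pct_full = 100.0 * consumed_gb / physical_gb   # how full the pool really is
    return {
        "subscription_ratio": subscription,
        "percent_full": pct_full,
        "alert": pct_full >= alert_pct,
    }

# A 10 GB pool backing three 10 GB thin LUNs that have written 4+2+2 GB:
report = pool_report(physical_gb=10, consumed_gb=8, logical_gb=30)
print(report)  # subscription_ratio 3.0, 80% full, no alert yet
```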
Slide 34
VNX Virtual Provisioning
• Thick pool LUN
  – Full capacity allocation: capacity is reserved at LUN creation
  – Near RAID-group-LUN performance
  – 1 GB chunks are allocated as the relative block address is written
• Thin pool LUN
  – Only allocates capacity as data is written by the host
  – Capacity is allocated in 1 GB chunks; 8 KB blocks are written contiguously within each 1 GB chunk
  – The 8 KB mapping incurs some performance overhead
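The thin-LUN behavior above can be modeled as lazy allocation: backing capacity appears in 1 GB chunks only when a host write touches an address inside that chunk, so consumed capacity tracks written chunks rather than the LUN's logical size. This is a toy sketch; the class and its handling of granularity are assumptions for illustration.

```python
# Toy thin-LUN model: a 1 GB chunk is allocated only when a host write
# lands inside it, as described above (the 8 KB mapping layer is omitted).
CHUNK = 1024 ** 3  # 1 GB

class ThinLUN:
    def __init__(self, logical_bytes):
        self.logical_bytes = logical_bytes
        self.allocated_chunks = set()

    def write(self, offset, length):
        """Allocate every 1 GB chunk the write range touches."""
        first, last = offset // CHUNK, (offset + length - 1) // CHUNK
        self.allocated_chunks.update(range(first, last + 1))

    def consumed_bytes(self):
        return len(self.allocated_chunks) * CHUNK

lun = ThinLUN(logical_bytes=100 * CHUNK)   # 100 GB logical
lun.write(0, 8 * 1024)                     # one 8 KB write -> 1 chunk
lun.write(50 * CHUNK, 8 * 1024)            # another far away -> 1 more chunk
print(lun.consumed_bytes() // CHUNK)       # 2 GB consumed of 100 GB logical
```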
Slide 35
Space Reclamation with Thin
• Thickly provisioned data is migrated to thin using LUN Migration or SAN Copy
• The result is thinly provisioned data with reduced storage capacity
• Freed capacity is returned to the storage pool for use by other LUNs
Slide 36
Compression for More Capacity Savings
• Fully provisioned data is migrated to thin using LUN Migration or SAN Copy, reducing storage capacity
• Enabling compression on the thinly provisioned data yields maximum storage savings
• Freed capacity is returned to the storage pool for use by other LUNs
Slide 37
File Deduplication and Compression
• Intelligent data selection
  – Typically avoids active data, with an option to target active files
  – Compression support for VMs through the Celerra Plug-in for VMware
  – End-user file-level activation
  – Tunable options by file system: size of files, age of files, file extension, directory filtering
• Internal policy engine
  – Runs in the background
  – Throttles to avoid negative impact on client services
• Leverages EMC technologies: compression engine and deduplication engine
Capacity savings example: of 1 TB of traditional file data, ~100 GB is active and ~900 GB is inactive, aged, or specifically targeted. With file-level deduplication enabled, the inactive data dedupes and compresses to ~400 GB, shrinking the file system to ~500 GB, for up to 50% savings
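The tunable selection rules above (file size, age, extension, directory filtering) behave like a simple predicate over candidate files. The sketch below illustrates that shape; every threshold value and path in it is hypothetical, not a VNX default.

```python
# Illustrative candidate filter mirroring the tunable dedupe/compression
# options above. All thresholds and paths are hypothetical examples.
import time

def is_candidate(path, size_bytes, mtime, *, min_size=24 * 1024,
                 min_age_days=30, skip_ext=(".jpg", ".zip"),
                 skip_dirs=("/fs1/scratch",)):
    age_days = (time.time() - mtime) / 86400
    if size_bytes < min_size:        # too small to be worth processing
        return False
    if age_days < min_age_days:      # recently modified: likely active data
        return False
    if path.lower().endswith(skip_ext):  # already-compressed formats
        return False
    if any(path.startswith(d) for d in skip_dirs):  # filtered directories
        return False
    return True

old = time.time() - 90 * 86400
print(is_candidate("/fs1/docs/report.doc", 2_000_000, old))  # True
print(is_candidate("/fs1/docs/photo.jpg", 2_000_000, old))   # False
```

A real policy engine would run this kind of filter in the background and throttle itself, as the slide notes, rather than scanning on demand.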
Slide 38
Software Suites: Technical Presentation
Slide 39
Software. Simple. Powerful.
Slide 40
Advanced Software: VNX Total Efficiency Pack
All software is managed via Unisphere
• FAST Suite: optimize for both the lowest cost and the highest application performance, automatically
• Security and Compliance Suite: keep data safe from corruption, changes, deletions, and malicious activity
• Local Protection Suite: achieve safe data protection and repurposing
• Remote Protection Suite: protect data against localized failures, outages, and disasters at all times
• Application Protection Suite: automate application copies and prove compliance with corporate policies
Simplified ordering, maximum cost effectiveness: the VNX Total Efficiency Pack comprises all five suites, and the Total Protection Pack comprises the Local, Remote, and Application Protection Suites
Slide 41
VNX Series Software Components: Software solutions made simple
Attractively priced packs and suites:
• Management software: Unisphere
• Base software (no additional charge): file deduplication & compression, block compression, virtual provisioning, SAN Copy, and protocols
• FAST Suite: FAST VP, FAST Cache, Unisphere Analyzer, Unisphere Quality of Service Manager
• Security and Compliance Suite: Event Enabler (anti-virus, quota management, auditing), file-level retention, host encryption
• Local Protection Suite: SnapView, SnapSure, RecoverPoint/SE CDP
• Remote Protection Suite: Replicator, MirrorView A/S, RecoverPoint/SE CRR
• Application Protection Suite: Replication Manager, Data Protection Advisor for Replication
The VNX Total Efficiency Pack bundles all five suites; the Total Protection Pack bundles the three protection suites
Note: the VNX5100 does not support FAST VP, VEE, FLR, Replicator, or SnapSure; it is offered a Total Value Pack instead of the Total Efficiency Pack
Slide 42
New FLASH 1st Data Strategy: Hot data on fast Flash SSDs; cold data on dense disks
• Highly active ("hot") data is stored on Flash SSDs for the fastest response time
• As data ages, activity falls, triggering automatic movement to high-capacity disk drives for the lowest cost
[Chart: data activity vs. data age, showing the movement trigger from Flash SSD (hot) to high-capacity HDD (cold)]
Slide 43
The FAST Suite: Highest performance and capacity efficiency, automatically!
• FAST Cache continuously ensures that the hottest data is served from high-performance Flash SSDs (real-time caching)
• FAST VP (Virtual Pools)* optimizes storage pools automatically (scheduled optimization), ensuring that active data is served from SSDs while cold data is moved to lower-cost disk tiers
• Together they deliver a fully automated FLASH 1st storage strategy for optimal performance at the lowest attainable cost
• Monitor and tune the whole system with the complementary Unisphere QoS Manager and Unisphere Analyzer
* Not available on the VNX5100
Slide 44
FAST Cache Overview
• Support for file and block
• Extends the mid-tier cache using Flash drives, adding up to 2 TB of cache
• Less than a third of the cost of DRAM
• Hot data automatically ends up in FAST Cache
• RAID 1 protection for reads and writes
• Transparent to SP failure; no need to warm up the cache
• Applicable to most workloads (Exchange, SharePoint, Oracle, database, file, VMware, SAP)
• Run SQL and Oracle up to 3x faster
[Diagram: performance/capacity pyramid from DRAM (fastest) through FAST Cache to disk drives]
Slide 45
FAST Cache Approach
• Page requests are satisfied from DRAM if available
• If not, the FAST Cache driver checks its map to determine where the page is located
• The page request is satisfied from the disk drive if the page is not in FAST Cache
• The policy engine promotes a page to FAST Cache if it is being used frequently
• Subsequent requests for that page are satisfied from FAST Cache
• Dirty pages are copied back to the disk drives as a background activity
[Diagram: the FAST Cache driver, map, and policy engine sit between DRAM and the FAST Cache/disk drives]
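The lookup-and-promotion flow above can be sketched as a tiny cache model: reads check DRAM, then the FAST Cache map, then disk, and a page is promoted once its access count crosses a threshold. The 3-access threshold and class names below are illustrative only, not the actual VNX policy engine.

```python
# Minimal model of the FAST Cache flow above: DRAM first, then FAST Cache,
# then disk, with promotion of frequently used pages. The 3-access
# promotion threshold is an illustrative assumption.
from collections import defaultdict

class FastCacheModel:
    PROMOTE_AFTER = 3  # hypothetical promotion threshold

    def __init__(self):
        self.dram = set()
        self.fast_cache = set()
        self.disk_hits = defaultdict(int)

    def read(self, page):
        if page in self.dram:
            return "dram"
        if page in self.fast_cache:
            return "fast_cache"
        self.disk_hits[page] += 1                   # served from disk; count it
        if self.disk_hits[page] >= self.PROMOTE_AFTER:
            self.fast_cache.add(page)               # policy engine promotes page
        return "disk"

cache = FastCacheModel()
print([cache.read(42) for _ in range(4)])  # ['disk', 'disk', 'disk', 'fast_cache']
```

Write-back of dirty pages, which the slide notes happens in the background, is omitted from this read-path sketch.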
Slide 46
FAST Cache Configuration
• FAST Cache is supported on all VNX platforms: VNX for File (V7) and VNX for Block (R31)
• Highly scalable
  – Up to 2.1 TB of EMC FAST Cache (using 100 GB drives), extending system cache by a factor of 90
  – Reads and writes supported
• Applies to classic LUNs and pool LUNs (thick and thin)
• A system-wide resource that benefits many workloads
  – Host application data: VMware, Oracle, SQL, OLTP/DW, etc.
  – Array-based data services (e.g., snaps)
• Two-click configuration in Unisphere
Maximum FAST Cache capacity by model (using 100 GB / 200 GB drives)*:
• VNX5100: 100 GB / n/a
• VNX5300: 500 GB / 400 GB
• VNX5500: 1 TB / 1 TB
• VNX5700: 1.5 TB / 1.4 TB
• VNX7500: 2.1 TB / 2.0 TB
* The VNX5100 does not support 200 GB Flash drives
Slide 47
FAST VP for Block and File Access: Optimize VNX for minimum TCO
• Automates the movement of hot and cold blocks
• Optimizes the use of high-performance and high-capacity drives
• Improves cost and performance
[Diagram: before and after relocation, a pool's LUNs have their most active slices on Tier 0, neutral-activity slices on Tier 1, and least active slices on Tier 2]
Slide 48
FAST VP Policies: Policy ensures storage service levels are met
1. Highest Tier Preferred: maximize performance (e.g., a high-performance OLTP system)
2. Auto-Tier: optimize TCO and performance (e.g., databases with varied levels of activity among tables, or file systems with varied activity across files)
3. Lowest Tier Preferred: reduce TCO (e.g., archived or infrequently accessed data)
Slide 49
FAST VP Operational Process
• Statistics collection: cumulative I/O history (reads and writes)
  – Weights recent I/O history above longer-term I/O history
  – Maintains a relative ranking of all data in the pool, based on tier preference and I/O history:
    • Highest tier preference with high activity gets the highest priority
    • Highest tier preference with less activity gets the next highest priority
    • No tier preference (Auto-Tier) with high activity gets the next highest priority
• When a pool is created, it is detected in the next poll cycle for inclusion in statistics collection
  – LUNs created in the pool are likewise detected in the next poll cycle
  – Polls occur every hour
  – The relocation estimate ("amount to move up/down") is updated every hour
• Tier utilization
  – The algorithm attempts to gain the greatest utility from the highest tiers
  – Data is demoted as space is needed in the top tiers
• Relocation granularity: sub-LUN "slices" of 1 GB
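The ranking described above (recency-weighted I/O history, ordered by tier preference and then by activity) can be sketched as a sort key over 1 GB slices. The 0.5 decay weight and all names below are illustrative assumptions, not the actual VNX algorithm.

```python
# Illustrative FAST VP-style ranking: score each 1 GB slice by
# recency-weighted hourly I/O counts, then order by tier preference and
# score. The 0.5 decay weight is a hypothetical choice.
PREFERENCE_RANK = {"highest": 0, "auto": 1, "lowest": 2}

def slice_score(io_history, decay=0.5):
    """io_history: most-recent-first hourly I/O counts for one slice."""
    return sum(count * (decay ** age) for age, count in enumerate(io_history))

def rank_slices(slices):
    """slices: list of (name, tier_preference, io_history), best first."""
    return sorted(
        slices,
        key=lambda s: (PREFERENCE_RANK[s[1]], -slice_score(s[2])),
    )

pool = [
    ("slice_a", "auto", [900, 100]),     # busy, no tier preference
    ("slice_b", "highest", [10, 10]),    # quiet, but prefers the top tier
    ("slice_c", "highest", [500, 400]),  # busy and prefers the top tier
]
print([name for name, _, _ in rank_slices(pool)])
# ['slice_c', 'slice_b', 'slice_a'] -- matching the priority order above
```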
Slide 50
FAST Suite in Action: FAST VP
FAST VP tiers across the drives in a pool (Flash, SAS, and NL-SAS):
• Optimizes drive utilization
• Relative ranking over time
• 1 GB slices, ideal for deterministic data
Slide 51
FAST Suite: FAST VP + FAST Cache
On top of the DRAM cache, both mechanisms work together across a FAST virtual pool of Flash, SAS, and NL-SAS drives:
• FAST VP tiers across the drives in the pool: optimizes drive utilization; relative ranking over time; 1 GB slices, ideal for deterministic data
• FAST Cache copies the hottest data to Flash: optimizes Flash utilization; dynamic movement in near real time; 64 KB sub-slices, ideal for bursty data
Slide 53
3-Times the Performance: Supercharge applications with VNX and the FAST Suite
• 3x the number of users
• 3x the number of transactions
• 1/3 the View boot time
• 4x faster View response time
54© Copyright 2011 EMC Corporation. All rights reserved.
Accelerate SQL Server with VNX Series
• 4.5x performance improvement vs. previous generation
• Achieves optimized performance without expensive database and application tuning
• Configuration:
– VNX5700 with 20 SAS and 4 Flash drives with FAST Cache
– vs. CX4-480 with 20 FC drives
More than 3x performance improvement with VNX and FAST Cache
[Chart: relative transactions per second—CX4 (1 year ago): 1; VNX series with FAST Cache: 4.5]
Virtualized SQL Server with VNX and FAST Cache
FA S T S U I T E
55© Copyright 2011 EMC Corporation. All rights reserved.
Improved Scale and Availability for Virtual Desktop
• Boot storm
– 3x faster: boot and settle 500 desktops in 8 min. vs. 27 min.
– FAST Cache absorbs the majority of the boot workload (i.e., I/O to spinning drives)
• Desktop refresh
– Refresh 500 desktops in 50 min. vs. 130 min.
– FAST Cache serviced the majority of the I/O during refresh and prevents linked clones from overloading
4x the number of virtual desktop users with VNX Series, FAST VP, and FAST Cache at sustained performance
[Chart: boot and refresh times, before vs. after]
FA S T S U I T E
Before config: NS-120 with 30x FC + 15x SATA drives
After config: VNX5300 with 5x Flash, 21x SAS, 15x NL-SAS
• 2x Flash as FAST Cache
• 2x Flash for VM replica storage
• SAS and NL-SAS with FAST VP for linked clones
The two configurations are comparably priced
56© Copyright 2011 EMC Corporation. All rights reserved.
VNX with FAST Suite vs. CX4 with HDD Only: Optimize TCO for Virtual Desktop Solutions
• Boot storm: boot and settle 500 desktops in 8 min.
• Desktop refresh: refresh 500 desktops in 50 min.
• Flash-enabled VNX versus NS with conventional HDD to deliver a 500-desktop SLA
– Conventional solution requires NS-480 performance and 183 FC drives
– Optimally tiered solution requires VNX5300 with 5x Flash, 21x SAS, and 15x NL-SAS
Up to 70% TCO benefit compared to same performance conventional storage
Celerra NS-480
183x 300GB 15K FC Disks
VNX5300
5x 100GB Flash
21x 300GB 15K SAS
15x 2TB NL-SAS
Up to 70% reduction in storage cost for same
I/O performance
FA S T S U I T E
57© Copyright 2011 EMC Corporation. All rights reserved.
Accelerate Oracle OLTP with VNX Series
• Improved Oracle transaction time by 3.7x
– Using FAST Cache and virtualized Oracle (with vSphere 4.1)
• The increased performance comes at only an 18% increase in storage solution cost
• Configuration:
– VNX5300 with 20x 15K SAS and 7 Flash drives (two used as FAST Cache, five in a tiered pool with FAST VP)
– vs. CX4-120 with 45 FC drives
>3x utility from your Oracle OLTP using FAST Cache and FAST VP
If you hear Oracle OLTP, the VNX series is a great solution
[Chart: relative transactions per minute—CX4 (1 year ago): 1; VNX series with FAST Cache: 3.7]
Virtualized Oracle with VNX Series and FAST Cache
FA S T S U I T E
58© Copyright 2011 EMC Corporation. All rights reserved.
Accelerate Oracle DSS with VNX Series
Oracle Decision Support Systems (DSS) benefit from VNX
• Up to 4.5x increase in bandwidth
• Before configuration
– CX4-120 with 30x 300GB 15K FC drives
• After configuration
– VNX5300 with 75x 300GB 15K SAS drives
– No FAST VP or FAST Cache was enabled due to limited workload benefit
[Chart: relative performance, Oracle DSS—CX4 (1 year ago): 1.0; VNX series: 4.5]
FA S T S U I T E
If you hear Oracle DSS, the VNX series is a great solution
59© Copyright 2011 EMC Corporation. All rights reserved.
Unisphere Quality of Service Manager
Application-based service-level management
• Manage block resources based on service levels
– Monitor and achieve performance objectives for applications
• Optimize performance based on policy management
– Set performance goals for critical applications
– Set limits on lower-priority applications
– Schedule policies to run at different intervals
• Measure and control storage based on different metrics
– Response time (e.g., Exchange)
– Bandwidth (e.g., backup to disk)
– Throughput (e.g., OLTP applications)
• Complements FAST VP and FAST Cache
– Adds dynamic service-level management to FAST VP and FAST Cache
[Charts: available performance across high-, medium-, and low-priority applications, before and with Unisphere Quality of Service Manager]
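The idea of capping lower-priority applications per scheduling interval can be sketched as follows (a minimal illustrative model, not the Unisphere QoS Manager implementation; the class names and limits are hypothetical):

```python
# Policy-based I/O limiting sketch: each application class gets a ceiling
# on operations per measurement window; requests over the ceiling are
# deferred, leaving headroom for higher-priority applications.

class IopsLimiter:
    def __init__(self, limits):
        self.limits = limits          # {app_class: max_ops_per_window}
        self.counts = {}              # ops admitted in the current window

    def admit(self, app_class):
        used = self.counts.get(app_class, 0)
        if used >= self.limits.get(app_class, float("inf")):
            return False              # over the policy ceiling: defer the I/O
        self.counts[app_class] = used + 1
        return True

    def next_window(self):
        self.counts.clear()           # new scheduling interval

lim = IopsLimiter({"low-priority": 2})
results = [lim.admit("low-priority") for _ in range(3)]   # third is deferred
```

Scheduling policies at different intervals would then amount to swapping the `limits` table when a new window or time-of-day policy takes effect.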
FA S T S U I T E
60© Copyright 2011 EMC Corporation. All rights reserved.
Unisphere Analyzer
• Provides real-time and historical performance data
• Pinpoints performance bottlenecks
• Easy, one-step access to charts and reports
• Provides the flexibility to customize the analytical focus by time period, elements, and metrics
Block data trend analysis, reporting, and capacity management
FA S T S U I T E
61© Copyright 2011 EMC Corporation. All rights reserved.
VNX Security and Compliance Suite
Keep data safe from changes, deletions, and malicious activity
EMC VNX Host Encryption
• Maintains data confidentiality for data at rest, provides compliance
• Encrypts data where it is created—providing protection anywhere outside the server
S E C U R I T Y A N D C O M P L I A N C E S U I T E
EMC VNX File-level Retention (FLR)
• Provides the ability to lock down (WORM) file systems to avoid malicious or accidental changes
• Supports file-level retention periods
• VNX File-level Retention Compliance Option (FLR-C) meets SEC Rule 17a-4(f) compliance requirements
EMC VNX Event Enabler (VEE)
• Delivers alerts upon file system actions
• Allows integration with third-party anti-virus checking, quota management, and auditing applications
62© Copyright 2011 EMC Corporation. All rights reserved.
EMC VNX Host Encryption• Provides host-based data security solution for
VNX environments with Windows, Linux or Solaris hosts
• Integrates with Emulex hardware-assist HBA encryption option
– Offloads encryption to HBA, resulting in near-zero impact to host CPU
– Addresses software-based encryption performance concerns
• Complements PowerPath Encryption with Emulex HBA option for Symmetrix, non-EMC, and mixed-array environments
• Protects data when it leaves a protected area such as a secure data center
– E.g., disk migrations, rotations, or equipment upgrades
VNX Host Encryption
S E C U R I T Y A N D C O M P L I A N C E S U I T E
63© Copyright 2011 EMC Corporation. All rights reserved.
VNX File Level Retention
File Level Retention Enterprise Option (FLR-E)
• Provides for retention periods per file
• Enables adherence to good business practices
– Tamper-proof clock
– Activity logging
File Level Retention Compliance Option (FLR-C)
• Meets SEC Rule 17a-4(f) compliance requirements
– Prevents file system deletions with locked files
– "Hard" default retention periods
– Data verification to validate committed content
• Retention periods cannot be modified
• File systems can only be deleted when the retention period has expired
• Third-party compliance validation paper
S E C U R I T Y A N D C O M P L I A N C E S U I T E
64© Copyright 2011 EMC Corporation. All rights reserved.
File Level Retention Workflow
Optional VNX for File functionality for CIFS and NFS
• Not-locked files
– Traditional non-FLR files
• Locked files (WORM)
– Retention periods are set on a per-file basis
– The retention period is set to "infinite" if left unspecified
– Cannot be deleted, renamed, or modified
– Retention periods can be extended
• Append files
– Protected files to which content can be added but not modified or deleted
• Expired files
– Files can be deleted after the retention time expires
Workflow: non-FLR files are committed to the "WORM" state by setting a retention period; the retention period may be extended; once it expires, locked or expired empty files can be deleted.
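The file states above form a small state machine, which can be sketched like this (an illustrative model of the rules only, not the VNX implementation; the class and method names are hypothetical):

```python
# File-level retention sketch: not-locked -> locked (WORM) -> deleted only
# after expiry. Retention may be extended but never shortened; an
# unspecified retention period is treated as "infinite".

import time

class FlrFile:
    def __init__(self):
        self.state = "not-locked"
        self.retain_until = None

    def lock(self, retain_until=float("inf")):   # unspecified => infinite
        self.state = "locked"
        self.retain_until = retain_until

    def extend(self, retain_until):
        # retention periods can only be extended, never reduced
        self.retain_until = max(self.retain_until, retain_until)

    def delete(self, now=None):
        now = time.time() if now is None else now
        if self.state == "locked" and now < self.retain_until:
            raise PermissionError("file is WORM-locked")
        self.state = "deleted"

f = FlrFile()
f.lock(retain_until=100)     # lock with an explicit retention time
```

An "append" state would add one more rule (writes allowed only at end-of-file) without changing the delete logic.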
S E C U R I T Y A N D C O M P L I A N C E S U I T E
65© Copyright 2011 EMC Corporation. All rights reserved.
VNX Event Enabler Overview
VNX integrates with best-of-breed third-party enterprise applications
• Best-of-breed third-party software integration
– Anti-virus
– Enterprise quota management
– Unstructured data auditing
• CIFS and NFS file-based alerting
• Extensible architecture
• Highly available
S E C U R I T Y A N D C O M P L I A N C E S U I T E
66© Copyright 2011 EMC Corporation. All rights reserved.
VNX Anti-Virus Support
• Shared bank of virus-checking servers
• Can deploy multiple vendors’ engines concurrently
• Virus-checking server only reads part of files
• File access is blocked until it is checked– Scan after update– Scan on first read– Automatic access-time update
• Notification on virus detect
• Anti-virus sizing tool
• Runs over VNX Event Enabler infrastructure
Supported engines: McAfee NetShield; Symantec AntiVirus for NAS and Endpoint; Trend Micro ServerProtect for EMC VNX; CA eTrust Antivirus; Sophos Anti-Virus; Kaspersky Anti-Virus
[Diagram: a user write/close—or first read after a new virus-definition file—on the VNX5300 triggers a virus-checking request to a shared virus-checking server.]
This is the only anti-virus checking method supported by major virus-checking vendors for network shares
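The scan-after-update / scan-on-first-read behavior described above can be sketched as a small access gatekeeper (illustrative Python only; the `scan` callback is a stand-in, not a real anti-virus engine):

```python
# Scan-on-access sketch: a file write invalidates its scan status, and a
# read is blocked until the file has been scanned against the current
# signature version (so a new definition file forces a rescan on first read).

def make_gatekeeper(scan):
    scanned = {}   # path -> signature version the file was last scanned with

    def on_write(path):
        scanned[path] = None          # new content invalidates prior scan

    def on_read(path, sig_version):
        if scanned.get(path) != sig_version:
            if not scan(path):        # access is blocked: file failed check
                raise PermissionError(path + " quarantined")
            scanned[path] = sig_version   # scan-on-first-read completed
        return "contents"

    return on_write, on_read

clean = lambda path: path != "evil.doc"   # hypothetical engine verdicts
on_write, on_read = make_gatekeeper(clean)
```

A shared bank of scan servers would simply sit behind the `scan` callback, which is why multiple vendors' engines can be deployed concurrently.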
S E C U R I T Y A N D C O M P L I A N C E S U I T E
67© Copyright 2011 EMC Corporation. All rights reserved.
VNX Event Publishing Agent
• Event notification for integration with third-party applications
– Quota management
– Auditing and indexing
• Alerts agents upon VNX file actions
– Files: create/open/delete/close (unmodified or modified)/rename
– Directories: create/delete/rename
– Files and directories: any attempt to modify the security metadata (access control list modification)
• Can deploy multiple agents concurrently for high availability
• Runs over the VNX Event Enabler infrastructure
Third-party applications: Northern Parklife NSS; NTP Software QFS; Varonis DatAdvantage
[Diagram: 1) a user creates a file stored on the VNX5300; 2) the event is sent to the third-party application on a CEPA server through VNX Event Publishing Agent integration; 3) the third-party application responds to the VNX; 4) the VNX responds to the user.]
S E C U R I T Y A N D C O M P L I A N C E S U I T E
68© Copyright 2011 EMC Corporation. All rights reserved.
Total Protection—Better than Ever
Unified replication with the Total Protection Pack
• Local and remote data recovery with DVR-like roll-back
• Restore individual or multiple virtual machines with a single click
• Define and enforce custom RPOs and SLAs across virtual infrastructure
• Automated failover and failback
• Proven reference architectures
69© Copyright 2011 EMC Corporation. All rights reserved.
Options for Data Protection
• SnapView with SnapSure—Snapshots/clones: recovery point every 3 hours
• Traditional backup—Daily backup: recovery point every 24 hours
• RecoverPoint/SE CDP and CRR—Continuous data protection: DVR-like recovery with unlimited recovery points and application bookmarks
• MirrorView with Replicator—Disk mirroring: recovery point is the latest replicated image
[Timeline: checkpoints before and after a patch, cache flush, hot backup, and quarterly close]
70© Copyright 2011 EMC Corporation. All rights reserved.
VNX Local Protection Suite
SnapView and SnapSure
• Production data copies for instant recovery on file and block storage
• Streamline data protection and repurposing use cases
– Development/QA testing
– Reporting/decision-support tools
– Backup acceleration
RecoverPoint/SE CDP
• DVR-like roll-back of production applications to any point in time
• Granular recovery to the I/O level for VNX for Block storage
• Self-service recovery with tighter RPO and flexibility
• Streamline data repurposing
L O C A L P R O T E C T I O N S U I T E
Practice safe data protection and repurposing
71© Copyright 2011 EMC Corporation. All rights reserved.
SnapView and SnapSure OverviewAccelerate data protection with point-in-time replicas
• Provides near-instant recovery
• Increases application availability while reducing downtime
– Improves RTOs and RPOs
– Eliminates downtime during the backup window
• Facilitates source data restore
• Enables parallel processing for:
– Data-warehouse refreshes
– Decision support
– Application development and testing
• Application integration and advanced monitoring via the optional Application Protection Suite
Production File System
L O C A L P R O T E C T I O N S U I T E
[Diagram: production data with DB checkpoint, test/dev, recovery, and backup replicas]
72© Copyright 2011 EMC Corporation. All rights reserved.
Point-in-Time Views with Snaps
Logical PIT copies (logical point-in-time views)
• Pointer-based copy of data
– Takes only seconds to create a complete snap
• Requires less space than a full copy, but has performance overhead
– Only needs space for modified data ("copy on first write")
– Could result in spindle contention for concurrent reads (from source and snap)
Physical PIT copies (full-image copies)*
• Physically independent point-in-time copies of the source volume
– Require the same space as the source data
– Available after the initial copy
– No performance impact on source data
– Can be used to replace the source after a hardware or software error
• Can be incrementally re-established
* Physical PIT copies for file systems require VNX Replicator
L O C A L P R O T E C T I O N S U I T E
73© Copyright 2011 EMC Corporation. All rights reserved.
L O C A L P R O T E C T I O N S U I T E
Logical Snap: Copy-on-First-Write Process
[Diagram: the first host write to Block C copies the original block to COFW storage and sets its bit in the tracking bitmap; the snap presents the saved original while the source holds the updated block.]
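The copy-on-first-write mechanics can be sketched as follows (a simplified illustrative model; block layout and metadata are not VNX internals):

```python
# Copy-on-first-write sketch: before the FIRST overwrite of a source block,
# the original is copied to the COFW save area and its bit is set in the
# tracking bitmap. The snap reads saved originals; unmodified blocks are
# read straight from the source, so the snap needs space only for changes.

class CofwSnap:
    def __init__(self, source):
        self.source = source              # list of blocks (the live LUN)
        self.saved = {}                   # COFW save area: index -> original
        self.bitmap = [0] * len(source)   # 1 = original preserved

    def write(self, index, data):
        if not self.bitmap[index]:        # first write to this block?
            self.saved[index] = self.source[index]   # copy original out
            self.bitmap[index] = 1
        self.source[index] = data         # then let the host write proceed

    def snap_read(self, index):
        # point-in-time view: saved copy if modified, else the source block
        return self.saved.get(index, self.source[index])

lun = ["A", "B", "C", "D"]
snap = CofwSnap(lun)
snap.write(2, "C2")          # first write to block 2 triggers the copy
```

This also shows why concurrent reads can contend for spindles: unmodified blocks of the snap are served from the same drives as production I/O.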
74© Copyright 2011 EMC Corporation. All rights reserved.
Instant Restore
• Restore source data contents to a different point-in-time version
– Any snap or clone may be selected
– Does not affect other replicas
• The change in source data appears instantaneously to the host
– Data is copied to the source "behind the scenes"
– During restore, source data is accessible for server reads and writes
– Other replicas remain available for server I/O
• Ideal for operational recovery: recover quickly
• Supported for LUNs and file systems
L O C A L P R O T E C T I O N S U I T E
[Diagram: a production server's source data with snaps at 1:00 a.m., 2:00 a.m., and 9:00 p.m. and a clone at 10:00 p.m.; any replica can be selected for recovery while a backup server backs up another.]
75© Copyright 2011 EMC Corporation. All rights reserved.
RECOVERPOINT/SE CDP
Federated, clustered, and cloud applications; protects physical and virtualized applications; integrated with VMware and VMware SRM
EMC RecoverPoint/SE CDP
• Protects block data for physical and virtual servers
• Provides affordable data protection
– Recovery to the last write for VNX Unified block storage
• Enables any-point-in-time recovery
– DVR-like roll-back to minimize data loss
– Customized RPOs
– Self-service recovery
[Diagram: application servers writing over the SAN to production LUNs; a host-based or VNX-based write splitter feeds the RecoverPoint appliance, which maintains the copy and journal volumes on the storage arrays.]
L O C A L P R O T E C T I O N S U I T E
76© Copyright 2011 EMC Corporation. All rights reserved.
EMC RecoverPoint
Array-based write splitter runs on the VNX series; no host agent required
L O C A L P R O T E C T I O N S U I T E
RecoverPoint/SE CDP
RecoverPoint write splitter
– Intercepts all server writes (block-level)
– Resides on the VNX series array (or optionally on a host or virtual machine)
RecoverPoint appliance
– Runs RecoverPoint software; offloads replication processing for better scale and performance
– Performs protection and recovery for VNX Unified block storage
– Handles monitoring, management, and control
– Attaches to the FC SAN for storage access, or can be directly attached to FC ports on the VNX series
– Maintains write-order fidelity
Journal
– Tracks all data changes to every protected LUN
– Utilizes bookmarks for application-aware recovery
Roll back production applications to any point in time
[Diagram: iSCSI SAN and FC SAN connectivity]
77© Copyright 2011 EMC Corporation. All rights reserved.
RecoverPoint/SE Local Protection ProcessContinuous Data Protection (CDP)
L O C A L P R O T E C T I O N S U I T E
1. Data is split by the VNX splitter and sent to the RecoverPoint appliance
2. VNX splitter
3. Writes are acknowledged back from the RecoverPoint appliance
4. The appliance writes data to the journal volume, along with a time stamp and application-specific bookmarks
5. Write-order-consistent data is distributed to the replica volumes
[Diagram: production volumes, replica volumes, journal volume]
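The journal-then-distribute flow above can be sketched as a toy model (illustrative only, not RecoverPoint internals; sequence numbers stand in for timestamps, and the names are hypothetical):

```python
# CDP sketch: the splitter forwards each write to the appliance, which
# acknowledges it, journals it in order (with an optional application
# bookmark), and distributes it to the replica. DVR-like rollback rebuilds
# the replica from journal entries older than a chosen point.

class CdpAppliance:
    def __init__(self):
        self.seq = 0
        self.journal = []                 # ordered change log
        self.replica = {}                 # replica image built from journal

    def split_write(self, volume, data, bookmark=None):
        self.seq += 1
        self.journal.append({"seq": self.seq, "volume": volume,
                             "data": data, "bookmark": bookmark})
        self.replica[volume] = data       # distribute in write order
        return "ack"                      # acknowledged back to the splitter

    def rollback(self, before_seq):
        """Rebuild the replica from all changes strictly before before_seq."""
        self.replica = {}
        for e in self.journal:
            if e["seq"] < before_seq:
                self.replica[e["volume"]] = e["data"]
        return self.replica

app = CdpAppliance()
app.split_write("db-lun", "v1", bookmark="pre-patch")
app.split_write("db-lun", "v2")
```

Bookmarks make the rollback target application-aware: instead of choosing a raw sequence number, recovery can target "the write tagged pre-patch".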
78© Copyright 2011 EMC Corporation. All rights reserved.
VNX Remote Protection SuiteEnsure your data is physically protected all the time
R E M O T E P R O T E C T I O N S U I T E
RecoverPoint/SE Continuous Remote Replication (CRR)
• One solution to protect any host, any application
– Unified block and file replication
• Efficiently protect all your data
– WAN deduplication for bandwidth reduction up to 90%
• Customize RPOs from zero to hours for improved quality of service
• Immediate DVR-like recovery
Replicator*
• Scheduled asynchronous file system level replication including one to many and cascading replication
* Not available for VNX5100
MirrorView
• Synchronous or asynchronous one to many block replication solution
79© Copyright 2011 EMC Corporation. All rights reserved.
RecoverPoint/SE Remote Protection Process—Continuous Remote Replication (CRR)
1. Production data is sent to storage and split by the embedded VNX splitter
2. VNX splitter
3. Writes are acknowledged back from the RecoverPoint appliance
4. Appliance functions: Fibre Channel-to-IP conversion; deduplication; data reduction and compression; monitoring and management
5. Data is sequenced, checksummed, compressed, and replicated to the remote RecoverPoint appliances over IP or SAN
6. Data is received, uncompressed, sequenced, and checksummed
7. Data is written to the journal volume
8. Consistent data is distributed to the remote volumes
[Diagram: local site and remote site, each with volumes and a journal volume]
R E M O T E P R O T E C T I O N S U I T E
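The sequence/checksum/compress pipeline in steps 5 and 6 can be sketched with the standard library (an illustrative model; the real wire format and compression are RecoverPoint's own):

```python
# CRR pipeline sketch: writes are sequenced, checksummed, and compressed
# before crossing the WAN; the remote side decompresses, verifies each
# checksum, and applies the writes in sequence order to preserve
# write-order fidelity even if packets arrive out of order.

import zlib

def send(writes):
    """Package local writes for the WAN (sequence, checksum, compress)."""
    packets = []
    for seq, data in enumerate(writes):
        packets.append({"seq": seq,
                        "crc": zlib.crc32(data),       # integrity check
                        "payload": zlib.compress(data)})  # bandwidth saving
    return packets

def receive(packets):
    """Remote side: decompress, verify, and apply in write order."""
    applied = []
    for p in sorted(packets, key=lambda p: p["seq"]):  # restore ordering
        data = zlib.decompress(p["payload"])
        if zlib.crc32(data) != p["crc"]:
            raise ValueError("corrupt packet seq=%d" % p["seq"])
        applied.append(data)
    return applied

writes = [b"block A", b"block B", b"block C"]
packets = send(writes)
```

Deduplication would add one more stage before compression (dropping payloads whose checksum was already sent), which is where the quoted bandwidth reductions come from.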
80© Copyright 2011 EMC Corporation. All rights reserved.
RecoverPoint/SE Consistency Groups (CG)
• RecoverPoint ensures consistency across production and targets
• Maintain SLAs by assigning priorities to independent applications
• RecoverPoint enables independent replication of various applications
• RecoverPoint utilizes consistency groups to recover and prioritize data
• Supports single and multiple server (federated) applications
VNX with RecoverPoint/SE supports federated applications
[Diagram: separate consistency groups (CG 1–3) protecting applications—e.g., e-mail via CRR; SCM via CDP and CRR; OE via CRR; CRM via CDP and CRR.]
R E M O T E P R O T E C T I O N S U I T E
81© Copyright 2011 EMC Corporation. All rights reserved.
RecoverPoint/SE File System Replication
Use case/environment
• Unified replication between two VNX systems with RecoverPoint/SE
• Bi-directional replication is also supported
Value
• Full failover of protected file systems
• Improved DR-site ROI
Feature details
• Single consistency group for file system replication per array
• On NAS failover:
– All primary-site Data Movers fail over or shut down
– Non-file-system storage on the primary site remains available
RecoverPoint/SE support for NAS file system disaster recovery on VNX
[Diagram: app/DB servers with primary storage (DM1 production, DM2 dev/test) replicating via RecoverPoint file and block consistency groups to a DR site (DM1 standby).]
82© Copyright 2011 EMC Corporation. All rights reserved.
VNX Replicator
• Service-level specifications
– Automated, business-oriented policy definitions for recovery point objectives (RPOs)
– Set interconnect Quality of Service (QoS) via scheduled bandwidth throttling
• Advanced functionality
– 1-to-N replication for data distribution
– Cascading replication for multi-site disaster recovery
– N-to-1 replication for consolidation
• Scalability
– Up to 1,024 replication sessions
• Common replication management with Unisphere
• Integrates with writeable NAS snaps
• Supported concurrently with RecoverPoint
Native file-system-level replication focused on ease of use
Easily specify the replication RPO and interconnect Quality of Service
[Diagram: cascading replication—production site to a local disaster-recovery site at a 10-minute RPO over the LAN, then to a remote disaster-recovery site at a 2-hour RPO over the WAN; each site holds the FS/LUN, VDM, and snaps.]
R E M O T E P R O T E C T I O N S U I T E
83© Copyright 2011 EMC Corporation. All rights reserved.
LUN-level Disaster Recovery—MirrorView/Synchronous
• Cost-effective block replication
– Supports multi-site replication
• Tracks host writes while the link to the secondary is down
– Uses a bitmap (the fracture log) that maps the entire primary mirror
– When the secondary is available again, sends only the changed data
• Enables partial sync; avoids a full re-sync
– Minimizes the customer's exposure to out-of-sync data
Cost-effective synchronous block replication
[Diagram: the fracture-log bitmap on the primary flags regions written while the secondary is unreachable; on reconnect, only the flagged regions are copied to the secondary.]
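The fracture-log mechanism can be sketched as follows (a minimal illustrative model; region size and metadata handling are simplified relative to MirrorView):

```python
# Fracture-log sketch: while the link to the secondary is down, a bitmap
# flags each primary region that receives a write; on reconnect only the
# flagged regions are resent, avoiding a full resynchronization.

class FractureLog:
    def __init__(self, regions):
        self.bitmap = [0] * regions
        self.fractured = False        # True while the secondary is unreachable

    def on_write(self, region):
        if self.fractured:
            self.bitmap[region] = 1   # remember what changed while down

    def resync(self, primary, secondary):
        """Copy only the changed regions to the secondary, then clear."""
        sent = 0
        for i, dirty in enumerate(self.bitmap):
            if dirty:
                secondary[i] = primary[i]
                sent += 1
        self.bitmap = [0] * len(self.bitmap)
        self.fractured = False
        return sent                   # regions resent: a partial sync

log = FractureLog(regions=4)
primary = ["E", "J", "O", "R"]
secondary = list(primary)            # in sync before the link drops
log.fractured = True                 # link to the secondary goes down
log.on_write(1); primary[1] = "J2"   # one region changes while fractured
```

Because the bitmap covers the entire primary mirror, the cost of tracking is fixed regardless of how long the link stays down; only the resync cost scales with the amount of changed data.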
84© Copyright 2011 EMC Corporation. All rights reserved.
VNX Application Protection Suite
Automate application copies and prove compliance
Replication Manager
• Automated "application-consistent" copy management
• User privileges enable self-service replication
Data Protection Advisor for Replication
• Increased visibility into all application recovery points
• Monitor, alert, troubleshoot, and report
• Prove applications are recoverable
A P P L I C A T I O N P R O T E C T I O N S U I T E
85© Copyright 2011 EMC Corporation. All rights reserved.
Replication Manager Local or Remote Protection
• Automates the creation, management, and use of all EMC disk-based, point-in-time replication technologies
• Auto-discovers the environment
• Intelligence to orchestrate replicas with deep application awareness
• Easy to use single GUI with advanced wizards
• Assignable roles for self-service replication
Improved business continuity with automated disk-based replicas
Integrated with major enterprise applications
A P P L I C A T I O N P R O T E C T I O N S U I T E
86© Copyright 2011 EMC Corporation. All rights reserved.
Replication Manager Example: RecoverPoint
1. The Replication Manager server freezes the application
2. The Replication Manager server requests a VSS-compliant bookmark
3. The Replication Manager server thaws the application
[Diagram: Exchange production and mount hosts; RecoverPoint maintains a bookmarked local CDP copy and journal on the SAN, and a bookmarked remote CRR copy and journal across the WAN.]
A P P L I C A T I O N P R O T E C T I O N S U I T E
87© Copyright 2011 EMC Corporation. All rights reserved.
Replication Manager Example: NFS and Oracle Replicas
• Automate local and remote replicas with application consistency for backup acceleration or business repurposing
– Local (in-frame) uses VNX SnapSure
– Remote (out-of-frame) uses VNX Replicator
• NFS-based storage configurations on Linux
– VNX NFS file systems mounted as network file systems on a Linux host (physical or a VMware Linux guest operating system)
– Oracle data on a network file system or dNFS mounted on a VNX NAS file system
• Oracle support includes:
– Oracle 10g, 11g, and 11g R2 on Linux
– Real Application Clusters (RAC) to single-instance cloning
– Backs up a database or tablespace
– Optionally backs up the Flash Recovery Area (FRA), archive logs, and parameter files
– Application consistency through Oracle hot backup
• Backs up control files, data files, and archive logs
[Diagram: a source VNX file system snapped locally (NFS to the production host) and replicated over the IP network to a destination VNX file system mounted via NFS by a secondary host.]
A P P L I C A T I O N P R O T E C T I O N S U I T E
88© Copyright 2011 EMC Corporation. All rights reserved.
Replication Manager for VMware Environments
• Common management console for EMC snaps, clones and CDP replicas
• Application and VM consistency– Exchange, SQL Server, SharePoint, Oracle
• Eliminates costly, error-prone scripting of replicas with a point-and-click, wizard-driven GUI
– Backup acceleration eliminates backup windows
– Business continuity for instant restore or surgical repairs to production
– Repurposing to dev/test environments
• No impact on production performance
• Faster, low impact on VMware ESX Server
[Diagram: vSphere VI Client and a Replication Manager proxy server managing a SnapView replica of production data on the VNX.]
A P P L I C A T I O N P R O T E C T I O N S U I T E
89© Copyright 2011 EMC Corporation. All rights reserved.
Application integration: Exchange, SQL Server, Oracle, file systems
Data Protection Advisor for ReplicationAlways know you can recover
• Define and measure protection policies
• Visibility and alerts for:– All recovery points– Recovery gaps– Replication lag– Configuration changes
• Integrated reporting– Chargeback services– Service level compliance
• Host-less option
[Diagram: replication analysis—instant status (completeness, consistency, SLA compliance) by application or device]
A P P L I C A T I O N P R O T E C T I O N S U I T E
90© Copyright 2011 EMC Corporation. All rights reserved.
Recovery point—A collection of images that can be used to recover an application or storage object
DPA Application Mapping
[Diagram: an Oracle server's data files, control files, archive logs, and redo logs mapped across primary storage (Vol1, Vol2), local replication, and remote replication to a recovery server. Archive logs are required with hot backup; redo logs are required with cold backup; control files are always required.]
• Application awareness
• Application recovery logic awareness
• Data protection process awareness
• Host/storage mapping awareness
• Right backup mode
A P P L I C A T I O N P R O T E C T I O N S U I T E
91© Copyright 2011 EMC Corporation. All rights reserved.
Customize Protection Policies
• Create protection policies– Continuous– Point-in-time
• Assign policy to server, group of servers, application, or line of business
• Monitor policy compliance
• Schedule reports to prove compliance to defined policies
Measure and monitor protection policy compliance
A P P L I C A T I O N P R O T E C T I O N S U I T E
92© Copyright 2011 EMC Corporation. All rights reserved.
Next-Generation Unified Storage
Optimized for today's virtualized IT
VNX7500
VNX5700
VNX5100
VNX5500
VNX5300
• Scales large and fast
• Self-optimizes for performance and capacity efficiency
• Works on any network
• Everything fully automated
MOST POWERFUL SYSTEM
MOST POWERFUL SOFTWARE
Affordable. Simple. Efficient. Powerful.
93© Copyright 2011 EMC Corporation. All rights reserved.
THANK YOU