Cisco Hyper-Converged Infrastructure: HyperFlex
Silvo Lipovšek
Systems Engineer
Cisco
© 2017 Cisco and/or its affiliates. All rights reserved.
Infrastructure Landscape
Best of Breed Solutions: Converged and Hyper-Converged Infrastructure

ASAP Data Center and Cloud Architecture

Hardware platforms:
• Compute: UCS
• Storage: MDS 9000
• Networking: Nexus, HyperFlex
• Security: ASA 5000 NGFW

Software and orchestration (SDN):
• UCS Director
• Cisco CloudCenter
• Cisco Prime Services Catalog
HyperFlex
Software Modules Inside a Server
• The Controller VM has direct control of the drives
• The VAAI plugin offloads snapshot and clone operations
• The IO Visor module presents NFS to ESX and stripes IO
• stHypervisorSvc coordinates native recovery snapshots and replication
[Diagram: guest VMs, the IO Visor, and the Controller VM (with VAAI and stHypervisorSvc) running on the hypervisor; the Controller VM owns the SSDs and HDDs backing the datastore/volume]
Hyperconverged Scale-Out and Distributed File System
• Start with as few as three nodes
• The Hyperconverged Data Platform installs in minutes
• Add servers, one or more at a time
• Network fabric policy configures QoS settings
• Data is distributed and rebalanced across servers automatically
• Retire older servers
[Diagram: every node runs a Controller VM and hypervisor, all joined into one Hyperconverged Data Platform]
Building on the Right Foundation: Cisco HX Data Platform
• Built from the ground up for hyperconvergence
• Distributed log-structured file system designed for scale-out, distributed storage
• Advanced data services (snapshots, clones) and data optimization (inline dedupe, compression) without trade-offs
• Better flash endurance and disk performance
• Computing, storage, networking, and hypervisor integration eliminates management silos
[Diagram: a unique architecture, one distributed file system spanning the local file systems of all nodes]
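The inline deduplication called out above can be pictured with a toy content-addressed store. This is a sketch of the general technique only, not the HX Data Platform implementation; the block size and hash choice are illustrative assumptions:

```python
import hashlib

def dedupe_blocks(data, block_size=4096):
    """Toy content-addressed dedupe (an illustration of the technique, not
    the HX implementation): split data into fixed-size blocks and store
    each unique block once, keyed by its SHA-256 digest."""
    store = {}   # digest -> block bytes, one copy per unique block
    refs = []    # digests in write order; duplicates cost only a reference
    for off in range(0, len(data), block_size):
        block = data[off:off + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # physically store only the first copy
        refs.append(digest)
    return store, refs
```

Identical blocks hash to the same digest, so repeated data costs one stored copy plus references, which is why dedupe can run inline without a post-process pass.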
Dynamic Data Distribution
• The HX Data Platform stripes data across all nodes simultaneously, leveraging cache across all SSDs for fast writes
• Balanced space utilization: no data migration required following a VM migration
• Systems built on conventional file systems write locally, then replicate, creating performance hotspots
[Diagram: conventional stacks writing locally per node vs. the HX Data Platform distributing a VM's data across all nodes]
Capacity and Network Utilization
• Balanced space utilization: no data migration required following a VM migration
• Less stress on the network
[Diagram: a VM moving between hypervisors (steps 1 to 3) while its data stays in place in the shared datastore]
High Resiliency, Fast Recovery
• The platform can sustain two simultaneous node failures without data loss; the replication factor is tunable
• If a node fails, the evacuated VMs re-attach with no data movement required
• A replacement node is automatically configured via a UCS Service Profile
• The HX Data Platform automatically redistributes data to the new node
Data Protection and High Availability
Data is protected by replication across the cluster nodes.

Replication Factor 3 (RF3):
• The default and recommended replication factor is 3
• Every block is written to 3 different nodes in the cluster (each write IO is replicated twice to reach 3 copies)
• Higher availability to survive multi-point failures; higher device protection
• Reduces usable capacity to 33% of raw disk capacity

Simultaneous failures supported with RF3:
• 3 or 4 node cluster: 1 node / 2 drives*
• 5+ node cluster: 2 nodes / 2 drives*
*drives across different nodes
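The capacity cost of replication follows directly: with RF3 every block exists three times, so usable capacity is roughly total raw capacity divided by the replication factor. The sketch below ignores filesystem overhead and dedupe/compression gains, and the 9.6 TB per-node raw figure is a made-up example:

```python
def usable_capacity_tb(nodes, raw_tb_per_node, rf=3):
    """Approximate usable cluster capacity before dedupe/compression and
    filesystem overhead: total raw capacity divided by the replication
    factor, since every block is written to rf different nodes."""
    return nodes * raw_tb_per_node / rf

# RF3 leaves a third of raw capacity usable; RF2 (HX Edge) leaves half
four_node_rf3 = usable_capacity_tb(nodes=4, raw_tb_per_node=9.6)         # ~12.8 TB
three_node_rf2 = usable_capacity_tb(nodes=3, raw_tb_per_node=9.6, rf=2)  # ~14.4 TB
```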
Independent Scaling of Compute and Capacity
• Add nodes, or scale cache and capacity within nodes
• Scale compute-only with blades or racks: non-HyperFlex hosts can connect to HX storage through the IOVisor
[Diagram: converged nodes running the HX Data Platform, plus compute-only blades and racks attaching via the IOVisor]
Supporting External Storage Connectivity
• An HX converged node supports mounting external storage arrays via NFS, iSCSI, FC, and FCoE:
  • The storage array must be on the Cisco HCL
  • VM migration and data transfer
  • Co-existence with an existing SAN environment
  • Present RDMs from an FC SAN to VMs
  • Use VMware Storage vMotion for migrations over Ethernet
  • Use for backup and other application support
• All direct-connect modes are supported except FC direct connect (NFS and iSCSI work with appliance ports)
• The HX datastore is not exportable outside the HX cluster
[Diagram: an HX UCS domain where VMs mount both the HX datastore and an external iSCSI/NFS or FC storage volume]
Data Distribution and Non-Disruptive Operations
• Stripes blocks of virtual disk files across servers using a hashing algorithm
• Replicates one or two additional copies to other servers
• Handles entire server or disk failures
• Restores back to the original number of copies
• Rebalances VMs and data after replacement
• Rolling "one-click" software upgrades
[Diagram: blocks A through E of File.vmdk and their replicas (A1-A3, B1-B3, ...) striped across four nodes]
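The hash-based striping described above can be sketched as a placement function: hashing the (file, block index) pair spreads blocks evenly across nodes. This illustrates the general idea only; the function, hash, and names are hypothetical, not HX internals:

```python
import hashlib

def node_for_block(vdisk, block_index, num_nodes):
    """Map a virtual-disk block to a node by hashing (vdisk name, block
    index), so writes land evenly across the cluster instead of filling
    the local node. Toy illustration of hash-based striping."""
    key = "{}:{}".format(vdisk, block_index).encode()
    return int.from_bytes(hashlib.md5(key).digest()[:8], "big") % num_nodes

# Stripe the first blocks of File.vmdk across a 4-node cluster
placement = [node_for_block("File.vmdk", i, 4) for i in range(8)]
```

Because placement is a pure function of the key, any node can locate a block without a central directory lookup.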
Native VM Clones for Rapid Provisioning
• Pointer-based writable snapshots (instantaneous clones)
• VAAI integrated
• VM-level granularity
• Batch creation GUI
• Apply unique names
• Use a customization spec to apply IP addresses
• A powerful tool to rapidly set up a large set of VMs using just vCenter (without scripting or View Composer); up to 256 clones in parallel per job
• The golden/base VM can be a template, powered on, or powered off
Real Innovation
• Integrated management and data services
• Dynamic data distribution
• Continuous data optimization
• Independent scaling of compute and capacity
Usable Cluster Capacity (TB) - HX M5 Nodes

Node models and capacity drive options: HXAF220 M5 (960G or 3.84TB SSD), HXAF240 M5 (960G or 3.84TB SSD), HX220c M5 (1.2TB or 1.8TB HDD), HX240c M5 (1.2TB or 1.8TB HDD); values assume RF3 except the Edge rows.

3 Nodes / 4 Nodes / 5 Nodes / 6 Nodes / 7 Nodes / 8 Nodes:
6.01 / 8.02 / 10.03 / 12.03 / 14.04 / 16.05
12.03 / 16.05 / 20.06 / 24.07 / 28.09 / 32.10
6.01 / 8.02 / 10.03 / 12.03 / 14.04 / 16.05
34.61 / 46.14 / 57.68 / 69.22 / 80.75 / 92.29
Edge w/ RF2: 12.03

3 Nodes / 4 Nodes / 5 Nodes / 6 Nodes / 7 Nodes / 8 Nodes / 16 Nodes:
4.81 / 6.42 / 8.02 / 9.63 / 11.23 / 12.84 / 25.68
25.68 / 34.24 / 42.80 / 51.36 / 59.92 / 68.48 / 136.97
4.81 / 6.42 / 8.02 / 9.63 / 11.23 / 12.84 / 25.68
73.83 / 98.44 / 123.05 / 147.67 / 172.28 / 196.89 / 393.78
Edge w/ RF2: 38.52

Notes: for max capacity, assume full SSD/HDD population (23 drives on HXAF240) and RF3. HX Edge capacity uses RF2 and can be configured with 3 or 6 drives. The above calculations are before deduplication and compression; effective capacity will be higher. Additional capacity headroom is recommended; consult your Cisco CSE for the latest sizing and design guidance.
VMware vCenter Plug-In
• Extends virtualization management seamlessly
• No switching between management consoles
• View storage alerts/alarms alongside ESX alerts/alarms
• Command-line interface for automation
Native HyperFlex Connect HTML5 GUI
REST APIs
• Well-documented syntax and examples with the REST API explorer
• Secure token-based access with RBAC and auditing
• Ecosystem: UCS Mgmt., Services (CMS), 3rd party (Veeam, etc.)
• Expanded coverage by adding AAA, Encryption, Data Protection, Support, and Performance APIs
API explorer accessed via: http://<Cluster-IP>/apiexplorer
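A minimal sketch of the token-based access flow using only the Python standard library. The endpoint path and payload shape below are illustrative assumptions; verify the real syntax in the REST API explorer at http://<Cluster-IP>/apiexplorer before use:

```python
import json
import urllib.request

# ASSUMPTION: AAA-style auth endpoint; confirm the actual path in the API explorer
AUTH_URL = "https://{ip}/aaa/v1/auth?grant_type=password"

def build_auth_request(cluster_ip, username, password):
    """Build (without sending) the POST that exchanges credentials for a token."""
    body = json.dumps({"username": username, "password": password}).encode()
    return urllib.request.Request(
        AUTH_URL.format(ip=cluster_ip),
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def bearer_headers(token):
    """Subsequent calls carry the token; RBAC and auditing apply server-side."""
    return {"Authorization": "Bearer " + token}
```

Sending the request with `urllib.request.urlopen` returns the token, which is then attached to every later API call via `bearer_headers`.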
Disk Options

Non-Volatile Memory Express (NVMe)
• Replaces the SAS write-caching disk
• More efficient, streamlined protocol
• Uses PCI Express instead of a legacy serial bus
• Multiple queues, more commands per queue, high parallelism
• Higher performance, lower latency

Self-Encrypting Disk (SED)
• Used for the write-caching disk and capacity disks
• The disk hardware encrypts all data written to the device
• Can use UCSM-derived local encryption keys, or keys from a centralized KMS via KMIP
• Secure erase and re-key
Intel Optane Support (HX 3.0)
• Caching drive option available with HX 3.0
• Available on M5 only
• All-Flash configurations only
• Part number: HX-NVMEXP-I375
• 375G drive capacity, 3D XPoint™
• Advantages:
  • Higher endurance: 20.5 PBW (~30 DWPD)
  • Higher drive-level performance
Intel® Optane™ SSD DC P4800X Series (formerly known as Cold Stream)
Comparison of Hybrid Options

Small Form Factor (SFF), performance optimized:
• Drive options: 1.2TB, 1.8TB (2.5" 10K rpm SAS)
• Max capacity (per node): 37.7TiB raw (11.5TiB usable)
• Cost capacity ($/GB) multiple: 1X
• Performance: Better
• Features: Parity

Large Form Factor (LFF), capacity optimized:
• Drive options: 6TB, 8TB (3.5" 7.2K rpm)
• Max capacity (per node): 87.3TiB raw (26.8TiB usable)
• Cost capacity ($/GB) multiple: < 0.5X
• Performance: Lower
• Features: Parity
HX Replication Overview
Use case: replication for disaster recovery. VMs are replicated bi-directionally over the WAN between a primary and a secondary cluster (VM1 to VM1', VM2' from VM2).

Supported configuration:
• Point-to-point (1-to-1) replication
• Different number of cluster nodes on each site
• Replicate between HX220 and HX240 clusters (new in 2.6)
• Replicate between Hybrid and All-Flash clusters
• Recover from the latest snapshot copy
• Supports planned and unplanned recovery
• Active-Active and Active-Passive DR via bi-directional replication
• Enabled to integrate with 3rd-party DR orchestration products

Underlying technology:
• Snapshot-based, periodic replication for disaster recovery
• Scale-out, performant, reliable, and network-optimized
• VM-centric replication
• Single PIT (latest) image for recovery
• Flexible RPO from 15 minutes to 24 hours
• HTML5, REST API, and CLI based operations and monitoring

New in 2.6: support for All-Flash to Hybrid replication for DR.
HyperFlex Stretched Cluster: Cloud Scale Data Platform (HX 3.0)
Power mission-critical apps with disaster avoidance, zero RPO, automated DR, and maximum uptime.
[Diagram: apps and databases running at Site-A and Site-B on one HX Data Platform, with synchronous replication between the sites]
HyperFlex Stretched Cluster: Zero RPO, Near-Zero RTO (HX 3.0)

Configuration support:
• Single stretched cluster across 2 sites, symmetric configuration
• A 3rd site hosts a "Witness Server" (small VM)
• 8 HX nodes on each site

IO path:
• Active-Active sites: VMs active on each site
• VM read IOs served locally
• VM write IOs synchronously written across sites; 2 copies on each site

HA operations:
• Recover from a site failure or a local failure
• Failover and vMotion of VMs
• Split-brain handling

Management:
• Cross-site cluster creation
• Non-disruptive online rolling upgrade
• Site awareness in HX Connect
• Site-specific alarms and events on a single dashboard
Automated Availability Zones: 64-Node Scale with Resiliency (HX 3.0)
• 64-node and capacity scalability: up to 32 HX nodes plus up to 32 compute nodes
[Diagram: cluster nodes partitioned into Availability Groups 1 through 3]
Logical Availability Zones (LAZ) (HX 3.0)
• Cluster scale with high availability
• Increased resiliency without added manageability overhead
• How does it work?
  • HX nodes are grouped into logical "availability groups" (N/A for compute nodes)
  • HXDP never places 2 copies of the data in the same availability group
  • Clusters with LAZ can survive more than 2 simultaneous node failures without data loss or loss of availability
  • Tolerates more independent failures
[Diagram: HX Data Platform nodes partitioned into Availability Groups 1 through 3]
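The placement rule, never two copies of a block in the same availability group, can be sketched as follows. The round-robin zone and node choice is an illustrative assumption, not the HXDP algorithm:

```python
def place_copies(block_id, zones, rf=3):
    """Pick rf nodes for a block's copies so that no two copies share an
    availability zone: losing every node in one zone costs at most one
    copy. zones is a list of per-zone node-name lists. Toy illustration."""
    if rf > len(zones):
        raise ValueError("replication factor cannot exceed the number of zones")
    picked = []
    for i in range(rf):
        zone = zones[(block_id + i) % len(zones)]  # rf consecutive zones, rotated per block
        picked.append(zone[block_id % len(zone)])  # spread load within each zone too
    return picked
```

Since the rf chosen zone indices are always distinct, a whole-zone outage removes at most one of a block's copies, which is what lets a zoned cluster survive more than two simultaneous node failures.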
LAZ Failure Scenario, LAZ: Off
[Diagram: a 16-node cluster without availability zones; the cluster state goes from Online to Offline after multiple simultaneous node failures]
LAZ Failure Scenario, LAZ: On
[Diagram: the same 16-node cluster grouped into Zones 01 through 05; after the same node failures the cluster state remains Online]
Veeam Backup & Replication
• Backups and restores via a Veeam proxy VM
• CBT-aware, compressed and deduplicated backups
• Cluster-to-cluster replication for disaster recovery
• Full VM, file, and item-level recovery
• WAN acceleration, replica traffic encryption, and throttling
• Recovery time and point objectives < 15 minutes for apps and data
[Diagram: two HX Data Platform clusters, each behind Fabric Interconnects with a LAN/SAN front end, replicating between sites]
Purpose-Built Management Tools
• Automation / orchestration
• Multi-site operations
• Hyperconverged remote / branch / edge
• Performance
One Interface, One API
Cisco Intersight
HX Tools

HX Sizer
• End-to-end sizing
• Includes compute (CPU and memory), storage performance, and capacity
• Application templates to aid application-based sizing

HX Workload Profiler
• OVA deployed in existing environments
• Quantifies usage for sizing
• Integrates with the sizer to automate end-to-end sizing of environments

HX Bench
• OVA with benchmarking built on vdbench
• Benchmarking made easy
• Follows industry-standard benchmarking practices for a realistic readout
Cisco HX Bench User Interface
Benchmark 1: Performance Results Using vdbench Tests
8K 100% Sequential Write Workload
• Target write latency < 10ms
• 16 OIO per disk
• 27% latency reduction and IOPs gain with 40 GbE
• 6% latency reduction and IOPs gain on 2.5 + NVMe
• 8% latency reduction and IOPs gain with HX 2.6 on M5

Maximum sequential write:

Config         IOPs      Latency (ms)
2.0            96,426    10.61
2.0 + 40GbE    122,514   8.35
2.5 + 40GbE    130,015   7.87
2.6 M5         140,397   7.30
Benchmark 2: Performance Results Using vdbench Tests
8K 100% Random Read Workload
• Target read latency < 5ms
• 32 OIO per disk
• 7% latency reduction and IOPs gain with 40 GbE
• 23% latency reduction and IOPs gain on 2.5 + NVMe
• 3% latency reduction and IOPs gain with HX 2.6 on M5

Maximum random read:

Config         IOPs      Latency (ms)
2.0            389,526   5.26
2.0 + 40GbE    415,449   4.93
2.5 + 40GbE    511,992   4.00
2.6 M5         529,241   3.87
Benchmark 3: Performance Results Using vdbench Tests
100% Random 70/30% R/W with Varying Block Sizes
• Target read latency < 5ms and write latency < 10ms for 4-16K blocks
• 8 OIO per disk
• ~6% latency reduction and IOPs gain with 40 GbE at 4-8K; 50-100% gains for larger blocks
• 10-22% latency reductions and IOPs gains with HyperFlex 2.5 and NVMe caching disks
• 3-11% latency reductions and IOPs gains with HyperFlex 2.6 on M5 generation servers

Results (IOPs / read latency ms / write latency ms per block size):

Config         4K                      8K                      16K                     64K                      256K
2.0            150,673 / 2.12 / 6.36   130,511 / 2.40 / 7.47   92,130 / 3.49 / 10.35   34,965 / 10.75 / 23.70   10,079 / 45.60 / 62.80
2.0 + 40 GbE   152,434 / 2.22 / 6.00   142,721 / 2.36 / 6.47   111,905 / 3.13 / 8.03   51,146 / 6.96 / 17.16    19,712 / 11.68 / 59.28
2.5 + 40 GbE   161,131 / 2.24 / 5.37   160,290 / 2.09 / 5.75   136,433 / 2.45 / 6.78   56,339 / 6.30 / 15.74    20,036 / 9.17 / 63.61
2.6 M5         180,037 / 1.72 / 5.49   173,193 / 1.91 / 5.39   140,842 / 2.36 / 6.61   61,227 / 5.45 / 15.18    20,732 / 8.96 / 61.37
Benchmark 4: Performance Results Using vdbench Tests
8K 100% Random 70/30% R/W IOPs Curve
• The curve test measures IOPs stability by calculating the standard deviation of IOPs as a percentage of the final average result, at various performance levels: first a 100% unthrottled test, then again at percentages of that maximum
• Maximum-speed, unthrottled tests will always have more variability due to more aggressive background tasks
• The 20% through 60% values are typically very low because the system is not being pushed very hard
• 65% improvement in IOPs stability with 40 GbE at 80% of maximum; HX 2.5 shows a further 78% improvement
• HX 2.5 also shows a 54% improvement in IOPs stability in the 100% unthrottled test
• HX 2.6 on M5 generation servers shows improvements in all tests, including 28% at 100% and 45% at 80%

Results (IOPs / IOPs STDV / avg latency ms per throttle level):

Config         100%                     20%                    40%                    60%                     80%
2.0            138,190 / 19.0% / 3.71   27,666 / 0.5% / 1.07   55,299 / 0.5% / 1.57   82,960 / 0.5% / 2.20    110,594 / 6.6% / 3.14
2.0 + 40 GbE   151,568 / 18.6% / 3.38   30,331 / 0.5% / 0.99   60,621 / 0.5% / 1.37   90,957 / 0.5% / 2.08    121,250 / 2.3% / 2.63
2.5 + 40 GbE   169,983 / 8.6% / 3.01    34,031 / 0.4% / 0.86   68,058 / 0.5% / 1.17   102,023 / 0.5% / 1.66   136,018 / 0.5% / 2.28
2.6 M5         173,193 / 6.2% / 2.95    34,699 / 0.2% / 0.80   69,331 / 0.2% / 1.21   103,953 / 0.2% / 1.68   138,580 / 0.3% / 2.38
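The stability metric used in these curve tests, the standard deviation of per-interval IOPs expressed as a percentage of the average, can be computed as follows (the sample series are made-up illustrations):

```python
from statistics import mean, pstdev

def iops_stability(samples):
    """Standard deviation of per-interval IOPs as a percentage of the
    average result: the stability metric of the curve tests above
    (lower is steadier)."""
    return 100.0 * pstdev(samples) / mean(samples)

# Hypothetical interval samples: a steady run vs. a bursty one
steady = [100_000, 101_000, 99_000, 100_500]
bursty = [140_000, 80_000, 120_000, 60_000]
```

A run that delivers roughly the same IOPs every interval scores near 0%, while a bursty run with the same average scores much higher, which is why the metric exposes background-task interference at full throttle.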
Benchmark 5: Performance Results Using vdbench Tests
I/O Blender 100% Random 70/30% R/W IOPs Curve
• The test measures IOPs stability by calculating the standard deviation of IOPs as a percentage of the final average result, at various performance levels: first a 100% unthrottled test, then again at percentages of that maximum
• Block size mix: 4K 40%, 8K 40%, 16K 10%, 64K 10%
• The 20% through 60% values are typically very low because the system is not being pushed very hard
• 30% improvement in IOPs stability with 40 GbE at 80% of maximum; HX 2.5 shows a further 84% improvement
• HX 2.5 also shows a 39% improvement in IOPs stability in the 100% unthrottled test
• HX 2.6 on M5 generation servers shows improvements in all tests, including 11% at 100% and 20% at 80%

Results (IOPs / IOPs STDV / avg latency ms per throttle level):

Config         100%                     20%                    40%                    60%                    80%
2.0            104,624 / 25.6% / 4.89   20,967 / 0.5% / 1.31   41,898 / 0.5% / 1.72   62,825 / 0.5% / 2.53   83,756 / 4.4% / 3.89
2.0 + 40 GbE   114,354 / 22.2% / 4.47   22,902 / 0.5% / 1.04   45,792 / 0.5% / 1.47   68,664 / 0.5% / 2.47   91,435 / 3.1% / 2.97
2.5 + 40 GbE   137,078 / 13.6% / 3.73   27,466 / 0.5% / 0.96   54,897 / 0.5% / 1.45   82,292 / 0.5% / 2.13   109,727 / 0.5% / 2.77
2.6 M5         137,416 / 12.1% / 3.72   27,496 / 0.2% / 0.80   54,990 / 0.3% / 1.49   82,456 / 0.3% / 1.96   109,955 / 0.4% / 2.89
Per VM Test Results Variation: IOPS in a 70/30 Workload, 1 Hour Duration
[Charts: per-VM IOPS (0 to 30,000) for twelve VMs (VM1-VM12) over roughly 180 measurement intervals, one chart for Vendor Y and one for HX]
HyperFlex Performance Testing Findings
• 3x higher VM density
• 3x reduced read/write latency
• 7:1 reduced IOPS variability
• 3x better TCO
• Predictable end-user experience
• More workloads on hyperconverged
Read the full ESG report: cisco.com/go/hyperflex
HyperFlex All-Flash Performance Consistency
Each dot represents a virtual machine and its average IOPS over an hour of load.
[Charts: IOPS per VM (0 to 1,200) across roughly 160 VMs; Cisco HyperFlex shows consistent performance for all VMs on the cluster, while Vendor B shows wildly inconsistent VM performance across the cluster]
HyperFlex Solutions / Workloads Portfolio
Cisco HyperFlex: HX Data Platform, data services and storage optimization

Current workloads / applications: VMware Horizon, Citrix XenDesktop, VSI on HX, Cisco UC, MS SQL, SAP (Non-Prod / Prod), Oracle RAC, Oracle Single Instance, Splunk, MS Exchange

Integration / interop: ACI, N1K, Commvault, Veeam, CloudCenter, UCSD, 3rd-party arrays, NVIDIA, HX Tools (for sizing)

Resources: HX homepage (external)