Hyper-V - Windows Server 2012 R2 - Overview
TRANSCRIPT
Alessandro Cardoso
Insight Practice Manager, Microsoft MVP
Twitter: @edvaldocardoso
Blog: http://cloudtidings.com
A technology evangelist and subject-matter expert in cloud, virtualization, and management.
He has experience managing, designing, planning, organizing, and leading complex global projects, acquired over 20+ years in IT, working across the government, health, education, and IT sectors.
MVP (Virtual Machine) since 2009 and a well-known speaker at IT-related events for more than 10 years. He recently wrote the book System Center VMM 2012 (http://wp.me/p15Fu3-mK) and reviewed the following books: Windows 2012 Hyper-V Cookbook, Windows 2012 Cluster, VMware vSphere 5.1 Cookbook.
Design for Hyper-V on SMB 3.0 storage with RDMA (SMB Direct)
New-VMSwitch -Name "External" -NetAdapterName "VM-TEAM" -AllowManagementOS $false
New-VMSwitch -Name "MGMT-LM-CSV-SWITCH" -NetAdapterName "MGMT-LM-CSV-TEAM" -AllowManagementOS $false -MinimumBandwidthMode Weight
Set-VMSwitch "MGMT-LM-CSV-SWITCH" -DefaultFlowMinimumBandwidthWeight 100
Add-VMNetworkAdapter -ManagementOS -Name Management -SwitchName MGMT-LM-CSV-SWITCH
Add-VMNetworkAdapter -ManagementOS -Name LM -SwitchName MGMT-LM-CSV-SWITCH
Add-VMNetworkAdapter -ManagementOS -Name CSV -SwitchName MGMT-LM-CSV-SWITCH
Set-VMNetworkAdapter -ManagementOS -Name LM -MinimumBandwidthWeight 40
Set-VMNetworkAdapter -ManagementOS -Name CSV -MinimumBandwidthWeight 5
Set-VMNetworkAdapter -ManagementOS -Name Management -MinimumBandwidthWeight 5
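For the SMB Direct design above, a quick way to confirm RDMA is actually in play end-to-end is to check the adapters and the SMB client; a sketch (adapter names and output will differ per environment):

```powershell
# List adapters with RDMA capability and whether it is enabled
Get-NetAdapterRdma | Format-Table Name, Enabled

# Confirm the SMB client sees the interfaces as RDMA-capable
Get-SmbClientNetworkInterface | Format-Table FriendlyName, RdmaCapable

# After traffic starts, verify SMB Multichannel chose an RDMA path
Get-SmbMultichannelConnection | Format-Table ServerName, ClientRdmaCapable, ServerRdmaCapable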
Legacy Device Removed | Replacement Device | Enhancements
IDE Controller | Virtual SCSI Controller | Boot from VHDX (64 TB max size, online resize)
IDE CD-ROM | Virtual SCSI CD-ROM | Hot add/remove
Legacy BIOS | UEFI firmware | Secure Boot
Legacy NIC | Synthetic NIC | Network boot with IPv4 & IPv6
Floppy & DMA Controller | (removed) | No floppy support
UART (COM ports) | Optional UART for debugging | Faster and more reliable
i8042 keyboard controller | Software-based input | No emulation - reduced resources
PS/2 keyboard | Software-based keyboard | No emulation - reduced resources
PS/2 mouse | Software-based mouse | No emulation - reduced resources
S3 video | Software-based video | No emulation - reduced resources
PCI bus | VMBus |
Programmable Interrupt Controller (PIC) | No longer required |
Programmable Interrupt Timer (PIT) | No longer required |
Super I/O device | No longer required |
PING is a poor tool to evaluate a live migration: ICMP works at the IP layer, while TCP is what makes a live migration appear seamless.
Live Migrate VM and Storage to Clusters
Live Migrate VM and Storage Between Clusters
Live Migrate VM and Storage to Stand-Alone Server
You can move a VM anywhere in your datacenter with zero downtime!
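A shared-nothing live migration like this is driven from PowerShell with Move-VM; a minimal sketch (host, VM, and path names are placeholders):

```powershell
# Live migrate the VM and its storage to a stand-alone destination host
Move-VM -Name "VM01" -DestinationHost "HV-HOST2" `
        -IncludeStorage -DestinationStoragePath "D:\VMs\VM01"
```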
• Ability to do multiple live migrations in parallel
• An unlimited number of live migrations can be initiated in parallel
• Default configuration of 2 simultaneous live migrations per host
• Wield this power wisely
• An excessive number of simultaneous migrations may actually result in longer overall times than running them serially
• Let's discuss on the next slide…
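The per-host limits behind those defaults are tunable; a sketch (the value 4 is just an example, not a recommendation):

```powershell
# Raise the cap on simultaneous live and storage migrations for this host
Set-VMHost -MaximumVirtualMachineMigrations 4 -MaximumStorageMigrations 4

# Check the current settings
Get-VMHost | Format-List MaximumVirtualMachineMigrations, MaximumStorageMigrations
```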
[Chart: Time to Drain 32 VMs - elapsed time versus number of simultaneous live migrations (1, 3, 32). Disclaimer: results will vary based on hardware.]
[Diagram: several VMs being migrated in parallel over the network.]
• Streamlines 'Patch Tuesday'
• Zero-downtime patching!
• Coordinates with the Windows Update Agent (WUA)
• Updates in a rolling fashion, 1 node at a time
• Serially steps through all nodes
• The coordinator can be made clustered, for Self-Updating mode
[Diagram: the Admin initiates Cluster-Aware Updating; the Update Coordinator drives the process across the cluster nodes.]
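Cluster-Aware Updating can be kicked off on demand from PowerShell; a sketch, assuming a cluster named "HV-CLUSTER" and the in-box Windows Update plug-in (verify plug-in and schedule parameters against your environment):

```powershell
# Scan and install updates across the cluster one node at a time,
# draining roles off each node before patching and rebooting it
Invoke-CauRun -ClusterName "HV-CLUSTER" `
              -CauPluginName "Microsoft.WindowsUpdatePlugin" `
              -MaxRetriesPerNode 3 -Force

# For Self-Updating mode, add the clustered CAU role with a schedule
Add-CauClusterRole -ClusterName "HV-CLUSTER" -DaysOfWeek Tuesday -WeeksOfMonth 2 -Force
```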
• Move a VHD or VHDX from one host to another with zero downtime
• Storage Migration between hosts without shared storage is done over SMB protocol
• Storage Migration accelerated by arrays that support Offloaded Data Transfer (ODX)
• Enables draining a storage array for planned maintenance
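The storage migration described above is a one-liner in PowerShell; a sketch with placeholder names (the destination here is an SMB share, but a local path works the same way):

```powershell
# Move the running VM's virtual disks, configuration, and checkpoints
# to a new storage location with zero downtime
Move-VMStorage -VMName "VM01" -DestinationStoragePath "\\SOFS\VMShare\VM01"
```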
[Diagram: a client is accessing a VM; its VHDX is to be storage-migrated to another disk while the VM stays running and keeps servicing clients.]
• New VHDX created on destination storage
• Reads are from the Source VHDX
• Writes go to the Source VHDX and are also mirrored synchronously to the Destination VHDX
• Source VHDX data is copied over to the Destination VHDX
• Only unchanged blocks are copied over (blocks already mirrored by the synchronous writes are skipped)
• Once all data is synchronized, reads and writes are switched to the new VHDX
• The Source VHDX is only removed once the VM is verified to be running on the Destination VHDX
• Enables roll-back
• Protects from unplanned downtime:
  • Hardware
  • Host OS
  • VM
  • Guest OS
  • Apps in VMs
• Session state lost
[Diagram: VM health-monitoring stack on Node 1 and Node 2 - user mode: ClusSvc, VMMS, and RHS with vmclusres.dll; kernel mode: NetFT; inside each VM: the Guest OS and its VDev.]
• VMs restarted on another node
• VM restarted on the same node
• Guest OS restarted on the same node
• Live migrate VMs to other hosts
• Live migrate VMs to different servers to load balance
[Diagram: storage and replication options - FC, SAS RBOD, iSCSI, FCoE, SAS JBOD, Shared Storage, RAID HBA, Software Replication, Hardware Replication, SMB, Data Replication, Application Replication, Storage Spaces.]
• Enables all servers in a Failover Cluster to access a common NTFS volume
• Provides a layer of abstraction above NTFS
• No dismounting and remounting of volumes
• Faster failover times (i.e. less downtime)
[Diagram: a SAN connectivity failure on one node - its I/O is redirected over the network via the Coordination Node; the VM running on Node 2 is unaffected, and VMs can then be live migrated to another node with zero client downtime.]
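A sketch of putting a disk into CSV and checking for redirected I/O afterwards (the disk name is a placeholder):

```powershell
# Turn an available cluster disk into a Cluster Shared Volume
Add-ClusterSharedVolume -Name "Cluster Disk 1"

# Check whether any node is in redirected-access mode for a CSV
Get-ClusterSharedVolumeState | Format-Table Name, Node, StateInfo
```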
• Application within VM crashes, application automatically restarts or fails over
• Guest OS needs patching or VM needs maintenance, application moved to other node
Guest Clustering with commodity storage
• Sharing VHDX files provides shared storage for Hyper-V Failover Clustering
• Maintains separation between infrastructure and tenants
• The shared VHDX is presented to the VMs as a shared virtual SAS disk
• Works on block storage (Cluster Shared Volumes) or file-based storage (Scale-Out File Server)
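A sketch of wiring up a shared VHDX for a guest cluster, assuming the 2012 R2 -ShareVirtualDisk switch and placeholder VM names and paths (verify the parameter against your Hyper-V module version):

```powershell
# Create a fixed-size VHDX on a Cluster Shared Volume
New-VHD -Path "C:\ClusterStorage\Volume1\GuestClusterData.vhdx" -Fixed -SizeBytes 100GB

# Attach it to each guest-cluster node VM as a shared virtual SAS disk
Add-VMHardDiskDrive -VMName "GuestNode1" -ControllerType SCSI `
    -Path "C:\ClusterStorage\Volume1\GuestClusterData.vhdx" -ShareVirtualDisk
Add-VMHardDiskDrive -VMName "GuestNode2" -ControllerType SCSI `
    -Path "C:\ClusterStorage\Volume1\GuestClusterData.vhdx" -ShareVirtualDisk
```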
• VM high availability & mobility between physical nodes
• Application & service high availability & mobility between VMs
• Must pass Validate
[Diagram: a guest cluster spanning two physical clusters, each with its own SAN (iSCSI or FC) and VHDX storage.]
• Automatic and manual failover for DR
• Supports 3rd-party hardware- and software-based replication

Hyper-V Replica
• RPO = 5 minutes
• RTO = Manual (longer)
• Cost = In-box in Windows Server
• Complexity = Low

Multi-Site Clustering
• RPO = 0 minutes*
• RTO = Automatic (fast)
• Cost = High
• Complexity = High
*depending on 3rd party replication solution
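Hyper-V Replica is enabled per VM; a minimal sketch with placeholder server and VM names, using Kerberos over HTTP port 80:

```powershell
# On the replica server, allow it to receive replication first
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Kerberos -ReplicationAllowedFromAnyServer $true `
    -DefaultStorageLocation "D:\Replica"

# On the primary server, enable replication for the VM
Enable-VMReplication -VMName "VM01" -ReplicaServerName "HV-REPLICA" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos

# Send the initial copy over the network
Start-VMInitialReplication -VMName "VM01"
```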
Planned Downtime
Unplanned Downtime
Disaster Recovery