Post on 21-Dec-2015
Building the Foundation.. Server Virtualisation and Management
Julius Davies, Datacenter Technology Specialist, Microsoft UK, Julius.Davies@microsoft.com
Clive Watson, Datacenter Technology Specialist, Microsoft UK, Clive.Watson@microsoft.com
Where are we today?
How can we optimise?
HYPER-V
Thin Provisioning
The Guest OS needs to see 100GB, but may only consume a fraction of that
With Fixed VHDs, a 100GB VHD would consume 100GB on the SAN
With Dynamic VHDs, the physical space consumed is only equal to that consumed by the Guest OS
VHD Performance Whitepaper: Link here
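Later Windows Server releases ship a Hyper-V PowerShell module; as a sketch using those cmdlets (the R2-era deck predates the module, and the paths shown are illustrative), the two VHD types are created like this:

```powershell
# Fixed VHD: all 100GB is allocated on the SAN up front
New-VHD -Path 'C:\ClusterStorage\Volume1\fixed.vhd' -SizeBytes 100GB -Fixed

# Dynamic VHD: the file starts small and grows only as the Guest OS writes data
New-VHD -Path 'C:\ClusterStorage\Volume1\dynamic.vhd' -SizeBytes 100GB -Dynamic
```

Both appear to the guest as a 100GB disk; only the on-SAN footprint differs.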
Dynamic Storage
Flexible solution for adjusting available VM storage without downtime
Utilises the SCSI Controller for Hot-Add and Hot-Remove of VHD/PTD
Each VM can have up to 4 SCSI Controllers
Each SCSI Controller can have up to 64 disks attached
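Hot-adding a disk to a running VM's SCSI controller can be sketched with the later Hyper-V PowerShell module (the VM name and path here are hypothetical examples):

```powershell
# Hot-add a data VHD to a running VM's first SCSI controller - no downtime required
Add-VMHardDiskDrive -VMName 'SQL01' -ControllerType SCSI -ControllerNumber 0 `
    -Path 'C:\ClusterStorage\Volume1\data.vhd'
```

The same operation in reverse (Remove-VMHardDiskDrive) hot-removes the disk, which is why the SCSI controller, not the IDE controller, is used for dynamic storage.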
Hyper-V Networking
3 types of network: Private, Internal, External
Private = VM to VM
Internal = VM to VM & VM to Host
External = VM to VM, VM to Host, and VM to VM across Hosts
Each VM can have up to 12 vNICs: 8 Synthetic & 4 Legacy (PXE)
Each with a different VLAN ID
• Teaming Support provided by the NIC Vendor
• Intel = PROSet, Broadcom = BACS, HP = NCU
• Best practice: install/enable Hyper-V, then install networking utilities
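The three switch types and per-vNIC VLAN tagging can be sketched with the later Hyper-V PowerShell module (adapter and VM names are assumptions for illustration):

```powershell
# External switch: VM to VM, VM to Host, and VM to VM across Hosts
New-VMSwitch -Name 'External' -NetAdapterName 'Ethernet 1' -AllowManagementOS $true

# Internal switch: VM to VM & VM to Host only
New-VMSwitch -Name 'Internal' -SwitchType Internal

# Private switch: VM to VM only
New-VMSwitch -Name 'Private' -SwitchType Private

# Tag a VM's vNIC with a VLAN ID
Set-VMNetworkAdapterVlan -VMName 'Web01' -Access -VlanId 20
```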
Hyper-V Networking for Clusters
Great guide here: http://technet.microsoft.com/en-us/library/ff428137(WS.10).aspx
Best Practice Suggests:
1 Network for Host Management
1 Network for Cluster Heartbeat
1 Network for Cluster Shared Volumes
1 Network for Live Migration
1 Network for Virtual Machine Traffic
If using iSCSI: 2 Networks for iSCSI Storage with MPIO
The above numbers represent networks, not ports. You may wish to team certain ports to provide resiliency.
High availability – Clustering
1. 2 Hyper-V R2 Nodes in a Failover Cluster. Each Node has 2 VMs running. VMs are stored on the SAN.
2. Node 1 fails, and also brings down its 2 VMs.
3. Failover Clustering in Hyper-V R2 ensures that the VMs restart on Node 2 of the Hyper-V Cluster.
Cluster Shared Volumes
Enabling multiple nodes to concurrently access a single ‘truly’ shared LUN
Provides VMs complete transparency with respect to which node actually owns a LUN
Guest VMs can be moved without requiring any drive ownership changes
No dismounting and remounting of volumes is required
Cluster Shared Volumes
1. We’ve set up a WS2008 R2 Cluster, and created 4 LUNs on the SAN.
2. We’ve made the LUNs available to the Cluster.
3. In the Failover Clustering MMC, we mark the LUNs as CSVs.
4. Each Node in our Cluster then has a Consistent Namespace for accessing the LUNs. We can now drop as many VMs on each CSV as we like.
Each node sees the same namespace:
C:\ClusterStorage\Volume1
C:\ClusterStorage\Volume2
C:\ClusterStorage\Volume3
C:\ClusterStorage\Volume4
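The steps above can be sketched with the FailoverClusters PowerShell module that ships with WS2008 R2 (the disk name is whatever the cluster assigned when the LUN was added):

```powershell
Import-Module FailoverClusters

# Mark an available cluster disk as a Cluster Shared Volume
Add-ClusterSharedVolume -Name 'Cluster Disk 1'

# Every node now sees the volume under the consistent C:\ClusterStorage\VolumeN namespace
Get-ClusterSharedVolume
```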
Live Migration
1. 2 Hyper-V R2 Nodes in a Failover Cluster. Each Node has 2 VMs running. VMs are stored on the SAN.
2. We decide we’d like to Live Migrate a VM from Node 1 to Node 2.
3. Live Migration in Hyper-V R2 ensures that VMs are migrated with no downtime.
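Triggering the migration from PowerShell can be sketched with the FailoverClusters module (the VM role and node names here are illustrative assumptions):

```powershell
Import-Module FailoverClusters

# Live-migrate a clustered VM from its current node to Node2 with no downtime
Move-ClusterVirtualMachineRole -Name 'Web01' -Node 'Node2' -MigrationType Live
```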
Dynamic Memory
Automatic, dynamic balancing of memory between running VMs
Understands the needs of the Guest OS
Available as part of WS2008 R2 SP1 at no cost
"On the hardware I was testing with, I saw an increase from 64 VMs (Windows 7 on Hyper-V R2) to 133 VMs (Windows 7 on Hyper-V R2 SP1). We also ran performance testing against this, so this wasn't a case of 'let's see how many VMs we can fire up'." (Matt Evans, Quest Software)
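In the SP1 era Dynamic Memory was configured per-VM in Hyper-V Manager; with the later Hyper-V PowerShell module the equivalent setting can be sketched as follows (VM name and sizes are illustrative):

```powershell
# Enable Dynamic Memory: the VM starts with 512MB and can balloon up to 2GB
# as the host balances memory between running VMs
Set-VMMemory -VMName 'Win7-VDI-01' -DynamicMemoryEnabled $true `
    -StartupBytes 512MB -MinimumBytes 512MB -MaximumBytes 2GB
```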
RemoteFX
Not a replacement for RDP!
Enhancement to the graphical capabilities of RDP 7.1
vGPU (WDDM): a single GPU shared across multiple Hyper-V Guests
Host-Side Rendering: apps run at full speed on the host
Intelligent Screen Capture & Hardware-Based Encode: screen deltas sent to the client based on network/client availability
Bitmap Remoting & Hardware-Based Decode: full range of client devices (HW and SW manifestations by design)
Hyper-V R2 SP1 - Summary
Business Continuity - High Availability & Live Migration
Host Scalability - 64 Cores & 1TB RAM
VM Scalability - 64GB RAM & 4 vCPUs per VM
Density - Dynamic Memory included with SP1
Power Efficiency - Core Parking & many power improvements
Dynamic Storage - Add/Remove disks with no downtime
Thin Provisioned VHDs - use less storage
Networking Improvements - NIC Teaming via NIC Vendor, Jumbo Frames, TCP Offload, VMQ, VLANs etc.
Familiarity - based on Windows, managed through Windows and System Center
Hardware Optimised - takes advantage of the latest h/w innovations (e.g. SLAT)
Huge HCL - http://www.windowsservercatalog.com
OS Support - in-lifecycle Windows Server/Clients & Linux (SUSE/RHEL/CentOS)
How can we better manage?
[Architecture diagram: SCVMM 2008 R2 SP1 - VMM Server with SQL Database, Library Server, Admin Console and Self Service Portal 1.0, managing Hyper-V and VMware hosts]
SCVMM 2008 R2 SP1
Multi-Hypervisor
P2V & V2V
Live Migration Support
Quick Storage Migration
OpsMgr Integration: unlocks PRO Capabilities
Rapid Provisioning
Intelligent Placement
Library & Web Portal
AD Integration
Granular Management
PowerShell
Maintenance Mode
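SCVMM's PowerShell support means everything the Admin Console does is scriptable; a minimal sketch, assuming the VMM 2008 R2 snap-in is installed on the management workstation and using an illustrative server name:

```powershell
# Load the VMM 2008 R2 snap-in and connect to the VMM server
Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager
Get-VMMServer -ComputerName 'vmm01.contoso.com'

# List the hosts (Hyper-V, VMware, etc.) under management
Get-VMHost | Select-Object Name
```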
Self Service Portal 1.0
DEMO: SCVMM 2008 R2 SP1
SCVMM 2012 - Key Pillars
Deployment: HA VMM Server, Upgrade, Custom Properties, PowerShell
Fabric: Hyper-V Bare Metal, Hyper-V Management, VMware Management, XenServer Management, Cluster Creation, Dynamic Optimization, Power Management, Network Management, Monitoring Integration, Storage Management
Services: Service Lifecycle, App Deployment, Image-Based Servicing
Cloud: Capacity & Capability, Delegation & Quota, Self-Service, App Owner Usage
SCVMM 2012 - interface
SCVMM 2012 in action:
System Center Virtual Machine Manager 2012: Fabric Management for the Private Cloud
http://www.msteched.com/2010/Europe/MGT306
System Center Virtual Machine Manager 2012: Service Lifecycle Management for the Private Cloud
http://www.msteched.com/2010/Europe/MGT206
How can we provide SELF SERVICE?
Self Service Portal V2
[Diagram: Datacenter and Line-of-Business Administrators. The Datacenter Administrator delivers Focused Solutions and Flexible Management with 24x7 Infrastructure Management & Monitoring, Procurement and Security; the LOB Administrator provides an SLA-driven service to End Users/Consumers]
VMM Self-Service Portal 2.0
Step 1 - Configuration and Extensibility
Pool Infrastructure Assets in the toolkit
Extend Virtual Machine actions through the Extensibility UI
Step 2 - Onboarding and Infrastructure Request
Onboard the Business Unit
Create an Infrastructure Request (i.e. request a sandbox)
Step 3 - Approval/Provisioning
Verify Asset Availability and Capacity
Assign Assets
Approve the Infrastructure Request and Provision
Step 4 - Self Service VM Provisioning
Manage Environment
Manage VMs
Access Reports
[Diagram: Business Units (Legal, Sales, Finance, HR) consume Infrastructure Service A (Production Environment, Corporate Network) and Infrastructure Service B (Test/Dev Environment, Development Network). Each service's resources: network access, storage allocation & quotas, access control. Both draw on a Shared Resource Pool of Network, Storage and Compute, with Web Front End and Reporting Server Service Roles]
DEMO: Self Service Portal V2
Learn More
SYSTEM CENTER: http://www.microsoft.com/systemcenter
HYPER-V: http://www.microsoft.com/hyperv
PRIVATE CLOUD: http://microsoft.com/privatecloud
APPLICATION VIRTUALISATION: http://microsoft.com/appv
© 2008 Microsoft Corporation. All rights reserved. Microsoft, Windows, Windows Vista and other product names are or may be registered trademarks and/or trademarks in the U.S. and/or other countries. The information herein is for informational purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after the date of this presentation. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.