
1 Executive Summary

This document provides the physical design and configuration for the Windows Server 2012 with Hyper-V (Hyper-V) and System Center 2012 Virtual Machine Manager (SCVMM) technology streams of the Public Protector South Africa (PPSA) platform upgrade project. The design and configuration of these two (2) components will provide a standard for extending the PPSA's virtualization capacity based on future requirements as the business grows.

The PPSA has already purchased a pre-designed and pre-configured Dell vStart 200, which will be deployed and configured as the first scale-unit in their datacenter.

The virtualization capabilities will be made available through the deployment of Windows Server 2012 with Hyper-V on the standardized scale-unit in the PPSA.

The management layer will be built using System Center 2012 SP1, and this document includes the design components for System Center 2012 Virtual Machine Manager.

2 Scale Unit Design

A scale-unit is a set of server, network and storage capacity deployed as a single unit in a datacenter. It is the smallest unit of capacity deployed in the datacenter, and the size of the scale-unit depends on the average new capacity required on a quarterly or yearly basis. Rather than deploying a single server at a time, Public Protector South Africa must deploy a new scale-unit when additional capacity is needed, fulfilling the immediate requirement while leaving room for growth.

The pre-configured scale-unit for the Public Protector South Africa will consist of one (1) Dell vStart 200 with six (6) Dell R720 hosts that will be configured as a single six-node cluster, four (4) Dell PowerConnect 7048 1GbE switches and three (3) Dell EqualLogic PS6100 SANs with 14.4TB (24 x 600GB drives) each. The scale-unit also includes two (2) uninterruptible power supply (UPS) units and one (1) Dell R620 host which is already configured as the scale-unit component management server.


2.1 Host Configuration

The host configuration will be based on the Dell PowerEdge R720 model. Each host will be configured as shown in Figure 1:

Figure 1: Host Configuration

The primary operating system (OS) installed on the host will be Windows Server 2012 Datacenter Edition with the following roles and features enabled.

2.1.1 Required Roles

The following roles will be required on each of the hosts:

Hyper-V, with the applicable management tools that are automatically selected.

2.1.2 Required Features

The following features will be required on each of the hosts:

Failover Clustering, with the applicable management tools that are automatically selected
Multipath I/O
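As an illustrative sketch only (the vStart deployment documentation remains authoritative), the role and features listed above could be enabled with Windows PowerShell on each host; the feature names shown are the standard Windows Server 2012 names.

```powershell
# Illustrative sketch: enable the required role and features on a Hyper-V host.
# Run in an elevated PowerShell session on each R720 host; a restart is required for Hyper-V.
Install-WindowsFeature -Name Hyper-V, Failover-Clustering, Multipath-IO `
    -IncludeManagementTools -Restart
```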


2.1.3 Host BIOS Configuration

The BIOS of each individual host needs to be upgraded to the latest release version, and the following options need to be enabled:

Processor Settings:
Virtualization Technology must be enabled
Execute Disable must be enabled

2.1.4 BMC Configuration

The baseboard management controller (BMC) needs to be configured to allow for out-of-band management of the hosts and to allow System Center 2012 Virtual Machine Manager (SCVMM) to discover the physical computers. This will be used for bare-metal provisioning of the hosts and management from SCVMM. The BMC must support any one of the following out-of-band management protocols:

Intelligent Platform Management Interface (IPMI) version 1.5 or 2.0
Data Center Management Interface (DCMI) version 1.0
System Management Architecture for Server Hardware (SMASH) version 1.0 over WS-Management (WS-Man)

DRAC Configuration

The following table provides the detailed DRAC configuration:

Model Host Name IP Subnet Gateway VLAN Enabled Protocol

R620 OHOWVSMAN 10.131.133.47 255.255.255.0 10.131.133.1 1 IPMI

R720 OHOWVSHV01 10.131.133.13 255.255.255.0 10.131.133.1 1 IPMI

R720 OHOWVSHV02 10.131.133.14 255.255.255.0 10.131.133.1 1 IPMI

R720 OHOWVSHV03 10.131.133.15 255.255.255.0 10.131.133.1 1 IPMI

R720 OHOWVSHV04 10.131.133.18 255.255.255.0 10.131.133.1 1 IPMI

R720 OHOWVSHV05 10.131.133.19 255.255.255.0 10.131.133.1 1 IPMI

R720 OHOWVSHV06 10.131.133.20 255.255.255.0 10.131.133.1 1 IPMI

Table 1: Host DRAC Configuration

The following details have been configured to gain access to the DRAC controller for the individual hosts:


Username Password

root Can be obtained from Dell documentation.

Table 2: DRAC Credentials
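For illustration only, once SCVMM is in place the hosts could be deep-discovered through their BMC addresses from Table 1 for bare-metal provisioning. The sketch below assumes a VMM Run As account named "DRAC-Admin" has been created with the credentials from Table 2; the cmdlets are from the VMM 2012 PowerShell module.

```powershell
# Illustrative sketch: discover a physical host through its BMC (DRAC) from the VMM server.
# The Run As account name "DRAC-Admin" is an assumption; it holds the credentials from Table 2.
Import-Module VirtualMachineManager
$bmcAccount = Get-SCRunAsAccount -Name "DRAC-Admin"

# Discover the first Hyper-V host (OHOWVSHV01) using its DRAC address from Table 1.
Find-SCComputer -BMCAddress "10.131.133.13" -BMCRunAsAccount $bmcAccount -BMCProtocol "IPMI"
```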


2.1.5 Host and Cluster Network Configuration

The following table provides the detailed Host and Hyper-V cluster network configuration once the LBFO team is established:

Model | Host Name | Host Type | Management Interface | VLAN
R620 | OHOWVSMAN | Management | IP: 10.131.133.39, Subnet: 255.255.255.0, Gateway: 10.131.133.1 | 1
R720 | OHOWVSHV01 | Virtualization Host | IP: 10.131.133.41, Subnet: 255.255.255.0, Gateway: 10.131.133.1 | 1
R720 | OHOWVSHV02 | Virtualization Host | IP: 10.131.133.42, Subnet: 255.255.255.0, Gateway: 10.131.133.1 | 1
R720 | OHOWVSHV03 | Virtualization Host | IP: 10.131.133.43, Subnet: 255.255.255.0, Gateway: 10.131.133.1 | 1
R720 | OHOWVSHV04 | Virtualization Host | IP: 10.131.133.44, Subnet: 255.255.255.0, Gateway: 10.131.133.1 | 1
R720 | OHOWVSHV05 | Virtualization Host | IP: 10.131.133.45, Subnet: 255.255.255.0, Gateway: 10.131.133.1 | 1
R720 | OHOWVSHV06 | Virtualization Host | IP: 10.131.133.46, Subnet: 255.255.255.0, Gateway: 10.131.133.1 | 1
Hyper-V | OHOWVSCV01 | Hyper-V Cluster Name | IP: 10.131.133.40, Subnet: 255.255.255.0, Gateway: 10.131.133.1 | 1


Table 3: Host and Hyper-V Cluster Network Configuration

2.1.6 Private Network Configuration

The following table provides the detailed private network configuration for the Cluster and Live Migration networks that will be created as virtual interfaces once the LBFO team is established. The private network interfaces will be prevented from registering in DNS.

Host | Cluster Network | Cluster VLAN | Live Migration Network | Live Migration VLAN
OHOWVSHV01 | IP: 10.10.6.1, Subnet: 255.255.254.0 | 6 | IP: 10.10.7.1, Subnet: 255.255.252.0 | 7
OHOWVSHV02 | IP: 10.10.6.2, Subnet: 255.255.254.0 | 6 | IP: 10.10.7.2, Subnet: 255.255.252.0 | 7
OHOWVSHV03 | IP: 10.10.6.3, Subnet: 255.255.254.0 | 6 | IP: 10.10.7.3, Subnet: 255.255.252.0 | 7
OHOWVSHV04 | IP: 10.10.6.4, Subnet: 255.255.254.0 | 6 | IP: 10.10.7.4, Subnet: 255.255.252.0 | 7
OHOWVSHV05 | IP: 10.10.6.5, Subnet: 255.255.254.0 | 6 | IP: 10.10.7.5, Subnet: 255.255.252.0 | 7
OHOWVSHV06 | IP: 10.10.6.6, Subnet: 255.255.254.0 | 6 | IP: 10.10.7.6, Subnet: 255.255.252.0 | 7

Table 4: Private Network Configuration


2.1.7 Hyper-V Security Design

The following security design principles need to be taken into consideration when designing a virtualization solution built on Hyper-V. The section below provides the details of the decisions taken for the PPSA, based on their skills and requirements.

Consideration: Reduce the attack surface of the Windows Server operating system by installing Windows Server Core.

Decision: The PPSA does not have the required PowerShell knowledge to manage Windows Server Core, so the full installation will be used.

Consideration: Create and apply Hyper-V-specific group policies to disable any unnecessary ports and/or features.

Decision: The recommended Windows Server 2012 Hyper-V group policy will be extracted from the Microsoft Security Compliance Manager and applied to all the Hyper-V hosts. The group policy will be imported into Active Directory and linked to an organizational unit where all the Hyper-V hosts reside.

Consideration: Limit Hyper-V operators to managing only the virtualization layer, and not the operating system itself, by adding the required users to the Hyper-V Administrators group on the local Hyper-V server.

Decision: The following group will be created in Active Directory: GG-HyperV-Admins. The group will be added via the Hyper-V group policy discussed earlier to the local Hyper-V Administrators group on each Hyper-V host. This group will contain only the required Hyper-V administrators in the PPSA.

Consideration: Install antivirus on the Hyper-V servers and add exclusions for the locations where the hypervisor stores the virtual machine profiles and virtual hard drives.

Decision: System Center 2012 Endpoint Protection (SCEP) will be deployed and managed by System Center 2012 Configuration Manager. When SCEP is installed on a Hyper-V host it automatically configures the exclusions for the virtual machine data locations, inheriting them from the Hyper-V host.

Consideration: Encrypt the volumes where the virtual machine data is stored using BitLocker. This is required for virtualization hosts where physical security is a constraint.

Decision: The cluster shared volumes (CSVs) where the virtual machine data will reside will not be encrypted.

Table 5: Hyper-V Security Design Decisions

The creation and deployment of the required group policies and organizational units needs to go through the standard change process to ensure that they are in a managed state and created in the correct location.
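As a hedged illustration of the Active Directory work described above, the sketch below creates the GG-HyperV-Admins group, a Hyper-V hosts OU and a GPO, and links the GPO to the OU. The OU path, GPO names and backup location are assumptions for illustration only; the actual names and locations must follow the PPSA change process.

```powershell
# Illustrative sketch only; names and paths below are assumptions, not approved PPSA values.
Import-Module ActiveDirectory
Import-Module GroupPolicy

# Organizational unit that will hold the six Hyper-V hosts (assumed domain/path).
New-ADOrganizationalUnit -Name "Hyper-V Hosts" -Path "DC=ppsa,DC=local"

# Security group for the Hyper-V administrators (added to the local Hyper-V Administrators
# group on each host via the group policy).
New-ADGroup -Name "GG-HyperV-Admins" -GroupScope Global -GroupCategory Security `
    -Path "OU=Hyper-V Hosts,DC=ppsa,DC=local"

# Import the Hyper-V baseline exported from Microsoft Security Compliance Manager
# (assumed backup folder and GPO names) and link it to the Hyper-V hosts OU.
New-GPO -Name "PPSA Hyper-V Baseline"
Import-GPO -BackupGpoName "WS2012 Hyper-V Security Baseline" -Path "C:\GPOBackups" `
    -TargetName "PPSA Hyper-V Baseline"
New-GPLink -Name "PPSA Hyper-V Baseline" -Target "OU=Hyper-V Hosts,DC=ppsa,DC=local"
```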


2.2 Network

The following section provides the detailed design for the network configuration of the hosts and the switches used in the solution. The design and configuration are aimed at simplifying the management of the networks while providing a load-balanced network for the management OS and the virtual machines.

2.2.1 Host Network Configuration

The six (6) available network cards for data flow per host will be used to create a single network team that is switch-independent and configured to use the Hyper-V Port traffic distribution algorithm. This allows Virtual Machine Queues (VMQs) to be offloaded directly to the NICs and distributes inbound and outbound traffic evenly across the team members, because there will be more VMs than available network adapters on the hosts.

Figure 2 provides a logical view of how the network team will be configured and what virtual interfaces the management OS requires to enable the hosts to be configured as a failover cluster. The network team will also be used for the virtual machine traffic and a virtual network switch will be created to allow communication from the virtual environment to the production environment.

Figure 2: Host Network Configuration

The following classic network architecture will be required for implementing a failover cluster:


Figure 3: Network Architecture for Failover Cluster

Each cluster node has at least four network connections:

The Host Management Network: This network is used for managing the Hyper-V host. This type of configuration is recommended because it allows the Hyper-V services to be managed regardless of the network workload generated by the hosted virtual machines.

The Cluster Heartbeat Network: This network is used by the Failover Clustering service to check that each node of the cluster is available and working correctly. This network can also be used by a cluster node to access its storage through another node if direct connectivity to the SAN is lost (Dynamic I/O Redirection). It is recommended to use dedicated network equipment for the Cluster Heartbeat Network to get the best availability of the failover cluster service.

The Live Migration Network: This network is used for live migration of virtual machines between two nodes. The live migration process is particularly useful for planned maintenance operations on a host because it allows virtual machines to be moved between two cluster nodes with little or no loss of network connectivity. The network bandwidth directly influences the time needed to live migrate a virtual machine. For this reason it is recommended to use the fastest possible connectivity and, as with the Cluster Heartbeat Network, to use dedicated network equipment.

Virtual Machine Networks: These networks are used for virtual machine connectivity. Most of the time, virtual machines require multiple networks. This can be addressed by using several network cards dedicated to virtual machine workloads or by implementing VLAN tagging and VLAN isolation on a high-speed network. Most of the time, the host parent partition is not connected to these networks. This approach avoids using unnecessary TCP/IP addresses and reinforces the isolation of the parent partition from the hosted virtual machines.


The following steps need to be followed to create the LBFO team with the required virtual adapters per Hyper-V host (a scripted sketch follows Table 6):

1. Create the switch-independent, Hyper-V Port LBFO team called Team01 using the Windows Server 2012 NIC Teaming software.

2. Create the Hyper-V switch called vSwitch using Hyper-V Manager, and do not allow the management OS to create an additional virtual adapter.

3. Create the virtual adapters for the Management OS as illustrated in Figure 2 and assign the required VLANs.

4. Configure the network interfaces with the information described in Table 3.

A minimum network bandwidth will be assigned to each virtual network interface and managed by the QoS Packet Scheduler in Windows. The traffic will be separated by VLANs, which allows for optimal usage of the available network connections. The following table provides the network configuration per host:

Network Interface Name Minimum Bandwidth IP VLAN

Management Management 5 Table 3 1

Cluster Cluster 5 Table 4 6

Live Migration LM 20 Table 4 7

Virtual Switch vSwitch 1 none Native (1)

Table 6: Network Bandwidth Management
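The steps above could be scripted per host with the in-box Windows Server 2012 cmdlets. The sketch below is illustrative only: the physical adapter names (NIC1-NIC6) are assumptions, and the addressing shown is for OHOWVSHV01 from Tables 3, 4 and 6; it must be adjusted per host.

```powershell
# Illustrative sketch for one host (OHOWVSHV01); adapter names NIC1-NIC6 are assumptions.
# 1. Switch-independent LBFO team using the Hyper-V Port load-balancing algorithm.
New-NetLbfoTeam -Name "Team01" -TeamMembers "NIC1","NIC2","NIC3","NIC4","NIC5","NIC6" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# 2. Hyper-V switch on the team with weight-based QoS and no automatic management OS adapter.
New-VMSwitch -Name "vSwitch" -NetAdapterName "Team01" -AllowManagementOS $false `
    -MinimumBandwidthMode Weight
Set-VMSwitch -Name "vSwitch" -DefaultFlowMinimumBandwidthWeight 1   # vSwitch weight of 1 (Table 6)

# 3. Management OS virtual adapters, VLANs and minimum bandwidth weights (Table 6).
foreach ($nic in @(
        @{Name="Management"; Vlan=1; Weight=5},
        @{Name="Cluster";    Vlan=6; Weight=5},
        @{Name="LM";         Vlan=7; Weight=20})) {
    Add-VMNetworkAdapter -ManagementOS -Name $nic.Name -SwitchName "vSwitch"
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName $nic.Name -Access -VlanId $nic.Vlan
    Set-VMNetworkAdapter -ManagementOS -Name $nic.Name -MinimumBandwidthWeight $nic.Weight
}

# 4. IP configuration from Tables 3 and 4, and no DNS registration on the private networks.
New-NetIPAddress -InterfaceAlias "vEthernet (Management)" -IPAddress 10.131.133.41 `
    -PrefixLength 24 -DefaultGateway 10.131.133.1
New-NetIPAddress -InterfaceAlias "vEthernet (Cluster)" -IPAddress 10.10.6.1 -PrefixLength 23
New-NetIPAddress -InterfaceAlias "vEthernet (LM)" -IPAddress 10.10.7.1 -PrefixLength 22
Set-DnsClient -InterfaceAlias "vEthernet (Cluster)","vEthernet (LM)" `
    -RegisterThisConnectionsAddress $false
```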

The following IP addresses will be assigned to the iSCSI network on each host to allow for communication to the SANs.

Host Name iSCSI NIC 1 iSCSI NIC 2 iSCSI NIC 3 iSCSI NIC 4 Subnet VLAN

OHOWVSHV01 10.10.5.51 10.10.5.52 10.10.5.53 10.10.5.54 255.255.255.0 5

OHOWVSHV02 10.10.5.61 10.10.5.62 10.10.5.63 10.10.5.64 255.255.255.0 5

OHOWVSHV03 10.10.5.71 10.10.5.72 10.10.5.73 10.10.5.74 255.255.255.0 5

OHOWVSHV04 10.10.5.81 10.10.5.82 10.10.5.83 10.10.5.84 255.255.255.0 5

OHOWVSHV05 10.10.5.91 10.10.5.92 10.10.5.93 10.10.5.94 255.255.255.0 5

OHOWVSHV06 10.10.5.101 10.10.5.102 10.10.5.103 10.10.5.104 255.255.255.0 5

Table 7: Host iSCSI Configuration

Jumbo Frames will be enabled on the iSCSI network cards and SAN controllers to increase data performance through the network. The frame size will be set to 9014 bytes.
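For illustration, the iSCSI addressing in Table 7 and the jumbo frame setting could be applied as sketched below. The physical iSCSI adapter names and the "*JumboPacket" advanced property are assumptions that depend on the NIC vendor and driver; the Dell deployment documentation remains authoritative.

```powershell
# Illustrative sketch for OHOWVSHV01; the iSCSI adapter names are assumptions.
$iscsiNics = @{ "iSCSI1" = "10.10.5.51"; "iSCSI2" = "10.10.5.52";
                "iSCSI3" = "10.10.5.53"; "iSCSI4" = "10.10.5.54" }

foreach ($name in $iscsiNics.Keys) {
    # Static IP on the VLAN 5 storage network (Table 7); no gateway on the iSCSI interfaces.
    New-NetIPAddress -InterfaceAlias $name -IPAddress $iscsiNics[$name] -PrefixLength 24

    # Enable jumbo frames; the advanced property keyword and value can differ per driver.
    Set-NetAdapterAdvancedProperty -Name $name -RegistryKeyword "*JumboPacket" `
        -RegistryValue 9014
}
```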


2.2.2 Switch Configuration

The Dell vStart 200 has four (4) Dell PowerConnect 7048 switches which will be used for Management and iSCSI traffic.

Two (2) of the switches will be connected to each other and need to be configured as trunk ports with a native VLAN ID of 5, because these switches will be used as the storage network.

The other two (2) switches also need to be connected to each other, but their switch ports need to be configured as trunk ports with dot1q (802.1Q) encapsulation. The native VLAN needs to be set per port, and a VLAN ID range needs to be tagged per port to allow for isolated communication between the required management OS interfaces and the production workloads.

The following figure provides the detail on the switch and connection layout:

Figure 4: Network Switch Design

The connections from each source device are divided between the destination switches. This is why the LBFO team needs to be created as switch-independent: the team cannot be created or managed on the switches.


The following table provides the rack mount switch information:

Type/Usage Connection Name IP Subnet Gateway

PC7048 iSCSI Stack OoB Management BG-ISCSI-01 10.131.133.48 255.255.255.0 10.131.133.1

PC7048 LAN Stack OoB Management BG-LAN-01 10.131.133.49 255.255.255.0 10.131.133.1

Table 8: Switch Stack Configuration

The following network VLANs will be used in the Dell vStart 200 for isolating network traffic:

VLAN ID Name

5 iSCSI

1 OoB Management

1 Management

6 Cluster

7 Live Migration

1 Virtual Machines

Table 9: Network VLANs

The following network parameters have been identified for the platform upgrade project:

Network Parameters

Primary Secondary

DNS 10.131.133.1 10.131.133.2

NTP 10.131.133.1

SMTP 10.131.133.8

Table 10: Additional Network Parameters


2.3 Storage

The Dell vStart 200 ships with three (3) Dell EqualLogic PS6100 iSCSI SANs with 24 x 600GB spindles each. That provides a total raw capacity of 14.4TB per SAN and a total of 43.2TB of raw storage for the PPSA to use. The recommended RAID configuration for each SAN is a RAID 50 set across the array. This provides a balance between storage capacity and acceptable read/write performance.

Each SAN array is connected with four (4) iSCSI connections to the storage network as shown in Figure 4. This allows four (4) iSCSI data paths from the SAN array to the Hyper-V hosts, which helps with connection redundancy and data performance because of the multiple iSCSI paths.

Each of the Hyper-V hosts will be connected to the SAN arrays through four (4) network connections into the storage network as shown in Figure 4. Multipath I/O (MPIO) will be enabled to allow for redundancy and to increase performance with an active/active path across all four (4) iSCSI connections. The Dell Host Integration Tools (HIT) kit will be used to establish and manage MPIO on each host.
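The Dell HIT kit configures MPIO and the EqualLogic connections itself; purely as an illustration of the underlying Windows configuration, the native equivalents would look roughly like the sketch below. The group IP used as the target portal and the CHAP username are taken from Tables 11 and 12.

```powershell
# Illustrative sketch of the native Windows MPIO/iSCSI configuration; the Dell HIT kit
# normally performs and manages these steps.
# Claim iSCSI devices for MPIO (requires the Multipath-IO feature; a reboot may be needed).
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Register the EqualLogic group IP (Table 11) as the iSCSI target portal and connect to the
# discovered targets with CHAP (Table 12), multipath enabled and persistent connections.
New-IscsiTargetPortal -TargetPortalAddress "10.10.5.10"
$chapSecret = Read-Host "EqualLogic CHAP secret (see the Dell documentation)"
Get-IscsiTarget | Connect-IscsiTarget -AuthenticationType ONEWAYCHAP `
    -ChapUsername "HyperV" -ChapSecret $chapSecret `
    -IsMultipathEnabled $true -IsPersistent $true
```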

The following diagram provides a logical view of how the storage will be configured:

Figure 5: Storage Configuration

After applying RAID 50 to the SAN arrays, the PPSA will have approximately 9TB of usable capacity per array. Two (2) LUNs of 3TB each will be carved per SAN array, and the resulting six (6) LUNs will be presented to all six (6) Hyper-V hosts for storing the virtual machine data.

The following table provides the configuration detail for the three (3) EqualLogic SAN arrays:


EQL Storage Array

Name IP Subnet Gateway VLAN

EQL Group Name BG-EQL-GRP01

Group Management 10.10.5.10 255.255.255.0 5

EQL Array Name BG-EQL-ARY01

eth0 10.10.5.11 255.255.255.0 5

eth1 10.10.5.12 255.255.255.0 5

eth2 10.10.5.13 255.255.255.0 5

eth3 10.10.5.14 255.255.255.0 5

Management 10.131.133.24 255.255.255.0 10.131.133.1 1

EQL Array Name BG-EQL-ARY02

eth0 10.10.5.21 255.255.255.0 5

eth1 10.10.5.22 255.255.255.0 5

eth2 10.10.5.23 255.255.255.0 5

eth3 10.10.5.24 255.255.255.0 5

Management 10.131.133.25 255.255.255.0 10.131.133.1 1

EQL Array Name BG-EQL-ARY03

eth0 10.10.5.31 255.255.255.0 5

eth1 10.10.5.32 255.255.255.0 5

eth2 10.10.5.33 255.255.255.0 5

eth3 10.10.5.34 255.255.255.0 5

Management 10.131.133.27 255.255.255.0 10.131.133.1 1

Table 11: SAN Array Configuration

The SAN access information is as follows:

EqualLogic Access Configuration

CHAP Username | HyperV

CHAP Password | Can be obtained from the Dell documentation.

Table 12: SAN Access Information

The storage will be carved and presented to all the hosts in the six (6) node cluster as described in Table 3. This will allow the storage to be assigned as Cluster Shared Volumes (CSVs) where the virtual hard disks (VHDs) and virtual machine profiles will reside. A cluster quorum disk will also be presented to allow the cluster configuration to be stored. The following table provides the storage configuration for the solution:

Disk Name Name Storage Array Size Raid Set Preferred Owner

HyperV-Quorum Witness Disk BG-EQL-ARY01 1GB Raid 50 OHOWVSHV06

HyperV-CSV-1 CSV01 BG-EQL-ARY01 3TB Raid 50 OHOWVSHV01

HyperV-CSV-2 CSV02 BG-EQL-ARY01 3TB Raid 50 OHOWVSHV02

HyperV-CSV-3 CSV03 BG-EQL-ARY02 3TB Raid 50 OHOWVSHV03

HyperV-CSV-4 CSV04 BG-EQL-ARY02 3TB Raid 50 OHOWVSHV04

HyperV-CSV-5 CSV05 BG-EQL-ARY03 3TB Raid 50 OHOWVSHV05

HyperV-CSV-6 CSV06 BG-EQL-ARY03 3TB Raid 50 OHOWVSHV06

Table 13: Storage Configuration
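Once the LUNs in Table 13 are visible on every host, the failover cluster, witness disk and CSVs could be created along the lines of the sketch below. This is a hedged outline using the in-box Failover Clustering cmdlets; the cluster disk resource names shown are assumptions and will differ at deployment time.

```powershell
# Illustrative sketch: create the six-node cluster from Table 3 and add the CSVs from Table 13.
Import-Module FailoverClusters

New-Cluster -Name "OHOWVSCV01" `
    -Node "OHOWVSHV01","OHOWVSHV02","OHOWVSHV03","OHOWVSHV04","OHOWVSHV05","OHOWVSHV06" `
    -StaticAddress 10.131.133.40

# Add all shared disks that are visible to the cluster (quorum disk and the six CSV LUNs).
Get-ClusterAvailableDisk -Cluster "OHOWVSCV01" | Add-ClusterDisk

# Use the 1GB witness disk for the quorum (node and disk majority for a six-node cluster).
Set-ClusterQuorum -Cluster "OHOWVSCV01" -NodeAndDiskMajority "Witness Disk"

# Convert the six 3TB data disks to Cluster Shared Volumes; resource names are assumptions.
"CSV01","CSV02","CSV03","CSV04","CSV05","CSV06" | ForEach-Object {
    Add-ClusterSharedVolume -Cluster "OHOWVSCV01" -Name $_
}
```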


3 Combined Virtualization Overview

The following figure provides the overview of the final virtualization solution by combining the scale-unit design elements discussed in this document.

Figure 6: Final Solution Overview

This allows six (6) nodes to be configured in a Hyper-V failover cluster that is connected to all the shared storage and networks configured above. The failover cluster will be configured with five (5) active nodes and one (1) passive/reserve node for failover of virtual machines and for patch management of the Hyper-V hosts.


4 Management Layer Design

System Center 2012 SP1 will be the management infrastructure for the solution and includes the following products, which in most cases are used to deploy and operate the infrastructure:

System Center 2012 Virtual Machine Manager
System Center 2012 Operations Manager

The management infrastructure itself will be hosted on the scale units deployed for the solution and a highly available SQL server will be deployed for the System Center databases.

4.1 SQL Server Architecture for Management Layer

Microsoft System Center 2012 products used in the management solution rely on Microsoft SQL Server databases. Consequently, it is necessary to define the Microsoft SQL Server architecture used by the management solution.

The SQL Server databases for the management solution will be deployed on SQL Server 2008 R2 Enterprise Edition with SP1 and CU6. Enterprise Edition allows for the deployment of two (2) virtual machines that will be clustered to provide high availability and redundancy for the management solution databases. Each of the components of the management solution will use a dedicated SQL Server instance with disks presented through iSCSI to optimize performance.

To establish the SQL Server cluster, two (2) virtual machines with the following specifications will be required:

Component Specification

Server Role SQL Server Node

Physical or Virtual Virtual

Operating System Windows Server 2008 R2 SP1 64-bit

Application SQL Server 2008 R2 SP1 with CU6 Enterprise Edition

CPU Cores 8 Cores

Memory 16 GB RAM

Network 2 x Virtual NICs (Public and Cluster Networks)

Disk 1 80 GB – Operating System

Disk 2 1GB – Quorum Disk

Disk n Disks presented to the SQL virtual machines as outlined in section 4.1.1.


Table 14: SQL Server VM Requirements

4.1.1 Management SQL Servers Instances

Microsoft System Center 2012 components are database-driven applications. This makes a well-performing database platform critical to the overall management of the environment.

The following instances will be required to support the solution:

Management Tool | Instance Name | Primary Database | Authentication
System Center 2012 Virtual Machine Manager | VMM | VirtualMachineManagerDB | Windows
System Center 2012 Operations Manager | OM_OPS | OperationsManager | Windows
System Center 2012 Operations Manager | OM_DW | OperationsManagerDW | Windows

Table 15: SQL Instance Names

The following disk configuration will be required to support the Management solution:

SQL Instance LUN Purpose Size Raid Set

VMM LUN 1 Database Files 50 GB Raid 50

LUN 2 Database Log Files 25 GB Raid 50

LUN 3 TempDB Files 25 GB Raid 50

OM_OPS LUN 4 Database Files 25 GB Raid 50

LUN 5 Database Log Files 15 GB Raid 50

LUN 6 TempDB Files 15 GB Raid 50

OM_DW LUN 7 Database Files 800 GB Raid 50

LUN 8 DB Log Files 400 GB Raid 50

LUN 9 TempDB Files 50 GB Raid 50

Table 16: SQL Instance Disk Configuration


4.1.2 Management SQL Server Service Accounts

Microsoft SQL Server requires service accounts for starting the database and reporting services required by the management solution. The following service accounts will be required to successfully install SQL Server:

Service Account Purpose Username Password

SQL Server SQLsvc The Password will be given in a secure document.

Reporting Server SQLRSsvc The Password will be given in a secure document.

Table 17: SQL Server Service Accounts

4.2 System Center 2012 Virtual Machine Manager

System Center 2012 Virtual Machine Manager (SCVMM) helps enable centralized management of physical and virtual IT infrastructure, increased server usage, and dynamic resource optimization across multiple virtualization platforms. It includes end-to-end capabilities such as planning, deploying, managing, and optimizing the virtual infrastructure.

4.2.1 Scope

System Center 2012 Virtual Machine Manager will be used to manage the Hyper-V hosts and guests in the datacenters. No virtualization infrastructure outside of the solution should be managed by this instance of System Center 2012 Virtual Machine Manager. The System Center 2012 Virtual Machine Manager configuration only considers the scope of this architecture and may therefore suffer performance and health issues if that scope is changed.

4.2.2 Servers

The SCVMM VM specifications are shown in the following table:

Servers Specs

1 x VM 1 Virtual Machine dedicated for running SCVMM

Windows Server 2012

4 vCPU

8 GB RAM

1 x vNIC

Storage: One 80GB operating system VHD

Additional Storage: 120GB SCSI VHD storage for Library


Table 18: SCVMM Specification

4.2.3 Roles Required for SCVMM

The following roles are required by SCVMM:

SCVMM Management Server
SCVMM Administrator Console
Command Shell
SCVMM Library
SQL Server 2008 R2 Client Tools

4.2.4 SCVMM Management Server Software Requirements

The following software must be installed prior to installing the SCVMM management server.

Software Requirement Notes

Operating System Windows Server 2012

.NET Framework 4.0 Included in Windows Server 2012.

Windows Assessment and Deployment Kit (ADK) for Windows 8

Windows ADK is available at the Microsoft Download Center.

Table 19: SCVMM Management Server Software Requirements

4.2.5 SCVMM Administration Console Software Requirements

The following software must be installed prior to installing the SCVMM console.

Software requirement Notes

A supported operating system for the VMM console Windows Server 2012 and/or Windows 8

Windows PowerShell 3.0 Windows PowerShell 3.0 is included in Windows Server 2012.

Microsoft .NET Framework 4 .NET Framework 4 is included in Windows 8, and Windows Server 2012.

Table 20: SCVMM Administration Console Software Requirements


4.2.6 Virtual Machine Hosts Management

SCVMM supports the following as virtual machine hosts:

Microsoft Hyper-V
VMware ESX
Citrix XenServer

Only the six (6) node Hyper-V Cluster will be managed by the SCVMM solution as described in Figure 6: Final Solution Overview.

Hyper-V Hosts System Requirements

SCVMM supports the following versions of Hyper-V for managing hosts.

Operating System | Edition | Service Pack | System Architecture
Windows Server 2008 R2 (full installation or Server Core installation) | Enterprise and Datacenter | Service Pack 1 or earlier | x64
Hyper-V Server 2008 R2 | Not applicable | Not applicable | x64
Windows Server 2012 | Standard and Datacenter | N/A | x64

Table 21: Hyper-V Hosts System Requirements

4.2.7 SCVMM Library Placement

Libraries are the repository for VM templates and therefore serve a very important role. The Library share itself will reside on the SCVMM server in the default architecture; however, it should have its own logical partition and corresponding VHD whose underlying disk subsystem is able to deliver the required level of performance to service the provisioning demands.

This level of performance depends on:

Number of tenants
Total number of templates and VHDs
Size of VHDs
How many VMs may be provisioned simultaneously
What the SLA is on VM provisioning
Network constraints


4.2.8 Operations Manager Integration

In addition to the built-in roles, SCVMM will be integrated with System Center 2012 Operations Manager. The integration will enable Dynamic Optimization and Power Optimization in SCVMM.

SCVMM can perform load balancing within host clusters that support live migration. Dynamic Optimization migrates virtual machines within a cluster according to settings you enter.

SCVMM can also help to save power in a virtualized environment by turning off hosts when they are not needed and turning the hosts back on when they are needed.

SCVMM supports Dynamic Optimization and Power Optimization on Hyper-V host clusters and on host clusters that support live migration in managed VMware ESX and Citrix XenServer environments. For Power Optimization, the computers must have a baseboard management controller (BMC) that enables out-of-band management.

The integration with Operations Manager will be configured with the default thresholds, and Dynamic Optimization will be enabled. The Power Optimization schedule will be enabled from 8 PM to 5 AM.

4.2.9 SCVMM Service Accounts

The following service account will be required for SCVMM, to integrate with Operations Manager, and to manage the Hyper-V hosts:

Purpose Username Password

SCVMM Service Account SCVMMsvc The Password will be given in a secure document.

Table 22: SCVMM Service Account

The service account will also be made a local Administrator on each Hyper-V host and on the SCVMM machine to allow for effective management.


4.2.10 Update Management

SCVMM provides the capability to use a Windows Server Update Services (WSUS) server to manage updates for the following computers in your SCVMM environment:

Virtual machine hosts
Library servers
The VMM management server
PXE servers
The WSUS server

PPSA can configure update baselines, scan computers for compliance, and perform update remediation.

SCVMM will use the WSUS server that will be deployed with System Center 2012 Configuration Manager. Additional configuration will be required and is discussed in the deployment and configuration guides.


4.2.11 Virtual Machine Templates Design Decision

The following base templates will be created for deployment in the PPSA. Each template will have its own assigned hardware and guest operating system profile.

Template – Profile | Hardware Profile | Network | Operating System
Template 1 – Small | 1 vCPU, 2 GB RAM, 80 GB HDD | VLAN 1 | Windows Server 2008 R2 SP1 64-bit
Template 2 – Medium | 2 vCPU, 4 GB RAM, 80 GB HDD | VLAN 1 | Windows Server 2008 R2 SP1 64-bit
Template 3 – Large | 4 vCPU, 8 GB RAM, 80 GB HDD | VLAN 1 | Windows Server 2008 R2 SP1 64-bit
Template 4 – Small | 1 vCPU, 2 GB RAM, 80 GB HDD | VLAN 1 | Windows Server 2012
Template 5 – Medium | 2 vCPU, 4 GB RAM, 80 GB HDD | VLAN 1 | Windows Server 2012
Template 6 – Large | 4 vCPU, 8 GB RAM, 80 GB HDD | VLAN 1 | Windows Server 2012

Table 23: Virtual Machine Templates
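As a hedged sketch of how one of these templates could be created in the SCVMM library, the example below builds the "Small" Windows Server 2012 hardware profile and template with the VMM PowerShell module. The sysprepped base VHD name and the profile/template names are assumptions; the actual artifacts will live in the SCVMM library share and will normally carry a guest OS profile for customization.

```powershell
# Illustrative sketch: create the "Template 4 - Small" hardware profile and VM template.
# Assumes a sysprepped base disk named WS2012-Base.vhdx already exists in the SCVMM library.
Import-Module VirtualMachineManager

$hwProfile = New-SCHardwareProfile -Name "Small - 1 vCPU 2GB" -CPUCount 1 -MemoryMB 2048

$baseVhd = Get-SCVirtualHardDisk -Name "WS2012-Base.vhdx"

# Guest OS customization (product key, domain join, etc.) would normally be supplied through
# a guest OS profile; -NoCustomization keeps this sketch minimal.
New-SCVMTemplate -Name "Template 4 - Small (Windows Server 2012)" `
    -HardwareProfile $hwProfile -VirtualHardDisk $baseVhd -NoCustomization
```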


5 Appendix A – New Host Design to Support Guest Clustering

The requirement to provide guest clustering services on the six (6) node Hyper-V cluster built on the vStart 200 will require an additional two (2) network cards. These network cards will be shared as dedicated virtual iSCSI networks with the virtual environment. This will change the host design as follows:

Figure 7: Scale Unit Network Extension

If additional network cards cannot be acquired by the PPSA, the current host design remains valid and all four (4) of the iSCSI network adapters need to be shared with the virtual environment.

The jumbo frame size must also be set to 9014 bytes on each of the virtual iSCSI interfaces inside the guest cluster virtual machines to take advantage of the performance benefits.

Virtual iSCSI target providers will not be implemented in the solution because of the performance impact on the other guest machines.


6 Appendix B – Updated SQL Server Design

The current SQL Server design requires shared storage to be able to form a SQL Server cluster, but this cannot be fulfilled by the current environment because of the network adapter constraint described in Section 5.

When implementing a guest cluster in a virtual environment, there must be enough network capacity and adapters to allow the virtual machines to connect directly to the SAN, and/or Fibre Channel cards to provide virtual Fibre Channel adapters to the virtual machines for SAN connectivity. This makes it possible to assign shared storage to the virtual machines in order to build guest clusters.

Because of the constraint mentioned above, it is recommended to consider using SQL Server 2012 AlwaysOn Availability Groups to achieve resiliency if SQL Server cannot be implemented on physical hardware. Deploying SQL Server in the virtual environment already makes the SQL Server hosts redundant, but not the application databases hosted on SQL Server.

SQL Server 2012 AlwaysOn Availability Groups is a new integrated feature that provides data redundancy for databases and improves application failover time, increasing the availability of mission-critical applications. It also helps ensure the availability of application databases, enabling zero data loss through log-based data movement for data protection without shared disks.

The following illustration shows an availability group that contains the maximum number of availability replicas, one primary replica and four secondary replicas.

Figure 8: SQL Server 2012 AlwaysOn Availability Group Maximum Configuration

The remainder of this section provides the detailed design for the PPSA SQL Server 2012 AlwaysOn Availability Group.


6.1 Virtual Machine Configuration

To allow the SQL Server AlwaysOn Availability Group to be established, the PPSA must deploy two (2) virtual machines with the following specification and configuration:

Component Specification

Server Role SQL Server Node

Physical or Virtual Virtual

Operating System Windows Server 2012 Standard or higher

Features .NET Framework 3.5

Failover Clustering

Application SQL Server 2012 SP1

CPU Cores 8 Cores

Memory 16 GB RAM

Network 2 x Virtual NICs (Public and Cluster Networks)

Disk 1 80 GB – Operating System

Disk n Disks presented to the SQL virtual machines as outlined in section 6.2.2.

Table 24: SQL Server 2012 VM Requirements


6.1.1 Virtual Machine Clustering

The newly created virtual machines must be clustered to allow SQL Server 2012 to create an AlwaysOn Availability Group. The following table provides the management and cluster network details.

Name | Type | Management Network | VLAN | Cluster Network | VLAN
OHOWSQLS01 | Virtual Machine | IP: 10.131.133.xx, Subnet: 255.255.255.0, Gateway: 10.131.133.1 | 1 | IP: 192.168.0.1, Subnet: 255.255.255.252 | 6
OHOWSQLS02 | Virtual Machine | IP: 10.131.133.xx, Subnet: 255.255.255.0, Gateway: 10.131.133.1 | 1 | IP: 192.168.0.2, Subnet: 255.255.255.252 | 6
OHOWSQLC01 | Cluster Name | IP: 10.131.133.xx, Subnet: 255.255.255.0 | 1 | None | N/A

Table 25: SQL Server 2012 Virtual Machine Network Configuration

The quorum will be configured as Node Majority after the Windows Server 2012 cluster is established, because shared storage is not available. This is however not optimal, and the quorum must be changed to Node and File Share Majority with a file share witness. This allows the Windows cluster to store the cluster configuration and to vote on the cluster health. The file share requires only 1024MB of storage and can be located on the new file services. The following file share can be created: \\fileserver\OHOWSQLC01\Witness Disk. Both the SQL Server virtual machine computer accounts and the Windows cluster name account must have full read/write access to the share.
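A hedged sketch of forming the SQL guest cluster and setting the file share witness follows. The cluster management address is shown as 10.131.133.xx because the final value in Table 25 is still to be assigned, and the share path mirrors the one named above.

```powershell
# Illustrative sketch: cluster the two SQL virtual machines and set the quorum to
# Node and File Share Majority. Replace the static address placeholder with the value
# assigned in Table 25.
Import-Module FailoverClusters

New-Cluster -Name "OHOWSQLC01" -Node "OHOWSQLS01","OHOWSQLS02" `
    -StaticAddress "10.131.133.xx" -NoStorage

Set-ClusterQuorum -Cluster "OHOWSQLC01" `
    -NodeAndFileShareMajority "\\fileserver\OHOWSQLC01\Witness Disk"
```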


6.2 SQL Server 2012 Configuration

The following section provides the detailed design and configuration information for the SQL Server 2012 SP1 implementation.

6.2.1 Feature Selection

The following features will be required when installing SQL Server 2012 SP1 to allow for AlwaysOn Availability Groups:

Instance Features:
Database Engine Services
SQL Server Replication
Full-Text and Semantic Extractions for Search

Shared Features:
Client Tools Connectivity
Client Tools Backwards Compatibility
Management Tools – Complete

The shared features will be installed on C:\Program Files\Microsoft SQL Server\.

6.2.2 Management SQL Servers 2012 Instances

The following instances will be required to support the solution:

Management Tool | Instance Name | Primary Database | Authentication | Memory Allocation | Primary SQL Host
System Center 2012 Virtual Machine Manager | SCVMM | VirtualMachineManagerDB | Windows | 4GB | SQL01
System Center 2012 Operations Manager | SCOM_OPS | OperationsManager | Windows | 4GB | SQL01
System Center 2012 Operations Manager | SCOM_DW | OperationsManagerDW | Windows | 7GB | SQL01
System Center 2012 Configuration Manager | SCCM | ConfigurationsManagerDB | Windows | 4GB | SQL02
SharePoint 2010 SP1 | SharePoint | SharePointDB | Windows | 8GB | SQL02


Table 26: SQL Server 2012 Instance Names

The instance root directory for all instances will be C:\Program Files\Microsoft SQL Server\.

The following disk configuration will be required and must be presented to both the virtual machines as fixed disks.

SQL Instance LUN Purpose Size Raid Set Drive Letter

SCVMM LUN 1 Database and Temp Files 50 GB Raid 50 E

LUN 2 Database and Temp Log Files 25 GB Raid 50 F

SCOM_OPS LUN 3 Database and Temp Files 25 GB Raid 50 G

LUN 4 Database and Temp Log Files 15 GB Raid 50 H

SCOM_DW LUN 5 Database and Temp Files 800 GB Raid 50 I

LUN 6 Database and Temp Log Files 400 GB Raid 50 J

SCCM LUN 7 Database and Temp Files 700 GB Raid 50 K

LUN 8 Database and Temp Log Files 350 GB Raid 50 L

Table 27: SQL Server 2012 Instance Disk Configuration

When installing the individual instances, the data root directory will be C:\Program Files\Microsoft SQL Server\ and the individual database, TempDB, database log and TempDB log directories will be placed on the correct drive letters as outlined in Table 27.


6.2.3 SQL Server 2012 Service Accounts and Groups

Microsoft SQL Server requires service accounts for starting the database and reporting services, and SQL administrator groups for allowing management of SQL Server. The following service accounts and group will be required to successfully install SQL Server:

Service Account / Group Purpose Name Password

SQL Server SQLsvc The Password will be given in a secure document.

Reporting Server SQLRSsvc The Password will be given in a secure document.

SQL Admin Group SQL Admins None

Table 28: SQL Server 2012 Server Service Accounts

The SQL Admins group must contain all the required SQL administrators to allow them to manage SQL Server 2012 SP1.


6.2.4 Availability Group Design

The following section provides the design detail for the SQL Server 2012 AlwaysOn Availability groups.

Before creating the SQL availability groups, the PPSA must back up all the existing databases, and all the databases must be set to the full recovery model.

The individual SQL Server instances must also be enabled for AlwaysOn Availability Groups in SQL Server Configuration Manager.

The following table provides the availability group configuration. This configuration needs to be done by connecting to the individual SQL instances.

Availability Group Name | Databases in Group | Primary Server | Replica Server | Replica Configuration | Listener
System Center 2012 | VirtualMachineManagerDB | SQL01 | SQL02 | Automatic Failover | Name: VMM_Listener, Port: 1433, IP: 10.131.133.xx
System Center 2012 | OperationsManagerDB | SQL01 | SQL02 | Automatic Failover | Name: SCOMDB_Listener, Port: 1433, IP: 10.131.133.xx
System Center 2012 | OperationsManagerDW | SQL01 | SQL02 | Automatic Failover | Name: SCOMDW_Listener, Port: 1433, IP: 10.131.133.xx
System Center 2012 | ConfigurationsManagerDB | SQL02 | SQL01 | Automatic Failover | Name: SCCM_Listener, Port: 1433, IP: 10.131.133.xx
SharePoint | SharePointDB | SQL02 | SQL01 | Automatic Failover | Name: SP_Listener, Port: 1433, IP: 10.131.133.xx


Table 29: Availability Group Design

When creating an availability group there will be a requirement for a file share to do the initial synchronization of the databases. A temporary share called \\fileshare\SQLSync can be established on the file server.
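To illustrate the configuration in Table 29, the sketch below creates the availability group for the SCVMM instance with the SQL Server 2012 PowerShell module (SQLPS). The availability group name, endpoint URLs and domain suffix are assumptions; the listener IP is left as the 10.131.133.xx placeholder from Table 29, and the database must already be backed up and restored on SQL02 with NORECOVERY before it is joined.

```powershell
# Illustrative sketch for the SCVMM instance; the group name, endpoints and domain suffix
# are assumptions. Run from the primary replica (SQL01) after the databases have been
# backed up and restored WITH NORECOVERY on SQL02.
Import-Module SQLPS -DisableNameChecking

# Enable the AlwaysOn feature on both instances (restarts the SQL Server service).
Enable-SqlAlwaysOn -ServerInstance "SQL01\SCVMM" -Force
Enable-SqlAlwaysOn -ServerInstance "SQL02\SCVMM" -Force

# Define the primary and secondary replicas with automatic failover (Table 29).
$primary   = New-SqlAvailabilityReplica -Name "SQL01\SCVMM" -EndpointUrl "TCP://SQL01.ppsa.local:5022" `
                 -AvailabilityMode SynchronousCommit -FailoverMode Automatic -AsTemplate -Version 11
$secondary = New-SqlAvailabilityReplica -Name "SQL02\SCVMM" -EndpointUrl "TCP://SQL02.ppsa.local:5022" `
                 -AvailabilityMode SynchronousCommit -FailoverMode Automatic -AsTemplate -Version 11

# Create the availability group on the primary and join the secondary replica and database.
New-SqlAvailabilityGroup -Name "AG-SCVMM" -Path "SQLSERVER:\SQL\SQL01\SCVMM" `
    -AvailabilityReplica @($primary, $secondary) -Database "VirtualMachineManagerDB"
Join-SqlAvailabilityGroup -Path "SQLSERVER:\SQL\SQL02\SCVMM" -Name "AG-SCVMM"
Add-SqlAvailabilityDatabase -Path "SQLSERVER:\SQL\SQL02\SCVMM\AvailabilityGroups\AG-SCVMM" `
    -Database "VirtualMachineManagerDB"

# Listener from Table 29 (the final static IP is still to be assigned).
New-SqlAvailabilityGroupListener -Name "VMM_Listener" -Port 1433 `
    -StaticIp "10.131.133.xx/255.255.255.0" `
    -Path "SQLSERVER:\SQL\SQL01\SCVMM\AvailabilityGroups\AG-SCVMM"
```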