Enterprise Reference Architecture for Client Virtualization for HP VirtualSystem
Implementing the HP Architecture for VMware View 4.6
Technical white paper
Table of contents
HP and Client Virtualization
    Software used for this document
Why HP and VMware
The Enterprise Client Virtualization Reference Architecture for HP VirtualSystem
What this document produces
Configuring the platform
    External Insight Control
    Configuring the enclosures
    Creating a Virtual Connect domain with stacked enclosures
    Defining profiles for hosts
    Setting up management hosts
    Configuring the P4800 G2 SAN for BladeSystem
    Setting up hypervisor hosts
    Deploying storage
    Configuring the Management Group for hosts
    Configuring and attaching storage for management hosts
    Set up management VMs
    Understanding storage for View pools
    Creating shared storage for replicas
    Creating storage for linked clones
    Creating shared storage for user data disks and temporary files
Bill of materials
Appendix A – Storage patterning and planning in VMware View environments
Appendix B – HP Insight Control for VMware vCenter Server
Appendix C – Scripting the configuration of the Onboard Administrator
Appendix D – CLIQ commands for working with P4000
Appendix E – Understanding memory overcommitment
For more information
HP and Client Virtualization
Planning a Microsoft® Windows® 7 migration? How much of your corporate data is at the airport
today in a lost or stolen laptop? What is the cost per year to manage your desktops? Are you
prepared to support the upcoming "always-on" workforce and the myriad of devices it brings?
HP Client Virtualization with VMware View can help customers achieve the goals of IT and workforce
support, without compromising on performance, operating costs, information security, and user
experience with HP Client Virtualization Reference Architectures. These reference architectures
provide:
Simplicity: with an integrated data center solution for rapid installation/startup and easy ongoing operations
  – Self-contained and modular server, storage, and networking architecture – no virtualization data egresses the rack
  – 3x improvement in IT productivity
Optimization: a tested solution with the right combination of compute, storage, networking, and system management tuned for Client Virtualization efficiency
  – Scalable performance, enhanced security, always available
  – 60% less rack space compared to competitors
  – 95% fewer NICs, HBAs, and switches; 65% lower cost; 40% less power for LAN/SAN connections
Flexibility: with options to scale up and/or scale out to meet precise customer requirements
  – Flexible solution for task workers to PC power users
  – Up to 10,500 productivity users in three racks
  – Unmatched price/performance with both direct attached (DAS) and SAS tiered storage in a single rack (up to 50% cheaper than a traditional Fibre Channel SAN)
Purpose of this document
This document serves three primary functions:
1. Give IT decision makers, architects and implementation specialists an overview of how HP and VMware approach Client Virtualization and how the joint solutions they bring to market enable simpler, optimized and more flexible IT.
2. Outline the steps required to configure and deploy the hardware platform in an optimized fashion to support VMware View as an enterprise-level virtual desktop infrastructure (VDI) implementation.
3. Assist IT planners and architects with understanding storage patterning and tiering within the context of the overall architecture.
This document does not discuss the in-depth implementation steps to install and configure VMware software unless it directly affects the successful deployment of the overall platform.
Abbreviations and naming conventions
Table 1 is a list of abbreviations and names used throughout this document and their intended
meaning.
Table 1. Abbreviations and names used in this document.
Convention Definition
MS RDP Microsoft Remote Desktop Protocol
PCoIP Teradici PC over IP protocol
VDI Virtual Desktop Infrastructure
OA Onboard Administrator
LUN Logical Unit Number
IOPS Input and Output Operations per Second
POD The scaling unit of this reference architecture. This is not an acronym; it is a reference to a group of hardware.
SIM HP Systems Insight Manager
ICSD HP Insight Control Server Deployment
Target audience
This document is targeted at IT architects and engineers who plan to implement VMware View and are interested in understanding the unique capabilities and solutions that HP and VMware bring to the Client Virtualization market, as well as how a viable, enterprise-level VDI solution is crafted. This document is one in a series of reference architecture documents available at http://www.hp.com/go/cv. This document does not discuss implementing the VMware solution stack at a detailed level unless it directly affects the implementation of the platform. For specifics, please consult the VMware documentation for View at http://www.vmware.com/technical-resources/products/view.html.
Skillset
It is expected that the installer utilizing this document will be familiar with server, networking and
storage principles and have skills around VMware virtualization. The installer should also be familiar
with HP BladeSystem. Familiarity with Client Virtualization and end user computing concepts and
definitions is helpful, but not necessary.
Software used for this document
This document references numerous software components. The versions of each operating system and software package used for testing are listed in this section.
Hypervisor hosts
Components Software description
OS VMware ESX 4.1 Update 1
Management server operating systems
Components Software description
VMware vCenter / View Composer server            Microsoft Windows Server 2008 R2
View Manager server                              Microsoft Windows Server 2008 R2
HP Systems Insight Manager (SIM) server          Microsoft Windows Server 2008 R2
HP P4000 Centralized Management Console server   Microsoft Windows 7 Professional, 32-bit
HP Insight Control Server Deployment             Microsoft Windows Server 2003
Microsoft SQL Server servers1                    Microsoft Windows Server 2008 R2
Management software
Components Software description
Virtualization Management VMware vCenter Server 4.1
Hardware Monitoring and Management   HP Systems Insight Manager 6.3
Storage Management                   HP StorageWorks P4000 SAN/iQ Centralized Management Console (CMC) 9
Server Deployment HP Insight Control Server Deployment
Databases Microsoft SQL Server 2008 Enterprise edition, x64
View 4.6 components
Components Software description
VMware View Manager VMware View Connection Server 4.6
VMware View Composer VMware View Composer 2.6
VMware ThinApp VMware ThinApp Enterprise, 4.6
1 It is assumed that an existing SQL Server cluster will be used to host the necessary databases.
Firmware revisions
Components Version
HP Onboard Administrator 3.30
HP Virtual Connect 3.18
HP ProLiant Server System ROM         Varies by server
HP SAS Switch                         2.2.15.0
HP Integrated Lights-Out 3 (iLO 3)    1.20
HP 600 Modular Disk System (MDS600)   2.66
End user virtual machines
Components Software description
Operating System Microsoft Windows 7 Professional, x64
Connection Protocol Microsoft RDP and Teradici PCoIP
Why HP and VMware
Great solutions deliver value at a level that cobbled-together components and poorly coordinated partnerships cannot approach. VMware View on the HP Enterprise Reference Architecture for Client
Virtualization brings value to end user computing.
The value of convergence – Converging network, server, storage and infrastructure teams with
the teams that manage the desktop is not trivial. Somebody needs to own the solution as a whole, but
with traditional approaches, the segmented hardware demands segmented teams. The simplest
answer is to converge the infrastructure to drive toward a single administrator. HP BladeSystem and the HP P4800 G2 SAN Solution for BladeSystem deliver unheard-of levels of convergence between storage, network and servers to provide real value where it counts – on the IT bottom line. Combined with VMware View, with its minimal software to install and simple management, the result is compelling. For Client Virtualization this combination helps deliver the following:
Simplified ownership of the VDI infrastructure. Traditional VDI implementations involve the
coordination of resources from across the IT realm. Storage teams, network specialists, virtualization
gurus, server and infrastructure managers and client support must converge to provide responsive
support to end user and IT needs. Support, troubleshooting and even implementation can become
cumbersome without a well defined path and clear ownership. Eliminate the problem by providing
a converged platform that allows a single administrator to manage storage, server and
infrastructure while reducing the network team involvement to the management of outbound ports
that carry protocol traffic straight to the core. With the HP Insight plugins for VMware vCenter, the
level of simplicity is enhanced.
Reduce expensive network ports while gaining network flexibility. With Virtual
Connect Flex-10 and Flex-NICs your network infrastructure from the server to the core is
dramatically simplified, rack switches are eliminated and network setup and monitoring is a breeze.
In virtual environments where bandwidth and segregation of networks is key, HP delivers the ability
to provision up to 16 network adapters per server, with variable bandwidth totaling up to 10Gb/s per physical port, while using as few as two cables to connect all of the NICs within a rack to the core. This
allows VMware virtual networks to be created with the greatest degree of resiliency and
segmentation while eliminating the mass of cables that comes with multiple network adapters as
well as eliminating expensive rack switches. The effect on both cost and reduced complexity is
dramatic.
The infrastructure is the fabric. The HP P4800 G2 SAN for BladeSystem uses the electronic
links between the Virtual Connect modules to receive storage traffic. The result is that, when properly implemented, traffic does not leave the Virtual Connect domain and latencies are lowered by the nature of the connections. You can build cable-free VMware vMotion networks into the enclosure. With no cables to fail, performance and reliability become built-in features rather than hoped-for achievements.
Incredible density. Companies have been virtualizing and consolidating infrastructure for a while now. VMware View's ability to consolidate numerous users onto a single server, combined with a choice of HP's industry-leading ProLiant servers, means optimal density and a minimal increase in the use of valuable data center floor space.
Reduce the time and cost to migrate to Windows 7. Utilizing VMware View and VMware
ThinApp to rapidly provision new virtual machines (VMs) as well as virtualize legacy applications
means quicker time to migration as well as lower costs. Doing so on HP's Client Virtualization platform means optimized hardware as well as meaningful integration.
Smarter, more optimized storage choices. HP allows you to take advantage of needed
management infrastructure to provide shared, accelerated storage right inside the BladeSystem
enclosure. The benefit of utilizing this tier as a standalone, bladed entity is that it allows you to make optimal choices for other storage tiers – including choosing inexpensive, high-performance direct attached storage – without the need to tie it all to a single-function storage head. VMware replicas reside on cost-effective, high-speed, low-latency storage heads while linked clones and accessory files
reside where they make the most sense from both an architectural and a cost standpoint. In addition,
extend business continuity and disaster recovery to desktops with VMware vSphere. The result is great
price points and outstanding performance achieved with realistic sizing.
Industry standard servers and approaches matter. Standardizing on a common
virtualization platform with VMware View and VMware vSphere running on HP industry-standard
blade servers and storage provides a simplified, easy to deploy and manage, performance-optimized
and flexible environment for Client Virtualization.
Integrated management. Manage and monitor servers, power and cooling, networking and
storage from within VMware vCenter. HP has partnered to deliver real value in a place where IT
spend continues to grow. This is critical in VDI deployments where realizing single administrator
benefits pays big dividends.
The result of close collaboration between HP and VMware, HP Insight Control for VMware vCenter
brings together the best HP and VMware management environments, providing virtualization
administrators with deep insight and complete control of their VMware virtualized infrastructure.
HP Insight Control for VMware vCenter augments the capabilities of VMware vCenter with the physical server management capabilities of Insight Control, providing virtualization administrators with
unparalleled visibility and control of the servers that power their VMware virtualized environment.
Industry-leading partnerships matter. HP and VMware have a partnership spanning more than 10 years. The results show in this architecture and in our joint ability to provide service from one end of the VDI hardware and software stack to the other.
The Enterprise Client Virtualization Reference Architecture
for HP VirtualSystem
The Enterprise Client Virtualization Reference Architecture for HP VirtualSystem is shown in Figure 1.
As discussed in the document entitled Virtual Desktop Infrastructure for the Enterprise
(http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA3-4348ENW), HP is focusing
on Client Virtualization as a whole with VDI as a specific implementation of Client Virtualization
technologies. In the image below, that means that the VDI session is represented by the Compute
Resource.
Key to the optimization of that resource are the concepts of application and user virtualization. The
following sections will address VMware’s approach to providing these critical elements.
Figure 1: The HP Converged Infrastructure for Client Virtualization Reference Architecture for VDI
VMware and user virtualization
Eliminating the end user data and associated application information from the core compute resource
allows for greater flexibility, improved management and direct attached storage architectures. Both
HP and VMware have partnered with a number of third party vendors to enable user virtualization in
View environments. From simple profile redirection to complex conditional policy with just-in-time
profile access, there are a number of options to optimize your users.
For this reference architecture, optimization of end users is critical. The decoupling of end users from
the base image is one of the first steps in optimizing storage choices and reducing costs and
management. For the floating pools hosted on direct attached storage referenced in this document, the design assumes that an end user virtualization solution is in place.
VMware and application virtualization
Application delivery is challenging in both physical and virtual environments. VMware ThinApp addresses delivery challenges in both environments; however, the focus of this guide is delivery to a
virtual environment. ThinApp is an agentless application virtualization solution that decouples
applications from their underlying operating systems to eliminate application conflict and streamline
application delivery and management. VMware ThinApp simplifies application virtualization to
enable IT administrators to quickly deploy, efficiently manage, and upgrade applications without risk,
accelerating desktop value. With ThinApp, an entire application and its settings can be packaged
into a single executable and deployed to many different Windows operating systems without
imposing additional cost and complexity to the server or client. Application virtualization with
ThinApp eliminates conflicts at the application and OS level and minimizes costly recoding and
regression testing to speed application migration to Windows 7.
ThinApp virtualizes applications by encapsulating application files and registry into a single ThinApp
package that can be deployed, managed, and updated independently from the underlying operating
system (OS). The virtualized applications do not make any changes to the underlying OS and
continue to behave the same across different configurations for compatibility, consistent end-user
experiences, and ease of management.
As a key component of VMware View, ThinApp helps reduce the management burden of applications
and desktop image management. IT administrators have the benefit of a single console to deploy and
manage desktops and applications using the integrated functionality of assigning ThinApp packages
within the View Administrator console.
Common use cases to leverage VMware ThinApp
VMware ThinApp simplifies application delivery by encapsulating applications in portable packages
that can be deployed to many end point devices while isolating applications from each other and the
underlying operating system.
Simplify Windows 7 migration—Migrate legacy applications such as Internet Explorer 6 to 32- and
64-bit Windows 7 systems with ease by using ThinApp to eliminate costly recoding, regression
testing, and support costs.
Eliminate application conflicts—Isolate desktop applications from each other and from the
underlying OS to avoid conflicts.
Consolidate application streaming servers—Enable multiple applications and "sandboxed" user-
specific configuration data and settings to safely reside on the same server.
Reduce desktop storage costs—Leverage ThinApp as a component of VMware View to reduce
desktop storage costs and streamline updates to endpoints.
Augment security policies—Deploy ThinApp packages on "locked-down" PCs and allow end users
to run their favorite applications without compromising security.
Increase mobility for end users—Deploy, maintain, and update virtualized applications on USB
sticks for ultimate portability.
Review of key features
Agentless Application Virtualization
Agentless architecture—Designed for fast deployment and ease of management, ThinApp requires
no agent code on target devices.
Complete application isolation—Package entire applications and their settings into a single
executable that runs independently on any endpoint, allowing multiple versions or multiple
applications to run on the same device without any conflict.
Built-in security—Application packages run only in user mode, so end users have the freedom and
flexibility to run their preferred applications on locked-down PCs without compromising security.
Fast, Flexible Application Packaging
Package once, deploy to many—Package an application once and deploy it to desktops or servers
(physical or virtual, 32- or 64-bit) running Windows XP, Windows Vista, Windows 7, Windows
Server 2003, or Windows Server 2008.
Three-step setup capture—Use a three-step process that captures pre- and post-install system states to simplify
application packaging and support applications that require a reboot during the installation
process.
Microsoft Internet Explorer 6 support—ThinApp now offers complete support for virtualizing
Microsoft Internet Explorer 6 (IE 6) that makes it easy to virtualize and deploy IE 6 application
packages to 32- and 64-bit Windows 7 desktops.
ThinApp Converter—ThinApp works with VMware vSphere, VMware ESX, and VMware
Workstation images to convert silently installed applications into ThinApp packages through a
command-line interface that allows for automation of application conversion and management.
Relink—Upgrade existing ThinApp packages to the new ThinApp 4.6 format quickly and easily
without the need for associated project files.
Fast, Flexible Application Delivery
ThinDirect—This new feature gives end users the flexibility to seamlessly run IE 6 on Windows 7
desktops alongside newer browsers such as IE 8, and allows the administrator to configure web
pages with IE 6 dependencies to ensure that URLs always open in the right browser.
Application link—Configure relationships between virtualized applications, plug-ins, service packs,
and even runtime environments such as Java and .NET.
Application sync—Automatically apply updates over the web to applications on unmanaged PCs
and devices.
Support for USB drives and thin clients—Deploy, maintain, and update applications on USB storage
drives and thin client terminals.
Microsoft Windows 7 support—Deploy legacy applications on 32- and 64-bit Windows 7 systems
and streamline application migration by avoiding costly, time-consuming recoding and regression
testing.
Seamless Integration with Existing Infrastructure
Zero-footprint architecture—Plug ThinApp directly into existing IT tools without the need to add
dedicated hardware or backend databases.
Integration with management tools—ThinApp creates standard .MSI and EXE packages that can be
delivered through existing tools from Microsoft, BMC, HP, CA, Novell, Symantec, LANDesk, and
others.
Support for Active Directory authentication—Add and remove ThinApp users from Active Directory
groups, and prevent unauthorized users from executing ThinApp packages.
Integrated application assignment in VMware View Manager 4.6—ThinApp packages can be
assigned to individual desktops or pools of desktops in View Manager to allow for streamlined
application deployment.
VMware and the compute resource
Deliver rich, personalized virtual desktops as a managed service from a virtualization platform built to
deliver the entire desktop, including the operating system, applications and data. With VMware View
4.6, desktop administrators virtualize the operating system, applications, and user data and deliver
modern desktops to end-users. Get centralized automated management of these components for
increased control and cost savings. Improve business agility while providing a flexible high
performance desktop experience for end-users, across a variety of network conditions. VMware View
4.6 delivers:
Windows 7 support
Increase the speed and reduce the cost and complexity of migration by delivering Windows 7 as a
virtual desktop with VMware View. Add in VMware ThinApp to help save the cost of application
porting by virtualizing legacy applications to deploy on Windows 7 desktops.
Simplified desktop management
Desktop and application virtualization breaks the bonds between the operating system, applications,
user data and hardware, eliminating the need to install or manage desktop environments on end-user
devices. From a central location you can deliver, manage and update all of your Windows desktops
and applications in minutes. VMware View makes the testing, provisioning and support of
applications and desktops much easier and less costly.
Streamlined application management
VMware ThinApp application virtualization separates applications from underlying operating systems
for increased compatibility and streamlined application management. Applications packaged with
VMware ThinApp can be run in the data center where they are accessible through a shortcut on the
virtual desktop, reducing the size of the desktop image and minimizing storage needs. Since VMware
ThinApp isolates and virtualizes applications, multiple applications or multiple versions of the same
applications can run on the virtual desktops without conflict. Applications are assigned centrally
through View Manager, ensuring that all user desktops are up-to-date with the latest application
versions.
Automated desktop provisioning
VMware View Manager provides a single management tool to provision new desktops or groups of
desktops, and an easy interface for setting desktop policies. Using a template, you can customize
virtual pools of desktops and easily set policies, such as how many virtual machines can be in a pool,
or logoff parameters. This feature enables greater IT efficiency by automating and centralizing
desktop provisioning activities.
Advanced virtual desktop image management
Based on the mature Linked Clone technology, VMware View Composer enables the rapid creation of
desktop images from a golden image. Updates implemented on the parent image can be easily
pushed out to any number of virtual desktops in minutes, greatly simplifying deployment, upgrades
and patches while reducing desktop operational costs. With the core components of the desktop
being managed separately the process does not affect user settings, data or applications, so the end-
user remains productive on a working desktop, even while changes are being applied to the master
image.
Superior end user experience
Address the broadest range of use cases and deployment options with VMware View’s PCoIP display
protocol from Teradici, and deliver a high-performance desktop experience, even over high latency
and low bandwidth connections. The adaptive capabilities of the PCoIP display protocol are
optimized to deliver virtual desktops to users on the LAN or over the WAN. VMware View gives users
access to their virtual desktops over a wide variety of devices, without any performance degradation.
End-users enjoy a seamless desktop experience with the ability to play rich media content, choose
from various monitor configurations and easily access locally attached peripherals such as scanners
and mass storage devices.
Built-in security
Maintain control over data and intellectual property by keeping it secure in the data center. End-users
access their personalized desktop, complete with applications and data, securely from any location,
at any time without jeopardizing corporate security policies. End-users outside of the corporate
network can connect to their desktop easily and securely through the VMware View Security Server.
Integration with vShield Endpoint enables offloaded and centralized anti-virus and anti-malware (AV)
solutions. This integration helps to eliminate agent sprawl and AV storm issues while minimizing the
risk of malware infection and simplifying AV administration. VMware View also supports integration
with RSA SecurID for two-factor authentication requirements.
Availability and scalability
VMware View delivers high availability, with no single point of failure. When using a SAN based
model, VMware High Availability (HA) ensures automatic failover and provides pervasive, cost-
effective protection within your virtualized desktop environment to ensure service level agreements are
met and downtime is minimized. Advanced clustering capabilities on the physical and virtual layers
provide enterprise-class scalability with no single point of failure. VMware View can also be
integrated with existing systems management platforms already deployed within the enterprise.
View Client with Local Mode
The VMware View Client with Local Mode increases productivity by allowing end-users to run
managed virtual desktops locally or in the data center through the same administration framework.
Simply download a virtual desktop onto the local client device where the operating system,
applications and data can be accessed with or without a network connection. Offline users can
synchronize desktop changes back to the data center when they return to the network. The entire
contents of the desktop are secure within an encrypted desktop image while all existing IT security
policies for that virtual desktop continue to be applied and enforced regardless of network
connection.
What this document produces
This document will help construct a platform capable of supporting 1140 task workers in floating
pools connected to direct attached storage at the linked clone layer and 840 productivity workers
connected to the HP P4800 G2 SAN for BladeSystem. For an understanding of how HP arrived at
these numbers see the document entitled Virtual Desktop Infrastructure for the Enterprise
(http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA3-4348ENW). The solution
scales larger, both as defined in this document and with the addition of more racks, enclosures and
storage containers.
Before diving deep into how the platform, when combined with VMware View, creates a robust, enterprise-ready VDI solution, it is first necessary to define VDI. HP's view of VDI is captured in Figure 2: an end user, from a client, accesses a brokering mechanism, which provides a desktop over a connection protocol to the end user.
Figure 2: The HP Approach for VDI
This desktop is, at logon, combined with the user's personality, application and data settings, and an application layer to create a runtime VDI instance as in Figure 3. This instance, in simple terms, is a combination of three elements that are separate prior to the user logging on. This ensures that
optimized management can be applied to each layer and that the same experience is available to the
end user regardless of how they receive their core compute resource.
Figure 3: The VDI runtime instance
Figure 4 shows the VMware View architecture stack on the HP Client Virtualization Reference
Architecture platform. ThinApp provides the application virtualization layer, View Connection server
acts as the broker and desktops are delivered over the network via Microsoft’s Remote Desktop
Protocol or via PCoIP. VMware does not explicitly recommend a mechanism for user virtualization,
but rather allows multiple models for managing user data within the overall ecosystem. HP
recommends selecting a mechanism for user virtualization that minimizes network impact from the
movement of user files and settings and allows for the real-time customization of the user's environment
based on a number of factors including location, operating system and device.
Figure 4: VMware View on HP Converged Infrastructure
The entire VDI stack must be housed on resilient, cost-effective and scalable infrastructure that can be managed by a minimal number of resources. This includes ensuring flexibility at the various layers to ensure the right device is selected for the job.
When all components are viewed in the broadest fashion you get Figure 5. This outlines the mixture
of HP components and their direct interaction with the VMware software suite.
Figure 5: End to end architecture for VMware View on the HP Client Virtualization Reference Architecture
Figure 6 below shows the networks required to configure the platform for View and where the various
components reside. Note that the dual-homed storage management approach allows all storage traffic to remain within the Virtual Connect (VC) domain, reducing complexity and involvement from multiple teams. Application delivery traffic, provisioning traffic and, if you choose, even Active Directory
authentication can be kept within the VC domain. This reduces the number of switch ports that need to
be purchased and managed. This reduction in ports leads to not only a lower acquisition cost, but
also a long-term cost benefit from reduced complexity and fewer individual pieces to manage.
Figure 6: A VMware View specific implementation viewed from an overall network standpoint
Figure 7 shows the overall rack layout of the infrastructure used. Note that there are no rack switches
used. Both the Virtual Connect domain and the optional, external management host are connected
directly to the network core.
Figure 7: Hardware platform being created for this document (front and back)
Figure 8 highlights the overall function for each component shown. There will be a number of clusters
developed for this document. Two of these clusters will feature dedicated VMs created using
dedicated linked clones and hosting productivity workers. The remaining clusters will support floating
pools of non-persistent VMs.
Figure 8: Layout by function in a hybrid implementation (mix of SAN and DAS for storage)
Figure 9 shows cabling for the platform outlined in this document. This minimal configuration shows
four cables supporting all users with two 10GbE uplinks. Redundant 10GbE is dedicated to
production and management traffic via a pair of cables. The enclosures communicate via a highly
available 10GbE bi-directional network that carries vMotion and storage traffic without egressing.
This minimizes network team involvement while enhancing flexibility. This configuration can be
expanded to include a 10GbE uplink to the core from each enclosure enhancing availability or
multiple uplinks per enclosure dramatically increasing bandwidth to the core.
Figure 9: Minimal total cabling within the rack required to support all users and hosts within the rack
Figure 10 highlights the configuration of the vSwitches used for virtualized hosts in this environment
and how they relate to the Virtual Connect configuration shown above and to the core. There is some
flexibility for your specific environment. The configuration shown maximizes network segmentation
and takes advantage of the internal networking capabilities of the Virtual Connect domain. From the
diagram it can be determined for example that storage and vMotion traffic never leave the Virtual
Connect domain.
Figure 10: vSwitch configuration and its relationship to the Virtual Connect domain and network core
Configuring the platform
This section is designed to guide the installer through the process of installing and configuring the
hardware platform in a way that is optimized for deploying VMware View while enabling future
scaling.
External Insight Control
Prior to configuring the platform, you will need to decide where to locate your Insight Control suite. As a rule, external servers are recommended in VDI environments where the desktop virtual machines will not be monitored, as they scale easily across numerous sets of hardware. This reduces the number of management servers required and minimizes licensing costs. For other implementation scenarios it is recommended that the Insight Control software be installed within the enclosure on a management host. If you are using the Insight Control plug-ins for VMware vCenter, the required software will be installed within the VC domain.
Configuring the enclosures
Once the infrastructure is physically in place, it needs to be configured to work within a VDI
environment. The setup is straightforward and can be accomplished via a single web browser
session.
Configuration settings for the Onboard Administrator (OA) will vary from customer to customer and
thus they are not highlighted here in depth. Appendix C offers a sample script to aid with OA
configuration and can be used to build a script customized to your environment.
There are a few steps that must be undertaken to ensure that your infrastructure is optimized to work with your storage in a lights-out data center. This involves setting the startup timings for the various interconnects and servers within the infrastructure stack.
For both enclosures, log on to your OA. In the left column, expand Enclosure Information,
Enclosure Settings and then finally, click on Device Power Sequence. Ensure that the tab for
Interconnect Bays is highlighted. Set the power on for your SAS switches to Enabled and the
delay to 180 seconds as in Figure 11. This step should be done for any enclosure that contains SAS
switches. This ensures that in the event of a catastrophic power event, the disks in the P4800 G2 SAN
or the MDS600 will have time to fully initialize prior to the SAS switches communicating with them.
Figure 11: Setting the SAS switch power on delay
From the same page, set the Virtual Connect Flex-10 power on to Enabled and set the delay for
some point beyond the time when the SAS switches will power on. A time of 210 seconds is generally
acceptable. Click on Apply when finished to save settings as in Figure 12.
Figure 12: Configuring the power on sequence of the Flex-10 modules within the enclosures
Highlight the Device Bays tab by clicking on it. Set power-on timings for any P4460sb G2 blades or floating pool hosts that are attached to storage to 240 seconds. Click on Apply when finished. Set remaining hosts to power on at some point beyond this.
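Taken together, these delays form a simple dependency chain: the disk shelves must be spinning before the SAS switches come up, the SAS switches before the Virtual Connect modules, and the storage-attached blades last. The minimal sketch below, using the delay values from this section plus an assumed value for the remaining hosts, is one way to sanity-check a planned sequence before entering it in the OA:

```python
# Sanity-check a planned OA power-on sequence (sketch; delays are from this
# section, the dependency ordering is an assumption based on the rationale above).
POWER_ON_DELAY_S = {
    "sas_switches": 180,      # wait for MDS600 / P4800 G2 disks to initialize
    "virtual_connect": 210,   # must follow the SAS switches
    "storage_blades": 240,    # P4460sb G2 and storage-attached hosts
    "remaining_hosts": 300,   # hypothetical value: "beyond this point"
}

# Each device must power on strictly after the device it depends on.
DEPENDS_ON = [
    ("virtual_connect", "sas_switches"),
    ("storage_blades", "virtual_connect"),
    ("remaining_hosts", "storage_blades"),
]

for later, earlier in DEPENDS_ON:
    assert POWER_ON_DELAY_S[later] > POWER_ON_DELAY_S[earlier], (
        f"{later} ({POWER_ON_DELAY_S[later]}s) must power on after "
        f"{earlier} ({POWER_ON_DELAY_S[earlier]}s)"
    )
print("Power-on sequence is consistent.")
```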
Before proceeding to the next section, ensure your enclosures are fully configured for your
environment.
Creating a Virtual Connect domain with stacked enclosures
This section will focus on configuring the Virtual Connect domain. The simplest way to do this is to
undertake the configuration with either a minimal number or no servers within the enclosures. It is
recommended that you begin the configuration with the enclosure that will house the P4800 G2 SAN.
To begin, you should launch the Virtual Connect Manager console by either clicking the link from the
OA interface or by entering the IP address or hostname directly into your browser. Log onto the VC
modules using the information provided on the asset tag that came with the primary module as in
Figure 13.
Note: It is recommended that you carry out the configurations in this section without the enclosures
stacked, but the stacking will need to be in place prior to completing the import of the second
enclosure. Plan on connecting the modules prior to importing the second enclosure.
Figure 13: Virtual Connect Manager logon screen
Once logged in you’ll be presented with the Domain Setup Wizard. Click on Next as in Figure
14 to proceed with setup.
Figure 14: The initial Domain Setup Wizard screen
You will be asked for the Administrator credentials for the local enclosure as in Figure 15. Enter the
appropriate information and click on Next.
Figure 15: Establishing communication with the local enclosure
At the resulting screen choose to Create a new Virtual Connect domain by importing this
enclosure and click on Next as in Figure 16.
Figure 16: Importing the first enclosure
When asked, click on Yes as in Figure 17 to confirm that you wish to import the enclosure.
Figure 17: Confirm importing the enclosure
You should receive a success message as in Figure 18 that highlights the successful import of the
enclosure. Click on Next to proceed.
Figure 18: Enclosure import success screen
At the next screen, you will be asked to assign a name to the Virtual Connect domain. Keep scaling in mind as you do this: a moderately sized deployment of 4,000 users can potentially be housed within a single Virtual Connect domain, while very large implementations may require multiple domains. If you will scale to very large numbers, a naming convention that scales is advisable.
Enter the name of the domain in the text box as in Figure 19 and then click on Next to proceed.
Figure 19: Virtual Connect domain naming screen
Configure local user accounts at the Local User Accounts screen. Ensure that you change the
default Administrator password. When done with this section, click on Next as in Figure 20.
Figure 20: Configuring local user accounts within Virtual Connect Manager
This will complete the initial domain configuration. Ensure that you check the box to Start the
Network Setup Wizard as in Figure 21. Click Finish to start configuring the network.
Figure 21: Final screen of the initial configuration
Configuring the network
The next screen to appear is the initial Network Setup Wizard screen. Click on Next to proceed
as in Figure 22.
Figure 22: Initial network setup screen
At the Virtual Connect MAC Address screen, choose to use the static MAC addresses of the adapters rather than Virtual Connect assigned MAC addresses as in Figure 23. Click on Next to proceed when done.
Figure 23: Virtual Connect MAC Address settings
Select to Map VLAN Tags as in Figure 24. You may change this setting to be optimized for your
environment, but this document assumes mapped tags. Click on Next when done.
Figure 24: Configuring how VLAN tags are handled
At the next screen, you will choose to create a new network connection. The connections used in this
document create shared uplink sets. This initial set will be linked externally and will carry both the
management and production networks. Choose Connection with uplink(s) carrying multiple
networks (using VLAN tagging). Click on Next to proceed as in Figure 25. You will need to
know your network information including VLAN numbers to complete this section.
Figure 25: Defining the network connections
Provide a name for the uplink set and grant the connection the two network ports that are cabled from
the Virtual Connect modules to the network core. Add in the management and production networks as
shown in Figure 26. Click on Apply to proceed.
Figure 26: Configuring external networks
You will be returned to the network setup screen. Once again choose Connection with uplink(s) carrying multiple networks (using VLAN tagging). Click on Next to proceed as in Figure 27.
Figure 27: Setting up the second uplink set
This second set of uplinks will carry the vMotion and iSCSI networks. These networks will not egress
the Virtual Connect domain. Give a name to the uplink set and define your networks along with their
VLAN ID, but do not assign uplink ports as in Figure 28. This will ensure the traffic stays inside the
domain. Click on Next to proceed.
Figure 28: Defining the internal network
When you return to the Defined Networks screen, choose No, I have defined all available
networks as in Figure 29. Click on Next to continue.
Figure 29: Final defined networks screen
Click on Finish at the final wizard screen. This will take you to the home screen for the Virtual
Connect Manager as in Figure 30. This completes the initial setup of the Virtual Connect domain.
Figure 30: The initial Virtual Connect Manager screen
Defining profiles for hosts
Virtual Connect allows you to build a network profile for a device bay within the enclosure. No server
need be present. The profile will be assigned to any server that is placed into the bay. This profile
configures the networks and the bandwidth associated with the onboard FlexNICs. The following
recommendations work for all ProLiant servers, but if you are using the ProLiant BL620c G7 you will
have twice as many NICs to work with (up to 16 FlexNICs). You may wish to maximize bandwidth
accordingly with these adapters.
Table 2 reiterates the networks created for the Virtual Connect domain as well as how they are
assigned for hypervisor and management hosts.
Table 2. Virtual Connect networks and path
Network      External or Internal   VLAN’d   Uplink Set
Production   External               Yes      External
VMotion      Internal               No       Internal
iSCSI        Internal               No       Internal
Management   External               Yes      External
The device bays for the P4800 G2 SAN are simply configured with two adapters of 10GbE
bandwidth each. Both adapters are assigned to the iSCSI network.
HP suggests the following bandwidth allocations for each network as in Table 3.
Table 3. Bandwidth recommendations for hypervisor and management host profiles
Network Assigned Bandwidth
Production 1.5 Gb/s
VMotion 2 Gb/s
iSCSI 6 Gb/s
Management 500 Mb/s
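Because each physical Flex-10 port is carved into FlexNICs whose combined allocation cannot exceed the port's 10 Gb/s, it is worth checking a planned profile before building it. The sketch below is a minimal validation, assuming one FlexNIC per network on each of the two physical ports (the profile's eight connections being two per network, one per port):

```python
# Validate that the per-network allocations from Table 3 fit within a single
# 10 Gb/s Flex-10 port (sketch; one FlexNIC per network per port assumed).
PORT_CAPACITY_GBPS = 10.0

ASSIGNED_BANDWIDTH_GBPS = {
    "Production": 1.5,
    "VMotion": 2.0,
    "iSCSI": 6.0,
    "Management": 0.5,
}

total = sum(ASSIGNED_BANDWIDTH_GBPS.values())
print(f"Allocated {total} of {PORT_CAPACITY_GBPS} Gb/s per port")
assert total <= PORT_CAPACITY_GBPS, "FlexNIC allocations exceed port capacity"
```

With the values in Table 3, the four FlexNICs consume exactly the full 10 Gb/s of each port; the second adapter per network exists for redundancy across the two physical ports rather than extra headroom.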
To begin the process of defining and assigning the server profiles, click on Define and then Server
Profile as in Figure 31.
Figure 31: Define a server profile via dropdown menu
You will create a single profile for the hypervisor and management hosts as in Figure 32. This profile
will be copied and assigned to each device bay.
Right-click on the Ethernet Adapter Connections and choose to Add Connection. You will do
this eight times. Assign two adapters to each network defined in Table 3 above and assign the
bandwidth to those adapters as shown. Do not assign the profile to any bay as this will serve as your
master profile. Click on Apply when finished.
Figure 32: Configuring the profile for hypervisor and management hosts
Figure 33 shows the screen as it appears once the networks are properly defined.
Figure 33: The host profile for hypervisor and management hosts
Repeat the prior process to define a second profile to be copied to any slot where a P4460sb G2
storage blade resides. Assign the full bandwidth of 10Gb to each of two adapters. Do not create any
extra Ethernet connections. Click on Apply as in Figure 34 to save the profile.
Figure 34: Master profile to be copied to P4800 storage blades
Importing the second enclosure
With the configuration of the initial enclosure and VC domain complete, additional enclosures can be
incorporated into the domain. On the left column, click on Domain Enclosures and then click on
the Domain Enclosures tab. Click on the Find button and enter the information for your second
enclosure as in Figure 35.
Figure 35: Find enclosure screen
At the next screen, click the check box next to the second enclosure and choose the Import button as
in Figure 36.
Figure 36: Import the second enclosure into the domain
You should receive an enclosure import success screen as in Figure 37 below.
Figure 37: Enclosure import success
Optionally, you may click on the Domain IP Address tab and assign a single IP address from
which to manage the entire domain as in Figure 38.
Figure 38: Configuring an IP address for the domain
Be sure to back up your configuration as in Figure 39 prior to proceeding. This will ensure you have
a baseline to return to.
Figure 39: Domain settings backup screen
With your domain configured, copy the server profiles you created and assign them to the desired
bays. Once complete, back up your entire domain configuration again for a baseline configuration
with profiles in place that can be used to restore to a starting point if needed.
Setting up management hosts
For each of the management servers within the enclosure, the following steps should be carried out.
If you will use solid state disks and the HP SB40c storage blade as the basis for your accelerated
storage, ensure that you select to configure the disks as a RAID5 set in the Online ROM Configuration
for Arrays (ORCA) during startup.
Configure the onboard disks as a RAID10 set in ORCA and ensure they are set as the boot volume in
the ROM-Based Setup Utility (RBSU). Install VMware vSphere 4.1 on the management hosts. Ensure a
local datastore is created on the RAID10 set. Do not create a datastore using the accelerated storage
at this time.
If you are using an HP IO Accelerator module as the basis for your accelerated storage, ensure you
load the driver for the module either during or immediately after setup of vSphere.
Connect to each host with the VMware Virtual Infrastructure Client (VIC) to perform the remaining
steps.
Create a datastore from the accelerated storage. You should set the block size for the datastore to allow the largest possible file size. You will create the storage for your P4000 Virtual SAN Appliance (VSA) within this datastore. This step is identical regardless of whether you use an SB40c storage blade or an HP IO Accelerator.
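On vSphere 4.1, a VMFS-3 datastore's block size caps its maximum file size – 1 MB blocks allow files up to 256 GB, 2 MB up to 512 GB, 4 MB up to 1 TB, and 8 MB up to 2 TB (less 512 bytes) – so maximizing the largest file size means formatting with 8 MB blocks. A minimal sketch of that decision:

```python
# Choose the smallest VMFS-3 block size that supports the largest virtual
# disk expected on the datastore (sketch; limits per vSphere 4.1 VMFS-3).
VMFS3_MAX_FILE_GB = {1: 256, 2: 512, 4: 1024, 8: 2048}  # block size MB -> max file GB

def pick_block_size_mb(largest_file_gb: int) -> int:
    for block_mb, max_gb in sorted(VMFS3_MAX_FILE_GB.items()):
        if largest_file_gb <= max_gb:
            return block_mb
    raise ValueError("File too large for a VMFS-3 datastore")

print(pick_block_size_mb(300))  # -> 2; for the VSA's large data disk, use 8
```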
License each server as appropriate.
In addition, you will need to set parameters in the configuration as in Table 4 below.
Table 4. Configuration changes for vSphere management hosts
Category Subcategory Changes
Time Configuration   NTP        Configure an NTP server for the host to point to
Security Profile     Firewall   SSH server and client enabled, vSphere Web Access, Software iSCSI Client
Security Profile     Firewall   Optionally enable update ports
Configuring the vSwitches
Table 5 describes the vSwitch configurations needed for each management host. Ensure these are
configured prior to proceeding. It is recommended that you read and understand the document
Running VMware vSphere 4 on HP LeftHand P4000 SAN Solutions at
http://h20195.www2.hp.com/v2/GetDocument.aspx?docname=4AA3-0261ENW prior to
completing your vSwitch configuration.
Table 5. vSwitch configuration for hypervisor hosts
vSwitch    Port Group                     Assigned Bandwidth   Function
vSwitch0   Service Console                500 Mb/s             Management
vSwitch0   Virtual Machine – Management   500 Mb/s             Management
vSwitch1   Virtual Machine                1.5 Gb/s             Production network for protocol, application and user traffic
vSwitch2   VMkernel                       2 Gb/s               vMotion traffic
vSwitch3   VMkernel 1 – iSCSI A           6 Gb/s               iSCSI A side network
vSwitch3   VMkernel 2 – iSCSI B           6 Gb/s               iSCSI B side network
vSwitch3   Virtual Machine – iSCSI        6 Gb/s               iSCSI access for CMC
Creating a dual-homed management VM
You will not be able to connect to the P4800 G2 SAN with the Centralized Management Console (CMC) yet, as no external networks are defined for access. To gain access, you will need to create a management VM to run the CMC and assign it two (2) Ethernet adapters. The first should have access to the Management network and the second should have access to the iSCSI network.
Install the operating system into this VM. You may choose from a variety of operating systems
supported by the CMC. For the purposes of this document we installed a copy of Microsoft Windows 7 Professional, 32-bit, and granted it a single vCPU with 1GB of RAM. This VM should be installed on
the local datastore. It is recommended that the installer migrate the VM to a shared storage volume
once the management hosts are fully configured.
Once the operating system has been installed and patched, install the latest version of the HP P4000
Centralized Management Console on the host and enable remote access via MS RDP for at least one
user on the management network.
Configuring the P4800 G2 SAN for BladeSystem
In order to access and manage the P4800 G2 you must first set IP addresses for the P4460sb G2
storage blades. For each blade, perform the following steps to configure the initial network settings. It
is assumed you will not have DHCP available on the private storage network. If you are configuring
storage traffic to egress the enclosure and are running DHCP you can skip ahead.
1. Log onto the blade from the iLO. The iLO for each blade can be launched from within the
Onboard Administrator as long as the OA user has appropriate permissions. If not, use the asset
tag on the P4460sb to locate the iLO name and administrator password.
2. From the command line, type the word Start.
3. Choose Network TCP/IP Settings from the available options.
4. Choose a single adapter to configure.
5. At the Network Settings screen, enter the IP information for the node.
When you have completed these steps for each P4460sb proceed to the next section.
Configuring the SAN
With P4000 SANs, there is a hierarchy of relationships between nodes and between SANs that the installer should understand. In order to create a P4800 G2 SAN, you will need to define the nodes, a cluster and a management group.
A node in P4000 SAN parlance is an individual storage server. In this case, the server is an HP
P4460sb G2.
A cluster is a group of nodes that when combined form a SAN.
A management group houses one or more clusters/SANs and serves as the management point for
those devices.
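As a mental model, the containment is strict: each node belongs to exactly one cluster, and each cluster to exactly one management group. The sketch below models that hierarchy; all names and addresses are illustrative placeholders, not values from the CMC:

```python
# Minimal model of the P4000 management hierarchy described above
# (sketch; names and addresses are illustrative placeholders).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:                 # an individual storage server, e.g. a P4460sb G2
    hostname: str
    bond_ip: str            # IP of the node's ALB bond

@dataclass
class Cluster:              # a group of nodes that together form one SAN
    name: str
    virtual_ip: str         # iSCSI target address for hypervisor hosts
    nodes: List[Node] = field(default_factory=list)

@dataclass
class ManagementGroup:      # management point for one or more clusters/SANs
    name: str
    clusters: List[Cluster] = field(default_factory=list)

group = ManagementGroup("MG-VDI01", clusters=[
    Cluster("P4800-CL01", virtual_ip="10.1.2.10",
            nodes=[Node(f"p4460sb-{i}", f"10.1.2.{i}") for i in range(1, 5)]),
])
print(f"{group.name}: {len(group.clusters[0].nodes)} nodes in one cluster")
```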
To start the configuration, launch the HP P4000 Centralized Management Console by logging into
your management VM and clicking the icon.
Detecting nodes
You will locate the four (4) P4460sb nodes that you just configured. Figure 40 shows the initial
wizard for identifying nodes.
Figure 40: CMC find nodes wizard
Click on the Find button to proceed. You can now walk through the wizard adding nodes by IP
address or finding them via mask. Once detected, you will create an Adaptive Load Balance (ALB)
bond for each node.
On the first P4460sb G2 node you need to click the plus next to the node and select TCP/IP
Networking as in Figure 41.
Figure 41: Highlight TCP/IP Network
Highlight both adapters in the right-hand TCP/IP tab, right-click and select New Bond. Define the
IP address of the bond. When done, it should appear as in Figure 42. Repeat this until every
P4460sb node in the cluster has an ALB bond defined.
Figure 42: New ALB bond screen
Once you have validated that all nodes are present in the CMC and your bonds are created you can
move on to the next section to create the management group.
Creating the management group
When maintaining an internal iSCSI network, each Virtual Connect domain must have its own CMC and management group. The management group is the highest level from which the administrator
will manage and maintain the P4800 SAN.
To create the first management group, click on the Management Groups, Clusters, and
Volumes Wizard at the Welcome screen of the CMC as shown in Figure 43.
Figure 43: CMC Welcome screen
Click on the Next button when the wizard starts. This will take you to the Choose a Management
Group screen as in Figure 44.
Figure 44: Choose a Management Group screen
Select the New Management Group radio button and then click on the Next button. This will
take you to the Management Group Name screen. Assign a name to the group and ensure all
four (4) P4460sb nodes are selected prior to clicking on the Next button. Figure 45 shows the
screen.
Figure 45: Name the management group and choose the nodes
It will take time for the management group creation to complete. When the wizard finishes, click on
Next to continue.
Figure 46 shows the resulting screen where you will be asked to add an administrative user.
Figure 46: Creating the administrative user
Enter the requested information to create the administrative user and then click on Next. You will
have the opportunity to create more users in the CMC after the initial installation.
At the next screen, enter an NTP server on the iSCSI network if one is available and click on Next. If unavailable, manually set the time. An NTP server is highly recommended.
Immediately after, you will be asked to configure DNS information for email notifications. Enter the
information requested and click on Next.
Enter the SMTP information for email configuration and click Next.
The next screen will begin the process of cluster creation described in the following section.
Create the cluster
At the Create a cluster screen, select the radio button to choose a Standard Cluster as in
Figure 47.
Figure 47: Create a standard cluster
Click on the Next button once you are done.
At the next screen, enter a Cluster name and ensure all P4460sb nodes are highlighted. Click on
the Next button.
At the next screen, you will be asked to assign a virtual IP address for the cluster as in Figure 48.
Figure 48: Assign a virtual IP address to the cluster
Click on Add and enter an IP Address and Subnet Mask on the private iSCSI network. This will serve
as the target address for your hypervisor side iSCSI configuration.
Click on Next when done.
At the resulting screen, check the box in the lower right corner that says Skip Volume Creation
and then click Finish. You will create volumes in another section.
Once done, close all windows.
At this point, the sixty (60) day evaluation period for SANiQ 9 begins. You will need to license and
register each of the 4 nodes within this sixty (60) day period.
Setting up hypervisor hosts
If you have not done so already, now is the time to insert your hosts into their respective locations within the enclosures. Because Virtual Connect works with the concept of a device bay rather than an actual server, the hosts have not been needed up to this point; when they are inserted, their identities will already be in place.
This section focuses on configuring the hosts to function within the VMware View 4.6 environment.
The simplest way to configure the hosts is to use HP ICSD to install VMware vSphere 4.1 on each
host. The remaining portion of this section highlights what needs to be configured on each host.
Configure vSwitches
If you followed the recommended configuration advice for the setup of the Virtual Connect domain
this section will complete the networking configuration through to the vSwitch. Configure your
vSwitches for each hypervisor host as in Table 6.
Table 6. vSwitch configuration for hypervisor hosts

vSwitch    Port Group             Bandwidth   Function
vSwitch0   Service Console        500 Mb/s    Management
vSwitch1   Virtual Machine        1.5 Gb/s    Production network for protocol, application and user traffic
vSwitch2   VMkernel               2 Gb/s      vMotion traffic
vSwitch3   VMkernel 1 – iSCSI A   6 Gb/s      iSCSI A side network
           VMkernel 2 – iSCSI B   6 Gb/s      iSCSI B side network
Note that there are two VMkernel portgroups for iSCSI. Each portgroup has one adapter dedicated to it. These adapters are tied to the internal network, so no configuration of external switches is needed for iSCSI traffic. If you design your infrastructure to allow iSCSI traffic to egress the Virtual Connect domain, you will need to optimize for that specific use case. For in-depth descriptions of how to configure the P4000 with VMware, consult the HP P4000 and VMware Best Practices Guide at http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA3-0261ENW.pdf.
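If you prefer to script the host networking from the ESX service console rather than the VIC, the standard esxcfg utilities can build the iSCSI vSwitch. The following is a minimal sketch for vSwitch3 only; the vmnic names, port group names and addresses are examples that must match your Virtual Connect profile, and the dedication of one active adapter per portgroup is still configured through the teaming policy in the VIC:

# Create vSwitch3 and attach the two internal iSCSI uplinks
# (confirm vmnic names with esxcfg-nics -l)
esxcfg-vswitch -a vSwitch3
esxcfg-vswitch -L vmnic4 vSwitch3
esxcfg-vswitch -L vmnic5 vSwitch3
# Add the two iSCSI port groups
esxcfg-vswitch -A "VMkernel 1 - iSCSI A" vSwitch3
esxcfg-vswitch -A "VMkernel 2 - iSCSI B" vSwitch3
# Create a VMkernel interface on each port group (example addresses)
esxcfg-vmknic -a -i 172.16.1.21 -n 255.255.0.0 "VMkernel 1 - iSCSI A"
esxcfg-vmknic -a -i 172.16.1.22 -n 255.255.0.0 "VMkernel 2 - iSCSI B"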
Deploying storage
In order to deploy virtual machines, including management VMs, to the hosts, you will need to attach
storage to each host. The following sections configure basic storage for a variety of functions.
Setting up DAS for floating pool hosts
The steps in this section will help you configure direct attached storage for your hypervisor hosts that
will be part of floating pools. This direct attached storage will house linked clones as well as
temporary files that will be eliminated when users log off. The number of disks you assign each host
will vary based on your expected I/O profile. This section assumes four 15K RPM LFF Hot Plug SAS
disks for each host.
When mapping the drives to the server, follow the zoning rule as described in Figure 49 and in the
documentation provided with your SAS switches.
Figure 49: SAS switch zoning rules by blade slot
Launch the Virtual SAS Manager by highlighting a SAS switch and clicking on Management
Console. The following screen appears as in Figure 50. Highlight the Zone Groups and then click
on Create Zone Group.
Figure 50: HP Virtual SAS Manager screen
You will highlight between 4 and 6 disks based on expected I/O patterns for the individual hosts.
Figure 51 highlights the selection of 4 disks for the new Zone Group. Click on OK once you have
selected the disks and assigned a name to the zone group.
Figure 51: Selecting disks to create a zone group
Repeat the process until you have zone groups defined for your DAS hosts. Figure 52 shows four (4)
Zone Groups that have been created. Click on Save Changes prior to proceeding.
Figure 52: Zone Groups prior to saving
For each device bay that you have created a Zone Group, highlight the device bay and then click
Modify Zone Access as in Figure 53.
Figure 53: Modifying zone access by device bay
Select the Zone Group that belongs to the device bay by clicking on the check box next to it.
Click on OK to complete the assignment as in Figure 54.
Figure 54: Assigning the Zone Group to a device bay
Once set, when you boot your server you will need to change the settings in the RBSU to boot from
the P700m array. Use the ORCA for the Smart Array P700m, not the Smart Array P410i controller, to
configure the disks you assigned in this section as a single RAID10 set.
Preparing hosts for shared storage
Note: While it has not been created yet, you will be creating a second cluster within your
management group and this cluster will have a second virtual IP address that you will access. You
should be aware of what IP address you will use for the virtual IP at this point and use it as required
for setup.
On each host, click on the Configuration tab in the VMware VIC and then Storage Adapters.
Highlight the software based iSCSI adapter as in Figure 55.
Figure 55: Highlighting the iSCSI software adapter in the VIC
Click on the Properties link for the adapter. A new window appears as in Figure 56. Click on the
Configure button to proceed.
Figure 56: Properties window for the iSCSI adapter
When the new window appears, click on Enabled and then click OK to continue.
Click on the Configure button again. The window in Figure 57 should appear.
Figure 57: Configure window
There is now an iSCSI name attached to the device. You may choose to alter this name and make it simpler to remember by eliminating the characters after the ":", or you may leave it as is.
From the Properties window, click on the Dynamic Discovery tab and then on Add as in
Figure 58.
Figure 58: Adding an iSCSI target
Enter the IP Address of your P4800 cluster in the window as in Figure 59. Repeat this process to add
the IP address of your accelerated storage cluster.
Figure 59: Defining the cluster IP
If you are using CHAP you may configure it. Click on OK when you have finished and close out the
windows. When you close the window you will be asked to rescan the adapter. Skip this step at this
time.
Repeat this process for each hypervisor host including your virtualized management servers.
Configuring the Management Group for hosts
Configuring the P4000 storage and hosts for iSCSI communication is a two part process. Each
hypervisor host must have its software based iSCSI initiator enabled and pointed at the target address
of the P4800. The P4800 must have each host that will access it defined in its host list by a logical
name and iSCSI initiator name. This section covers the configuration of servers within the CMC.
From the CMC virtual machine, start a session with the CMC as the administrative user. Highlight the
Management Group you created earlier and log in if prompted.
Right click on Servers in the CMC and select New Server or New Server Cluster if you will tie
a group of servers to volumes.
Figure 60: Adding a new server or server cluster from the CMC
The resulting window as in Figure 61 appears.
Figure 61: The New Server window in the CMC
Enter a name for the server (the hostname of the server works well), a brief description of the host and
then enter the initiator node name for your host. If you are using CHAP you should configure it at this
time. Click on OK when done.
Repeat this process for every host that will attach to the P4800 G2 or accelerated storage.
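These server definitions can also be scripted with the CLIQ utility using the same syntax shown in Appendix D. A minimal sketch, assuming example host names, initiator names and a management group address of 172.16.0.130:

cliq createServer serverName=mtx-esx01 useChap=0 initiator=iqn.1998-01.com.vmware:esx01 login=172.16.0.130 userName=admin passWord=password
cliq createServer serverName=mtx-esx02 useChap=0 initiator=iqn.1998-01.com.vmware:esx02 login=172.16.0.130 userName=admin passWord=password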
Installing P4000 Virtual SAN Appliance
Prior to installation, use the VIC to create a datastore on your accelerated storage. It is recommended
that you use a 4MB block size when formatting this partition to allow the maximum extent of the disks
to be used.
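If you prefer to script this step rather than use the VIC, vmkfstools can create the VMFS datastore with the 4MB block size. This is a minimal sketch; the datastore label is an example, the device path is a placeholder, and it assumes a partition already exists on the device:

# Create a VMFS3 datastore with a 4MB block size on an existing partition
# (replace naa.<device-id> with your accelerated storage device)
vmkfstools -C vmfs3 -b 4m -S accel-store /vmfs/devices/disks/naa.<device-id>:1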
Ensure you have access to the HP P4000 Virtual SAN Appliance Software DVD that contains the files
necessary for the creation of a P4000 Virtual SAN Appliance (VSA) in a path that is accessible to the
host you are working on (you may even wish to copy the files to a fileshare). From the VIC, highlight
the management host you will deploy the first VSA to and click on File and then Deploy OVF
Template… as in Figure 62. This will start the wizard.
Figure 62: Deploy OVF Template menu
At the Deploy OVF Template screen, browse to the location of the VSA.ovf file as in Figure 63.
Once you have located the file, click on Next to proceed.
Figure 63: Browse to find the VSA.ovf file to deploy
Validate the details at the resulting screen and click on Next. This will take you to the licensing
screen. Read and accept the license and then click on Next to continue.
Figure 64: Accept the EULA
Choose a name and datastore for the VSA as in Figure 65 and select Next to continue.
Figure 65: Give a name to the initial VSA
Choose to place the VSA on a local datastore as in Figure 66 and then click Next.
Figure 66: Select the local datastore
Choose Thick provisioned format for the VSA and select Next as in Figure 67.
Figure 67: Choose to Thick Provision the VSA
Choose your iSCSI network as the network for the adapter and then click on Next as in Figure 68.
Figure 68: Select the iSCSI network for communication
Click on Finish at the resulting screen and wait for the VSA deployment to complete. Repeat this
process for the next host.
For each VSA, you will need to create storage within the accelerated storage datastore (the datastore
on your IO Accelerators or solid state disks). Right click on each VSA VM in vCenter and click Add
Hardware. The screen shown in Figure 69 appears. Select Hard Disk as shown and click on
Next.
Figure 69: Add Hardware screen for the VSA VM
Select Create a new virtual disk as in Figure 70 and click Next.
Figure 70: Choose to create a new disk
Click on Specify a datastore and click on Browse as in Figure 71.
Figure 71: Click on Specify a datastore and click on Browse
Choose your accelerated storage datastore as in Figure 72. Click on OK when done.
Figure 72: Highlight your accelerated storage datastore
Enter the maximum size for your disk as in Figure 73 and click on Next.
Figure 73: Set disk size for the VSA
Under Advanced Options select SCSI (1:0) and click on Next as in Figure 74.
Figure 74: Select SCSI (1:0) under Advanced Options
Finish the wizard.
Prior to starting the VSA appliances, ensure that you give each appliance 4 vCPUs by choosing Edit
Settings from the VM context menu.
Start the VM and use the VMware console to log on. You will find the console looks like the P4460sb
console as it appears from the iLO. Follow the steps in the section entitled Configuring the P4800 G2
SAN for BladeSystem to create a second cluster within the same management group. This cluster will
be dedicated to the accelerated storage you have just configured. By creating it within the same
management group you can manage from the same console, eliminate the need for a Failover
Manager for this deployment (a single site is assumed) and share all of the server definitions you
created. It is possible to manage a number of management groups and clusters from a single console.
Configuring and attaching storage for management hosts
Figure 75 shows the desired layout of the management hosts using accelerated storage. The following
sections will cover the creation of these disks in more depth.
Figure 75: Management hosts storage layout
You will need to create an initial set of volumes on the P4800 G2 SAN that will house your
management VMs. The overall space to be dedicated will vary. 650GB of space is used in this
document. For this example, four (4) volumes will be created.
From the CMC, ensure that all management servers have been properly defined in the servers section
before proceeding. The volumes you are about to create will be assigned to these servers after the
vCenter virtual machine has been created. They will be assigned to the first management server in this
section.
From the CMC, expand the cluster as in Figure 76 and click on Volumes (0) and Snapshots (0).
Right click to create a New Volume as in Figure 76.
Figure 76: Create a new volume
In the New Volume window under the Basic tab, enter a volume name and short description. Enter a volume size of 50GB. This volume will house ThinApp executables and will be attached to a file-serving VM. Figure 77 shows the window.
Figure 77: New Volume window
Once you have entered the data, click on the Advanced tab. As in Figure 78, ensure you have
selected your cluster and RAID-10 replication; then click the radio button for Thin Provisioning.
Figure 78: The Advanced tab of the New Volume window
Click on the OK button when done.
Repeat this process by creating three (3) additional management volumes. One will house management servers and should be sized at 200GB; one will house OS images, templates and test VMs and may be sized at 100GB; and the final volume will serve as a production test volume for testing the deployment of VMs as well as testing patch application and functionality of new master VMs. This volume should be sized at 300GB. These volume sizes may be altered to suit your environment, but it is recommended that you keep them as separate volumes by function.
When all volumes have been created, return to the Servers section of the CMC under the main
Management Group. You will initially assign the volumes you just created to the first management
host. In this document this host is in device bay 1.
Right click on your first management server as in Figure 79 and choose to Assign and Unassign
Volumes.
Figure 79: Server options
A window will appear with the volumes you have defined. Select the appropriate volumes to assign to
the host by selecting the check boxes under the Assigned column. If you chose to create Server
Clusters you will be able to assign the volumes simultaneously to a number of different hosts within
that cluster.
You will repeat these steps when you create your other volumes after the vCenter server has been
created. These steps will also be used to create your primary volumes for linked clones and, if
needed, user data disks.
NOTE: You may script the creation and assignment of volumes using the CLIQ utility shipped on your
P4000 software DVD. See Appendix D of this document for samples.
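As an illustration, the 50GB ThinApp volume and its assignment to the first management host could be scripted as follows; the volume, server, cluster and credential values are examples only:

cliq createVolume prompt=0 volumeName=mgmt-thinapp clusterName=P4800_VDI size=50GB replication=2 thinprovision=1 login=172.16.0.130 username=admin password=password
cliq assignVolumeToServer volumeName=mgmt-thinapp serverName=mgmt-host1 login=172.16.0.130 username=admin password=password

Repeat the createVolume line with the appropriate names and sizes for the 200GB, 100GB and 300GB volumes.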
Use the VIC to attach to management host 1. Under the Configuration tab click on the Storage
Adapters link in the left menu and then click on Rescan as in Figure 80. Click on OK when
prompted to rescan.
Figure 80: VIC storage adapters screen
This will cause the host to see the volumes you just assigned to it in the CMC.
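The rescan can also be triggered from the service console; the adapter name below is an example and varies by host (list adapters with esxcfg-scsidevs -a):

# Rescan the software iSCSI adapter for newly presented volumes
esxcfg-rescan vmhba33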
To make these volumes active you will need to format them as VMFS. Under the Configuration tab
in the VIC, click on Storage as in Figure 81.
Figure 81: VIC storage screen
Click on Add Storage as in the figure. Follow the onscreen instructions to create the datastores on
this host. Use all available space per volume. When you have finished, your storage should appear
similar to that shown in Figure 82.
Figure 82: Storage for management server 1
Set up management VMs
In this document, each management server with the exception of Microsoft SQL Server is housed in a virtual machine. You may choose to virtualize SQL Server. It is not virtualized in this configuration because the assumption is that a large, redundant SQL Server entity exists on an accessible network that is sized to accommodate all of the databases required for this configuration. Keeping the management components virtualized ensures that the majority of the overall architecture is made highly available via standard VMware practices and reduces the server count required to manage the overall stack.
Each of the following management servers should be created based on the best practices of the
software vendor. The vendor’s installation instructions should be followed to produce an optimized
VM for the particular application service. These VMs should be created on the first management host
in the 200GB management servers volume. You may do a storage vMotion to migrate the CMC VM
volumes to this storage if you wish.
VMware vCenter/View Composer
vCenter is the backbone for the management of all virtualized resources. When deployment is complete, the vCenter server will be at almost half of its recommended maximum capacity. When sizing this virtual server it is important to consider the enterprise nature of the overall configuration. Note also that in enterprise deployments the individual vCenter virtual machines are expected to be federated at a higher level. This also needs to be considered when designing the virtual machine.
View Manager
VMware View Manager is the central component that determines how desktops are delivered in a
VDI environment. HP’s reference platform is capable of supporting very large numbers of users. The installer should consult VMware’s documentation to determine proper VM sizing. An approach that seeks to maximize overall scaling is strongly recommended.
Documentation can be found at http://www.vmware.com/technical-resources/products/view.html.
VMware Security Server
If you will be utilizing Security Server for your end users, you should install it within the same management server cluster as your other VMware management hosts.
HP Insight Control Plugins for VMware vCenter Server
The HP Insight Control Plugins for VMware vCenter Server allow virtualization administrators to
monitor health, power and performance of HP servers as well as provision, manage and troubleshoot
HP storage. All of this is done from within the VMware vCenter console. The plugins are installed in
this VM.
Failover Manager
HP P4000 SANs utilize a Failover Manager (FOM) to ensure that data remains available within a
management group in the event of a node failure. If you are deploying VDI in a single site
configuration with a P4800 G2 and VSA nodes within the same management group, you may not
need a FOM. If however you choose not to optimize with accelerated storage, you should install a
FOM as a VM on local storage within the management group. If you want to run multi-site, then
please follow the recommendations of the P4000 Multi-Site HA/DR Solution Pack user guide at
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c02063195/c02063195.pdf for
placing an additional manager within the management group.
Understanding storage for View pools
The following sections will highlight how storage is attached to various pools in this document and
also issue advice on how to properly configure volumes for your circumstances. Figure 83 shows the
storage connectivity of the persistent VM hosts as they connect to the replica and linked clone storage.
A single replica is created for each cluster and each cluster will host 420 linked clones.
Figure 83: Persistent VM cluster and the relationship between hosts, replica volumes and linked clone volumes
Figure 84 shows an overview of how storage will connect to clusters that support floating pools or
non-persistent VM pools. Note that each server is independent at the linked clone layer, but the
replicas remain on shared, highly available, accelerated storage. This particular approach yields an
outstanding end user experience. Each host is assumed to house 95 task workers per the sizing
numbers HP has calculated in the document entitled Virtual Desktop Infrastructure for the Enterprise at
http://h20195.www2.hp.com/V2/GetDocument.aspx?docname=4AA3-4348ENW. Note that these
hosts boot from the disks they use in the MDS600.
Figure 84: Non-persistent hosts organized into clusters with direct attached disks and replica volumes
Creating shared storage for replicas
Sizing the storage volume
The number and size of storage containers for replicas can be determined using information about
image and memory sizes plus total planned user counts. The number of storage volumes is based on
a maximum recommendation of 500 linked clones for a single replica. The size of each volume for
the replica is determined by taking the image size plus memory and doubling it and then adding in
1GB. As an example, a 26GB master VM that utilizes 1GB of memory would require a storage
volume of 55GB.
((26 + 1) * 2) + 1 = 55 GB
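If you script volume creation with the CLIQ utility (see Appendix D), the 55GB replica volume from this example could be created as shown below; the volume name, cluster name and credentials are illustrative, and the cluster must be your accelerated storage cluster:

cliq createVolume prompt=0 volumeName=replica-01 clusterName=VSA_ACCEL size=55GB replication=2 thinprovision=0 login=172.16.0.130 username=admin password=password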
Flexibility around the number of linked clones served by a single replica will likely be needed to
optimize and accommodate various pool sizes. The concept of multiple replicas should not be
confused with that of multiple master VMs. A replica is just that. It is a replica of the master VM. It is
possible to utilize one master VM while building multiple replicas of that master.
There are some limitations to note when segmenting linked clones and replicas.
Only one replica datastore may be specified per pool in View.
A shared datastore must be accessible to all hosts within a vSphere cluster.
If you share a replica datastore, the associated linked clone datastores must also be shared.
It is highly recommended that you consult VMware’s reference architecture documentation for information on replicas and best practices. See http://www.vmware.com/technical-resources/products/view.html.
Creating storage for linked clones
For the purposes of this document we have assumed you have successfully disaggregated your end
user and application data from the core compute resource. We have further assumed that half of the
hypervisor hosts will be dedicated to floating pools or non-persistent VMs while the other half will be
used to support dedicated linked clone VMs. This configuration is supported by a combination of
direct attached storage and SAN at the linked clone layer.
Configuring DAS has been largely addressed in the section entitled Setting up DAS for floating pool
hosts. The only step that remains is to create a datastore on each DAS array that can be used to host
linked clones. This is done on a server by server basis from within vCenter.
The end user with a dedicated linked clone has their own experience tied to a linked clone that is
"their compute resource". Once this resource is personalized to an end user, it must be protected to
maintain the integrity of the information. In a VMware View implementation that translates to a SAN
based Linked Clone implementation.
To configure storage for the shared Linked Clones you will need to know the number of users/VMs,
size of the master image, how aggressive you will be with the recomposition of VMs and be mindful
that the size of your pools and clusters should stay within the maximums suggested by VMware for
replica to linked clone relationships. This document suggests two clusters of six (6) hosts each. These
hosts are sized to support 70 productivity users each for a total of 420 users per cluster. The
assumption is of a 37GB master image.
In raw storage sizing terms, the sizing needs to accommodate 840 users multiplied by 37GB each.
VMware allows for the creation of dedicated linked clones that allow for a fraction of that data
storage to be required. HP has assumed an aggressive recompose schedule for this implementation
and is assuming each Linked Clone will eventually grow to be 50% of the size of the master VM. The
master VM replica is on accelerated storage and does not need to be factored into the space sizing.
This yields a new total amount of storage required for Linked Clones of:

Total space = 840 users x 37GB x 0.5 ≈ 15.5TB
HP suggests aligning the volumes with approximately 30-35 linked clones per volume. You can support many more VMs per volume, but the volumes become large very quickly. Utilizing a smaller number of VMs per volume as well as using more volumes will keep volume sizes down while providing predictable performance and minimizing risk if an issue with any one volume were to occur2. Choosing a volume count that divides evenly into the number of VMs per host and per cluster is convenient and ensures the best balancing of load across hosts. For example, if you have a 6 host cluster with 420 total VMs planned, 12-14 volumes are within the range.
2 If the number of volumes is a concern, the installer may choose to host up to 64 linked clones on a volume.
To calculate the total space needed, we take the user count and multiply it by our image size. We
then multiply that by a storage reduction factor we believe we can achieve with Linked Clone
technologies to arrive at a final space configuration. The previous example yields a number of
approximately 15.5 TB of space.
To calculate space required per volume we can simply divide our total space by our volume count.
For the purposes of this document, our volume size will thus be 15.5TB / 14 or approximately 1.11TB
per volume.
It should be noted that all P4800 SANs are ready for thin provisioning from initialization. Thin provisioning allows for overprovisioning of space so that storage is not constrained by physical limits that rarely make sense in VDI environments. This means that the volumes may be sized for a 100% size match between the master VM and Linked Clone. The installer must understand that growth must be accommodated and reacted to when thin provisioning is in use.
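A thin provisioned linked clone volume of this size (approximately 1.11TB, or about 1,140GB) could likewise be scripted with the CLIQ syntax from Appendix D; the volume, cluster and credential values below are examples only:

cliq createVolume prompt=0 volumeName=lc-vol-01 clusterName=P4800_VDI size=1140GB replication=2 thinprovision=1 login=172.16.0.130 username=admin password=password

Repeat for each of the volumes planned, then present them to the appropriate hosts with assignVolumeToServer.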
Creating shared storage for user data disks and temporary files
HP recommends consulting the documentation from VMware as well as the documentation from your
user virtualization solution vendor to determine the need for and best practices for implementing
storage for user data disks and temporary files. Some methods of managing user profiles and
personality do not require an individual User Data Disk (UDD) and thus conserve storage space and
lower costs. Consult documentation for the method of user virtualization you have selected.
Bill of materials
This section shows the equipment needed to build the sample configuration contained in this
document including the licenses for VMware View. It does not include clients, operating system,
alternative application virtualization technology, user virtualization or application costs as those are
unique to each implementation. Some items related to power and overall infrastructure may need to
be customized to meet customer requirements.
Core Blade Infrastructure
Quantity Part Number Description
2 507019-B21 HP BladeSystem c7000 Enclosure with 3" LCD
2 413379-B21 Single Phase Power Module
2 517521-B21 6x Power supply bundle
2 517520-B21 6x Active Cool Fan Bundle
2 456204-B21 c7000 Redundant Onboard Administrator
4 455880-B21 Virtual Connect Flex-10 Ethernet Module for HP BladeSystem
Rack and Power
Quantity Part Number Description
1 AF002A 10642 G2 (42U) Rack Cabinet - Shock Pallet*
1 AF009A HP 10642 G2 Front Door
1 AF054A 10642 G2 (42U) Side Panels (set of two) (Graphite Metallic)
- Customer Choice Power distribution unit
- Customer Choice Uninterruptable power supply
Management / Accelerated Storage
Quantity Part Number Description
2 603718-B21 HP ProLiant BL460c G7 CTO Blade
2 610859-L21 HP BL460c G7 Intel® Xeon® X5670 (2.93GHz/6-core/12MB/95W) FIO Processor Kit
2 610859-B21 HP BL460c G7 Intel Xeon X5670 (2.93GHz/6-core/12MB/95W) FIO Processor Kit
4 512545-B21 72GB 6G SAS 15K SFF DP HDD
24 500662-B21 HP 8GB Dual Rank x4 PC3-10600 DIMMs
2 AT459A HP SB40c Storage Blade with P4000 Virtual SAN Appliance Software
12 572073-B21 HP 120GB 3G SATA SFF MDL SSD
NOTE: You may replace the HP SB40c Storage Blade and HP 120GB 3G SATA SSDs with IO
Accelerators as listed below. You may wish to use larger IO Accelerators depending on your image
size and total number of users.
Quantity Part Number Description
2 AJ878B HP 320GB MLC IO Accelerator for BladeSystem c-Class
Optional External Management Host (Virtualized)
Quantity Part Number Description
1 579240-001 HP ProLiant DL360 G7 E5640 1P
Dedicated VDI Hypervisor Hosts (Processors may be substituted to match the configurations found in
the document Virtual Desktop Infrastructure for the Enterprise.)
Quantity Part Number Description
12 603719-B21 ProLiant BL490c G7 CTO Blade
12 603600-L21 HP BL490c G7 Intel Xeon X5670 (2.93GHz/6-core/12MB/95W) FIO Processor Kit
12 603600-B21 HP BL490c G7 Intel Xeon X5670 (2.93GHz/6-core/12MB/95W) FIO Processor Kit
12 572075-B21 60GB 3G SATA SFF Non-hot plug SSD
216 500662-B21 HP 8GB Dual Rank x4 PC3-10600 DIMMs
Floating Pool Hypervisor Hosts (Processors may be substituted to match the configurations found in the
document Virtual Desktop Infrastructure for the Enterprise.)
Quantity Part Number Description
12 603718-B21 ProLiant BL460c G7 CTO Blade
12 610859-L21 HP BL460c G7 Intel Xeon X5670 (2.93GHz/6-core/12MB/95W) FIO Processor Kit
12 610859-B21 HP BL460c G7 Intel Xeon X5670 (2.93GHz/6-core/12MB/95W) FIO Processor Kit
144 500662-B21 HP 8GB Dual Rank x4 PC3-10600 DIMMs
12 508226-B21 HP Smart Array P700m SAS Controller
12 452348-B21 HP Smart Array P-Series Low Profile Battery
P4800 G2 SAN for BladeSystem
Quantity Part Number Description
2 BV931A HP P4800 G2 SAN Solution for BladeSystem
1 AJ865A HP 3Gb SAS BL Switch – Dual Pack
4 Customer Choice Mini-SAS Cable
1 HF383E P4000 Training Module
Direct Attached Storage
Quantity Part Number Description
1 AJ866A HP MDS600 with two Dual Port IO Module System
2 AP763A HP MDS600 Dual I/O Module Option Kit
1 AF502B C-13 Offset Power Cord Kit
48 516816-B21 HP 450GB SAS 3.5" 15K DP HDD
1 AJ865A HP 3Gb SAS BL Switch – Dual Pack
4 Customer Choice Mini-SAS Cable
NOTE: The drive count should reflect the number of drives (four or six) planned for each of the direct
attached storage hosts within the reference architecture.
User Data Storage
Quantity Part Number Description
2 BV871A HP X3800 G2 Gateway
1 BQ890A HP P4500 G2 120TB MDL SAS Scalable Capacity SAN Solution
VMware View
Quantity Part Number Description
19 TD437AAE VMware View Premier Bundle, 100 Pack
8 TD438AAE VMware View Premier Bundle, 10 Pack
HP Software
Quantity Part Number Description
2 TC277AAE HP Insight Control for BladeSystem – 16 Server license
1 436222-B21 HP Insight Software Media Kit
VAR HP Client Automation, Enterprise
Appendix A – Storage patterning and planning in VMware
View environments
HP has taken a proprietary approach to storage testing since 2008. One of the goals for the original
test design was to address the somewhat predictable storage I/O patterns produced by the test
methodologies that were in use at the time. These I/O patterns were the result of user scripts that
utilized a minimal set of applications and services. The consequence was that the results of storage
testing observed in the lab frequently differed greatly from what was observed in the field.
HP used diverse user patterning to create three (3) storage workloads, each tied to an end user type, that incorporated I/O counts, read/write ratios over time and greater variability in block sizes. These workloads were captured over a Fibre Channel trace analyzer and were fed into an HP-developed program that allowed them to be played against a variety of storage types. This allowed HP to do comparative performance analysis between storage containers as well as look for areas that could benefit from various optimizations.
Storage patterning
The primary area of focus for the next generation architecture was the knowledge worker workload.
Of all of the workloads, this workload produced the most diverse set of behaviors. The I/O patterns
for the workload are shown in Figure A1. Note the peak of nearly 4000 IOPs for the 72 users
measured during test. The average IOPs were closer to 22 per user.
Figure A1: Overall IOPs for knowledge workers for a 72 user run
[Chart: Knowledge Worker IOPS; x-axis: Time (Seconds), 0-400; y-axis: IOPS, 0-4000]
Of interest for this workload is the read/write ratio as represented in Figure A2. The graph clearly
shows a range that hovers near 50/50.
Figure A2: Segmented reads and writes for knowledge workers for a 72 user run
For the current reference architecture, VMware has brought capabilities to bear beginning with View
4.5 that allow for the tiering of storage. HP conducted testing with the previously developed
workloads to reflect these new capabilities.
HP creates and records the workloads by running a large variety of user scripts against a storage
container or containers. For this test, HP broke the prior workload up across three storage tiers and
captured trace data for the primary data (user data was not captured). A VMware replica of the
master OS image was housed on SSDs while the Temp files, User Data Disks and Linked Clones were
housed on a SAN volume. As traffic was run against the VMs, all blocks were captured using the
Fibre Channel Analyzer. It should be noted that traffic does not align perfectly with the old workload
due to a change from Windows XP Professional to Windows 7 Professional.
[Chart for Figure A2: Knowledge Worker Patterns, Reads and Writes; x-axis: Time (Seconds), 0-400; y-axis: IOPS, 0-2000]
The traces were analyzed and graphed. A clear workload segmentation emerged from the analysis.
Figure A3 highlights the portion of the workload that was handled by the accelerated storage tier.
Figure A3: Read IOPs observed at the SSD layer
The graph shows peak reads of nearly 900 IOPs or slightly more than 11 IOPs per user. The average
at around 4 IOPs per user is far lower. Writes were nominal and not graphed.
[Chart for Figure A3: SSD Read IOPS, broken out by block size (4K, 8K, 16K, 32K, 64K-1M, ALL); y-axis: IOPS, 0-1000]
Figure A4 highlights the remaining read traffic that is sent to the secondary storage tier, in this case
15,000 RPM Fibre Channel drives. When normalized it is obvious that the vast majority of reads are
handled at the replica layer.
Figure A4: Read IOPs observed at the linked clone layer
The read I/Os that land on the linked clone layer amount to less than an I/O per second per user on
average.
[Chart for Figure A4: 15K Read IOPS, broken out by block size (4K, 8K, 16K, 32K, 64K-1M, ALL); y-axis: IOPS, 0-1000]
The remaining traffic that is not redirected user data amounts to an average of around 4 write IOPs
per user as in Figure A5. The overall storage profile at the clone layer is thus predominantly writes.
Figure A5: Write IOPs observed at the linked clone layer
The primarily write driven workload aligns with expectations based on published tests available
within the industry.
Storage planning
Based on observations and analysis of the storage workload, storage planning at the linked clone
layer must account for a primarily write driven workload. Based on HP’s analysis, the bulk of reads
will be offloaded to an accelerated storage tier. The remaining I/O is observed as the write portion of
the overall I/O per user plus an average of 1 read I/O. The estimated per user requirement for the
linked clone layer could thus be estimated as:
((Total User I/O x Write Percentage) + 1)
A user generating 20 IOPs with a 60% write ratio would thus need ((20 x 0.6) +1) or 13 IOPs on the
linked clone tier. Similarly, an end user generating 10 IOPs with a 20% write ratio would need only
((10 x 0.2) +1) or 3 IOPs on the same tier.
It is important to note that this produces an estimate only, but one that is an intelligent beginning to
proper storage planning. Understanding user patterning en masse will improve estimates. As an
example, a group of users with an 80/20 read/write ratio and an average of only 6 IOPs per user
may be tempting to use as a planning number. However, if your end users have a shared peak of 18
IOPs with a 50/50 read/write ratio for a 20 minute period over the course of the day this could
result in overcommitted storage that is not capable of handling the required load. Similarly,
understanding what files will reside in all locations is critical as well. As an example, if a locally
installed application builds a local, per user cache file and then performs frequent reads against it, it
is likely the cache hits will come from the linked clone which can change the overall profile.
[Chart for Figure A5: 15K Write IOPs, broken out by block size (4K, 8K, 16K, 32K, 64K-1M, ALL); y-axis: IOPS, 0-1000]
In HP testing, user profile information is located within a database and moved over only as needed, resulting in a very small I/O impact. User data is also treated as a separate I/O pattern and is not recorded on either of the monitored volumes. Failing to relocate your user data, or implementing fully provisioned virtual machines, will result in dramatically different I/O patterning as well as a number of difficulties around management and overall system performance, and is not recommended.
For the best understanding of overall user requirements including I/O, CPU, memory and even
application specific information, HP offers the Client Virtualization Analysis and Modeling service. For
more information about the service, see the information sheet at
http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA3-2409ENW.pdf.
Appendix B – HP Insight Control for VMware vCenter
Server
The vCenter management platform from VMware is a proven solution for managing the virtualized
infrastructure, with tens of thousands of customers utilizing it to keep their virtual data center operating
efficiently and delivering on Service Level Agreements. For years, these same customers have relied
on Insight Control from HP to complete the picture by delivering an in-depth view of the underlying
host systems that support the virtual infrastructure. While delivering a complete picture of both the
virtual and physical data center assets, the powerful combination of vCenter Server and Insight
Control has required customers to monitor two separate consoles. Until now.
HP’s Insight Control for vCenter is part of HP’s Converged Infrastructure, and brings server,
networking and storage management into the vCenter management suite. This enables you to monitor and manage HP servers, networks, and storage from within the vCenter management console using a single application plug-in from HP.
HP's plug-in for vCenter consists of a server/network and storage module that can be used together or
separately to manage HP servers/networks and/or HP storage, and is displayed as a tab on the main
screen of the management console. Accessing this tab allows you to quickly obtain context-sensitive
server, networking and storage specific information for individual elements of the virtual environment,
enabling an end-to-end view of networking and how virtual machines are mapped to datastores and
individual disk volumes. By providing these clear relationships between VMs, networking, datastores
and storage, your productivity increases, as does your ability to ensure quality of service.
The HP Insight Control extension for VMware vCenter Server delivers powerful HP hardware
management capabilities, enabling comprehensive monitoring, remote control and power
optimization directly from the vCenter console. In addition, Insight Control delivers robust deployment
capabilities and is an integration point for the broader portfolio of infrastructure management, service
automation, and IT operations solutions available from HP. Let’s review these capabilities in greater
detail:
Monitor health, power and performance status of virtual machines and the underlying host systems that support them at the cluster, server or resource level from a single pane of glass, resulting in deeper insight into your environment for troubleshooting and planning purposes. Figure B1 provides an overview.
Figure B1: HP Insight Control for VMware vCenter server integration
View hardware events from within the vCenter console and set up alarms to respond to events
automatically without having to learn a new user interface.
Access firmware inventory data and version from inside the vCenter console allowing you to
properly manage firmware lifecycle resulting in better stability and reliability of your virtualized
environment.
View power history for your environment over time as well as integrated thermal information as in
Figure B2.
Figure B2: HP Insight Control power screen
Visually trace and monitor your network from end to end, from the host all the way to the individual network modules connected within your domain, delivering comprehensive management of the network and making it easy for you to review and change any HP-specific information. Figure B3 highlights this capability.
Figure B3: HP Insight Control networking screen within vCenter
Remotely manage and troubleshoot HP ProLiant and BladeSystem servers by launching in context
the HP Integrated Lights-Out Advanced, HP Onboard Administrator or Systems Insight Manager
from the vCenter console to access powerful control capabilities.
If you also have HP storage in your environment, you can provision, configure and monitor your
storage arrays from your vCenter console allowing you to create or clone virtual machines and
create or expand your data storage combined with ongoing health monitoring.
Set roles aligned to VMware vCenter to segment responsibilities.
As a core component of HP Insight Control, the VMware vCenter Server integration is included with HP Insight Control, which can be purchased as a single license, in bulk quantities, or bundled with HP ProLiant and BladeSystem hardware. Existing Insight Control customers under a current Software Updates contract, or customers managing HP storage only, can download this extension free of charge.
Appendix C – Scripting the configuration of the Onboard
Administrator
This is a sample script for a single-enclosure configuration with Virtual Connect Flex-10. The installer will
need to alter settings as appropriate to their environment. Consult the BladeSystem documentation at
http://www.hp.com/go/bladesystem for information on individual script options. Note that this script
alters power on timings to ensure the systems involved in the HP P4800 SAN for BladeSystem are
properly delayed to allow the disks in the MDS600 to spin up and enter a full ready state.
#Script Generated by Administrator
#Set Enclosure Time
SET TIMEZONE CST6CDT
#SET DATE MMDDhhmm{{CC}YY}
#Set Enclosure Information
SET ENCLOSURE ASSET TAG "TAG NAME"
SET ENCLOSURE NAME "ENCL NAME"
SET RACK NAME "RACK NAME"
SET POWER MODE REDUNDANT
SET POWER SAVINGS ON
#Power limit must be within the range of 2700-16400
SET POWER LIMIT OFF
#Enclosure Dynamic Power Cap must be within the range of 2013-7822
#Derated Circuit Capacity must be within the range of 2013-7822
#Rated Circuit Capacity must be within the range of 2082-7822
SET ENCLOSURE POWER_CAP OFF
SET ENCLOSURE POWER_CAP_BAYS_TO_EXCLUDE None
#Set PowerDelay Information
SET INTERCONNECT POWERDELAY 1 210
SET INTERCONNECT POWERDELAY 2 210
SET INTERCONNECT POWERDELAY 3 0
SET INTERCONNECT POWERDELAY 4 0
SET INTERCONNECT POWERDELAY 5 30
SET INTERCONNECT POWERDELAY 6 30
SET INTERCONNECT POWERDELAY 7 0
SET INTERCONNECT POWERDELAY 8 0
SET SERVER POWERDELAY 1 0
SET SERVER POWERDELAY 2 0
SET SERVER POWERDELAY 3 0
SET SERVER POWERDELAY 4 0
SET SERVER POWERDELAY 5 0
SET SERVER POWERDELAY 6 0
SET SERVER POWERDELAY 7 240
SET SERVER POWERDELAY 8 240
SET SERVER POWERDELAY 9 0
SET SERVER POWERDELAY 10 0
SET SERVER POWERDELAY 11 0
SET SERVER POWERDELAY 12 0
SET SERVER POWERDELAY 13 0
SET SERVER POWERDELAY 14 0
SET SERVER POWERDELAY 15 240
SET SERVER POWERDELAY 16 240
# Set ENCRYPTION security mode to STRONG or NORMAL.
SET ENCRYPTION NORMAL
#Configure Protocols
ENABLE WEB
ENABLE SECURESH
DISABLE TELNET
ENABLE XMLREPLY
ENABLE GUI_LOGIN_DETAIL
#Configure Alertmail
SET ALERTMAIL SMTPSERVER 0.0.0.0
DISABLE ALERTMAIL
#Configure Trusted Hosts
#REMOVE TRUSTED HOST ALL
DISABLE TRUSTED HOST
#Configure NTP
SET NTP PRIMARY 10.1.0.2
SET NTP SECONDARY 10.1.0.3
SET NTP POLL 720
DISABLE NTP
#Set SNMP Information
SET SNMP CONTACT "Name"
SET SNMP LOCATION "Locale"
SET SNMP COMMUNITY READ "public"
SET SNMP COMMUNITY WRITE "private"
ENABLE SNMP
#Set Remote Syslog Information
SET REMOTE SYSLOG SERVER ""
SET REMOTE SYSLOG PORT 514
DISABLE SYSLOG REMOTE
#Set Enclosure Bay IP Addressing (EBIPA) Information for Device Bays
#NOTE: SET EBIPA commands are only valid for OA v3.00 and later
SET EBIPA SERVER 10.0.0.1 255.0.0.0 1
SET EBIPA SERVER GATEWAY NONE 1
SET EBIPA SERVER DOMAIN "vdi.net" 1
ENABLE EBIPA SERVER 1
SET EBIPA SERVER 10.0.0.2 255.0.0.0 2
SET EBIPA SERVER GATEWAY NONE 2
SET EBIPA SERVER DOMAIN "vdi.net" 2
ENABLE EBIPA SERVER 2
SET EBIPA SERVER 10.0.0.3 255.0.0.0 3
SET EBIPA SERVER GATEWAY NONE 3
SET EBIPA SERVER DOMAIN "vdi.net" 3
ENABLE EBIPA SERVER 3
SET EBIPA SERVER 10.0.0.4 255.0.0.0 4
SET EBIPA SERVER GATEWAY NONE 4
SET EBIPA SERVER DOMAIN "vdi.net" 4
ENABLE EBIPA SERVER 4
SET EBIPA SERVER 10.0.0.5 255.0.0.0 5
SET EBIPA SERVER GATEWAY NONE 5
SET EBIPA SERVER DOMAIN "vdi.net" 5
ENABLE EBIPA SERVER 5
SET EBIPA SERVER 10.0.0.6 255.0.0.0 6
SET EBIPA SERVER GATEWAY NONE 6
SET EBIPA SERVER DOMAIN "vdi.net" 6
ENABLE EBIPA SERVER 6
SET EBIPA SERVER 10.0.0.7 255.0.0.0 7
SET EBIPA SERVER GATEWAY NONE 7
SET EBIPA SERVER DOMAIN "vdi.net" 7
ENABLE EBIPA SERVER 7
SET EBIPA SERVER 10.0.0.8 255.0.0.0 8
SET EBIPA SERVER GATEWAY NONE 8
SET EBIPA SERVER DOMAIN "vdi.net" 8
ENABLE EBIPA SERVER 8
SET EBIPA SERVER 10.0.0.9 255.0.0.0 9
SET EBIPA SERVER GATEWAY NONE 9
SET EBIPA SERVER DOMAIN "vdi.net" 9
ENABLE EBIPA SERVER 9
SET EBIPA SERVER 10.0.0.10 255.0.0.0 10
SET EBIPA SERVER GATEWAY NONE 10
SET EBIPA SERVER DOMAIN "vdi.net" 10
ENABLE EBIPA SERVER 10
SET EBIPA SERVER 10.0.0.11 255.0.0.0 11
SET EBIPA SERVER GATEWAY NONE 11
SET EBIPA SERVER DOMAIN "vdi.net" 11
ENABLE EBIPA SERVER 11
SET EBIPA SERVER 10.0.0.12 255.0.0.0 12
SET EBIPA SERVER GATEWAY NONE 12
SET EBIPA SERVER DOMAIN "vdi.net" 12
ENABLE EBIPA SERVER 12
SET EBIPA SERVER 10.0.0.13 255.0.0.0 13
SET EBIPA SERVER GATEWAY NONE 13
SET EBIPA SERVER DOMAIN "vdi.net" 13
ENABLE EBIPA SERVER 13
SET EBIPA SERVER 10.0.0.14 255.0.0.0 14
SET EBIPA SERVER GATEWAY NONE 14
SET EBIPA SERVER DOMAIN "vdi.net" 14
ENABLE EBIPA SERVER 14
SET EBIPA SERVER NONE NONE 14A
SET EBIPA SERVER GATEWAY 10.65.1.254 14A
SET EBIPA SERVER DOMAIN "" 14A
SET EBIPA SERVER 10.0.0.15 255.0.0.0 15
SET EBIPA SERVER GATEWAY NONE 15
SET EBIPA SERVER DOMAIN "vdi.net" 15
ENABLE EBIPA SERVER 15
SET EBIPA SERVER 10.0.0.16 255.0.0.0 16
SET EBIPA SERVER GATEWAY NONE 16
SET EBIPA SERVER DOMAIN "vdi.net" 16
ENABLE EBIPA SERVER 16
#Set Enclosure Bay IP Addressing (EBIPA) Information for Interconnect Bays
#NOTE: SET EBIPA commands are only valid for OA v3.00 and later
SET EBIPA INTERCONNECT 10.0.0.101 255.0.0.0 1
SET EBIPA INTERCONNECT GATEWAY NONE 1
SET EBIPA INTERCONNECT DOMAIN "vdi.net" 1
SET EBIPA INTERCONNECT NTP PRIMARY 10.1.0.2 1
SET EBIPA INTERCONNECT NTP SECONDARY 10.1.0.3 1
ENABLE EBIPA INTERCONNECT 1
SET EBIPA INTERCONNECT 10.0.0.102 255.0.0.0 2
SET EBIPA INTERCONNECT GATEWAY NONE 2
SET EBIPA INTERCONNECT DOMAIN "vdi.net" 2
SET EBIPA INTERCONNECT NTP PRIMARY 10.1.0.2 2
SET EBIPA INTERCONNECT NTP SECONDARY 10.1.0.3 2
ENABLE EBIPA INTERCONNECT 2
SET EBIPA INTERCONNECT 10.0.0.103 255.0.0.0 3
SET EBIPA INTERCONNECT GATEWAY NONE 3
SET EBIPA INTERCONNECT DOMAIN "" 3
SET EBIPA INTERCONNECT NTP PRIMARY NONE 3
SET EBIPA INTERCONNECT NTP SECONDARY NONE 3
ENABLE EBIPA INTERCONNECT 3
SET EBIPA INTERCONNECT 10.0.0.104 255.0.0.0 4
SET EBIPA INTERCONNECT GATEWAY NONE 4
SET EBIPA INTERCONNECT DOMAIN "" 4
SET EBIPA INTERCONNECT NTP PRIMARY NONE 4
SET EBIPA INTERCONNECT NTP SECONDARY NONE 4
ENABLE EBIPA INTERCONNECT 4
SET EBIPA INTERCONNECT 10.0.0.105 255.0.0.0 5
SET EBIPA INTERCONNECT GATEWAY NONE 5
SET EBIPA INTERCONNECT DOMAIN "vdi.net" 5
SET EBIPA INTERCONNECT NTP PRIMARY 10.1.0.2 5
SET EBIPA INTERCONNECT NTP SECONDARY 10.1.0.3 5
ENABLE EBIPA INTERCONNECT 5
SET EBIPA INTERCONNECT 10.0.0.106 255.0.0.0 6
SET EBIPA INTERCONNECT GATEWAY NONE 6
SET EBIPA INTERCONNECT DOMAIN "vdi.net" 6
SET EBIPA INTERCONNECT NTP PRIMARY 10.1.0.2 6
SET EBIPA INTERCONNECT NTP SECONDARY 10.1.0.3 6
ENABLE EBIPA INTERCONNECT 6
SET EBIPA INTERCONNECT 10.0.0.107 255.0.0.0 7
SET EBIPA INTERCONNECT GATEWAY NONE 7
SET EBIPA INTERCONNECT DOMAIN "" 7
SET EBIPA INTERCONNECT NTP PRIMARY NONE 7
SET EBIPA INTERCONNECT NTP SECONDARY NONE 7
ENABLE EBIPA INTERCONNECT 7
SET EBIPA INTERCONNECT 10.0.0.108 255.0.0.0 8
SET EBIPA INTERCONNECT GATEWAY NONE 8
SET EBIPA INTERCONNECT DOMAIN "" 8
SET EBIPA INTERCONNECT NTP PRIMARY NONE 8
SET EBIPA INTERCONNECT NTP SECONDARY NONE 8
ENABLE EBIPA INTERCONNECT 8
SAVE EBIPA
#Uncomment following line to remove all user accounts currently in the system
#REMOVE USERS ALL
#Create Users – add at least 1 administrative user
ADD USER "admin"
SET USER CONTACT "Administrator"
SET USER FULLNAME "System Admin"
SET USER ACCESS "ADMINISTRATOR"
ASSIGN SERVER 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,1A,2A,3A,4A,5A,6A,7A,8A,9A,10A,11A,12A,13A,14A,15A,16A,1B,2B,3B,4B,5B,6B,7B,8B,9B,10B,11B,12B,13B,14B,15B,16B "Administrator"
ASSIGN INTERCONNECT 1,2,3,4,5,6,7,8 "Administrator"
ASSIGN OA "Administrator"
ENABLE USER "Administrator"
#Password Settings
ENABLE STRONG PASSWORDS
SET MINIMUM PASSWORD LENGTH 8
#Session Timeout Settings
SET SESSION TIMEOUT 1440
#Set LDAP Information
SET LDAP SERVER ""
SET LDAP PORT 0
SET LDAP NAME MAP OFF
SET LDAP SEARCH 1 ""
SET LDAP SEARCH 2 ""
SET LDAP SEARCH 3 ""
SET LDAP SEARCH 4 ""
SET LDAP SEARCH 5 ""
SET LDAP SEARCH 6 ""
#Uncomment following line to remove all LDAP accounts currently in the system
#REMOVE LDAP GROUP ALL
DISABLE LDAP
#Set SSO TRUST MODE
SET SSO TRUST Disabled
#Set Network Information
#NOTE: Setting your network information through a script while
# remotely accessing the server could drop your connection.
# If your connection is dropped this script may not execute to conclusion.
SET OA NAME 1 VDIOA1
SET IPCONFIG STATIC 1 10.0.0.255 255.0.0.0 0.0.0.0 0.0.0.0 0.0.0.0
SET NIC AUTO 1
DISABLE ENCLOSURE_IP_MODE
SET LLF INTERVAL 60
DISABLE LLF
#Set VLAN Information
SET VLAN FACTORY
SET VLAN DEFAULT 1
EDIT VLAN 1 "Default"
ADD VLAN 21 "VDI"
ADD VLAN 29 "VMOTION"
ADD VLAN 93 "PUB_ISCSI"
ADD VLAN 110 "MGMT_VLAN"
SET VLAN SERVER 1 1
SET VLAN SERVER 1 2
SET VLAN SERVER 1 3
SET VLAN SERVER 1 4
SET VLAN SERVER 1 5
SET VLAN SERVER 1 6
SET VLAN SERVER 1 7
SET VLAN SERVER 1 8
SET VLAN SERVER 1 9
SET VLAN SERVER 1 10
SET VLAN SERVER 1 11
SET VLAN SERVER 1 12
SET VLAN SERVER 1 13
SET VLAN SERVER 1 14
SET VLAN SERVER 1 15
SET VLAN SERVER 1 16
SET VLAN INTERCONNECT 1 1
SET VLAN INTERCONNECT 1 2
SET VLAN INTERCONNECT 1 3
SET VLAN INTERCONNECT 1 4
SET VLAN INTERCONNECT 1 5
SET VLAN INTERCONNECT 1 6
SET VLAN INTERCONNECT 1 7
SET VLAN INTERCONNECT 1 8
SET VLAN OA 1
DISABLE VLAN
SAVE VLAN
DISABLE URB
SET URB URL ""
SET URB PROXY URL ""
SET URB INTERVAL DAILY 0
Appendix D – CLIQ commands for working with P4000
This section offers examples of command line syntax for creating SANiQ volumes as well as adding
hosts and presenting volumes to hosts. These samples can be combined to create scripts to make
initial deployment of P4000 volumes simple. For instructions on installing and using CLIQ consult the
HP P4000 documentation that ships with your P4800 G2 SAN for BladeSystem.
The following line when run in the CLIQ will create a server named mtx-esx01 with an initiator name
of iqn.1998-01.com.vmware:esx01.
cliq createServer serverName=mtx-esx01 useChap=0 initiator=iqn.1998-01.com.vmware:esx01 login=172.16.0.130 userName=admin passWord=password
The following line will create a 200GB, thinly provisioned volume named data-01 within a cluster
titled P4800_VDI.
cliq createVolume prompt=0 volumeName=data-01 clusterName=P4800_VDI size=200GB replication=2 thinprovision=1 login=172.16.0.130 username=admin password=password
The following line assigns the previously created volume to the server that was created.
cliq assignVolumeToServer volumeName=data-01 serverName=mtx-esx01 login=172.16.0.130 username=admin password=password
These lines can be combined in a batch file and run as a single script to create all volumes and
servers and assign these as needed.
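As a sketch, the three commands above could be placed in a batch file and extended to cover every volume and server; all names, addresses and credentials are the example values used above:

REM create-vdi-storage.bat - example only; adjust names, sizes and credentials
cliq createServer serverName=mtx-esx01 useChap=0 initiator=iqn.1998-01.com.vmware:esx01 login=172.16.0.130 userName=admin passWord=password
cliq createVolume prompt=0 volumeName=data-01 clusterName=P4800_VDI size=200GB replication=2 thinprovision=1 login=172.16.0.130 username=admin password=password
cliq assignVolumeToServer volumeName=data-01 serverName=mtx-esx01 login=172.16.0.130 username=admin password=password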
Appendix E – Understanding memory overcommitment
This section seeks to explain the concepts and methods of memory management in VMware vSphere.
You should read and understand this section to aid in deciding how far beyond the physical limits of memory you wish to push your server sizing. This section is reprinted with permission from VMware and can be found in the document Understanding Memory Resource Management in VMware ESX 4.1.3 It is included for the convenience of VMware and HP’s joint customers.
The concept of memory overcommitment is fairly simple: host memory is overcommitted when the total
amount of guest physical memory of the running virtual machines is larger than the amount of actual
host memory. ESX supports memory overcommitment from the very first version, due to two important
benefits it provides:
Higher memory utilization: With memory overcommitment, ESX ensures that host memory is
consumed by active guest memory as much as possible. Typically, some virtual machines may be
lightly loaded compared to others. Their memory may be used infrequently, so for much of the time
their memory will sit idle. Memory overcommitment allows the hypervisor to use memory reclamation
techniques to take the inactive or unused host physical memory away from the idle virtual machines
and give it to other virtual machines that will actively use it.
Higher consolidation ratio: With memory overcommitment, each virtual machine has a smaller
footprint in host memory usage, making it possible to fit more virtual machines on the host while still
achieving good performance for all virtual machines. For example, as shown in Figure E1, you can
enable a host with 4GB host physical memory to run three virtual machines with 2GB guest physical
memory each. Without memory overcommitment, only one virtual machine can be run because the
hypervisor cannot reserve host memory for more than one virtual machine, considering that each
virtual machine has overhead memory.
Figure E1: Memory overcommitment in ESX
In order to effectively support memory overcommitment, the hypervisor must provide efficient host
memory reclamation techniques. ESX leverages several innovative techniques to support virtual
machine memory reclamation. These techniques are transparent page sharing, ballooning, and host
swapping. In ESX 4.1, before applying host swapping, ESX applies a new technique called memory
compression in order to reduce the amount of pages that need to be swapped out, while reclaiming
the same amount of host memory.
3 Understanding Memory Resource Management in VMware ESX 4.1. VMware Guide. http://www.waldspurger.org/carl/papers/esx-mem-osdi02.pdf
Transparent Page Sharing (TPS)
When multiple virtual machines are running, some of them may have identical sets of memory
content. This presents opportunities for sharing memory across virtual machines (as well as sharing
within a single virtual machine). For example, several virtual machines may be running the same guest
operating system, have the same applications, or contain the same user data. With page sharing, the
hypervisor can reclaim the redundant copies and keep only one copy, which is shared by multiple
virtual machines in the host physical memory. As a result, the total virtual machine host memory
consumption is reduced and a higher level of memory overcommitment is possible.
In ESX, the redundant page copies are identified by their contents. This means that pages with
identical content can be shared regardless of when, where, and how those contents are generated.
ESX scans the content of guest physical memory for sharing opportunities. Instead of comparing each
byte of a candidate guest physical page to other pages, an action that is prohibitively expensive, ESX
uses hashing to identify potentially identical pages. The detailed algorithm is illustrated in Figure E2.
Figure E2: Content based page sharing in ESX
A hash value is generated based on the candidate guest physical page’s content. The hash value is
then used as a key to look up a global hash table, in which each entry records a hash value and the
physical page number of a shared page. If the hash value of the candidate guest physical page
matches an existing entry, a full comparison of the page contents is performed to exclude a false
match. Once the candidate guest physical page’s content is confirmed to match the content of an
existing shared host physical page, the guest physical to host physical mapping of the candidate
guest physical page is changed to the shared host physical page, and the redundant host memory
copy (the page pointed to by the dashed arrow in Figure E2) is reclaimed. This remapping is invisible
to the virtual machine and inaccessible to the guest operating system. Because of this invisibility,
sensitive information cannot be leaked from one virtual machine to another.
A standard copy-on-write (CoW) technique is used to handle writes to the shared host physical pages.
Any attempt to write to the shared pages will generate a minor page fault. In the page fault handler,
the hypervisor will transparently create a private copy of the page for the virtual machine and remap
the affected guest physical page to this private copy. In this way, virtual machines can safely modify
the shared pages without disrupting other virtual machines sharing that memory. Note that writing to
a shared page does incur overhead compared to writing to non-shared pages due to the extra work
performed in the page fault handler.
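The following Python sketch is a minimal model of the flow just described: hash lookup, full comparison, remapping, and the copy-on-write fault path. The data structures and names are assumptions for illustration; this is not ESX source code, and SHA-1 merely stands in for whatever hash function ESX uses internally.

import hashlib

shared_table = {}   # content hash -> (host page number, reference count)
host_pages = {}     # host page number -> page content (bytes)
vm_mappings = {}    # (vm_id, guest page number) -> host page number
next_hpn = 0        # trivial host page allocator (illustrative)

def page_hash(content):
    return hashlib.sha1(content).digest()   # stand-in hash function

def back_page(vm_id, gpn, content):
    """Back a guest page, sharing it with an identical existing page if possible."""
    global next_hpn
    key = page_hash(content)
    if key in shared_table:
        hpn, refs = shared_table[key]
        if host_pages[hpn] == content:        # full comparison excludes false matches
            shared_table[key] = (hpn, refs + 1)
            vm_mappings[(vm_id, gpn)] = hpn   # remap; the redundant copy is reclaimed
            return True
    host_pages[next_hpn] = content            # no match: back with a private page
    shared_table[key] = (next_hpn, 1)
    vm_mappings[(vm_id, gpn)] = next_hpn
    next_hpn += 1
    return False

def write_fault(vm_id, gpn, new_content):
    """Copy-on-write fault handler: give the writing VM a private copy."""
    global next_hpn
    old_hpn = vm_mappings[(vm_id, gpn)]
    key = page_hash(host_pages[old_hpn])
    hpn, refs = shared_table[key]
    shared_table[key] = (hpn, refs - 1)       # other sharers keep the old page
    host_pages[next_hpn] = new_content
    vm_mappings[(vm_id, gpn)] = next_hpn      # remap to the private copy
    next_hpn += 1

# Example: two VMs with identical zero-filled pages share one host page,
# then one VM writes and receives a private copy via the fault handler.
back_page("vm1", 0, b"\x00" * 4096)
back_page("vm2", 0, b"\x00" * 4096)
write_fault("vm2", 0, b"\x01" + b"\x00" * 4095)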
In VMware ESX, the hypervisor scans the guest physical pages randomly with a base scan rate set by Mem.ShareScanTime, which specifies the desired time to scan the virtual machine’s entire guest memory. The maximum number of scanned pages per second in the host and the maximum number of scanned pages per virtual machine (that is, Mem.ShareScanGHz and Mem.ShareRateMax, respectively) can also be specified in ESX advanced settings. An example is
shown in Figure E3.
Figure E3: Configure page sharing in vSphere Client
The default values of these three parameters are carefully chosen to provide sufficient sharing opportunities while keeping the CPU overhead negligible. In fact, ESX intelligently adjusts the page scan rate based on the number of currently shared pages: if a virtual machine’s page-sharing opportunity appears to be low, the page scan rate is reduced accordingly, and vice versa. This optimization further mitigates the overhead of page sharing.
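As a rough illustration of this adaptive behavior, the sketch below scales a scan rate with the observed sharing ratio. The constants and the scaling rule are assumptions for illustration; the actual ESX heuristic is not documented here.

BASE_RATE = 1000    # assumed base scan rate, pages per second (not an ESX value)

def adjust_scan_rate(pages_scanned, pages_shared):
    """Scale the page scan rate with the observed sharing opportunity."""
    if pages_scanned == 0:
        return BASE_RATE
    share_ratio = pages_shared / pages_scanned
    # Low sharing opportunity -> scan more slowly; high opportunity -> faster.
    scale = 0.25 if share_ratio < 0.05 else (1.0 if share_ratio < 0.25 else 2.0)
    return int(BASE_RATE * scale)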
In hardware-assisted memory virtualization (for example, Intel EPT Hardware Assist and AMD RVI
Hardware Assist [6]) systems, ESX will automatically back guest physical pages with large host
physical pages (2MB contiguous memory region instead of 4KB for regular pages) for better
performance due to fewer TLB misses. In such systems, ESX will not share those large pages because: 1)
the probability of finding two large pages having identical contents is low, and 2) the overhead of
doing a bit-by-bit comparison for a 2MB page is much larger than for a 4KB page. However, ESX still
generates hashes for the 4KB pages within each large page. Since ESX will not swap out large
pages, during host swapping, the large page will be broken into small pages so that these pre-
generated hashes can be used to share the small pages before they are swapped out. In short, we
may not observe any page sharing for hardware-assisted memory virtualization systems until host
memory is overcommitted.
Ballooning
Ballooning is a completely different memory reclamation technique compared to transparent page
sharing. Before describing the technique, it is helpful to review why the hypervisor needs to reclaim
memory from virtual machines. Due to the virtual machine’s isolation, the guest operating system is not
aware that it is running inside a virtual machine and is not aware of the states of other virtual
machines on the same host. When the hypervisor runs multiple virtual machines and the total amount
of the free host memory becomes low, none of the virtual machines will free guest physical memory
because the guest operating system cannot detect the host’s memory shortage. Ballooning makes the
guest operating system aware of the low memory status of the host.
In ESX, a balloon driver is loaded into the guest operating system as a pseudo-device driver. VMware Tools must be installed to enable ballooning, which is recommended for all workloads. The balloon driver has no external interfaces to the guest operating system and communicates with the hypervisor through a
private channel. The balloon driver polls the hypervisor to obtain a target balloon size. If the
hypervisor needs to reclaim virtual machine memory, it sets a proper target balloon size for the
balloon driver, making it "inflate" by allocating guest physical pages within the virtual machine.
Figure E4 illustrates the process of the balloon inflating.
In Figure E4 (a), four guest physical pages are mapped in the host physical memory. Two of the
pages are used by the guest application and the other two pages (marked by stars) are in the guest
operating system free list. Note that since the hypervisor cannot identify the two pages in the guest
free list, it cannot reclaim the host physical pages that are backing them. Assuming the hypervisor
needs to reclaim two pages from the virtual machine, it will set the target balloon size to two pages.
After obtaining the target balloon size, the balloon driver allocates two guest physical pages inside
the virtual machine and pins them, as shown in Figure E4 (b). Here, "pinning" is achieved through the
guest operating system interface, which ensures that the pinned pages cannot be paged out to disk
under any circumstances. Once the memory is allocated, the balloon driver notifies the hypervisor
about the page numbers of the pinned guest physical memory so that the hypervisor can reclaim the
host physical pages that are backing them. In Figure E4 (b), dashed arrows point at these pages. The
hypervisor can safely reclaim this host physical memory because neither the balloon driver nor the
guest operating system relies on the contents of these pages. This means that no processes in the
virtual machine will intentionally access those pages to read/write any values. Thus, the hypervisor
does not need to allocate host physical memory to store the page contents. If any of these pages are
re-accessed by the virtual machine for some reason, the hypervisor will treat it as a normal virtual
machine memory allocation and allocate a new host physical page for the virtual machine. When the
hypervisor decides to deflate the balloon—by setting a smaller target balloon size—the balloon driver
deallocates the pinned guest physical memory, which releases it for the guest’s applications.
Figure E4: Inflating the balloon in a VM
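The following Python sketch models the poll/allocate/pin/notify cycle described above. All class and function names are assumptions for illustration; this is not the VMware Tools balloon driver.

# Conceptual model of balloon inflation and deflation (illustrative only).
class Guest:
    def __init__(self, num_pages):
        self.free_list = list(range(num_pages))  # free guest physical page numbers
        self.pinned = set()
    def alloc_page(self):
        return self.free_list.pop()   # may trigger guest-level paging if memory is low
    def free_page(self, gpn):
        self.free_list.append(gpn)
    def pin(self, gpn):
        self.pinned.add(gpn)          # pinned pages cannot be paged out to disk
    def unpin(self, gpn):
        self.pinned.discard(gpn)

def balloon_cycle(guest, balloon, target, reclaim):
    """One polling cycle: inflate or deflate toward the hypervisor's target size."""
    while len(balloon) < target:      # inflate: allocate, pin, notify the hypervisor
        gpn = guest.alloc_page()
        guest.pin(gpn)
        balloon.append(gpn)
        reclaim(gpn)                  # hypervisor reclaims the backing host page
    while len(balloon) > target:      # deflate: release pinned pages to the guest
        gpn = balloon.pop()
        guest.unpin(gpn)
        guest.free_page(gpn)

# Example: the hypervisor sets a target of two pages, then deflates the balloon.
guest = Guest(num_pages=4)
balloon = []
balloon_cycle(guest, balloon, target=2, reclaim=lambda gpn: None)
balloon_cycle(guest, balloon, target=0, reclaim=lambda gpn: None)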
Typically, the hypervisor inflates the virtual machine balloon when it is under memory pressure. By
inflating the balloon, a virtual machine consumes less physical memory on the host, but more physical
memory inside the guest. As a result, the hypervisor offloads some of its memory pressure to the guest operating system while placing a small additional load on the virtual machine. That is, the hypervisor transfers the
memory pressure from the host to the virtual machine. Ballooning induces guest memory pressure. In
response, the balloon driver allocates and pins guest physical memory. The guest operating system
determines if it needs to page out guest physical memory to satisfy the balloon driver’s allocation
requests. If the virtual machine has plenty of free guest physical memory, inflating the balloon will
induce no paging and will not impact guest performance. In this case, as illustrated in Figure E4, the
balloon driver allocates the free guest physical memory from the guest free list. Hence, guest-level
paging is not necessary. However, if the guest is already under memory pressure, the guest operating system decides which guest physical pages to page out to the virtual swap device in order to satisfy the balloon driver’s allocation requests. The genius of ballooning is that it allows the guest operating system to intelligently make the hard decision about which pages to page out, without the hypervisor’s involvement.
For ballooning to work as intended, the guest operating system must install and enable the balloon
driver, which is included in VMware Tools. The guest operating system must have sufficient virtual
swap space configured for guest paging to be possible. Ballooning might not reclaim memory quickly
enough to satisfy host memory demands. In addition, the upper bound of the target balloon size may
be imposed by various guest operating system limitations.
Hypervisor swapping
In cases where ballooning and transparent page sharing are not sufficient to reclaim memory, ESX
employs hypervisor swapping to reclaim memory. At virtual machine startup, the hypervisor creates a
separate swap file for the virtual machine. Then, if necessary, the hypervisor can directly swap out
guest physical memory to the swap file, which frees host physical memory for other virtual machines.
Besides the limitation on the reclaimed memory size, both page sharing and ballooning take time to
reclaim memory. The page-sharing speed depends on the page scan rate and the sharing
opportunity. Ballooning speed relies on the guest operating system’s response time for memory
allocation.
In contrast, hypervisor swapping is a guaranteed technique to reclaim a specific amount of memory
within a specific amount of time. However, hypervisor swapping is used as a last resort to reclaim
memory from the virtual machine due to the following limitations on performance:
Page selection problems: Under certain circumstances, hypervisor swapping may severely
penalize guest performance. This occurs because the hypervisor has no knowledge of which guest physical pages should be swapped out, so swapping may cause unintended interactions with the native memory management policies in the guest operating system.
Double paging problems: Another known issue is the double paging problem. Assuming the
hypervisor swaps out a guest physical page, it is possible that the guest operating system pages out
the same physical page if the guest is also under memory pressure. This causes the page to be swapped in from the hypervisor swap device and then immediately paged out to the virtual machine’s virtual swap device.
Page selection and double-paging problems exist because the information needed to avoid them is
not available to the hypervisor.
High swap-in latency: Swapping in pages is expensive for a VM. If the hypervisor swaps out a
guest page and the guest subsequently accesses that page, the VM will get blocked until the page is
swapped in from disk. High swap-in latency, which can be tens of milliseconds, can severely degrade
guest performance.
ESX mitigates the impact of interacting with guest operating system memory management by randomly selecting the guest physical pages to be swapped out.
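A minimal sketch of these mechanics, assuming simple dictionaries for VM memory and the swap file, shows why swapping is guaranteed but costly: victim pages are chosen at random, and a later access to a swapped page blocks on a synchronous disk read. This is an illustration, not ESX code.

import random

def swap_out(vm_memory, swap_file, pages_needed):
    """Reclaim a fixed number of pages by moving them to the VM's swap file."""
    victims = random.sample(list(vm_memory), pages_needed)  # random page selection
    for gpn in victims:
        swap_file[gpn] = vm_memory.pop(gpn)   # page contents now live on disk
    return victims

def access(vm_memory, swap_file, gpn):
    """A guest access to a swapped page blocks until it is read back in."""
    if gpn not in vm_memory:
        vm_memory[gpn] = swap_file.pop(gpn)   # synchronous disk read (slow path)
    return vm_memory[gpn]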
Memory compression
The idea of memory compression is straightforward: if the swapped-out pages can be compressed and stored in a compression cache located in main memory, the next access to the page only causes a page decompression, which can be an order of magnitude faster than disk
access. With memory compression, only a few uncompressible pages need to be swapped out if the
compression cache is not full. This means the number of future synchronous swap-in operations will be
reduced. Hence, it may improve application performance significantly when the host is under heavy memory pressure. In ESX 4.1, only the swap candidate pages will be compressed. This means ESX
will not proactively compress guest pages when host swapping is not necessary. In other words,
memory compression does not affect workload performance when host memory is undercommitted.
Reclaiming memory through compression
Figure E5 illustrates how memory compression reclaims host memory compared to host swapping.
Assuming ESX needs to reclaim two 4KB physical pages from a VM through host swapping, pages A and B are the selected pages (Figure E5a). With host swapping only, these two pages will be directly
swapped to disk and two physical pages are reclaimed (Figure E5b). However, with memory
compression, each swap candidate page will be compressed and stored using 2KB of space in a per-
VM compression cache. Note that page compression is much faster than a normal page swap-out operation, which involves disk I/O. Page compression fails if the compression ratio is less than 50%, in which case the uncompressible page is swapped out. As a result, every successful page compression reclaims 2KB of host physical memory. As illustrated in Figure E5c, pages
A and B are compressed and stored as half-pages in the compression cache. Although both pages
are removed from VM guest memory, the actual reclaimed memory size is one page.
Figure E5: Host swapping vs. memory compression in ESX
If a subsequent memory access misses in the VM guest memory, the compression cache will be checked first using the host physical page number. If the page is found in the compression cache, it will be decompressed and pushed back into guest memory. This page is then removed from the
compression cache. Otherwise, the memory request is sent to the host swap device and the VM is
blocked.
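The compress-or-swap decision and the decompression path can be sketched as follows, with zlib standing in for the ESX compressor (which is not documented here); the 50% threshold and 4KB page size come from the text above.

import zlib

PAGE_SIZE = 4096
HALF_PAGE = PAGE_SIZE // 2

def reclaim_page(gpn, content, compression_cache, swap_file):
    """Compress a swap candidate page, or swap it out if it compresses poorly."""
    compressed = zlib.compress(content)
    if len(compressed) <= HALF_PAGE:          # requires at least a 50% ratio
        compression_cache[gpn] = compressed   # reclaims 2KB of host memory
    else:
        swap_file[gpn] = content              # uncompressible: swap to disk

def fault_in(gpn, compression_cache, swap_file):
    """Handle a guest access that misses in VM guest memory."""
    if gpn in compression_cache:              # fast path: in-memory decompression
        return zlib.decompress(compression_cache.pop(gpn))
    return swap_file.pop(gpn)                 # slow path: read from the swap device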
Managing per-VM compression cache
The per-VM compression cache is accounted for by the VM’s guest memory usage, which means ESX
will not allocate additional host physical memory to store the compressed pages. The compression
cache is transparent to the guest OS. Its size starts at zero when host memory is undercommitted
and grows when virtual machine memory starts to be swapped out.
If the compression cache is full, one compressed page must be replaced in order to make room for a
new compressed page. An age-based replacement policy is used to choose the target page. The
target page will be decompressed and swapped out. ESX will not swap out compressed pages.
If the pages belonging to the compression cache must be swapped out under severe memory pressure,
the compression cache size is reduced and the affected compressed pages are decompressed and
swapped out.
The maximum compression cache size is important for maintaining good VM performance. If the
upper bound is too small, a lot of replaced compressed pages must be decompressed and swapped
out. Any subsequent swap-ins of those pages will hurt VM performance. However, since the compression cache is accounted for by the VM’s guest memory usage, a very large compression cache may waste VM memory and unnecessarily create VM memory pressure, especially when most compressed pages
would not be touched in the future. In ESX 4.1, the default maximum compression cache size is
conservatively set to 10% of configured VM memory size. This value can be changed through the
vSphere Client in Advanced Settings by changing the value for Mem.MemZipMaxPct.
Figure E6: Change the maximum compression cache size in the vSphere Client
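A sketch of an age-based cache with such a ceiling is shown below; the data structures and the eviction callback are assumptions for illustration, not the ESX implementation.

from collections import OrderedDict

class CompressionCache:
    """Age-ordered per-VM compression cache with a percentage-based ceiling."""
    def __init__(self, vm_memory_bytes, max_pct=10):   # 10% mirrors the default
        self.limit = vm_memory_bytes * max_pct // 100
        self.entries = OrderedDict()                   # oldest entry first
        self.used = 0
    def insert(self, gpn, compressed, swap_out):
        # Evict oldest pages until the new entry fits under the ceiling.
        while self.used + len(compressed) > self.limit and self.entries:
            old_gpn, old_data = self.entries.popitem(last=False)
            self.used -= len(old_data)
            # The caller decompresses old_data before swapping it out, since
            # ESX does not swap out compressed pages.
            swap_out(old_gpn, old_data)
        self.entries[gpn] = compressed
        self.used += len(compressed)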
When to reclaim host memory
ESX maintains four host free memory states: high, soft, hard, and low, which are reflected by four
thresholds: 6%, 4%, 2%, and 1% of host memory respectively. Figure E7 shows how the host free
memory state is reported in esxtop.
By default, ESX enables page sharing since it opportunistically "frees" host memory with little
overhead. When to use ballooning or swapping (which activates memory compression) to reclaim
host memory is largely determined by the current host free memory state.
Figure E7: Host free memory state in ESXTOP
In the high state, the aggregate virtual machine guest memory usage is smaller than the host memory
size. Whether or not host memory is overcommitted, the hypervisor will not reclaim memory through
ballooning or swapping. (This is true only when the virtual machine memory limit is not set.)
If host free memory drops towards the soft threshold, the hypervisor starts to reclaim memory using
ballooning. Ballooning happens before free memory actually reaches the soft threshold because it
takes time for the balloon driver to allocate and pin guest physical memory. Usually, the balloon
driver is able to reclaim memory in a timely fashion so that the host free memory stays above the soft
threshold.
If ballooning is not sufficient to reclaim memory or the host free memory drops towards the hard
threshold, the hypervisor starts to use swapping in addition to using ballooning. During swapping,
memory compression is activated as well. With host swapping and memory compression, the
hypervisor should be able to quickly reclaim memory and bring the host memory state back to the soft
state.
In a rare case where host free memory drops below the low threshold, the hypervisor continues to
reclaim memory through swapping and memory compression, and additionally blocks the execution
of all virtual machines that consume more memory than their target memory allocations.
In certain scenarios, host memory reclamation happens regardless of the current host free memory
state. For example, even if host free memory is in the high state, memory reclamation is still
mandatory when a virtual machine’s memory usage exceeds its specified memory limit. If this
happens, the hypervisor will employ ballooning and, if necessary, swapping and memory
compression to reclaim memory from the virtual machine until the virtual machine’s host memory
usage falls back to its specified limit.
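The state thresholds and the escalation of reclamation techniques described in this section can be summarized in a short sketch (the thresholds are the 6%/4%/2%/1% values above; the action mapping is a simplification of the behavior described in the text):

# Free-memory states and the reclamation techniques they trigger (illustrative).
THRESHOLDS = [("high", 6.0), ("soft", 4.0), ("hard", 2.0), ("low", 1.0)]

def memory_state(free_pct):
    """Map the percentage of free host memory to an ESX memory state."""
    for state, pct in THRESHOLDS:
        if free_pct >= pct:
            return state
    return "low"

def reclamation_actions(state):
    actions = ["page sharing"]            # enabled by default in every state
    if state in ("soft", "hard", "low"):
        actions.append("ballooning")
    if state in ("hard", "low"):
        actions.append("swapping + memory compression")
    if state == "low":
        actions.append("block VMs above their memory targets")
    return actions

print(memory_state(5.0), reclamation_actions(memory_state(5.0)))
# -> soft ['page sharing', 'ballooning']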
For more information
To read more about HP and Client Virtualization, go to www.hp.com/go/cv
Other documents in the Client Virtualization reference architecture series can be found at the
same URL.
HP and VMware continue to build on more than ten years’ worth of partnership; to find out more, go to www.hp.com/go/vmware and www.vmware.com/hp.
HP Insight Control Integrations, Insight Control for VMware vCenter Server, go to
http://h18000.www1.hp.com/products/servers/management/integration.html
HP P4000 and VMware Best Practices Guide, go to
http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA3-0261ENW.pdf
HP Client Virtualization Analysis and Modeling service, go to
http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA3-2409ENW.pdf
VMware View, go to http://www.vmware.com/view
To help us improve our documents, please provide feedback at
http://h71019.www7.hp.com/ActiveAnswers/us/en/solutions/technical_tools_feedback.html.
© Copyright 2011 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation. AMD is a trademark of Advanced Micro Devices, Inc. Intel and Xeon are trademarks of Intel Corporation in the U.S. and other countries. Java is a registered trademark of Oracle and/or its affiliates.
4AA3-4920ENW, Created May 2011