Worldwide Consulting Solutions | Whitepaper | Citrix XenDesktop
www.citrix.com
XenDesktop 5.x
Modular Reference Architecture
Table of Contents
Overview
Conceptual Architecture
    Access Layer
    Desktop Layer
    Control Layer
Design Planning
    Determine Desktop Layer Requirements
    Determine Control Layer Requirements
    Determine Access Layer Requirements
Design Examples
Summary
Acknowledgments
Revision History
Overview
Organizations are looking at desktop virtualization technologies as a method to enable device
independence and flexible work-styles, and to improve manageability of the desktop
environment. XenDesktop, with its FlexCast models, offers IT the opportunity to deliver desktop
virtualization to a wide range of users by delivering every type of virtual desktop: hosted or local,
physical or virtual, shared or dedicated, each tailored to meet the specific needs of individual end
user groups.
As organizations scale XenDesktop it is important to consider how to best create large
configurations while minimizing risk. While a single XenDesktop site can scale to large numbers
of virtual desktops, creating very large single sites can involve risk, as the loss of site components
has the potential to impact a significant number of users. As such, it is important to consider a
design that is modular in nature and allows an environment to be built in self-contained pods that
can be easily replicated. This allows organizations to build an environment which scales to large
numbers of users, while providing availability should a failure impact a single site structure.
This whitepaper focuses on developing a modular architecture to deliver hosted virtual desktops,
with considerations for an environment that can scale to tens of thousands of users. The
architecture provides details for delivering Hosted VDI and Hosted Shared virtual desktops.
Hosted Desktop Blades are discussed, but not covered in detail.
To help architects design a XenDesktop solution based on real-world projects, organizations can
refer to the Citrix Desktop Transformation Accelerator for step by step assessment, design and
deployment guidance, and the XenDesktop Design Handbook for reference architectures,
planning guides and best practices.
Conceptual Architecture
Trying to design a completely integrated solution for large numbers of users can be challenging, as
there are many variables which must be taken into account. Designing a solution in a modular
fashion allows it to be broken down into distinct modules: smaller portions, such as layers and pods,
that can be designed and linked together to create the end-state solution.
The modular architecture consists of three major layers which can be combined to build out the
required desktop solution: the access, desktop and control layers. Each layer is briefly discussed
below.
Access Layer
The access layer consists of servers and appliances responsible for providing connectivity to the
XenDesktop environment through both web based methods and Citrix Receiver. The access layer
controls connectivity across multiple pods within the desktop delivery layer and generally only one
pool of servers is required to fulfill this role in a datacenter. The access layer consists of the
following components:
Web Interface: Web Interface provides initial user access into the XenDesktop
infrastructure by accepting user credentials and passing them to the appropriate XenDesktop
or XenApp controller for authentication and enumeration. For remote users, authentication
takes place on the NetScaler/Access Gateway and credentials are passed to the Web
Interface and controller infrastructure. Once the controller acknowledges authentication,
Web Interface presents the user with the available resources. When the virtual desktop is
selected by the user, the XenDesktop Controllers in the XenDesktop site manage session
initiation.
NetScaler: Citrix NetScaler provides dual functionality in the XenDesktop modular
architecture. It provides front end secure remote access to the environment through
integrated Access Gateway functionality, and provides component high availability to
XenDesktop by load balancing the Web Interface, XenApp Controller and XenDesktop
Controller components. Load balancing is achieved using intelligent monitors which query
the Web Interface, XML, and XenDesktop Controller infrastructure to ensure that the
services are up and running correctly. For high availability, NetScaler devices are configured
in an active-passive pair to ensure that service is available should one device go down.
Desktop Layer
The desktop layer contains the hardware level components that provide the base infrastructure to
deliver Hosted VDI, Hosted Shared and Hosted Desktop Blade desktops delivered from the
datacenter. The desktop layer consists of a number of hypervisor pools or blade PCs to deliver
service to the end users. A given XenDesktop implementation can include multiple desktop pods
containing pools of servers to deliver virtual desktops, aligned with Citrix FlexCast models for
virtual desktop delivery.
Hosted VDI Pool: Hypervisor pools provide the structure to run virtual desktops when
delivering a Hosted VDI model. For the purposes of this document, XenServer hosts are
considered, although VMware vSphere or Microsoft Hyper-V host pools are also
supported. The hypervisor hosts are defined in XenServer pools to provide management
and high availability. Hosts within the pool share common storage and networking
components so that virtual machines can be started on any host within the pool where
sufficient resources exist.
Hosted Shared Pool: Pooled hypervisors hosting XenApp servers to provide Hosted
Shared Desktops. Hosted shared desktops provide a locked-down standardized
environment with a core set of applications. While it is possible to configure a XenApp
farm to deliver both applications and hosted shared desktops, this reference architecture
limits the pool configuration to delivering hosted shared desktops to simplify the
environment. Additional XenApp configurations can be created to deliver applications as
required.
Hosted Desktop Blade: A hosted blade pool consists of a number of physical blade PCs
within an enclosure. These blades are provisioned to end-users in a one-to-one
configuration. The number of blades in a single enclosure is dictated by the specifications of
the hardware manufacturer (as an example, some current blade systems support 16-32 blades
per enclosure). Specifications of the blade PCs can be matched to the end-users’
performance requirements.
Control Layer
The control layer contains the XenDesktop components that are required to control the delivery of
virtual desktops to end-users. Some of the XenDesktop components in the control layer are
replicated per-desktop pod while some serve the entire configuration. The components in the
control layer are detailed below.
Per-Desktop Pod
XenDesktop Controller: The XenDesktop Controller provides the control structure to
manage virtual desktops delivered through the Hosted VDI model. The controllers
authenticate users, enumerate resources, and direct user launch requests to the appropriate
virtual desktop. The controllers manage and maintain the state of the XenDesktop site to
help control desktop startups, shutdowns and Virtual Desktop Agent registrations. The
controllers constantly query and update the SQL database with site status, allowing
controllers to go offline without impacting user activities. Redundant XenDesktop
Controllers are configured and load balanced to ensure availability of the site should a single
controller fail.
XenApp Data Collector/XML Server: The XenApp Controller infrastructure provides
the link between the Web Interface and the XenApp pool providing Hosted Shared
desktops. Within a XenApp environment, there are two controller functions: Zone Data
Collector and XML Broker. Zone Data Collectors host an in-memory database that
maintains dynamic information about the servers in a zone (e.g. server load, session status,
and published applications). XML Brokers authenticate users, enumerate resources, and
direct user launch requests to the appropriate XenApp application server. XenApp
controllers are regular XenApp servers that have been installed with Controller Mode
enabled and do not host any user applications. Zone Data Collectors are assigned the "Most
Preferred" and "Preferred" zone election preferences for primary and backup data collectors
respectively. Dedicated XML Brokers are assigned a default election preference. Redundant
XenApp controllers are configured and load balanced to ensure availability of the site should
a single controller fail.
Provisioning Server: The provisioning server is responsible for streaming the operating
system to the virtual desktops in the infrastructure, and optionally streaming the server
image to XenApp servers. Citrix Provisioning Services allows a single vDisk to be used to
deliver a consistent virtual desktop across the environment, and to simplify image
management and maintenance. The provisioning servers must be configured for high
availability by configuring properties on the servers and vDisks, and by having multiple
servers sharing read/write access to a CIFS file share where the vDisks reside. This
structure provides load balancing across the provisioning servers by distributing I/O
requests from target devices across all servers, and allows target devices to automatically
fail over to another server should connectivity to the original be lost due to server failure.
Across The Configuration
SQL Server: The SQL database provides the foundation for the virtual desktop solution,
storing all configurations, desktop and current utilization information. The server is critical
to the continuous functioning of XenDesktop, XenApp and Provisioning Services
configurations. The SQL server instance should be highly available to ensure continuous
operation. Citrix recommends the configuration of database mirroring in a synchronous
mirroring with witness configuration to ensure the database is protected and failover is
automatic.
Citrix License Servers: The licensing server provides Citrix licensing for all components
within the XenDesktop architecture, with the exception of the NetScaler components in the
Access pool, as they are manually configured with license files. Citrix licensing has a 30 day
grace period during which the XenDesktop components will function normally should the
license server become unavailable. Because of this grace period, a single license server as a
virtual machine or virtual appliance which is configured for VM-level HA can be
implemented. A failed license server can easily be rebuilt and restored from backup without
impacting operations of the XenDesktop infrastructure.
Note: On XenApp servers, the information required to provide the 30-day license grace period is stored
locally on every server. In Provisioning Services-based infrastructures, the relevant information is updated on
every XenApp server during runtime, but is reset to the state stored within the vDisk upon reboot. If the
vDisk has not been in maintenance mode within the last 30 days, the license server fails, and the XenApp
servers are rebooted, no new user sessions can be established. Therefore, the MPS-WSXICA_MPS-
WSXICA.ini file is redirected to a file share, as described in Citrix Knowledgebase Article CTX131202.
Microsoft Windows Active Directory, DHCP, and DNS: Provides the base
infrastructure required to access virtual desktops.
Design Planning
The modular reference architecture is built on a “virtualize everything” approach, with all servers
and services within the architecture configured on virtual servers. The architecture is constructed
based on a number of desktop pods which are functional XenDesktop sites and are sized based on
scalability considerations and risk. As the size of an individual desktop site becomes very large, the
risk of a failure impacting a significant portion of the user population increases. Thus the modular
concept allows organizations to build smaller site configurations which are combined together
through the access and control layers to produce an environment that can serve the needs of both
large and small numbers of users while allowing for resiliency and availability should a single site
structure fail.
The first step in determining the overall configuration of the modular reference architecture is to
determine the number of users and the user requirements based on desktop load. Once the user
load is determined, the number of modules can be enumerated based on risk, and the individual
modules can be broken down by individual server and pool scalability. Each of the three layers in
the modular reference architecture is considered separately.
Determine Desktop Layer Requirements
Estimating Single Server Scalability
Single server scalability refers to the number of users that can be loaded onto a single virtual server
and is the basis for determining the scalability of a hypervisor resource pool and the overall
scalability of a single pod. Single server scalability needs to be considered for Hosted VDI and
Hosted Shared models. Hosted Blade scalability is 1:1.
The scalability numbers below are provided as initial estimation guidance. Scalability testing should
be performed prior to production to identify appropriate resource requirements for specific loads,
which should be determined through load simulation and/or user monitoring.
Hosted VDI Scalability
For Hosted VDI, single server scalability is based on CPU and memory load. The majority
of XenDesktop deployments use a CPU overcommit ratio of between 4:1 and 8:1.
However, some high-performance virtual desktops may require multiple physical CPUs per
virtual desktop. The following table outlines the initial virtual machine recommendations for
virtual desktops. For more information, please refer to the Citrix Knowledgebase Article
CTX127277 – Hosted VM-Based Resource Allocation.
User Category   Operating System   vCPU   RAM           Users per Core (est.)
Light           Windows XP         1      768 MB-1 GB   10-12
                Windows 7          1      1-1.5 GB      8-10
Normal          Windows XP         1      1-1.5 GB      7-9
                Windows 7          2      1.5-2 GB      5-7
Heavy           Windows XP         2      2-4 GB        2-4
                Windows 7          2      4 GB          2-4
When determining the user density on a single server, calculate the maximum user density
for CPU and memory and use the lower value. The formulae for maximum users are as
follows:
Max Users (CPU) = (CPU Cores - Reserved Cores) x Users per Core
Max Users (Memory) = (Total RAM - Hypervisor RAM) / RAM per User
Note: Each hypervisor requires CPU and RAM to function properly. On average, the hypervisor will need 2-5
GB of RAM. For XenServer, the maximum RAM configuration for Hypervisor Overhead is 2.94 GB. For
calculation, this is rounded to 3 GB. For information on configuring Dom0 memory in XenServer, refer to
Citrix Knowledgebase Article CTX126531. As the size of the physical server increases, so too does the
overhead. It is also advisable to reserve 1 CPU for hypervisor processing, which helps mitigate the risk of over-
allocating CPUs.
For example, if a normal Windows 7 user is considered, each user will need 2 vCPU and 2 GB
of RAM, and the system will support up to 7 users per core. Assuming a 2x8 core CPU with
128 GB of RAM, a single server would scale as follows:
CPU Cores: 16, Users/Core: 7
Max Users (CPU) = (16 - 1) x 7 = 105 users
Available Memory: 128 GB, RAM/User: 2 GB
Max Users (Memory) = (128 GB - 3 GB) / 2 GB ≈ 62 users
Single server scalability is determined by the lower of the two values, netting a rounded
scalability of approximately 60 users per hypervisor server for this example. Note that the
bottleneck here is memory: adding more RAM to the hypervisor server would allow more
users to be hosted. Increasing memory to 196 GB would allow approximately 95 users per
hypervisor server, a better balance of CPU and memory.
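The lower-of-two-limits estimate above can be sketched in a few lines of code. This is a minimal illustration under the assumptions stated in the note (one core reserved for the hypervisor, roughly 3 GB of hypervisor RAM overhead); the function name and defaults are illustrative, not part of any Citrix tooling.

```python
def max_vdi_users(cores, users_per_core, ram_gb, ram_per_user_gb,
                  reserved_cores=1, hypervisor_ram_gb=3):
    """Estimate single-server Hosted VDI density: the lower of the
    CPU-bound and memory-bound user counts governs."""
    cpu_bound = (cores - reserved_cores) * users_per_core
    mem_bound = int((ram_gb - hypervisor_ram_gb) / ram_per_user_gb)
    return min(cpu_bound, mem_bound)

# "Normal" Windows 7 workload: 7 users per core, 2 GB RAM per user
print(max_vdi_users(16, 7, 128, 2))   # memory-bound, roughly 60 users
print(max_vdi_users(16, 7, 196, 2))   # more RAM lifts the memory limit
```

Running the same function with different RAM sizes makes it easy to see where the CPU and memory limits cross over.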
Hosted Shared Scalability
For hosted shared desktops, multiple users are simultaneously hosted on a XenApp server.
As with the Hosted VDI model, the number of users that can be hosted on a single server
depends upon user load and CPU and memory. As XenApp 6.5 is 64-bit only, it is assumed
that the processor subsystem is the primary bottleneck. If 32-bit operating systems are used,
it is more likely that memory will be the primary bottleneck.
CPU overcommit is not recommended for XenApp servers. The following table provides
some general guidance on how to estimate user density based on the number of physical
cores available. The move from dual to quad sockets is not linear and this has been
accounted for by a 15% drop in user density:
Number of Sockets   Users per Physical Core
                    Light   Normal   Heavy
Dual                18      12       6
Quad                15      10       5
The following assumptions were made during the creation of these estimates:
XenApp vCPU: The optimal number of vCPUs assigned to each virtual XenApp
server will vary according to the characteristics of the users and applications
supported. However, optimal density is typically obtained when 4 vCPUs are
assigned to each virtual XenApp server.
Processor Speed: The speed of the processors has a direct impact on the number of
users that can be supported per processor. The estimates provided are based on a
processor speed of 2.7 GHz.
Workloads: Light, normal and heavy workloads are not mixed within a single virtual
XenApp server or physical virtualization hosts.
Hypervisor Overhead: The overhead from supporting the hypervisor has been
accommodated by reducing the estimated number of users per core rather than
specifically reserving virtual CPUs.
XenApp Optimization: The recommended XenApp optimization recommendations
have been applied. For more information, please refer to the Citrix Knowledgebase
Article CTX131577 – XenApp 6.x Optimization Guide.
The host density estimates in the following table were obtained by multiplying the number
of physical cores available by the estimated user density per core values.
Sockets   Cores   Total Physical Cores   VM Count   User Density per Host
                                                    Light   Normal   Heavy
2         6       12                     6          216     144      72
2         8       16                     8          288     192      96
2         10      20                     10         360     240      120
4         6       24                     12         360     240      120
4         8       32                     16         480     320      160
4         10      40                     20         600     400      200
As a general rule, RAM requirements should be calculated by multiplying the number of light
users by 341MB, medium users by 512MB and heavy users by 1024MB. Therefore, each
virtual machine hosted on a dual socket host should typically be assigned 12GB of RAM and
each virtual machine hosted on a quad socket host should be assigned 10GB of RAM.
Important: Although these estimates provide a good starting point it is still important that scalability
testing be performed to account for areas of variance, including – processor speed, processor architecture,
application set, usage patterns and number of idle users.
For the hosted shared desktop servers, given the VM-to-core ratios reflected in the host
density table above, the following formulae determine how many XenApp virtual servers,
and thus users, can be hosted on a single hypervisor server:
XenApp VMs per Host = Total Physical Cores / 2
Users per Host = Total Physical Cores x Users per Physical Core
Following the previous example and using a 2x8 core hypervisor host with 128 GB RAM
and a "Normal" user load, an example single server would scale as follows:
XenApp VMs per Host = 16 / 2 = 8
Users per Host = 16 x 12 = 192 users
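The table and formulae above can be combined into a small sizing sketch. The dictionary values come directly from the users-per-core table; the rule of one XenApp VM per two physical cores mirrors the VM counts in the host density table (which presumably assumes Hyper-Threading, given the 4 vCPU per VM guidance). The function name is illustrative only.

```python
# Users-per-core estimates from the table above (dual vs. quad socket)
USERS_PER_CORE = {
    "dual": {"light": 18, "normal": 12, "heavy": 6},
    "quad": {"light": 15, "normal": 10, "heavy": 5},
}

def hosted_shared_density(sockets, cores_per_socket, workload):
    """Estimate XenApp VM count and user density for one hypervisor
    host: one 4-vCPU XenApp VM per two physical cores, with users
    scaling linearly with total physical cores."""
    total_cores = sockets * cores_per_socket
    socket_class = "dual" if sockets <= 2 else "quad"
    vm_count = total_cores // 2
    users = total_cores * USERS_PER_CORE[socket_class][workload]
    return vm_count, users

print(hosted_shared_density(2, 8, "normal"))   # the 2x8 "Normal" example
```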
Estimating Resource Pool Scalability
Pooled hypervisor hosts provide the hypervisor structure to run Windows XP/Windows 7 virtual
desktops and XenApp servers when delivering Hosted VDI and Hosted Shared FlexCast models
respectively.
When creating a resource pool structure for a virtual desktop solution, Citrix recommends the
following resource pool sizes:
FlexCast Model         XenServer Resource Pool Size (hosts)
Streamed VDI           Up to 12
Dedicated/Pooled VDI   Up to 8
Hosted Shared          Up to 16
Estimating the number of resource pools and desktop pods required
The final step in building out the desktop layer is to determine the number of resource pools
required. For a given number of users, the decision on the number of pods deployed is made based
on risk and contingency. As the size of the XenDesktop configuration grows, increasing the
number of virtual desktops by scaling a single pod to larger sizes can create increased risk. While
there is redundancy built into the configuration, a significant failure has the potential to impact the
entire organization. This risk can be mitigated by adding additional pods, essentially scaling out
within the configuration rather than increasing the size of a pod. For example, if the user
population to be served is 10,000 hosted VDI users, it may make sense from a risk mitigation
perspective to create an architecture consisting of 4 x 2,500 user pods rather than a single 10,000
user pod. In this configuration, an additional pod can be created to provide a failover environment
should a single pod fail. For hosted shared desktops, splitting the configuration into pods may also
be a consideration, but the threshold size can be considerably higher as there are fewer servers and
smaller database structures in a XenApp configuration.
From an estimating point of view, the size of the resource pools and pods is based on the following
formulae:
Users per Resource Pool = Hosts per Pool x Users per Host
Pools per Pod = Target Users per Pod / Users per Resource Pool (rounded up)
Continuing with the example assuming 2,500 streamed VDI users per pod, and using the scalability
examples above for users and pool sizes, resource pool size and pod size can be calculated:
Users per Resource Pool = 12 x 60 = 720 users
Pools per Pod = 2,500 / 720 ≈ 3.5, rounded up to 4
For this example the final value (Pools per Pod) is rounded up to ensure capacity for the target
number of users. This also provides a level of availability within the pod structure; if a single
resource pool fails, the spare capacity in the remaining resource pools helps absorb the load. Note
that this needs to be evaluated on a case by case basis. If the calculation works out to a whole number, an
additional resource pool may be added to the environment for redundancy at the resource pool
level.
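The rounding logic described above, including the extra pool when the division works out to a whole number, can be sketched as follows. The 60 users-per-host figure in the example call is an assumption carried over from the earlier single-server estimate, and the function name is illustrative.

```python
import math

def pools_per_pod(users_per_pod, hosts_per_pool, users_per_host):
    """Round the pool count up so the pod meets its target user load;
    if the division works out to a whole number, add one extra pool
    for resource-pool-level redundancy, as discussed above."""
    pool_capacity = hosts_per_pool * users_per_host
    exact = users_per_pod / pool_capacity
    pools = math.ceil(exact)
    if pools == exact:          # calculation works out to a whole number
        pools += 1              # add a pool for redundancy
    return pools

# 2,500 streamed-VDI users per pod, 12-host pools, ~60 users per host
print(pools_per_pod(2500, 12, 60))
```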
Determine Control Layer Requirements
When determining the configuration of the control layer, two control environments need to be
considered; per-desktop pod control elements, and control elements that serve the entire solution.
The per-desktop pod elements are the XenDesktop and XenApp controllers, and Citrix
Provisioning Server elements. Control layer components that serve the entire solution include the
Citrix Licensing Server, and Microsoft SQL server elements.
Per-Desktop Pod Control Elements
XenDesktop Controller
The XenDesktop Controller provides the control structure to manage virtual desktops
delivered through the Hosted VDI model. The following XenDesktop Controller
configuration is recommended for the per-desktop pod control layer:
XenDesktop Controllers
VM Host CPU/Memory Configuration   4 vCPU, 4 GB RAM
VM Host Networking                 Bonded virtual NIC
VM Host Storage                    40 GB shared storage for OS, XenDesktop
Windows OS                         Windows Server 2008 R2
Windows Roles/Features             Web Services (IIS), .NET Framework 3.5.1,
                                   Windows Process Activation Service
XenDesktop                         XenDesktop 5.5, 5.6
A single XenDesktop Controller using this configuration can support more than 5,000 users.
For redundancy, a load-balanced N+1 configuration is used.
Given a load-balanced configuration for XenDesktop Controllers, a pair of controllers can
support a single XenDesktop site of over 10,000 users, so three controllers are required for
the sample load and configuration to maintain N+1 redundancy.
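As a sketch, the N+1 controller count described above can be written as a one-line calculation; the 5,000 users-per-controller figure is the estimate given in this section, not a hard product limit, and the function name is illustrative.

```python
import math

def controllers_needed(users, users_per_controller=5000):
    """N+1 rule for load-balanced XenDesktop Controllers: enough
    controllers to carry the load, plus one spare for failure."""
    return math.ceil(users / users_per_controller) + 1

print(controllers_needed(10000))  # three controllers for a 10,000-user site
```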
XenApp Controllers
The XenApp Controller provides the link between the Web Interface and the XenApp pool
providing Hosted Shared desktops. The following XenApp Controller configuration is
recommended for the per-desktop pod control layer:
XenApp Controllers
VM Host CPU/Memory Configuration   4 vCPU, 8 GB RAM
VM Host Networking                 Bonded virtual NIC
VM Host Storage                    40 GB shared storage for OS, XenApp
Windows OS                         Windows Server 2008 R2
Windows Roles/Features             .NET Framework 3.5.1, Remote Desktop Services role,
                                   Windows Application Server role
XenApp                             XenApp 6.5
For environments beyond 2,000 users, or for farms where periods of heavy logon traffic are
anticipated, Citrix recommends dedicated virtual servers for XML Broker components. For
all XenApp farm configurations, a single Zone Data Collector (ZDC) is required, with a
backup for redundancy. Primary and backup ZDC servers should be dedicated for
environments with more than 10 and 20 servers respectively. For XML controllers, the
following formula is used:
Citrix Provisioning Server
The following Provisioning Server configuration is recommended for the per-desktop pod
control layer:
Provisioning Server
VM Host CPU/Memory Configuration   4 vCPU, 32 GB RAM
VM Host Networking                 Dual bonded 10 GB NIC for data
VM Host Storage                    40 GB shared storage for OS, Provisioning Server
Windows OS                         Windows Server 2008 R2
Windows Roles/Features             .NET Framework 3.5.1
Citrix Provisioning Server         CPS v6.1
vDisk Storage                      CIFS share with SMB v2 configuration
A single virtual provisioning server will support approximately 1,000 streams. In order to
provide N+1 redundancy, a minimum of two provisioning servers is required. In order to
support the target configuration size, the following formulae can be used:
Provisioning Servers = (Target Devices / 1,000) + 1
Example: (2,500 / 1,000) + 1 = 3.5, rounded up to 4 servers
This value should be rounded up to the next highest whole number to ensure redundancy
and adequate scalability. Note that the amount of RAM required per server may vary
depending upon the number of vDisk targets that are streamed and need to be kept in
memory to reduce disk I/O. For more information on memory and storage considerations
for Provisioning Server, refer to Citrix Knowledgebase Article CTX125126.
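The provisioning server count described above amounts to rounding the stream load up against the approximately 1,000-stream estimate and adding one server for N+1 redundancy, with two servers as the floor. A minimal sketch, with illustrative names:

```python
import math

def provisioning_servers(target_devices, streams_per_server=1000):
    """Provisioning Server count with N+1 redundancy: round the
    stream load up, keep one spare, and never go below two servers."""
    servers = math.ceil(target_devices / streams_per_server) + 1
    return max(servers, 2)

print(provisioning_servers(2500))  # 2,500 target devices per pod
```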
Cross-Configuration Control Elements
SQL Database
The SQL database provides the foundation for the XenDesktop configuration, storing all
configurations, desktop and current utilization information. The following SQL server
configuration is recommended for the control layer:
SQL Servers
VM Host CPU/Memory Configuration     Calculated value (SQL mirrors);
                                     2 vCPU, 4 GB RAM (witness)
VM Host Networking                   Bonded virtual NIC
VM Host Storage                      40 GB shared storage for OS, SQL
SQL Database Storage (mirrors only)  Storage for databases,
                                     storage for transaction logs
Windows OS                           Windows Server 2008 R2
Windows Roles/Features               .NET Framework 3.5.1
SQL Software                         Microsoft SQL Server 2008 R2
For a standard mirrored with witness configuration for SQL Server, three virtual machines
are required. Two of the VMs will host the databases and process transactions, and the third
will monitor the production and mirror copies of the database and will initiate failover if
required. The witness server does not require a high amount of system resources as it does
not process transactions or update the databases.
For the modular architecture, a single SQL mirror set with witness can support the entire
environment, but the CPU and Memory resources for the two SQL mirror servers will need
to scale to meet the needs as user load increases. The following are guidelines for increasing
CPU and Memory:
vCPU and Memory Sizing
Fewer than 2,500 users        2 vCPU, 4 GB RAM
2,500 to 10,000 users         4 vCPU, 8 GB RAM
Greater than 10,000 users     8 vCPU, 16 GB RAM
Note: Beyond 20,000 users, CPU and Memory utilization on the SQL servers should be reviewed to
determine when the SQL environment should be scaled out to a new configuration. For more information on
XenDesktop scalability, refer to Citrix Knowledgebase Article CTX128700.
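The sizing tiers above can be expressed as a simple lookup; the function name is illustrative, and the tier boundaries are taken directly from the guidelines listed.

```python
def sql_mirror_sizing(users):
    """Map user count to the (vCPU, RAM in GB) guideline tiers listed
    above; beyond 20,000 users, actual utilization should be reviewed
    rather than relying on these estimates."""
    if users < 2500:
        return (2, 4)
    if users <= 10000:
        return (4, 8)
    return (8, 16)

print(sql_mirror_sizing(10000))
```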
The most significant factor when sizing the database server will be determining the database
and transaction log sizing. Database sizing is relatively static for XenDesktop and XenApp
load, but transaction logs for XenDesktop are difficult to size as they depend upon multiple
factors such as the database recovery model, the peak desktop launch rate, and the number
of desktops in use. Rule of thumb calculations for database space and transaction log for
Hosted VDI and Hosted Shared desktops are as follows:
Hosted VDI
Citrix recommends that a fixed-size transaction log be used and that a SQL Alert is set up so
that when the transaction log reaches 50 percent full, the transaction log is backed up, thus
freeing up the log space.
Hosted Shared
These calculations are initial estimates. Specific details on configuring SQL for XenDesktop
can be found in the Citrix whitepaper XenDesktop 5 Database Sizing and Mirroring Best
Practices.
Citrix License Server
The licensing server provides Citrix licensing for all components within the XenDesktop
architecture, with the exception of the NetScaler components in the Access pool, as they are
manually configured with license files. The following license server configuration is
recommended for the control layer:
Citrix License Server
Number of VMs                      1
VM Host CPU/Memory Configuration   2 vCPU, 4 GB RAM
VM Host Networking                 Bonded virtual NIC
VM Host Storage                    40 GB shared storage for OS, License Server
Windows OS                         Windows Server 2008 R2
Windows Roles/Features             Web Services (IIS), .NET Framework 3.5.1,
                                   Windows Process Activation Service
License Server                     Citrix License Server 11.10
A single Citrix License server can issue approximately 170 licenses per second or over
300,000 licenses every 30 minutes. Because of this scalability, a single virtual license server
with VM level HA can be implemented.
Determine Access Layer Requirements
The access layer provides connectivity for end users to the XenDesktop environment. It includes
two components; Web Interface and NetScaler. The NetScaler component provides both secure
remote access through integrated Access Gateway functionality as well as intelligent load balancing
for the Web Interface and XenDesktop and XenApp Controllers.
Web Interface
The Web Interface provides initial user access into the XenDesktop infrastructure. The following
Web Interface configuration is recommended for the access layer:
Web Interface
VM Host CPU/Memory Configuration   2 vCPU, 4 GB RAM
VM Host Networking                 Bonded virtual NIC
VM Host Storage                    40 GB shared storage for OS, Web Interface
Windows OS                         Windows Server 2008 R2
Windows Roles/Features             Web Services (IIS), .NET Framework 3.5.1,
                                   Windows Process Activation Service
Web Interface                      Web Interface 5.4
A single Web Interface server has the capacity to handle more than 30,000 connections per hour.
At least two Web Interface servers should always be configured and load balanced through
NetScaler to provide redundancy and balance load. The calculated server count should be rounded
up to the next highest whole number to ensure redundancy and adequate scalability.
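As a rough sketch of the sizing arithmetic described above (not an official Citrix sizing tool), the Web Interface server count can be derived from peak connections per hour, rounded up, with a floor of two servers for redundancy:

```python
import math

# Capacity of a single Web Interface server, per the guidance above
WI_CONNECTIONS_PER_HOUR = 30_000

def web_interface_servers(peak_connections_per_hour: int) -> int:
    """Round up to the next whole server; never fewer than two for redundancy."""
    return max(2, math.ceil(peak_connections_per_hour / WI_CONNECTIONS_PER_HOUR))

print(web_interface_servers(50_000))  # -> 2 servers
print(web_interface_servers(70_000))  # -> 3 servers
```

The `max(2, ...)` floor encodes the recommendation that two load-balanced Web Interface servers are always deployed, even when a single server could carry the connection load.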
NetScaler
Citrix NetScaler provides secure remote access through integrated Access Gateway functionality and
intelligent load balancing for the Web Interface, XenDesktop controllers and XenApp controllers.
Two NetScaler appliances configured in an Active/Passive pair are used to provide availability for
the NetScaler functionality in the event of an appliance failure. The primary sizing guideline for
NetScaler is based on SSL throughput. The largest amount of traffic through the NetScaler
appliances will be the Access Gateway traffic; load balancing traffic is nominal. To determine the
throughput required, the percentage of remote users and the bandwidth consumed per session are
used.
This value is compared to the throughput capabilities for various NetScaler models to determine the
best appliance to use for the configuration. The following table represents current NetScaler
capabilities for SSL Throughput.
NetScaler Model     SSL Throughput (Mbps)
MPX 5500              500
MPX 7500            1,000
MPX 9500            3,000
MPX 12500, 15000    6,000
MPX 17000           6,500
MPX 17500           8,000
MPX 19500          10,000
MPX 21500          11,500
For example, assuming 10,000 users with 20% remote users and 65 Kbps average bandwidth
utilization per session, the appropriate NetScaler model can be determined as follows:
10,000 users × 20% = 2,000 remote users; 2,000 sessions × 65 Kbps ≈ 130 Mbps of SSL throughput.
Based on this calculation, a pair of NetScaler MPX 5500 appliances (500 Mbps SSL throughput
each) would be sufficient to handle the load.
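The model-selection step above can be sketched as a lookup against the SSL throughput table, picking the smallest appliance whose capacity covers the remote-user load (a simplified illustration; a real design would also weigh headroom, features, and growth):

```python
# SSL throughput capacity (Mbps) per NetScaler model, from the table above
NETSCALER_SSL_MBPS = {
    "MPX 5500": 500,
    "MPX 7500": 1_000,
    "MPX 9500": 3_000,
    "MPX 12500/15000": 6_000,
    "MPX 17000": 6_500,
    "MPX 17500": 8_000,
    "MPX 19500": 10_000,
    "MPX 21500": 11_500,
}

def size_netscaler(total_users: int, remote_pct: float, kbps_per_session: float) -> str:
    """Return the smallest model whose SSL throughput covers the remote load."""
    required_mbps = total_users * remote_pct * kbps_per_session / 1_000
    for model, mbps in NETSCALER_SSL_MBPS.items():  # dict preserves insertion order
        if mbps >= required_mbps:
            return model
    raise ValueError("Load exceeds a single appliance; consider scaling out")

# Example from the text: 10,000 users, 20% remote, 65 Kbps per session
print(size_netscaler(10_000, 0.20, 65))  # -> MPX 5500 (~130 Mbps required)
```

Running the same function for 50,000 users at 20% remote yields roughly 650 Mbps, which pushes the selection up to the MPX 7500, consistent with the Large design example later in this paper.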
Design Examples
Now that all the components of the modular architecture have been determined, example
configurations can be created. For design examples, this whitepaper looks at a mix of FlexCast
models deployed at three scales, as follows:
The example environment consists of a mix of FlexCast models with 40% Hosted Shared, 20%
Pooled VDI, and 40% Dedicated VDI desktops. All Hosted VDI desktops are streamed from
Provisioning Servers. The user load is considered “Normal” across all models. Based on risk, a new
XenDesktop Hosted VDI pod will be created if there are more than 5,000 users in a single site. For
Hosted Shared, a new pod will be created if there are more than 20,000 users in a single XenApp
farm. Configuration sizing for the three examples is as follows:
Model Size   # Users   Hosted Shared   Pooled VDI   Dedicated VDI
Small          2,500           1,000          500           1,000
Medium        15,000           6,000        3,000           6,000
Large         50,000          20,000       10,000          20,000
To stay consistent with the examples, the hardware specification for this design consists of servers
with 2x8 core CPU and 128GB RAM.
Based on these values and the formulae in the Design Planning section of this whitepaper, we can
create the following configurations:
Desktop Layer
Model Size   # XenApp Servers   # XenApp Pools   # Hosted Shared Pods   # VDI Servers   # VDI Pools   # Hosted VDI Pods
Small                       6                1                      1              25             2                   1
Medium                     32                2                      1             150            13                   2
Large                     105                7                      1             500            63                   6
Note that the Pooled and Dedicated VDI user requirement is combined into a single Hosted VDI
configuration.
Control Layer
Model Size   # XenApp ZDC   # XenApp XML   # XenDesktop Controllers   # Provisioning Servers   # SQL Cluster Servers   # License Servers
Small                   2              2                          2                        5                       3                   1
Medium                  2              2                          4                       10                       3                   1
Large                   2              3                         12                       33                       3                   1
Based on vCPU and memory requirements as referenced in the Design Planning section of the
document, and using the same 2x8 core CPU and 128GB memory configuration, the virtual servers
in the control layer will require the following hardware. For this calculation, 1 vCPU equals 1 core,
and CPU overcommit is not considered.
Model Size   Control Layer vCPU Total   Control Layer Memory Total (GB)   Servers Required
Small                              56                               232                  4
Medium                             84                               408                  6
Large                             212                             1,216                 15
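The host count above is the larger of the CPU-bound and memory-bound requirements, given 1 vCPU = 1 core and no overcommit. A minimal sketch of that calculation:

```python
import math

CORES_PER_SERVER = 16     # 2x8-core hosts
RAM_GB_PER_SERVER = 128

def control_layer_hosts(total_vcpu: int, total_ram_gb: int) -> int:
    """Hosts needed: the larger of the CPU-bound and memory-bound counts,
    with 1 vCPU = 1 physical core and no CPU overcommit."""
    return max(math.ceil(total_vcpu / CORES_PER_SERVER),
               math.ceil(total_ram_gb / RAM_GB_PER_SERVER))

print(control_layer_hosts(56, 232))  # Small  -> 4 hosts (CPU-bound)
print(control_layer_hosts(84, 408))  # Medium -> 6 hosts (CPU-bound)
```

Note that for the Large configuration this raw calculation yields 14 hosts; the 15 shown in the table presumably includes an additional host for redundancy headroom.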
Access Layer
For this example, it is assumed that 20% of the user population will be remote, and the average
bandwidth consumed is 65 Kb per second per session.
Model Size   # Web Interface   # NetScaler Appliances
Small                      2   2 × MPX 5500
Medium                     2   2 × MPX 5500
Large                      3   2 × MPX 7500
The small number of virtual servers required for the Web Interface components of the access layer
consumes 4-6 vCPU and 8-12 GB of RAM. Given the low resource requirements for these
components, they can be hosted within the hardware infrastructure dedicated to the control layer.
The NetScaler components are physical appliances.
Summary
As XenDesktop environments scale to large numbers of users, risk mitigation becomes increasingly
important. The loss of a single XenDesktop site through component failure or database corruption
can affect a large number of users if the design relies on a single site. The modular reference
architecture presented in this paper offers organizations a way to design an environment that
mitigates risk by creating smaller XenDesktop pod instances, which are analogous to traditional
XenDesktop sites and XenApp farms. The layered architecture also simplifies the design, allowing
architects to focus on design elements at each layer separately and to scale each layer in a logical
fashion. The overall architecture allows organizations to create a XenDesktop environment with
multiple FlexCast models that can scale from a small, single site to an enterprise configuration
providing virtual desktops for large numbers of users.
Acknowledgments
Citrix Consulting Solutions would like to thank all of the individuals who offered guidance and technical assistance during the course of this project. The following individuals were extremely helpful in answering questions, providing technical guidance, and reviewing documentation throughout the project:
Andy Baker
Matthew Brooks
Thomas Berger
Daniel Feller
Revision History
Revision   Change Description                   Updated By      Date
1.0        Initial Document                     Rich Meesters   April 18, 2012
1.2        Revised Hosted Shared Calculations   Rich Meesters   April 30, 2012
1.3        Updated XenApp scalability           Andy Baker      August 24, 2012
About Citrix
Citrix Systems, Inc. (NASDAQ:CTXS) is a leading provider of virtual computing solutions that help companies deliver IT as an on-demand service. Founded in 1989, Citrix combines virtualization, networking, and cloud computing technologies into a full portfolio of products that enable virtual work styles for users and virtual datacenters for IT. More than 230,000 organizations worldwide rely on Citrix to help them build simpler and more cost-effective IT environments. Citrix partners with over 10,000 companies in more than 100 countries. Annual revenue in 2011 was $2.20 billion.
©2012 Citrix Systems, Inc. All rights reserved. Citrix®, Access Gateway™, Branch Repeater™,
Citrix Repeater™, HDX™, XenServer™, XenApp™, XenDesktop™ and Citrix Delivery Center™
are trademarks of Citrix Systems, Inc. and/or one or more of its subsidiaries, and may be registered
in the United States Patent and Trademark Office and in other countries. All other trademarks and
registered trademarks are property of their respective owners.