HPE Reference Configuration for Microsoft SharePoint and Exchange Server 2016 on HPE Synergy
Design for 10,000 users on a Unified Communications and Collaboration (UCC) platform
Technical white paper



Contents

• Executive summary
• Introduction
• Solution overview
– SharePoint virtualization
– SQL Server AlwaysOn HA features
– Exchange Server overview
• Solution components
– Storage inside each HPE Synergy frame
• Best practices and configuration guidance
– Compute module settings
– HPE Synergy resilient network configuration options
– Deploying SharePoint Server 2016
– Deploying Exchange Server 2016
• Capacity and sizing
– SharePoint sizing
– HPE Sizers
– Load testing recommendations
• Summary
– Implementing a proof-of-concept
• Appendix A: Bill of materials
• Resources and additional links


Executive summary

This reference configuration demonstrates the use of HPE Synergy composable infrastructure to support enterprise applications, in this instance combining Microsoft SharePoint Server 2016 and Microsoft Exchange Server 2016 in a mixed virtualized and physical environment. This paper describes the design principles used by the Hewlett Packard Enterprise engineering team to determine optimal virtual server and storage placement and parameters. It also describes the solution design for deploying these two complex applications, employing the best practices for each and leveraging the HPE Synergy composable infrastructure features that provide a robust solution for enterprise needs.

The HPE Synergy composable infrastructure model offers design and scalability options for the deployment of Microsoft Exchange and SharePoint. The HPE Synergy 12000 frame houses all of the components necessary for the deployment, including compute, networking and storage modules that allow the system architect to specify an infrastructure composed of a balanced set of the elements necessary to make the applications highly available while aligning to specific business needs and Service Level Objectives (SLOs).

The HPE Synergy 12000 frame offers the following HPE Synergy composable infrastructure components:

• Two- or four-socket compute modules (the design in this paper focuses on the two-socket HPE Synergy 480 Gen9 compute modules)

• Network modules: HPE Virtual Connect (HPE Virtual Connect SE 40Gb F8 module) with Fibre Channel connectivity, and Ethernet switch (HPE Synergy 40Gb F8 switch module)

• HPE Synergy D3940 12Gb SAS storage module, supporting up to 40 hot-pluggable small form factor (SFF) drives with dual-port connectivity, including traditional hard disk drives and solid state drives

This solution is designed to support 10,000 Exchange mailboxes using up to 7GB of capacity per mailbox with a heavy email usage profile (each user sending and receiving a combined 150 messages per day). In addition to Exchange at 100% user concurrency, the system is designed to support 10,000 SharePoint users at a typical Enterprise concurrency of 25% (2,500 active users at peak) performing a broadly applicable mix of document management and collaboration tasks. The solution design can be used as a building block approach for larger companies, by adding additional virtual servers for SharePoint servers and additional servers to the Exchange database availability group (DAG). The solution presented in this reference configuration is deployed across two racks with one in each site.
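The headline sizing targets above can be sanity-checked with simple arithmetic. The sketch below is illustrative only (it is not an HPE sizing tool, and the variable names are ours):

```python
# Illustrative check of the headline sizing targets (not an HPE sizer).
EXCHANGE_USERS = 10_000
MAILBOX_QUOTA_GB = 7                 # maximum mailbox capacity per user
MESSAGES_PER_DAY = 150               # combined send/receive, heavy profile

SHAREPOINT_USERS = 10_000
PEAK_CONCURRENCY = 0.25              # typical enterprise concurrency

# Raw mailbox capacity before DAG database copies are taken into account.
raw_mailbox_capacity_tb = EXCHANGE_USERS * MAILBOX_QUOTA_GB / 1024
active_sharepoint_users = int(SHAREPOINT_USERS * PEAK_CONCURRENCY)

print(f"Raw mailbox capacity: {raw_mailbox_capacity_tb:.1f} TB")   # ~68.4 TB
print(f"Peak active SharePoint users: {active_sharepoint_users}")  # 2500
```

Note that the roughly 68 TB of raw mailbox capacity is multiplied by the number of DAG database copies when translating into physical disk requirements.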

Solution sizing and configuration guidelines for Exchange and SharePoint are based on well understood best practices developed over time by HPE and Microsoft for many generations of both applications. With new features and enhancements, Exchange 2016 requires more processor and memory resources. This need can best be met by the latest CPU and architecture of the HPE Synergy compute modules. With the SharePoint 2016 version, the majority of these best practices still hold true, with some new areas to consider around the Search Service and distributing the various Web and other services across multiple virtual machines to provide better per-service virtual machine tuning.

This document details the concepts and solution designs needed to place applications such as Microsoft Exchange, SharePoint, SQL Server, and other applications on the HPE Synergy composable infrastructure pools of compute, storage and fabric. This solution is built upon the high availability features of HPE Synergy combined with the high availability features of Exchange and SharePoint.

Target audience: This white paper is intended for system architects, Chief Information Officers (CIO), decision makers, IT support staff and project managers involved in planning and deploying unified communications and collaboration applications, notably Microsoft SharePoint Server 2016 and Exchange Server 2016. The target audience for this white paper also includes individuals who are considering the use of virtualization technology to consolidate the physical server footprint required. A working knowledge of virtualization technologies, Exchange and SharePoint deployment concepts is recommended, but this document also provides links to reference materials. In addition, architects and systems administrators may benefit from seeing the design of both a non-virtualized (physical) deployment alongside virtualization on the HPE Synergy composable infrastructure.

Document purpose: The purpose of this document is to provide a detailed configuration example for deploying Exchange Server 2016 along with a virtualized SharePoint Server 2016 configuration, and to describe the benefits of deploying on an HPE Synergy composable infrastructure.


Introduction

This paper describes the design principles used by the HPE engineering team to determine optimal virtual server and storage placement and parameters, and describes the solution design for deploying two complex applications, employing the best practices for each and leveraging the composable infrastructure features that provide a robust solution for enterprise needs. This reference configuration demonstrates the use of HPE Synergy composable infrastructure to support enterprise applications, in this instance combining virtualized Microsoft SharePoint Server 2016 with a physical Exchange Server 2016 deployment.

HPE Synergy composable infrastructure is designed around three core principles:

• A pool of resources including compute, storage and network that can be deployed, managed and reconfigured as business and application needs change.

• The ability to use templates or profiles to simply and quickly provision infrastructure resources in a repeatable and consistent manner.

• A unified API that provides access to all infrastructure components and allows hardware to be managed in a consistent, programmatic manner.

Exchange Server 2016 was released in September of 2015. It advances on Exchange Server 2013 with more efficient cataloging and search capabilities, simplified server architecture and deployment models, faster and more reliable eDiscovery, and expanded data loss prevention features. Sizing for Exchange 2016 is very similar to the sizing process for Exchange 2013.

The HPE Synergy composable infrastructure model offers design and scalability options for the deployment of Microsoft Exchange and SharePoint. The HPE Synergy 12000 frame houses all of the components necessary for the deployment: including compute, networking and storage modules that allow the system architect to specify an infrastructure composed of a balanced set of the elements necessary to make the applications highly available to customers while aligning to specific business needs and Service Level Objectives (SLO).

Design points included in this solution for deployment on HPE Synergy composable infrastructure:

• Deployment of a highly available and disaster resilient architecture on the HPE Synergy composable infrastructure which allows implementation in a scalable manner with the ability to expand the architecture as needs change.

– Design the highly available solution to maintain client access in the event of the failure of a solution component.

– Define the service interruptions expected as connections are failed over from one component to another in the HA implementation.

• Align the deployment of Microsoft Exchange with Microsoft’s Exchange Preferred Architecture. The Exchange Server Preferred Architecture (PA) is described as the “Exchange Engineering Team’s prescriptive approach to what we [Microsoft] believe is the optimum deployment architecture for Exchange 2016, and one that is very similar to what we deploy in Office 365.”

– Show examples of scale up and scale out scenarios for the Exchange server environment including multiple racks and HPE Synergy frames which include direct attached storage within each HPE Synergy 12000 frame using HPE Synergy D3940 storage modules.

– Demonstrate the advantage of pairing the HPE Virtual Connect SE 40Gb F8 module with HPE Synergy 10Gb Interconnect Link modules to extend Ethernet networking to multiple frames without top of rack switches to reduce overall solution cost and simplify management.

– Leverage the HPE Synergy Composer and HPE Synergy Frame Link modules with consolidated management for compute modules, storage modules and networking components.

• Ensure that the solution design incorporates Microsoft and HPE design best practices while meeting customer requirements.

This document details the concepts and solution designs needed to place applications such as Microsoft Exchange, SharePoint, SQL Server, and other applications on the HPE Synergy composable infrastructure pools of compute, storage and fabric. This solution is built upon the high availability features of HPE Synergy combined with the high availability features of Exchange and SharePoint.

The Exchange solution is designed around a building block approach with a multi-copy design using a database availability group (DAG) and can be used to scale to large numbers of users. The DAG is the base component of high availability and site resilience built into Exchange. A DAG is a group of up to 16 mailbox servers that host a set of database copies, and thus provides recovery from failures that affect individual databases, servers, or the entire site, when designed accordingly. The Exchange DAG in this solution provides high availability with multiple copies in a primary data center location, and a secondary data center or disaster recovery (DR) location. A DAG with multiple servers and multiple database copies can withstand failures due to logical corruption in an Exchange database, the failure of one or more disk drives, a single server going offline, or the complete outage of the servers in the primary data center or site.

This design is based on a specific number of servers per DAG, which determines the number of users that can be hosted on a single server in a failover scenario. The number of servers per DAG is a flexible design decision, and the tools, resources and design considerations are described in this document. The DAG in this reference configuration can be six servers per site, withstanding a single server failure, or it can be based on pairs of servers per site, with a total of three DAGs as the base design. The advantages and disadvantages of each are detailed in a comparison table later in this document.
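The failover arithmetic behind the six-servers-per-site option can be sketched as follows (a simple illustration using this paper's user counts; the helper function is ours, not a Microsoft or HPE tool):

```python
def users_per_server_after_failover(total_users, servers_per_site, failed_servers=1):
    """Users each surviving DAG member must host when servers fail."""
    surviving = servers_per_site - failed_servers
    if surviving < 1:
        raise ValueError("no surviving servers")
    return total_users / surviving

# Six servers per site hosting this solution's 10,000 users:
# one failure leaves 5 servers carrying 2,000 users each.
print(users_per_server_after_failover(10_000, 6))  # 2000.0
```

The same arithmetic shows why the servers-per-DAG count matters: fewer servers per site means each survivor absorbs a larger share of the user load after a failure.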

In Exchange Server the workload is typically defined by how many email messages a user sends and receives per day, the average size of each message, and storage requirements such as length of retention and ultimate mailbox size. In SharePoint, the workload is typically defined by the mix of functions the users will execute and by the required throughput. Throughput defines the function frequency/intensity and is commonly defined in “requests per second” (RPS). The combination of functions and their RPS drives the solution resource capacities required in terms of server CPU, memory, storage performance and capacity, and network traffic for the application services that will be deployed on each server in the solution. SharePoint is ideally suited to scale-out multi-server strategies, whereby specific services can be moved off the Front End servers and onto dedicated application servers to provide the required service capacities. IT policies supporting business imperatives may further define the configuration – a typical requirement being for a highly-available solution.
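As a rough illustration of how RPS drives capacity, the sketch below estimates a Front-End VM count from a peak RPS figure. The 75 RPS per 4-core Front-End at 80% CPU comes from this paper's sizing table; the per-user request rate and the function itself are illustrative assumptions, not HPE guidance.

```python
import math

def front_end_vms_needed(peak_rps, rps_per_vm=75.0, ha_minimum=2):
    """Estimate the Front-End VM count from peak requests per second.

    rps_per_vm: sustainable load per 4-vCPU Front-End at ~80% CPU
    (figure used in this paper's sizing table); ha_minimum enforces
    the two-VM high-availability floor.
    """
    return max(ha_minimum, math.ceil(peak_rps / rps_per_vm))

# 2,500 active users generating, say, 0.1 requests per second each:
print(front_end_vms_needed(2_500 * 0.1))  # 4
```

In practice the per-user request rate depends heavily on the mix of document management and collaboration functions, which is why load testing (covered later in this paper) should validate any such estimate.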

Solution overview

The HPE Synergy 12000 frame in this solution uses the following HPE Synergy composable infrastructure components:

• Two-socket HPE Synergy 480 compute modules

• HPE Virtual Connect SE 40Gb F8 modules with HPE Synergy 10Gb Interconnect Link modules (satellite)

• HPE Synergy D3940 12Gb SAS storage modules, supporting 40 hot-pluggable small form factor (SFF) drives with dual port connectivity including a mix of traditional hard disk drives (HDD) and solid state drives (SSD)

HPE Synergy 12000 frame

The starting point for this design's building block is the HPE Synergy 12000 frame. Each frame is populated with six compute modules and three storage modules as shown in Figure 1, and two frames are placed into each of two racks. The storage modules are deployed into HPE Synergy 12000 frame slots 1-6 and the compute modules into slots 7-12, as shown below.


Figure 1. HPE Synergy 12000 frames, storage and compute modules for Exchange and SharePoint design (two sites)


The HPE Synergy 12000 frame includes a consolidated management environment running on the HPE Synergy Composer. An HPE Synergy Composer is capable of managing multiple frames in conjunction with the HPE Synergy Frame Link module which connects multiple frames into a single management collection. An additional HPE Synergy Composer module can be added to a collection of frames to enable resiliency of the HPE Synergy Composer functionality. Infrastructure configuration and changes in HPE Synergy (such as firmware updates) are delivered via operations in the HPE Synergy Composer.

The use of HPE Synergy composable infrastructure for applications allows a set of hardware to be deployed and managed, and the balance of compute, storage, fabric and management to be adjusted later, simply by installing additional compute modules or storage modules and drives. The wire-once model, combined with the ability to expand compute, fabric and storage, gives the user the ability to maintain the balance of hardware as application requirements change over time. HPE Synergy Frame Link modules connect up to 20 frames to manage, monitor and maintain compute, networking and storage components.

This Exchange solution is built to show Exchange running at large scale and to capitalize on the HPE Synergy composable infrastructure features that allow for integration of hardware and management across multiple frames while reducing the overall cost of the solution. The design shows a configuration of two racks with each rack holding a set of two HPE Synergy 12000 frames, which could be expanded to four HPE Synergy 12000 frames. The second rack provides disaster resiliency and can be placed in a different part of the data center or a secondary data center. The design is intended to be run in an active/active configuration where active Outlook users connect to the servers contained in each site, or it can be run in the situation where one site is unavailable. The next level of expansion of the solution can be made by adding another two racks for a total of four racks. Both the two and four rack configurations work in conjunction with the Exchange high availability and the database availability group (DAG) model to provide continued access to Exchange services for clients in the event of failure at the database, server, storage, network, rack or site level. The SharePoint availability model will be discussed next.

High availability (HA) can be defined as the solution having no single point of failure that would prevent users from accessing the solution services, even across multiple locations. Providing HA in a physical server solution requires multiple servers of each role type. Examples for Exchange Server are to use the database availability group (DAG) with multiple database copies in each location. Examples for SharePoint are two SQL servers in an AlwaysOn configuration, a minimum of two SharePoint Front-End servers, and a minimum of two servers supporting other MinRoles. In a physical server solution this can easily result in a 10+ server deployment and the likely under-utilization of some of the server resources. An increasing number of SharePoint deployments are now leveraging a virtualized environment, with the various role services deployed on specifically configured virtual machines (VMs). This approach has a number of key advantages: virtualization can provide an improved solution with more efficient use of resources and easier management.

Table 1 shows values for the different physical and virtual machines, as a starting point when configuring the servers for Exchange and SharePoint roles. Observation of actual server and virtual machine resource use and performance will guide you towards optimal resource values for your workload and deployment. The SharePoint virtual machines are sized to consume 40% of the host server resources, most notably processing or compute power. The additional CPU resources are intended as a reserve for either running more intensive SharePoint workload (more users or increased usage), or for additional applications. For example, this reserve capacity could be used to install another server such as a load balancer. The deployment of two load balancers within virtual machines in each site would provide an easy method to add redundant load balancer functionality to better serve client needs at a small amount of additional cost.
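The 40% sizing rule above can be expressed as a simple reserve calculation. The 20-core host below is an example figure, not a prescribed configuration:

```python
def host_reserve(host_cores, sharepoint_fraction=0.40):
    """Cores left in reserve after the SharePoint VMs' 40% share."""
    used = host_cores * sharepoint_fraction
    return host_cores - used

# On an example 20-core two-socket host: 8 cores go to SharePoint VMs,
# leaving 12 in reserve for workload growth or additional VMs such as
# a virtual load balancer.
print(host_reserve(20))  # 12.0
```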

The number of cores column shows the number of virtual CPUs (vCPUs) in virtual machines. These vCPUs appear to the operating system as single-core CPUs, and perform identically whether sockets or cores are chosen, with the following differences: a) CPU hot plug is only available for virtual sockets, b) some operating systems or applications may be restricted by the number of sockets, typically for licensing purposes, and c) the socket count of a virtual machine should not exceed that of the physical host. In all virtual machine configurations shown, one (1) socket with multiple cores (4) is configured for each virtual machine.

The memory column shows a range that can be considered, and a recommended middle value in brackets: [##]. Note that SharePoint roles (and also Exchange servers if virtualized) should not be configured using dynamic memory. Although an appealing feature of virtualization, dynamic memory quite often leads to memory under-provisioning and poor performance.

The role names have changed slightly in SharePoint 2016 and the table below refers to their display names.


Table 1. Example configuration for application physical servers and virtual machines

| Server | Quantity | Number of cores | Memory (GB) [assigned] | OS disk (GB) | Data disk (GB) | Sizing criteria |
|---|---|---|---|---|---|---|
| SharePoint [Virtual] | | vCPU | | Enterprise Class 10K RPM | Enterprise Class 10K RPM | |
| Front-End role | 4 | 4 | 16 – 24 [20] | 120 | 200 | Each Front-End sized for 75 RPS per 4 cores at 80% CPU |
| Search role | 2 | 4 | 16 – 24 [20] | 120 | 200 | Typical minimum of 4 cores, with 2 virtual machines for redundancy |
| Application server role | 2 | 4 | 16 – 24 [20] | 120 | 200 | Typical minimum of 4 cores, with 2 virtual machines for redundancy |
| Distributed cache role | 2-3 | 4 | 16 – 24 [20] | 120 | 200 | Typical minimum of 4 cores, with a minimum of 2 virtual machines for redundancy |
| SQL Server | 2 | 4 | 24 – 48 [32] | 120 | 4TB data / 2TB logs (~6.5TB volume) | Each server at 25% of total Front-End cores, sized for only one active at a time |
| Exchange 2016 Mailbox [Physical] | | Physical processor | | Enterprise Class 10K RPM | Midline SAS 7.2K RPM | |
| Mailbox server | 6 | 2 x 10-core (non-Hyper-Threaded) Intel® Xeon® E5-2640 v4 | 96 – 128 [96] | 600 | 20 x 2TB | 2,000 users/server in a failover scenario (5 servers of 6); 167 users/database (1.2TB on a 2TB disk); 7GB mailbox, 150 send/receive profile |
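The per-database figures in Table 1 can be cross-checked with simple arithmetic (an illustrative check, not a sizing tool): 167 users at the 7GB quota total roughly 1.14TB of mailbox data, consistent with the 1.2TB budget on a 2TB disk.

```python
# Cross-check of the Exchange per-database sizing in Table 1.
USERS_PER_DATABASE = 167
MAILBOX_QUOTA_GB = 7
DISK_SIZE_TB = 2.0

database_size_tb = USERS_PER_DATABASE * MAILBOX_QUOTA_GB / 1024
print(f"Mailbox data per database: {database_size_tb:.2f} TB")  # ~1.14 TB
print(database_size_tb < DISK_SIZE_TB)  # True: fits the 2TB disk with headroom
```

The gap between the raw mailbox total and the 1.2TB database budget leaves room for database overhead such as indexes, whitespace and transaction activity.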

SharePoint virtualization

By virtualizing key SharePoint roles such as Front-End, Search, Application and SQL Server, you can deploy fewer virtualization host servers to handle all the tasks previously supported by physical servers. This in turn reduces the costs of day-to-day operations (power, cooling, physical management, etc.). A further advantage is that each virtual machine can be configured with exactly the resources needed by the services running on that virtual machine. As workloads, user behavior and business needs change over time, the virtual machine resources can be fine-tuned as part of a proactive capacity planning activity that ensures capacity meets ongoing demands. Additional virtual machines can also be created to support changes in service needs, or to quickly deploy temporary requirements such as a development or test server farm.

High availability can be provided by an appropriate combination of infrastructure technology and the HA design principles of SharePoint and SQL Server. The collaboration solution presented herein uses SharePoint Server 2016 and SQL Server 2016 running on a Windows Server® 2012 R2 guest OS, on multiple VMware® vSphere virtual machines. All SharePoint services and roles could be deployed in one sufficiently provisioned virtual machine, and vMotion could restore a failed virtual machine fairly quickly; however, it is often more effective to split the roles and services across multiple virtual machines, as one could do across multiple servers in a physical environment. By deploying the multiple roles in multiple virtual machines hosted on different physical host servers, a failure and recovery would not impact service availability and may not even be noticed by users.

The first step in any deployment of a SharePoint farm is to determine the required services and availability model for the farm based on the business needs. These needs map into the virtualization strategy and how to deploy onto the solution hardware. Typically, a SharePoint deployment consists of these major service roles:

Front-End – The Front-End virtual machines running the Web service of a SharePoint farm are the front line in communication with the clients. Built on Microsoft Internet Information Services (IIS), the Front-End server is responsible for all HTTP traffic between the clients and the farm. The Front-End servers tend to consume more CPU resources on average than the other services in the farm. It is also common to deploy the Query component of the Search service (which services user search requests) on the Front-End servers; however, SharePoint provides many possible configurations for the Search service components. If increased capacity (performance) is needed, Front-End servers (in this case, virtual machines) can be scaled up (more cores and memory) or scaled out (Front-Ends spread over more virtual machines) depending on the need. A minimum of two (2) Front-End virtual machines is recommended so as to leverage the HA capabilities inherent to SharePoint. Note that multiple Front-End servers (virtual machines) will require some form of network load balancing technology to apportion client network load across the Front-End role servers.

Search Services – The Search services of SharePoint can be configured in several ways on one or more servers in order to provide Crawl (Index) and Query throughput and service redundancy. The service comprises multiple components and these may be deployed to best suit the search throughput needs. For example:

• All components on the Front-End server – simplest, for low throughput, and can provide redundancy if multiple Front-Ends are deployed.

• Move the Crawl and/or Query component to a separate server – this is the first logical scale-out step, enabling support of high-volume and/or high-frequency crawl operations and user query responses while reducing the impact on Front-End servers.

• Move all Search components to separate servers using the Search MinRole – this can provide the highest throughput and redundancy. Multiple Query or Crawl components can be run actively in parallel on separate servers, multiple index replicas can exist to ensure index availability, or various components can be designated as “failover” where they would become active if a primary component failed.

For SharePoint, the query processing component replaces the query role of previous versions. The query processing component requires more resources and is not recommended for web servers unless these are sized appropriately.

As part of services scale-out and providing high availability through redundant role virtual machines, the various SharePoint Search service components can also be run on separate virtual machines in order to maximize efficiency and provide an "always available" service. To optimize the high availability and performance of Search for an Enterprise solution, run the Search service on multiple virtual machines, deploy the specific services on separate virtual machines, and create a 2-replica Index to ensure availability of the Index data. With SharePoint 2016 these Search configuration changes are performed using PowerShell commands; an excellent description of the SharePoint Search topology and instructions for the PowerShell commands are contained in Microsoft TechNet articles.

Instead of having all components on one virtual machine, we want to accomplish three key changes:

• Separate the Query processing / Index and the Crawl/Analytics/Admin/Content processing services onto different virtual machines.

• Use two replicas of the Index to ensure the integrity and availability of the Index.

• Provide redundancy (fault tolerance) by duplicating services on multiple virtual machines in differing VMware clusters.
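The three changes above are made with the SharePoint Search PowerShell cmdlets mentioned earlier in this section. The following is a hedged sketch rather than a complete procedure: the server names SPAPP1 and SPAPP2 are hypothetical, and the exact component placement should come from your own topology plan.

```powershell
# Sketch only: assumes an existing SharePoint 2016 farm with a Search service
# application; "SPAPP1" and "SPAPP2" are hypothetical server names.
$ssa    = Get-SPEnterpriseSearchServiceApplication
$active = Get-SPEnterpriseSearchTopology -SearchApplication $ssa -Active
$clone  = New-SPEnterpriseSearchTopology -SearchApplication $ssa -Clone -SearchTopology $active

$qryHost = Get-SPEnterpriseSearchServiceInstance -Identity "SPAPP1"
$crlHost = Get-SPEnterpriseSearchServiceInstance -Identity "SPAPP2"
Start-SPEnterpriseSearchServiceInstance -Identity $qryHost
Start-SPEnterpriseSearchServiceInstance -Identity $crlHost

# Query processing and one index replica on the first host
New-SPEnterpriseSearchQueryProcessingComponent -SearchTopology $clone -SearchServiceInstance $qryHost
New-SPEnterpriseSearchIndexComponent           -SearchTopology $clone -SearchServiceInstance $qryHost -IndexPartition 0

# Second index replica (the 2-replica Index) plus Crawl/Analytics/Admin/Content
# processing on the second host
New-SPEnterpriseSearchIndexComponent               -SearchTopology $clone -SearchServiceInstance $crlHost -IndexPartition 0
New-SPEnterpriseSearchCrawlComponent               -SearchTopology $clone -SearchServiceInstance $crlHost
New-SPEnterpriseSearchAdminComponent               -SearchTopology $clone -SearchServiceInstance $crlHost
New-SPEnterpriseSearchContentProcessingComponent   -SearchTopology $clone -SearchServiceInstance $crlHost
New-SPEnterpriseSearchAnalyticsProcessingComponent -SearchTopology $clone -SearchServiceInstance $crlHost

Set-SPEnterpriseSearchTopology -Identity $clone   # activate the cloned topology
```

In a production design the components would be spread over more virtual machines in differing VMware clusters, as described above; the Microsoft TechNet articles referenced in this section cover the full procedure.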

Application servers – It is common in smaller configurations to run most of the SharePoint Application services on the Custom role servers. Depending on the workload mix and relative use of the various services, the logical first step in scaling-out services on a farm is to add a new server and move specific services to the new server – usually referred to generically as an Application server. Providing multiple Application servers and leveraging the in-built SharePoint Applications Load Balancing Service provides appropriate load balancing and service redundancy. On larger configurations, as shown in detail later, the Application server can be configured to only run a core set of application services, with the Search and Web services being hosted on other dedicated virtual machines.

Distributed Cache – The SharePoint distributed cache feature is enabled by default and the Distributed Cache service is automatically started on all web and application servers in a farm. Distributed cache improves performance by caching social data, such as news feeds, and caching authentication tokens. In very large environments distributed cache can be offloaded to dedicated servers.

SQL Server – SQL Server is the repository of all the content and site structure in the farm. Of all the services or roles in the SharePoint farm, SQL Server consumes the most memory and is by far the most I/O intensive when it comes to disk access. Additionally, when the farm is under heavy load, CPU utilization can also become quite high. As a general rule-of-thumb, a single active SQL server can support about 4 Front-End servers (of the same level of resources), plus modest application services activity. A recommended way to provide both SQL service availability and also protect the various service and content databases is to deploy SQL Server using AlwaysOn availability groups. SQL Server can be run in a two-VM “AlwaysOn” cluster configuration so as to provide non-interrupted service and total integrity of database content via two replicas each on dedicated storage.

The query processing component in SharePoint offloads much of the CPU and disk load from SQL Server, so the footprint and performance requirements for SQL Server in SharePoint are lower than in older product versions. As a result of this architecture improvement, the query processing component requires more local resources than in previous versions. The query role can be combined with the web server role on a server only if there are enough resources: running both roles on a single virtual machine requires a 6-8-core virtual machine, and a 4-core virtual machine typically does not provide enough resources for both the query processing component and the Front-End role.

SQL Server AlwaysOn HA features
SQL Server AlwaysOn provides a highly available solution that both ensures service availability and protects the integrity and availability of the stored data by keeping multiple replicas on separate storage. It is intended to replace the previously popular database mirroring capability, which is now deprecated in favor of this technology.

The feature provides a capability called AlwaysOn Availability Groups, not dissimilar in concept to the database availability groups used by Microsoft Exchange. It is based on a 2-node SQL Server deployment where the servers (virtual machines in this case) reside in a Windows Server 2012 R2 Failover Cluster; however, the storage for the two nodes is separate rather than shared. Enabling the Availability Group provides two replicas, one each on the Primary and Secondary SQL nodes. The Primary node communicates with SharePoint and performs the database/file operations as needed, and the Secondary node mirrors those operations to keep the two replicas fully synchronized. SharePoint can leverage this technology directly. The main operational difference arises when configuring SharePoint initially: instead of supplying a database server name, you supply the name of the Availability Group Listener, which has a name and IP address registered in DNS as part of the AlwaysOn configuration.
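As a brief, hedged illustration (the listener, database names and server role below are hypothetical), a new SharePoint 2016 farm would be pointed at the listener rather than at an individual SQL node:

```powershell
# "SQLAG-Listener" stands in for the Availability Group Listener name
# registered in DNS; database names and the server role are illustrative only.
New-SPConfigurationDatabase -DatabaseName "SP2016_Config" `
    -DatabaseServer "SQLAG-Listener" `
    -AdministrationContentDatabaseName "SP2016_AdminContent" `
    -Passphrase (Read-Host -AsSecureString "Farm passphrase") `
    -FarmCredentials (Get-Credential) `
    -LocalServerRole Custom
```

Because SharePoint only ever addresses the listener, an availability group failover between the Primary and Secondary nodes is transparent to the farm.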

Exchange Server overview
In addition to 10,000 SharePoint users at a typical Enterprise concurrency of 25% (2,500 active users at peak, performing a broadly applicable mix of document management and collaboration tasks), this solution is designed to support 10,000 Exchange mailboxes of up to 7GB each, with a heavy email usage profile (sending and receiving a combined 150 messages per day). Exchange is sized at 100% user concurrency. This solution design can be used as a building block for larger companies, by adding virtual servers for SharePoint and additional servers to the Exchange database availability group (DAG), within the sizing limits discussed below.

Microsoft has published the Exchange Server Preferred Architecture (PA), which they describe as the "Exchange Engineering Team's prescriptive approach to what we [Microsoft] believe is the optimum deployment architecture for Exchange 2016, and one that is very similar to what we deploy in Office 365." It is available at http://blogs.technet.com/b/exchange/archive/2014/04/21/the-preferred-architecture.aspx. This solution closely aligns with the Microsoft Preferred Architecture for Exchange 2016, as it is built using direct-attached storage (DAS) in a RAID-less JBOD configuration. Note that it would be possible to use RAID1 mirrors instead, but mailbox capacity would be reduced or additional disks would be required, although the total number of database copies deployed could then be reduced.

Exchange sizing limits
Microsoft guidance now recommends a maximum of 24 processor cores and a maximum of 96GB of memory per server. Additional information on this subject is detailed in the TechNet blog article: http://blogs.technet.com/b/exchange/archive/2015/06/19/ask-the-perf-guy-how-big-is-too-big.aspx. HPE testing of other Exchange designs has shown that increasing the RAM to 128GB allows the number of users per server, or the workload (messages sent and received per day), to be increased while still delivering acceptable performance and without scaling so large as to run into the support issues that Microsoft details in the above article.

This solution was designed using the recommended maximum CPU core count and memory guidelines published in the Exchange Team blog from Microsoft. The Microsoft recommended maximum memory of 96GB typically limits the number of users to 2,000 per server (at 150 send and receive profile). By increasing the server memory to 128GB the users per server can be increased to 2,500 according to the Microsoft Exchange Server Role Requirements Calculator, however, it will warn the user that the memory restriction has been exceeded.

According to Microsoft, the Preferred Architecture for Exchange follows these design principles:

• Includes both high availability within the data center, and site resilience between data centers.

• Supports multiple copies of each database, thereby allowing for quick activation with clients shifted to other database copies.

• Reduces the cost of the messaging infrastructure.

• Increases Exchange system availability by optimizing around failure domains and reducing complexity.


This white paper shows the configuration of the server, storage and Exchange databases to meet these design principles.

The disk storage consists of HPE 2TB SAS drives, each configured as a single-drive RAID0 volume, with 20 drives per server for the Exchange databases plus 2 additional drives: one for the AutoReseed volume (which allows automatic re-seeding in a RAID-less JBOD design, since a single drive failure requires re-seeding) and one for the Exchange recovery volume (also known as the Restore LUN). The bill of materials later in this document (Appendix A: Bill of materials) shows the configuration including these additional drives.
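As a quick sanity check on this layout, the mailbox quota, planned database size and raw drive capacity can be compared (values taken from the configuration table in this paper; the formatted-capacity figure is a nominal approximation that ignores filesystem overhead):

```python
# Rough capacity check for the RAID-less JBOD layout (assumptions noted inline).
mailboxes_per_db = 167                    # 10,000 users / 60 databases, rounded up
max_mailbox_gb = 7                        # maximum mailbox size
planned_max_db_gb = 1243                  # planned maximum database size per the design
formatted_gib = 2e12 / 2**30              # 2TB nominal drive ~= 1,863 GiB before overhead

raw_mailbox_gb = mailboxes_per_db * max_mailbox_gb    # 1,169 GB of mailbox quota per DB

# The mailbox quota fits inside the planned database size, which in turn
# fits on a single 2TB drive with headroom for content index and logs.
assert raw_mailbox_gb <= planned_max_db_gb <= formatted_gib
```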

Encryption of data at rest utilizing BitLocker is also part of the design, along with key system components, such as Trusted Platform Module (TPM) 1.2 or later, and CPUs that support the Intel AES-NI instruction set to accelerate encryption and reduce the storage performance impact of encryption. Note that TPM v2 may be required for Windows Server 2016, which was not yet released at the time of this document.

Figure 2 below illustrates a simplified diagram of servers and multiple database copies in an active/passive site design, showing three servers in one frame in one site. It is also possible for active databases and thus active mailbox users to be hosted in both sites simultaneously, and this would spread the load.

Exchange active/active distribution
The active/active Exchange user distribution model, across two sites, ensures the lowest impact to the user base in the event of a planned or unplanned outage. The entire solution is sized so that all users can be supported by either site in case of a severe outage.


Figure 2. Simplified architectural diagram showing Exchange database copies

Understanding Exchange building blocks
Exchange uses a building block approach to adding servers to the database availability group: the more servers added, the more the load can be spread, and thus the more users the design can support per server. This reference configuration starts with a building block of 6 Exchange servers in each site, each capable of serving 2,000 users under the worst-case failover scenario. With 6 servers per site at 1,667 users per server, the DAG could support more users; however, it can only withstand the failure of 1 server before servers in the secondary (or DR) site must be used as the failover target. The choices range from a 2-server building block, used as pairs within a DAG, up to 8 servers in each site with database copies evenly distributed, as the Microsoft Role Requirements Calculator demonstrates. To scale the solution higher, deploy multiple DAGs.
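The per-server user counts follow directly from redistributing the 10,000 users over the surviving servers in a site; a minimal sketch:

```python
import math

total_users = 10_000
servers_per_site = 6
design_limit_per_server = 2_000   # sizing target under the worst-case failover

def users_per_server(failed: int) -> int:
    """Users each surviving in-site server must carry after `failed` servers are lost."""
    return math.ceil(total_users / (servers_per_site - failed))

assert users_per_server(0) == 1_667   # normal operations
assert users_per_server(1) == 2_000   # one server down: at the design limit
# A second in-site failure would require 2,500 users per server, exceeding the
# 2,000-user design limit, so the secondary (DR) site becomes the failover target.
assert users_per_server(2) > design_limit_per_server
```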

The following table provides an overview of the configuration for Microsoft Exchange Server 2016 and the server building block.


Table 2. Exchange configuration details

Number of users (mailboxes) per server
• Normal operations – 1,667
• Failover of one server – 2,000
• Disaster recovery (secondary) site – same design as primary

Average user send+receive profile (and IOPS estimate)
• 150 messages per day (0.1005 base IOPS)

Maximum mailbox size
• Up to 7GB (based on 2TB disks)

Databases per server (active and totals)
• 10 active, 10 passive copies per server (60 databases total)
• Failover of one server – 12 active on each server, 8 passive copies
• Number of mailboxes per database – 167

Database availability group (DAG) configuration
• Total servers – 12 (limit of 16 servers in a DAG)
• Servers in primary / secondary locations – 6 primary / 6 secondary
• Copies in primary / secondary locations – 2 copies primary / 2 secondary
• Total users per DAG – 10,000

Database volumes
• 20 RAID0 disks per server
• Planned maximum database size – 1,243GB

Additional volumes
• Restore volume – for emergency database operations
• AutoReseed volume – for automatic database reseeding in the event of disk failure
• Transport volume – a new volume on the same RAID1 pair of disks as the operating system, holding the relocated Exchange Transport database and logs. The operating system and transport share the same physical disks but use different volumes, protecting the system volume from reaching its 100% capacity limit (for example, due to a transport issue that consumes excessive disk capacity).
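The database counts in Table 2 can be cross-checked with a few lines of arithmetic (values taken from the table; the actual per-server copy placement is produced by the Exchange Server Role Requirements Calculator):

```python
users = 10_000
servers_per_site = 6
active_dbs_per_server = 10
copies_per_db = 4                  # 2 copies in each of the two sites
dag_servers = 12                   # 6 servers per site, 2 sites

total_dbs = servers_per_site * active_dbs_per_server    # 60 databases
mailboxes_per_db = -(-users // total_dbs)               # ceiling division -> 167

total_copies = total_dbs * copies_per_db                # 240 copies across the DAG
copies_per_server = total_copies // dag_servers         # 20 per server

assert total_dbs == 60
assert mailboxes_per_db == 167
assert copies_per_server == active_dbs_per_server + 10  # 10 active + 10 passive per server
```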

Deploying Exchange Server 2016 on older servers would require more servers to be deployed (given the same user profile and other solution parameters). An in-place upgrade using existing hardware is not possible, so new server hardware and storage capacity is required to deploy Exchange Server 2016.

Planning the Exchange secondary site
Configuration of the secondary data center or disaster recovery location, and managing name resolution, is a complex topic that is very specific to the needs of your organization and beyond the scope of this document. Please follow the guidance in "Understanding High Availability and Site Resilience" at http://technet.microsoft.com/en-us/library/dd638137.aspx.

Exchange network configuration
When configuring the network subsystem, adhere to the network requirements from Microsoft: http://technet.microsoft.com/en-us/library/dd638104.aspx#NR. The solution here follows the Microsoft Preferred Architecture's simplified approach of using a single network interface for both client traffic and server-to-server replication traffic. Testing has shown that Exchange replication will consume most of a 1Gbps network when database replication takes place. Since a RAID-less JBOD design with many databases increases the chance of database re-seeding (when a single disk fails), it is highly desirable to use 10Gbps networking for this single interface.
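The case for 10Gbps can be illustrated with the time to re-seed a full 2TB database drive at line rate. This is a best-case sketch that ignores protocol overhead, concurrent client traffic and source-disk throughput limits:

```python
DRIVE_BYTES = 2e12   # one 2TB database drive to be re-seeded

def reseed_hours(link_gbps: float) -> float:
    """Best-case hours to copy the drive: bytes * 8 bits / link bits-per-second."""
    return DRIVE_BYTES * 8 / (link_gbps * 1e9) / 3600

assert round(reseed_hours(1), 1) == 4.4    # ~4.4 hours saturating a 1Gbps link
assert round(reseed_hours(10), 2) == 0.44  # under half an hour on 10Gbps
```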

The Microsoft Exchange PA recommends not using NIC teaming, in order to simplify the failover model. However, the PA reflects Microsoft's recommendations based on their own deployment methods, and not all installations operate at that scale. A smaller-scale Exchange deployment may not span data centers, or even racks within the same data center, so other deployment options should be considered. While NIC teaming is not recommended in the Microsoft PA, the most efficient way to provide network connectivity redundancy for each server in this configuration is to use Windows NIC teaming. With NIC teaming, the failure of any single NIC on a server will not impact the solution, nor will the failure of either of the Virtual Connect modules.

A network load balancer is required for Exchange high availability, and this load balancing solution should also be highly available in order to maintain the high availability of the overall Exchange service. The following link discusses load balancer solutions: http://technet.microsoft.com/en-us/library/jj898588%28v=exchg.150%29.aspx.

Solution components
The following section details the components of the HPE Synergy solution.

HPE Synergy Composer modules and HPE Synergy Frame Link modules
Management of the HPE Synergy 12000 frames and the components they contain is done through a redundant pair of HPE Synergy Composer modules powered by HPE OneView. The HPE Synergy Composer takes care of infrastructure lifecycle management:

1. In the deployment phase, a template is defined that abstracts the compute, network and storage information for the infrastructure specified for a workload. After unique information such as server name and IP address is provided for each server, the resulting profile can be applied to one or more compute modules to instantiate servers across multiple frames within the management ring.

2. In day-to-day management, HPE Synergy Composer takes care of any firmware or relevant driver upgrades of the infrastructure.

3. HPE Synergy Composer provides multiple tools to monitor the environment and to assist troubleshooting.

4. HPE Synergy Composer provides a unified API that users can call from custom scripts to integrate this infrastructure into their environment.
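As an illustration of the unified API (point 4), a script might open a session against the Composer's REST interface before querying or applying server profiles. This is a sketch under assumptions: the appliance address is hypothetical, and the `X-API-Version` value must match your HPE OneView release.

```python
import json
import urllib.request

APPLIANCE = "https://composer.example.local"   # hypothetical Composer address
API_VERSION = "300"                            # assumed; set to match your OneView release

def build_login_request(user: str, password: str) -> urllib.request.Request:
    """Build POST /rest/login-sessions, which creates a session whose sessionID
    token authenticates later calls such as GET /rest/server-profiles."""
    return urllib.request.Request(
        f"{APPLIANCE}/rest/login-sessions",
        data=json.dumps({"userName": user, "password": password}).encode(),
        headers={"Content-Type": "application/json", "X-API-Version": API_VERSION},
        method="POST",
    )

# Sending the request (urllib.request.urlopen) returns JSON containing
# "sessionID"; pass that token in the "Auth" header on subsequent API calls.
```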

The components used to deliver the management features are shown in Figure 3. One HPE Synergy Composer module is placed into a slot in the front of the initial HPE Synergy frame 1 containing an HPE VC SE 40Gb F8 module and an HPE Synergy 10Gb Interconnect Link (satellite) module; and the redundant HPE Synergy Composer module is placed into the front slot of another HPE Synergy 12000 frame which also contains an HPE VC SE 40Gb F8 module and an HPE Synergy 10Gb Interconnect Link (satellite) module. This placement of HPE Synergy Composer modules ensures the remaining frames can be managed in the event that a frame holding one of the two HPE Synergy Composer modules is unavailable.

The rear of each HPE Synergy 12000 frame contains a pair of HPE Synergy Frame Link modules, as shown in Figure 3. In this design these are cabled from the top frame to the next frame down until the bottom frame is reached, and then from the bottom frame back to the top frame, using 10GBASE-T cables plugged into the frame uplink and crosslink network ports to form a management ring network. The HPE Synergy Composer modules use the management ring network to communicate with and manage resources across the frames in each rack in this solution. For this solution the second rack is a duplicate of the first and is placed in the same or a different data center to meet the customer's desired disaster resiliency requirement. The management network connection provides connection points from the HPE Synergy Frame Link module to the management network deployed in the data center.


Figure 3. HPE Synergy management components

HPE Synergy Composer eases management
The HPE Synergy Composer takes care of infrastructure lifecycle management, providing simpler, faster deployment and ease of management in a scaled environment.

HPE Synergy compute modules
For this solution, each frame holds six HPE Synergy 480 Gen9 compute modules. Each compute module is provisioned with two Intel Xeon 10-core E5-2640 v4 processors, 96GB of RAM (six 16GB DDR4 DIMMs), the premium storage option (the one of the three possible compute module storage options that includes an HPE Smart Array controller) with two 600GB SAS 10K rpm drives, an HPE Smart Array P542D controller in mezzanine slot 1, and an HPE Synergy 3820C 10/20Gb Converged Network Adapter in mezzanine slot 3. The pair of disks in each HPE Synergy compute module is configured as a RAID1 pair. For modules hosting Exchange, these disks hold the Windows operating system files, the Exchange Server application and logging files, and the Exchange message transport data (all messages passing through the server before they are saved to a mailbox database). For the compute modules hosting SharePoint and SQL virtual machines, the disks are also configured as a RAID1 pair, but are split into three separate logical drives containing (a) the ESXi OS, (b) the HPE StoreVirtual VSA virtual machine, and (c) the HPE StoreVirtual VSA Centralized Management Console (CMC) virtual machine (see Figure 8 for details). An example of the two-socket HPE Synergy 480 Gen9 compute module component layout is shown in Figure 4.



Figure 4. HPE Synergy 480 Gen9 compute module – half-height (HH)

Benefit of HPE Synergy compute modules
The use of HPE Synergy compute modules in each frame allows the selection of processor, memory, internal and external storage options and network adapters to meet a wide range of applications and workloads.

HPE Synergy D3940 storage module
Every frame in this solution design contains three HPE Synergy D3940 storage modules, with an example shown in Figure 5. Each HPE Synergy D3940 storage module can hold up to 40 small form factor (SFF) drives that can be assigned to any compute module in the HPE Synergy 12000 frame. The drive types can be a mix of Enterprise-class SAS, Midline, and Solid State Drives (SSD). The HPE Synergy D3940 storage module opens to provide access to the hot-pluggable drives if a drive needs to be replaced.

Figure 5. HPE Synergy D3940 12Gb SAS storage module with 40 SFF Drive Bays



Benefit of HPE Synergy D3940 storage modules
HPE Synergy D3940 storage modules in each frame provide higher density, scalability and flexibility to share the storage across the compute modules without the need for complex service operations and downtime to add or remove external storage shelves.

HPE Synergy interconnect module configuration
The HPE Synergy 12000 frames are connected with Ethernet network components that carry the server-to-server and server-to-client traffic. A redundant pair of HPE VC SE 40Gb F8 modules is provisioned into slots in the rear of two separate frames in the same rack, and HPE Synergy 10Gb Interconnect Link modules are present in all frames. As shown in Table 3 (denoted by ***), the HPE VC SE 40Gb F8 modules are in interconnect module slot 3 in HPE Synergy frame 1 and slot 6 in HPE Synergy frame 2. Production network connectivity from the first HPE Synergy 12000 frame that contains an HPE VC SE 40Gb F8 module is extended to the remaining frames in the same rack through the HPE Synergy 10Gb Interconnect Link modules over HPE Synergy Interconnect CXP-to-CXP Direct Attach Copper cables. The placement of the redundant interconnect module pairs and their mapping to the compute module mezzanine slots is also shown in Table 3. The pairs of interconnect modules are color coded to highlight the pairs deployed in each frame and to differentiate the type of functionality provided.

Table 3. Interconnect module slot and mezzanine assignment for each rack with two frames per rack

INTERCONNECT MODULE SLOT | ICM PAIRED SLOT | COMPUTE MODULE MEZZANINE SLOT | DESCRIPTION

HPE Synergy frame 1
1 | ICM4 | Mezz 1 | HPE Synergy 12Gb SAS Connection module with 12 Internal Ports
2 | ICM5 | Mezz 2 | Blank
3 | ICM6 | Mezz 3 | HPE VC SE 40Gb F8 module ***
4 | ICM1 | Mezz 1 | HPE Synergy 12Gb SAS Connection module with 12 Internal Ports
5 | ICM2 | Mezz 2 | Blank
6 | ICM3 | Mezz 3 | HPE Synergy 10Gb Interconnect Link module

HPE Synergy frame 2
1 | ICM4 | Mezz 1 | HPE Synergy 12Gb SAS Connection module with 12 Internal Ports
2 | ICM5 | Mezz 2 | Blank
3 | ICM6 | Mezz 3 | HPE Synergy 10Gb Interconnect Link module
4 | ICM1 | Mezz 1 | HPE Synergy 12Gb SAS Connection module with 12 Internal Ports
5 | ICM2 | Mezz 2 | Blank
6 | ICM3 | Mezz 3 | HPE VC SE 40Gb F8 module ***

Benefit of HPE Synergy networking
The combination of HPE VC SE 40Gb F8 modules and HPE Synergy Interconnect Link modules obviates the need for a switch at the top of every rack while providing line-rate performance to all frames, minimizing switch management, and simplifying the overall infrastructure management.

HPE Synergy design for high availability and disaster resiliency
Feedback from customers and partners has led to a focus on failure domains, to ensure that Exchange features remain available to clients when solution components are unavailable due to maintenance activities (planned outages) or failure events (unplanned outages). Exchange uses database availability groups (DAGs) to provide continuous availability. In this solution, Exchange DAGs allow the placement of resources and database copies across racks, frames, network components, compute modules and storage modules to ensure that there are no single points of failure.


Exchange DAGs are based on the underlying Windows failover cluster model, and with an even number of nodes in the DAG a witness server is used so that the cluster has an odd number of quorum votes. The same witness server can be used for multiple DAGs and it is recommended that the server holding the witness server role reside in a third site so that cluster quorum is not lost in the event of a loss of either site or a WAN connection.
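The witness arithmetic can be made concrete with a sketch of the standard failover-cluster majority rule:

```python
def quorum_votes(dag_nodes: int) -> int:
    """Total votes: one per DAG node, plus a witness vote when the node count is even."""
    witness = 1 if dag_nodes % 2 == 0 else 0
    return dag_nodes + witness

def majority(votes: int) -> int:
    """Votes required to maintain quorum (a strict majority)."""
    return votes // 2 + 1

# This solution's 12-node DAG gets a witness, giving 13 (odd) votes; 7 votes
# maintain quorum, so the cluster survives the loss of up to 6 voters.
assert quorum_votes(12) == 13
assert majority(quorum_votes(12)) == 7
```

Placing the witness in a third site, as recommended above, keeps its vote available even when an entire site or the WAN link between sites is lost.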

There are two main Exchange DAG designs to consider:

1. A single Exchange DAG for all twelve (12) servers or

2. Multiple Exchange DAGs.

Each has its own considerations, and an organization should decide beforehand which approach fits best. If the 10,000-user design fits the size of your organization, then the single Exchange DAG design may work. However, if you need to expand beyond 10,000 users, consider the multiple Exchange DAG solution presented here. Table 4 below provides a comparison of the two Exchange DAG designs. In addition to expansion, consider how each DAG design affects protection against single points of failure, e.g., ensuring that database copies do not reside in the same storage module or, if possible, frame.

Table 4. Comparison of two Exchange DAG designs – single versus multiple

SINGLE EXCHANGE DAG
Advantages:
• Spreads user load over a larger number of servers; the failover load is spread over the remaining servers, so per-server resources may be less demanding
Disadvantages / cautions:
• Limit on expansion, e.g., if already using 12 of the 16-server maximum, additional DAGs must be created to expand
• Requires careful placement of database copies to align with failure domains, such as no two copies in the same storage module (or frame, if desired)
Sizing:
• 96GB memory and 60% CPU under single-server failover

MULTIPLE EXCHANGE DAGs
Advantages:
• Ease of expansion; additional servers can be added to each DAG up to the 16-server limit, e.g., multiple DAGs may start with 4 of 16 servers, allowing up to 12 more in each
• Can be designed to withstand the loss of a single frame, with the failover model staying within the same site
Disadvantages / cautions:
• Creates smaller failover groupings, which requires more resources to run a greater number of users per server, e.g., the failover model may be pairs of servers
Sizing:
• 128GB memory and 77% CPU under single-server failover, but can withstand 3 failovers per site


The solution presented in this reference configuration is deployed across two racks, one in each site, with either of the DAG node distribution options shown in Figure 6. DAGs are used in this solution design with six nodes deployed in each of the two sites, either in the same DAG or striped vertically across a minimum of two frames.

Figure 6. Exchange Server database availability group layout for multiple or single Exchange DAG

In the leftmost option shown above in Figure 6, the Exchange DAGs are aligned vertically with one DAG node in each HPE Synergy 12000 frame. The DAG to compute module mapping is:

• DAG1 – placed on HPE Synergy compute module 10 in each frame.

• DAG2 – placed on HPE Synergy compute module 11 in each frame.

• DAG3 – placed on HPE Synergy compute module 12 in each frame.

Placing the members of individual DAGs in different frames provides another level of resiliency: with this design, removing a frame from operation should not result in a loss of service to Exchange users.


Storage inside each HPE Synergy frame
This reference design has each HPE Synergy frame configured with three storage modules, each of which supports 40 SFF drives. The drives within each HPE Synergy D3940 storage module can be assigned to any compute module within the HPE Synergy frame. This flexible configuration allows the storage to be used for virtualization, operating systems, and applications such as SharePoint and Exchange, as detailed next.

Virtualization / HPE StoreVirtual VSA storage design Each HPE Synergy D3940 storage module uses the same configuration, with drives placed into the module in the same quantities and positions. With HPE Synergy composable infrastructure, using consistent building blocks simplifies understanding of how the frame, compute, and storage modules interact. Each storage module has drives configured for Exchange and drives configured for SharePoint; the SharePoint drives are used in conjunction with virtualized servers.

Virtualization/HPE StoreVirtual VSA storage for SharePoint The drives are physically mapped to the three HPE Synergy compute modules, where they are attached to the HPE Smart Array P542D controller. The drives in each HPE Synergy D3940 are grouped together by type and configured into SAS arrays: one SAS array is created from the twelve HPE 900GB 12G SAS 10K rpm SFF hard drives, and another from the four HPE 800GB 12G SAS Write Intensive SFF SSDs. Figure 7 shows an overview of the physical drive mapping for the virtualization/HPE StoreVirtual VSA storage layers.


Figure 7. Overview of HPE Synergy D3940 presentation of storage for Exchange servers and virtualized SharePoint servers

A simplified diagram of the connection flow from the HPE Synergy D3940 modules in slots 1 and 2 to the HPE Synergy compute module in bay 7 is shown in Figure 8, listing the total usable storage available at each storage layer.

(Recovered content from the storage-presentation figure:)

• SharePoint/virtualization compute modules – internal SAS Array A (600 GB, RAID1): Logical Drive 1: ESXi Server OS – 500 GB; Logical Drive 2: HPE VSA OS – 50 GB; Logical Drive 3: CMC OS – 50 GB. From the HPE Synergy D3940 – Logical Drive 4: HPE VSA Vol1 – 2,143 GB (SAS Array B: 4 x HPE 800GB 12G SAS Write Intensive SFF SSD, RAID5) and Logical Drive 5: HPE VSA Vol2 – 8,037 GB (SharePoint Array C: 12 x HPE 900GB 12G SAS 10K rpm SFF hard drive, RAID5).

• Each HPE StoreVirtual VSA node (HPE VSA01–03, managed via CMC01–03) hosts HPE VSAx-1.VHDx (HPE VSA Vol1 – 2,143 GB) and HPE VSAx-2.VHDx (HPE VSA Vol2 – 8,037 GB); the cluster presents CMC Vol1 (Network RAID1 – 20,360 GB) to the VMware vSphere cluster in Frame 1, which hosts Windows guest VMs for the FE, Search, APP, Cache, and SQL roles.

• Exchange compute modules – Logical Drive 1: Host Server OS – 300 GB; Logical Drive 2: Transport Queue – 300 GB; Logical Drives 3–23: Exchange DB1–20 and logs (single-spindle RAID0 on HPE 2TB 12G SAS 7.2K rpm SFF hard drives); Logical Drive 24: Exchange Auto Reseed; Logical Drive 25: Exchange Recovery.


Figure 8. Presentation of storage to SharePoint virtual machines

SharePoint storage design SharePoint and SQL run on virtual machines hosted by three compute modules running in a vSphere cluster. Each compute module uses two internal 600GB 10K drives configured as a RAID1 array, and split into three logical volumes. These are used for the ESXi operating system volume, the HPE StoreVirtual VSA virtual machines volume and the HPE StoreVirtual VSA Centralized Management Console (CMC) virtual machines volume. As the storage for the SharePoint and SQL virtual machines is provided by HPE StoreVirtual VSA, it is important that the ESXi and HPE StoreVirtual VSA VM-related storage be internal, such that booting of the vSphere cluster and the HPE StoreVirtual VSA storage cluster is ensured before the other virtual machines are started.


The startup sequence is:

• First, the three ESXi nodes boot and the vSphere cluster can form.

• On each node, the HPE StoreVirtual VSA virtual machine starts and forms the HPE StoreVirtual VSA storage cluster and presents the pool of storage.

• Related volumes and datastores mount, providing cluster-wide datastores to the SQL and SharePoint virtual machines.

• The SQL and SharePoint virtual machines boot, and the datastores are realized as LUNs on the virtual machine guest OS.

Table 5 details the storage volumes defined at each level of the virtualized storage design. The operation, reading from right to left, is as follows.

• Each of the three HPE Synergy D3940 storage modules presents a total of 16 drives to one of the three compute modules, and to the HPE StoreVirtual VSA VM running on each compute module.

• As viewed at the HPE StoreVirtual VSA level these appear as two logical drives on each of the three HPE StoreVirtual VSA nodes comprising approximately 8TB (10K HDD) and 2TB (SSD) each.

• The HPE StoreVirtual VSA cluster forms the three sets of two volumes into a single pool of storage, but also stripes each volume as a Network RAID1 array between two of the HPE StoreVirtual VSA cluster nodes. Thus Node A volumes are striped to Node B, Node B to Node C, and Node C to Node A. This ensures a copy of all storage is available from two of the nodes should a single HPE StoreVirtual VSA node failure occur.

• The formation of the storage cluster pool and the Network RAID1 stripes results in a total available pool of storage of about 20TB.

• The HPE StoreVirtual VSA CMC application can now be used to carve up the 20TB pool into separate volumes to be presented to the three vSphere cluster nodes. These will generally be the same size, or slightly larger, than the datastores that will be created to provide guest OS LUNs.

• Once presented, the volumes are visible to the vSphere cluster nodes and you can use the vSphere Client to create the required datastores, hosted on the various volumes. These datastores will vary in size depending upon purpose (e.g., virtual machines operating system, SharePoint Data volume, SQL Data and Log volumes for various groups of databases, etc.). For details of an example datastore design, see Figure 18 and Figure 19 in the SharePoint content storage section.

• Finally, define the required virtual machines, mapping each VM hard disk to the relevant datastore.

• Virtual machines can also be moved to another host in the vSphere cluster, as the datastores are visible to all three nodes.
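The datastore and virtual machine provisioning steps above can also be scripted with VMware PowerCLI. The following is a minimal sketch only; the server names, datastore name, and VM sizing are illustrative, not values from this reference configuration:

```powershell
# Connect to vCenter (illustrative server name)
Connect-VIServer -Server vcenter.example.local

# Pick a host and one of the LUNs presented by the HPE StoreVirtual VSA cluster
$esx = Get-VMHost -Name "esxi01.example.local"
$lun = Get-ScsiLun -VmHost $esx -LunType disk | Select-Object -First 1

# Create a VMFS datastore on that LUN
New-Datastore -VMHost $esx -Name "DS-SQL-Data" -Path $lun.CanonicalName -Vmfs

# Create a VM whose virtual disks land on the new cluster-wide datastore
New-VM -Name "SQL01" -VMHost $esx -Datastore "DS-SQL-Data" -NumCpu 4 -MemoryGB 24 -DiskGB 120
```

Because the datastore is backed by a volume visible to all three hosts, the resulting VM can be migrated between cluster nodes as described above.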

Table 5. HPE StoreVirtual VSA storage from CMC available to SharePoint virtual machines in vSphere cluster

VIRTUAL MACHINE (DATASTORE): VM1: Volume 1 - 120GB; VM1: Volume 2 - 200GB; VM2: Volume 1 - 120GB; VM2: Volume 2 - 200GB; etc.

CMC (NETWORK RAID): CMC Volume 1 - 20,360GB, Network RAID1

HPE STOREVIRTUAL VSA VM (EACH): Logical Drive 4 – 2,143GB; Logical Drive 5 – 8,037GB

HPE D3940 STORAGE MODULE (EACH): SAS Array B – 4 x 800GB SFF SSD – 2,143GB usable (RAID5), 21,053 IOPS; SAS Array C – 12 x 900GB SFF HDD – 8,037GB usable (RAID5), 789 IOPS


Exchange Server storage design For this design, 22 of the 2TB 12G SAS 7.2K rpm SFF drives are presented to each compute module. The disks in the HPE Synergy D3940 storage module are configured as RAID-less JBOD; 20 of the disks each hold an Exchange database and its corresponding log files. Two additional disks are presented to each server from a storage module: one used, if needed, for database recovery operations, and the other used for automatic reseeding of Exchange database data in the event of a physical disk failure. See Figure 9 below.

Each Exchange database disk is encrypted with Microsoft BitLocker to ensure data security in the event a disk is removed from the system. This is discussed in more detail later.

Figure 9. Mapping for Exchange and SharePoint servers of one HPE Synergy D3940 storage module to compute modules


Figure 10. HPE Synergy D3940 storage module drive mapping to HPE Synergy compute modules

Best practices and configuration guidance There are a number of configuration options when deploying Exchange and SharePoint onto HPE Synergy composable infrastructure.

In addition to the Microsoft Preferred Architecture for Exchange 2016 and Exchange Product Team guidance, this section lists the additional configuration options to consider and test before deploying Exchange 2016 into your production environment.

Compute module settings The default power mode for HPE servers is Dynamic Power Savings Mode. The Microsoft Exchange product team recommends setting the server BIOS to allow the operating system to manage power, and using the "High performance" power plan in Windows. Because of the constant high workload of the Exchange mailbox role, the power mode should be changed to OS Control Mode, as shown below in Figure 11. The second screenshot shows verification that the server is configured for Maximum Performance.

Compute module power mode Changing the server power mode from default Dynamic Power Savings Mode to OS Control Mode has a significant benefit in performance and should not be overlooked unless the end user validates that the power saving profile can sustain the production workload.
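Once OS Control Mode is enabled in the BIOS, the Windows side of this guidance can be checked and applied from an elevated command prompt. The following sketch uses the built-in powercfg utility; SCHEME_MIN is the well-known alias for the built-in "High performance" plan:

```powershell
# List available power schemes; the active one is marked with an asterisk
powercfg /list

# Activate the built-in "High performance" plan (SCHEME_MIN is its alias)
powercfg /setactive SCHEME_MIN

# Verify the active scheme
powercfg /getactivescheme
```

This matches the product team recommendation noted above and can be re-checked periodically with the Performance Health Checker Script described later.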


Figure 11. Changing power mode to OS Control Mode

Microsoft provides an Exchange Server Performance Health Checker Script at https://gallery.technet.microsoft.com/Exchange-2013-Performance-23bcca58. The output of this script is discussed later but one of the items is power plan settings. The default for Windows Server 2012 R2 is balanced power mode and this needs to be changed to high performance based on the Microsoft Exchange product team recommendation.

Storage controllers The HPE Smart Array P542D controllers for the Exchange databases and logs can be left at the default cache ratio of 90% write and 10% read. Previous testing has shown that modifying this setting has little benefit, so the default was tested to confirm it performed adequately. The HPE Smart Array storage controllers offer a choice of acting as a traditional RAID controller or running in HBA mode, which presents all attached drives directly to the operating system without RAID. However, the HBA configuration does not enable write caching and should not be used for Exchange databases. When using the default, as a traditional RAID controller, a new feature in the Smart Storage Administrator allows configuring individual drives as RAID0, which has the same net result of presenting the drives directly to the operating system. With this new feature the configuration is much simpler than in the past, and no HPE Smart Array scripting is necessary to configure disks as RAID-less JBOD. The only scripts necessary use Diskpart or PowerShell to configure and format the Windows volumes. Note that the screenshot shown below in Figure 12 is from another system and may not match exactly, but it presents the general concept.
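As an example of the PowerShell route, once the controller presents each drive as an individual RAID0 logical drive, the new disks can be initialized and formatted in bulk. This is a sketch under illustrative labels; the ReFS/integrity settings follow the Exchange 2016 Preferred Architecture guidance discussed later in this paper:

```powershell
# Initialize every newly presented (RAW) disk with a GPT partition table,
# then create one full-size partition and format it as ReFS with the
# data integrity feature disabled (per Exchange 2016 PA guidance)
Get-Disk | Where-Object PartitionStyle -eq 'RAW' |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    ForEach-Object {
        New-Partition -DiskNumber $_.Number -UseMaximumSize |
            Format-Volume -FileSystem ReFS -SetIntegrityStreams $false `
                -NewFileSystemLabel "ExVol$($_.Number)" -Confirm:$false
    }
```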


Figure 12. Smart Storage Administrator configuration of individual drives as RAID0

Caution: HBA mode versus RAID-less JBOD The HPE Smart Array P542D storage controller offers a choice of acting as a traditional RAID controller or running in HBA mode, which then presents all attached drives directly to the operating system without RAID. Using the HBA configuration does not enable write caching and should not be used for Exchange databases due to the lower performance this causes. When using the default, as a traditional RAID controller, a new feature in the Smart Storage Administrator allows configuring individual drives as RAID0, which has the same net result of presenting the drives directly to the operating system while still using the HPE Smart Array write caching.

The HPE Smart Array storage controllers also offer a choice of Power Mode and should be left at the default Max Performance, since Exchange is a storage intensive application. Any deviation from this setting may degrade performance so test and apply any changes only with due caution.

HPE Synergy resilient network configuration options Each HPE Synergy frame provides redundant network connections which are provisioned to the HPE Synergy 480 Gen9 compute modules through the use of the HPE Synergy Composer. This provides each compute module with two 10GbE connections through the HPE Synergy 3820C 10/20Gb Converged Network Adapter. The Microsoft Exchange Preferred Architecture recommends the deployment of a simple design with one network connection when using 10GbE networks. The capability exists in Windows Server 2012 R2 to team the two 10GbE connections into a single logical network and the customer can decide if they prefer to use the redundant configuration that Windows teaming provides or to deploy a single (non-teamed) connection.
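If the customer chooses the redundant (teamed) option, the two 10GbE connections can be teamed with the built-in Windows Server 2012 R2 NIC teaming cmdlets. This is a minimal sketch; the adapter and team names are illustrative:

```powershell
# Create a switch-independent team from the two 10GbE ports
New-NetLbfoTeam -Name "Team10GbE" -TeamMembers "Ethernet 1","Ethernet 2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic -Confirm:$false

# Confirm the team and the state of its members
Get-NetLbfoTeam
Get-NetLbfoTeamMember
```

A single (non-teamed) connection, per the Microsoft Exchange Preferred Architecture, requires no such configuration.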


Deploying SharePoint Server 2016 The first step is deploying SQL Server AlwaysOn Availability Groups to provide the database role for SharePoint. This is deployed as a minimum of two VMs running on separate ESXi hosts, with a Windows failover cluster running on the VM guest OS to support SQL AlwaysOn. Separate vSphere datastores provide isolated storage for the two database replicas.

SQL Server AlwaysOn The following sequence of steps is required for the deployment of SQL Server 2016 AlwaysOn Availability Groups.

1. Creation of a two-node Windows Server 2012 R2 failover cluster.

2. Installation of SQL Server 2016 on each cluster node.

3. Enabling SQL Server AlwaysOn Availability Group features.

4. Creating and configuring the availability group.

5. Adding the SharePoint databases to the availability group, as part of the SharePoint 2016 installation and sites creation.

The Windows Server Failover Clustering (WSFC) 2-node cluster is created using the virtual machines previously created for SQL use. Both cluster nodes are part of the same Active Directory environment that represents the customer data center as part of the lab test environment. Note that each SQL virtual machine is hosted on a server that is in a different VMware cluster (and Synergy frame), thus providing a deliberate failure boundary. Each SQL server also has its own dedicated storage volumes that are presented to each physical host in the relevant cluster, thus providing dedicated storage for the two AlwaysOn replicas.

Windows Server Failover Clustering is based on a Voting algorithm where more than one half of the voters, or Quorum, must be online and able to communicate with each other. We used “Node and File Share Majority” as the Quorum Mode, since we have two virtual machine cluster nodes, and added a Remote File share as Voting Witness. This enables an odd number of votes to prevent possible ties in the quorum vote.
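The cluster and witness configuration described above can be applied with the FailoverClusters PowerShell module. The node names, cluster IP, and witness share path below are illustrative:

```powershell
# Create the two-node failover cluster from the SQL virtual machines
New-Cluster -Name SQLCLU01 -Node SQL01,SQL02 -StaticAddress 10.0.0.50

# Configure "Node and File Share Majority" by adding a file share witness,
# giving the two nodes plus the witness an odd number of quorum votes
Set-ClusterQuorum -NodeAndFileShareMajority "\\WITNESS01\SQLQuorum"
```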

Select “Stand-Alone installation” of SQL Server 2016 on each node of the cluster. Select “Database Engine Services” and “Management Tools-Basic/Management Tools-Complete” in the “Feature Selection” page in the SQL Server Setup Wizard. The SQL Server installation was done in the “Default Instance” and “Windows Authentication” was used as the Authentication Mode.

The following are the key steps for enabling AlwaysOn Availability Groups:

1. On the Primary replica server we launch SQL Server Configuration Manager.

2. Create a Remote Network share which is accessible from all the replicas. This remote network share will be used in Data Synchronization.

3. In Object Explorer, select “SQL Server Services”. On the right side of the screen, right-click the SQL Server (<instance name>) and click Properties.

4. Select the “AlwaysOn” tab.

5. Select the “Enable AlwaysOn Availability Groups” checkbox. Click OK.

6. Manually restart the “SQL Server (<instance name>)” Service to commit the change.

7. Repeat the above steps on the second SQL Server node in the cluster. Then specify the required parameters for the two replicas.
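As an alternative to the Configuration Manager GUI, steps 3 through 7 can be scripted with the SQL Server PowerShell module. The instance names below are illustrative:

```powershell
Import-Module SQLPS -DisableNameChecking

# Enable the AlwaysOn Availability Groups feature on each replica;
# -Force restarts the SQL Server service to commit the change
Enable-SqlAlwaysOn -ServerInstance "SQL01" -Force
Enable-SqlAlwaysOn -ServerInstance "SQL02" -Force
```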

Next, configure an Availability Group Listener. An Availability Group Listener is a virtual network name that directs incoming client connections to the primary replica or a read-only secondary replica. It consists of a DNS listener name, a listener port designation, and one or more IP addresses, and it supports only the TCP/IP protocol. If the primary replica fails, the listener assists in fast application failover from the failed primary replica on one instance of SQL Server to the new primary replica on another instance of SQL Server.
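The listener can be created with the SQL Server PowerShell module; the availability group name, listener name, IP address, and provider path below are illustrative:

```powershell
Import-Module SQLPS -DisableNameChecking

# Create a listener with a static IP for the availability group,
# using the SQL Server PowerShell provider path to the group
New-SqlAvailabilityGroupListener -Name "SPAGListener" `
    -StaticIp "10.0.0.60/255.255.255.0" -Port 1433 `
    -Path "SQLSERVER:\SQL\SQL01\DEFAULT\AvailabilityGroups\SPAG01"
```

Clients (including SharePoint) then connect to the listener name rather than to an individual replica.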

Finally, set up initial data synchronization by setting the path to a remote network backup shared folder. Once the SQL Server Availability Group is created you can install SharePoint in your environment.

SharePoint 2016 MinRoles In prior versions of SharePoint, the server “role” was implied by the collection of services running on that server – thus names like Web Front End, Application server, Search server, etc. became common and their purpose was well understood. SharePoint 2016 changes this: a MinRole name is selected when running the configuration wizard, and the MinRole causes the server to adopt that role and run only specific services, for reasons of performance optimization. For a detailed description please refer to https://technet.microsoft.com/en-us/library/mt346114(v=office.16).aspx.

MinRole server roles are:

• Front-End

• Application

• Distributed cache

• Search

• Custom

Figure 13 illustrates the MinRole concept and how the roles fit into the overall environment (source: spjeff.com, used with permission).

Figure 13. SharePoint 2016 MinRoles and other components (source: spjeff.com)

Note that three Search servers are shown, with one or two instances of the various Search components hosted across those servers. Running at least two Query and Crawl components, with two Index replicas, is a best practice when high availability is required. Search components can be added and moved via specific PowerShell commands and the quantity and location can be adjusted to suit performance and required capacity as these change over time.
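Adding or moving a Search component follows a clone-modify-activate pattern with the SharePoint PowerShell snap-in: clone the active topology, add components to the clone, then activate it. The server name below is illustrative:

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell

# Clone the currently active search topology
$ssa    = Get-SPEnterpriseSearchServiceApplication
$active = Get-SPEnterpriseSearchTopology -SearchApplication $ssa -Active
$clone  = New-SPEnterpriseSearchTopology -SearchApplication $ssa -Clone -SearchTopology $active

# Add a second query processing component on another search server
$inst = Get-SPEnterpriseSearchServiceInstance -Identity "SEARCH02"
New-SPEnterpriseSearchQueryProcessingComponent -SearchTopology $clone -SearchServiceInstance $inst

# Activate the modified topology
Set-SPEnterpriseSearchTopology -Identity $clone
```

The same pattern applies to crawl, index, and other component types as capacity needs change over time.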

In order to achieve high availability, a minimum of two (2) of each role type is required for a full MinRole deployment, totaling 10+ servers or VMs. SharePoint can initially be deployed with fewer roles and servers (VMs) and later scaled out, with roles further defined as requirements change over time. In the example presented herein, we have used a fully deployed MinRole design incorporating all roles. Initial deployment would begin with a single Custom role server to establish the farm, key services and Central Administration capabilities; the Custom role can run all services. The farm can then be scaled out by adding a second Custom role server to provide redundancy, and then successively scaled out by adding pairs of VMs providing specific roles (Front-End, Search, Application, Distributed Cache). Pairs of VMs are required to provide redundancy as part of SharePoint HA.


SharePoint high availability As shown below in Figure 14, the virtual machines reside in two separate VMware vSphere clusters, and this design represents a fully redundant SharePoint farm configuration. It separates the various SharePoint MinRoles onto dedicated virtual machines, maximizing efficiency and allowing precise virtual machine tuning matched to each role. It also uses multiple redundant role virtual machines running on ESXi hosts in separate VMware clusters to establish a deliberate failure boundary, and leverages the built-in SharePoint Application Role Balancing Service to apportion the service load across multiple virtual machines. This design, with deliberate over-sizing, also handles the unlikely event of a virtual machine failure, in which case the surviving virtual machines can handle the total load, as well as the deliberate event of taking down a virtual machine or host for periodic maintenance while continuing to provide service to users.

The design embodies several HA concepts:

• The required number and type of roles are split evenly and identically onto two separate vSphere clusters, each of which is running across three ESXi hosts in a single HPE Synergy frame. This establishes a reasonably sized cluster failure domain.

• In each frame, a distinct set of 16 disks is presented out of a separate HPE Synergy D3940 storage enclosure to each of the three ESXi hosts which are running HPE StoreVirtual VSA to provide a clustered storage solution to each vSphere cluster, and thus also provides the separate storage required for each replica as part of the SQL AlwaysOn deployment. This design ensures storage redundancy and integrity.

• The Front-End role virtual machines are split across multiple hosts and clusters, such that the loss of a single host or virtual machine has the minimal possible impact on the overall solution.

Figure 14. SharePoint/SQL role virtual machine design within vSphere clusters


Figure 15 illustrates the initial sizing and deployment of the MinRole virtual machines across the available hosts. As each virtual machine is initially configured with 4 vCPUs and the roles are evenly split across the available hosts, a total of 8 out of 20 vCPUs (cores) are used on each host, leaving resources available.

Figure 15. SharePoint/SQL role virtual machine design, showing estimated compute capacity

This reserve capacity can be used for several purposes:

• If a host must be shut down for planned maintenance (e.g., a firmware update), or suffers an issue causing virtual machines to fail, its virtual machines can be moved to alternate hosts; adding them to the existing virtual machines would still result in only 60-80% capacity utilization, and overall service performance and availability would be maintained.

• If specific business periods require higher capacity, virtual machine resources can be increased to provide higher performance (i.e., increased vCPUs and RAM).

• vCPU, RAM, storage and network resources remain to support additional applications.

Deploying Exchange Server 2016 The detailed steps to prepare the environment and install Exchange are well documented on Microsoft TechNet and are not duplicated here (see https://technet.microsoft.com/en-us/library/bb691354%28v=exchg.160%29.aspx). The process for a ‘greenfield’ deployment may differ significantly from that in organizations with an existing Exchange environment. Each server is prepared with the prerequisites, and the latest Exchange Cumulative Update (CU) is installed (microsoft.com/en-us/download/details.aspx?id=49044).

To ease the process of deploying Exchange prerequisites on Windows Server 2012 R2, the published PowerShell script at https://gallery.technet.microsoft.com/office/Exchange-2013-Prerequisites-3f8651b9 can be used to download, install and configure the prerequisites; however, we have not yet tested it with Exchange 2016.

To achieve optimal performance, the following guidelines and best practices should be considered.

Hyper-Threading: Also known as simultaneous multithreading (SMT), Hyper-Threading should be turned off when Exchange Server runs directly on physical servers, according to Microsoft at http://blogs.technet.com/b/exchange/archive/2013/05/06/ask-the-perf-guy-sizing-exchange-2013-deployments.aspx. It is acceptable to enable Hyper-Threading on physical hardware hosting hypervisor software with Exchange virtual machines, so you may see it enabled in other reference architectures from HPE. But capacity planning should always be based on physical CPUs, not the total SMT or Hyper-Threaded logical processors (typically double the number of physical cores, but certainly not double the performance).

Database availability group (DAG): This solution is built upon the database availability group (DAG) resiliency feature in Exchange Server. This feature is the base component of the HA and site resilience framework built into Exchange 2016. A DAG is a group of 2 to 16 mailbox servers that each host a set of database copies and provide database-level recovery from failures that affect individual servers or databases.


Windows Server 2012 R2 introduced the ability to create a DAG without an administrative access point, for example using a command such as the one below, which does not specify an IP address:

New-DatabaseAvailabilityGroup -Name DAG1 -DatabaseAvailabilityGroupIPAddresses ([System.Net.IPAddress]::None) -WitnessServer [Server] -WitnessDirectory C:\DAG1

Pre-staging the DAG computer account As noted in the Microsoft Exchange guidance “Managing database availability groups” at https://technet.microsoft.com/en-us/library/dd298065(v=exchg.150).aspx, for environments where computer account creation rights are restricted, or where computer accounts are created in containers other than the default computers container, it is highly advisable to pre-stage and assign permissions on the cluster name object (CNO). Create a computer account for the CNO and either assign full control to the computer account of the first mailbox server you are adding to the DAG, or assign full control to the Exchange Trusted Subsystem group. This ensures the necessary security context. As shown below in Figure 16, it is necessary to change the Object Types to include Computers so that you can select the Exchange Server account when setting permissions. In the example below the CNO name is DAG1 as shown in the dialog box title bar. Failure to configure the CNO will result in an error that may be confusing, such as “the fully qualified domain name for (CNO) could not be found”.

Figure 16. Pre-staging the computer account before adding the first Exchange server to the DAG


As shown in Figure 17 below, adding server nodes to the DAG cluster will install the necessary Windows Failover Clustering components. Note that this dialog box is from Exchange 2013 and may be slightly different for Exchange 2016.

Figure 17. Adding server nodes to the DAG cluster

Windows volumes: According to the Exchange 2016 PA, Windows volumes used for databases should be formatted with ReFS with the data integrity features disabled. A hotfix may be necessary so that AutoReseed formats spare RAID-less disks as ReFS rather than NTFS. The AutoReseed component that automatically allocates and formats spare disks is called the Disk Reclaimer. For more information on Disk Reclaimer see https://technet.microsoft.com/en-us/library/dn789209(v=exchg.150).aspx. For more information on ReFS and storage options see: https://technet.microsoft.com/en-us/library/ee832792%28v=exchg.150%29.aspx.

The following is an example command of formatting a volume mount point with the integrity feature disabled:

format /fs:ReFS /q /i:disable <volume mount point>

Database paths need to be consistent across all servers in the DAG that have a copy of the database, thus volume mount points are used and each drive is mounted under C:\ExchangeVolumes\ or any path that is preferred. However, Exchange AutoReseed functionality and the Desired State Configuration use a specific structure. See: http://blogs.technet.com/b/mhendric/archive/2014/10/17/managing-exchange-2013-with-dsc-part-1-introducing-xexchange.aspx.
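A PowerShell sketch of bringing one database disk online under the mount point root follows; the disk number and paths are illustrative, and AutoReseed additionally expects the specific database folder structure configured on the DAG:

```powershell
# Create the mount point folder, mount the disk under it (no drive letter),
# and format as ReFS with the data integrity feature disabled
New-Item -ItemType Directory -Path "C:\ExchangeVolumes\Volume1" -Force | Out-Null
New-Partition -DiskNumber 3 -UseMaximumSize |
    Add-PartitionAccessPath -AccessPath "C:\ExchangeVolumes\Volume1" -PassThru |
    Format-Volume -FileSystem ReFS -SetIntegrityStreams $false -Confirm:$false
```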

Disk Encryption: With the use of RAID-less JBOD disks, it is important to use disk encryption such as that provided by the HPE Smart Array controller or BitLocker; otherwise, a disk could be removed and the database contents mounted and read in an undesirable scenario. The Microsoft Preferred Architecture outlines using Windows BitLocker for at-rest data protection. For effective use of BitLocker, each server should be configured with the Trusted Platform Module (TPM) 1.2 or later. This eases the use of BitLocker by storing and securing the encryption keys local to the server without requiring a BitLocker password each time the server boots. To ease the performance impact of BitLocker, the CPUs used in this solution include the Intel AES-NI instruction set, which is used by BitLocker to reduce CPU and performance impact. More information about Intel AES-NI is available at: intel.com/content/dam/doc/white-paper/enterprise-security-aes-ni-white-paper.pdf

Information on deploying BitLocker is available at: https://technet.microsoft.com/en-us/library/hh831713.aspx and https://technet.microsoft.com/en-us/library/jj612864.aspx
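As a sketch, one database volume can be encrypted with the built-in BitLocker cmdlets; the mount path is illustrative, and auto-unlock assumes the operating system volume is already BitLocker-protected (with its keys secured by the TPM):

```powershell
# Encrypt a data volume, protected by a recovery password
Enable-BitLocker -MountPoint "C:\ExchangeVolumes\Volume1" `
    -EncryptionMethod Aes256 -RecoveryPasswordProtector

# Allow the volume to unlock automatically when the OS volume unlocks
Enable-BitLockerAutoUnlock -MountPoint "C:\ExchangeVolumes\Volume1"
```

Recovery passwords should be escrowed (for example, in Active Directory) per the BitLocker deployment guidance linked above.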


Exchange Transport location A new volume (T: for example) is created to move the Exchange Transport files off the C: operating system volume. This is done using the Move-TransportDatabase.ps1 script in "C:\Program Files\Microsoft\Exchange Server\V15\Scripts."

.\Move-TransportDatabase.ps1 -queueDatabasePath 'T:\Queue' -queueDatabaseLoggingPath 'T:\Queue\Logs'

Exchange antivirus (antimalware) Exchange Server includes built-in antivirus (antimalware) protection; however, customers may choose to install a third-party product on the Exchange servers instead. In that case the built-in protection is disabled using the script below, followed by the Restart-Service command to make the change take effect. Because the built-in protection is not a full-featured product, third-party product selection can be based on the features desired.

& $env:ExchangeInstallPath\Scripts\Disable-Antimalwarescanning.ps1

Restart-Service MSExchangeTransport

Exchange Server Performance Health Checker Script The Microsoft Exchange Server Performance Health Checker Script checks common configuration settings such as product versions, pagefile settings, power plan settings, NIC settings, and processor/memory information. It is recommended to run the script periodically to ensure that your environment is operating at peak health and that configuration settings have not been inadvertently changed. Download the script at: https://gallery.technet.microsoft.com/Exchange-2013-Performance-23bcca58.

Capacity and sizing The following sections provide a practical example of how to define an Exchange Server and virtualized SharePoint configuration based on a broadly applicable workload and a requirement for HA in an enterprise-sized business of about 10,000 users. The exact workload specified and the requirements for content storage and data retention policies may not match your own needs; however, you can substitute your own requirements and values in the various examples shown to yield a recommended solution. The following sections also discuss using your own requirements with the HPE Sizers for these applications.

SharePoint sizing Microsoft recommends a minimum of 4 vCPUs (cores) for any role virtual machine for SharePoint 2016, and the virtual machine characteristics shown in Table 1 earlier reflect that requirement. However, the key to sizing a SharePoint solution is to correctly size the Front-End role as this has the highest impact on available throughput capacity expressed in Requests per second (RPS). The goal of sizing is to determine the number of vCPUs to use to provide a specific throughput capacity. Note also that the allocation should be one core per vCPU with no over-subscription. This is discussed in more detail later.

Sizing the Front-End role If we know the proposed workload (users and required request throughput), and also know the capabilities of a prototypical compute module (or similar server) we can determine the Front-End role sizing. The following presents a worked example, based on knowing the performance of an HPE ProLiant BL460c server blade using typical E5-2690 v4 CPUs based on prior testing with SharePoint 2016. The intent is to get a best estimate of sizing on HPE Synergy compute modules on the assumption that those will perform similarly when configured with the same CPUs.

We know that a virtual machine using 4 vCPUs running on an HPE ProLiant BL460c server blade will support about 60 RPS in a Custom role, and as much as 75 RPS as a dedicated Front-End role. We can expect an HPE Synergy compute module supporting a similar 4 vCPU virtual machine will perform about the same for purposes of estimation.

We can propose a prototypical workload for 10,000 subscribers that will yield the required request throughput (RPS) as follows:

• Assume 25% concurrency, thus 2,500 active users at peak

• Assume a mix of user types as follows:

– 10% Light users (20 requests per hour) = 2 RPH / user

– 10% Typical users (40 requests per hour) = 4 RPH / user

– 60% Heavy users (60 requests per hour) = 36 RPH / user

– 20% Extreme users (120 requests per hour) = 24 RPH / user

• The above mix sums to (2 + 4 + 36 + 24) = 66 requests per hour per active user

• This converts to (2,500 users x 66 RPH per user / 3,600 seconds) ≈ 46 RPS

The workload therefore generates only about 46 RPS at peak to support that user mix and population, which, per prior testing, can be supported by a single 4 vCPU Front-End role virtual machine. However, there are other considerations: providing stable, predictable performance, providing redundancy in support of HA needs, and minimizing impact in the event of planned downtime (or accidental loss) of a virtual machine or ESXi host.
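The arithmetic above can be checked with a short script (shown here in Python purely for illustration; the user mix and requests-per-hour values are the assumptions stated in the bullets above):

```python
# Back-of-the-envelope peak RPS estimate for the prototypical workload.
# User mix: (fraction of active users, requests per hour per user).
user_mix = {
    "light":   (0.10, 20),
    "typical": (0.10, 40),
    "heavy":   (0.60, 60),
    "extreme": (0.20, 120),
}

total_users = 10_000
concurrency = 0.25                      # 25% of users active at peak
active_users = total_users * concurrency

# Weighted requests per hour (RPH) per active user.
rph_per_user = sum(frac * rph for frac, rph in user_mix.values())

peak_rps = active_users * rph_per_user / 3600  # convert RPH to RPS

print(f"Active users at peak: {active_users:.0f}")
print(f"Weighted RPH per active user: {rph_per_user:.0f}")
print(f"Peak load: {peak_rps:.0f} RPS")
```

Substituting your own mix, concurrency, and population into this sketch yields the peak RPS figure needed for Front-End sizing.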

Sizing SharePoint for high availability HA, and Microsoft best practices for SharePoint 2016, require a minimum of two servers or virtual machines per role type, so we should deploy at least two Front-End virtual machines. We should also deploy each virtual machine onto a separate host, or even a separate cluster, to take advantage of failure domains. Consider also that the loss (or planned downtime) of one virtual machine in a two-VM scenario removes half the available capacity and leaves no redundant component for the duration of the loss. In an enterprise environment it is better to deploy a total of four Front-End virtual machines (at 4 vCPU each per the Microsoft recommendation), split across both clusters and hosts. The loss of a single virtual machine or even a complete host will then have a minimal effect and should not reduce capacity below the point where users would be impacted. Further, splitting the total load across four Front-End virtual machines takes better advantage of the hardware load balancing.
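As a sanity check on the four-Front-End design, the capacity remaining after a failure can be compared against the roughly 46 RPS peak (a sketch; the 75 RPS per dedicated Front-End VM figure comes from the prior testing cited earlier):

```python
peak_rps = 46          # peak workload from the sizing example above
rps_per_frontend = 75  # dedicated Front-End VM capacity (prior testing)
frontends = 4          # recommended deployment, split across clusters/hosts

# Capacity with all VMs up, and with one VM (or its host) lost.
full_capacity = frontends * rps_per_frontend
degraded_capacity = (frontends - 1) * rps_per_frontend

print(f"Full capacity: {full_capacity} RPS")
print(f"After losing one Front-End: {degraded_capacity} RPS")
print(f"Headroom while degraded: {degraded_capacity / peak_rps:.1f}x peak")
```

Even degraded, the farm retains several times the required throughput, which is the basis for the "minimal effect" claim above.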

SharePoint content storage The quantity of actual content (documents, graphics, etc.) to be stored in SharePoint will vary based on the number of users creating content, business processes and retention policies. The following presents a hypothetical calculation and the process shown may be re-used with your own values if they differ from those shown in the example. The intent is to estimate the actual storage space required for the farm content and space for the other databases and data structures. Determination of this will lead towards sizing storage.

While only active users require compute resources, even inactive users require storage; therefore we have to consider the needs of all users in a typical business generating content. We also need to make a set of reasonable assumptions, supported by anecdotal data from HPE field personnel and various customers:

• To start, assume a need for a minimum of 1TB of content to be stored.

• If we assume a collaboration site is between 1.0GB and 1.5GB, then we can store between about 670 and 1,000 such sites.

• Assume 2 organizational units each running 100 projects/year.

• Each project content size can be about 500MB (Office files, PDFs, small images, Visio diagrams, etc.).

• Thus the users generate about 100GB of total content per year.

• Assume 50% of storage volume is allocated for content, and the other 50% for Service Application databases, TempDB, Search Index catalog and the other data structures SharePoint requires to function (a rule-of-thumb from the field).

• This means 1TB of storage space could support 1,000 users' content for about 5 years (consider retention policies).

• If backup to disk is required, then we also need to allocate about 650GB from the total storage.

To summarize, 1TB of real content can require 2TB of storage, and as much as 2.65TB if on-disk backups are required.
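The storage estimate above can be reproduced as follows (a sketch using the example's rule-of-thumb values; the 650GB backup allocation is the figure quoted above, not a computed quantity):

```python
# Rule-of-thumb SharePoint storage math from the example above
# (values are the paper's assumptions, not measurements).
annual_content_gb = 2 * 100 * 0.5   # 2 org units x 100 projects x ~500MB each
content_share = 0.50                # half of storage holds content; the rest is
                                    # service DBs, TempDB, search index, etc.

# 1TB of raw storage -> 500GB usable for content -> ~5 years of content.
storage_gb = 1000
years_supported = storage_gb * content_share / annual_content_gb

# Conversely, 1TB of actual content requires 2TB of storage,
# plus ~650GB if on-disk backups are kept.
content_gb = 1000
required_gb = content_gb / content_share
required_with_backup_gb = required_gb + 650

print(f"{annual_content_gb:.0f} GB of new content per year")
print(f"1TB of storage supports content for ~{years_supported:.0f} years")
print(f"1TB of content needs {required_gb / 1000:.2f}TB "
      f"({required_with_backup_gb / 1000:.2f}TB with on-disk backup)")
```

Replacing the project count, project size, and content share with your own values turns this into a first-pass estimate for your farm.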

Figure 18 below shows an example of the HPE Sizer inputs for SharePoint sizing. The HPE Sizer for SharePoint 2016 will be discussed later.

Figure 18. HPE Sizer inputs for SharePoint sizing

Figure 19 below shows the HPE Sizer for SharePoint calculated LUN (volume) sizes.

Figure 19. HPE Sizer for SharePoint calculated LUN (volume) sizes.

Networks With a non-virtualized deployment of SharePoint the best practice recommendation is to use two network adapters in the Front-End and Index servers to segment the traffic. This limits the amount of network bandwidth the IIS/client communication can consume and separates that load from the IIS/SQL Server communication. However, in a virtualized deployment, when the entire farm can exist within the virtual network, the only real concern is the amount of network capacity that can be handled by the physical NIC. In cases where multiple Front-Ends will be deployed, and there is concern that a single physical network connection cannot support the required network load, the use of multiple physical NICs and multiple virtual networks is recommended. Additionally, in cases where more than a single host server is being used to support the farm, segmenting traffic is still recommended.

HPE Sizers HPE Synergy Planning Tool The HPE Synergy Planning Tool, shown below in Figure 20, guides you to a proper configuration. It is available at no charge at https://sizersllb.itcs.hpe.com/sb/installs/HPESynergyPlanningTool.zip. The overview document for the HPE Synergy Planning Tool is available at: http://h20195.www2.hpe.com/v2/GetDocument.aspx?docname=4AA6-4949ENW.

Figure 20. HPE Synergy Planning Tool – overview

Figure 21. HPE Synergy Planning Tool - Frame configuration with storage and compute modules

SharePoint Server Sizer The easiest way to develop a practical SharePoint configuration is to use the HPE Sizer for Microsoft SharePoint, which takes detailed workload input and provides guidance on a physical or virtualized solution, including deployment of specific virtual machines on the various solution servers. This information allows you to map the configuration design onto the HPE Synergy platform and determine virtual machine resource values and deployment details.

Figure 22. HPE Sizer for SharePoint 2016

Exchange Server Sizer The HPE Sizer for Exchange, shown below in Figure 23, is a downloadable application allowing the end-user to customize input and receive an exact specification for servers and storage for Exchange. The configuration presented here was designed using the HPE Sizer for Exchange 2016. Additionally, the HPE Sizer for Exchange can be used to explore custom solutions or to make slight changes to this reference architecture. The test data presented here is also used by HPE to verify the accuracy of the results from the HPE Sizer for Exchange.

The HPE Sizer for Exchange 2016 takes into account many factors such as email client usage profiles and mailbox size, including personal archives. The Sizer generates Bills of Material for various Exchange configurations, allowing the end-user to customize for their Exchange deployment. The reference architecture presented here can be customized using the HPE Sizer for Exchange, which can be found at hpe.com/solutions/microsoft-exchange2016-sizer.

Figure 23. HPE Sizer for Exchange 2016 – main page

Figure 24 below shows the HPE Sizer output for Exchange 2016. Note that the primary site hardware configuration is the same as the secondary site; the secondary site is omitted to avoid repetition.

Figure 24. HPE Sizer output for Exchange 2016 – primary site, same as secondary site

The power of the HPE Sizer The HPE Sizer for Exchange enables generating bills of materials for various Exchange configurations, allowing the end-user to customize for their Exchange deployment. It takes into account many factors such as email client usage and mailbox size. The Sizer can also import the Microsoft Exchange Role Requirements Calculator (spreadsheet) as input, making it even easier to use the HPE Sizer.

In addition to the HPE Sizer, it is recommended that customers use the Microsoft Exchange Server Role Requirements Calculator. If this Calculator is used as a starting point, it can be imported into the HPE Sizer using the button shown above in Figure 23 (Load Exchange Calculator Workload). Figure 25 below shows the database layout and failover from the Microsoft Exchange Server Role Requirements Calculator, with one server in the primary site offline and its databases active on the remaining servers.

Figure 25. Details of database layout and failover from Exchange Server Role Requirements Calculator

Load testing recommendations To understand how each application performs, each should be tested with its respective load test tools. Microsoft Visual Studio can be used as the SharePoint workload engine, emulating users and running load tests comprised of various Webtest components developed to represent a broadly applicable SharePoint collaboration workload. Exchange 2016 should be tested using both the Microsoft Exchange Server Jetstress 2013 and Microsoft Exchange Server Load Generator 2013 tools to demonstrate server and storage performance. Note that both of these tools are compatible with Exchange 2016.

Exchange Server load testing When evaluating the scalability and performance of an Exchange building block in a lab environment, our engineers use tools provided by Microsoft to generate a simulated Exchange workload on the systems and analyze the effect of that workload. One tool, Jetstress, is designed for disk subsystem testing, and the other tool, LoadGen, evaluates server performance with a given set of client messaging and scheduling actions generating server load and mail flow within the organization.

Microsoft Exchange Server Jetstress Tool Microsoft Exchange Server Jetstress Tool (Jetstress) works with the Exchange Server database engine to simulate the Exchange database, log and replication load on disks. Jetstress helps verify the performance and stability of a disk subsystem before putting an Exchange 2016 server into production. Jetstress simulates only the Exchange database load, either for a specific number of users or to determine the maximum load achievable at given latency thresholds.

The Jetstress 2013 tool is used to test storage performance by simulating the Exchange database and transaction log workloads while measuring disk performance to determine achievable read and write IOPS. The results should validate whether the configuration is adequately sized to support the mailbox users with the message/user work profile, with additional performance headroom, even in a failover scenario where one server is offline.

After successful completion of the Jetstress disk performance (2 hour) and stress (24 hour) tests, an Exchange Server disk subsystem is considered adequately sized (in terms of the performance criteria you establish) for the user count and profiles you selected. For more information on Jetstress, visit: microsoft.com/en-us/download/details.aspx?id=36849.

Microsoft Exchange Server Load Generator Tool The second phase of testing uses LoadGen to simulate client load and validate the server configuration; the results can be analyzed to verify that each server is adequately sized to support the mailbox users with the message/user work profile, with additional performance headroom. Since each test tool provides a different function, it is important to use both.

Microsoft Exchange Server Load Generator Tool (LoadGen) produces a simulated client workload against a test Exchange deployment. This workload evaluates how Exchange performs and is used to analyze the effect of various configuration changes on Exchange behavior and performance while the system is under load.

LoadGen simulates the delivery of multiple MAPI client messaging requests to an Exchange server. To simulate the delivery of these messaging requests, you run Exchange Load Generator tests on client computers. These tests send multiple message requests to the Exchange server, which causes a mail load.

After the tests are complete, use the results to perform the following tasks:

• Verify that the server and storage configuration performed within design specifications.

• Identify bottlenecks on the server.

• Validate Exchange settings and server configuration.

• Verify that client load was accurately represented.

For more information on LoadGen, visit: microsoft.com/en-us/download/details.aspx?id=40726.

The database volumes are sized for a certain size database on each, and Jetstress creates similar databases on each of the volumes, thus pushing the system toward its designed maximum capacity. LoadGen cannot create such large databases quickly; even initializing the typical mailbox sizes of 2GB takes nearly a week for a few thousand users. However, LoadGen does not need such large databases to provide stress testing and validation of the CPU, memory and network resource consumption. In addition, Jetstress databases cannot be used for network replication testing, but LoadGen creates actual Exchange Server databases, and copies can be created on other Exchange servers. These are all industry-accepted best practices for using these Exchange test tools. Another area of focus for testing is the location of the Exchange Transport files, as standard LoadGen testing may not provide the stress seen in organizations with heavy inbound and outbound mail flow, in addition to expansion and delivery of many Distribution Lists.

SharePoint load testing Microsoft Visual Studio Professional can be used to emulate web-based user activity and workloads. It provides a rich set of recording, customization and test execution capabilities, coupled with built-in data collection, analysis and reporting. It also contains many features and capabilities designed specifically to make performance/load testing of SharePoint easier to develop and accomplish. Its capabilities and concepts will be familiar to anyone who has used similar load generation tools, and the learning curve is not steep. Its methodology includes the recording and customization of so-called "Webtests," where each can represent a portion of a total desired workload. An example test bed can include SharePoint Site Collections (Docs, Portal, Teams, MySite and Search) and a Webtest workload for each site. Once each Webtest has been developed and fully tested (single-user), they can be incorporated into a "Loadtest," which defines which Webtests are included (and in what percentage mix), along with other relevant test run parameters and the definition of the data collection metrics.

Summary Even with the best technology available, it takes an appreciable time to design, specify, build and deploy a multi-application virtualized solution. The following are key challenges in deploying applications such as Microsoft Exchange Server and SharePoint Server, in either a physical or virtualized environment:

• Determining solution sizing and configuration to support the business workload in an HA environment.

• Managing design/deployment/hardware/operations costs by leveraging efficient use of virtualized resources.

• Achieving rapid deployment to minimize time-to-productivity, yet ensuring consistency during the build process, reducing problems from configuration errors.

• Defining a strategy to handle workload change and growth over time.

The solution described in this paper represents an enterprise-sized configuration of Exchange Server and SharePoint (with SQL Server) designed to support 10,000 users while providing enterprise-class HA features. HPE Synergy composable infrastructure adds new capabilities to address all of the key challenges above. This reference configuration demonstrates the use of the HPE Synergy composable infrastructure to combine and mix both a virtualized and physical environment for Microsoft SharePoint Server 2016 and Exchange Server 2016.

The HPE Synergy composable infrastructure provides options that have not existed before, such as the ability to use large quantities of direct attached storage within the HPE Synergy 12000 frame. The use of integrated networking interconnect modules allows the deployment of a simple networking model with a reduced number of components. The HPE Synergy composable infrastructure is a modular design that can be deployed quickly and modified easily, using the HPE Synergy Composer, as needs change due to new application demands and requirements or a change in user and business needs.

Solution sizing and configuration guidelines for Microsoft Exchange and SharePoint are based on well understood best practices developed over time by HPE and Microsoft for many generations of both applications. With new features and enhancements, Exchange 2016 requires more processor and memory resources. This need can best be met by the latest CPU and architecture of the HPE Synergy compute modules. With the new SharePoint 2016 version, the majority of these best practices still hold true, with some new areas to consider around the Search Service and distributing the various Web and other services across multiple virtual machines to provide better per-service virtual machine tuning.

Implementing a proof-of-concept As a matter of best practice for all deployments, HPE recommends implementing a proof-of-concept using a test environment that matches as closely as possible the planned production environment. In this way, appropriate performance and scalability characterizations can be obtained. For help with a proof-of-concept, contact an HPE Services representative (hpe.com/us/en/services/consulting.html) or your HPE partner.

Page 43: HPE Reference Configuration for Microsoft SharePoint and ... · HPE Reference Configuration for Microsoft SharePoint and ... Best practices and ... as length of retention and ultimate

Technical white paper Page 43

Appendix A: Bill of materials

Note The bill of materials does not include complete support options or rack and power requirements. For questions regarding ordering, please consult with your HPE Reseller or HPE Sales Representative for more details. hpe.com/us/en/services/consulting.html

Table 6. Bill of materials for two racks representing two sites in a disaster resilient design

QTY DESCRIPTION

Rack and network infrastructure

2 HPE 642 1200 mm Shock Intelligent Rack

2 HPE 42U 1200mm Side Panel Kit

2 HPE Air Flow Optimization Kit

4 HPE 1U Black Universal 10-pk Filler Panel

HPE Synergy 12000 frame components

4 HPE Synergy 12000 Configure-to-order Frame with 1x Frame Link module 10x Fans

4 HPE 6X 2650W AC Titanium Hot Plug FIO Power Supply Kit

4 HPE Synergy Composer (one pair of HPE Synergy Composer modules for each rack is configured for high availability and redundancy)

4 HPE VC SE 40Gb F8 module

4 HPE Synergy 10Gb Interconnect Link module

4 HPE Synergy Frame Link module (one additional HPE Synergy Frame Link module for redundancy in each frame)

3 HPE FLM 2 foot CAT6A cable

1 HPE FLM 10 foot CAT6A gray cable

8 HPE Synergy 12Gb SAS Connection module with 12 Internal Ports

2 HPE Synergy Interconnect CXP to CXP 1.1m Direct Attach Copper Cable

2 HPE Synergy Interconnect CXP to CXP 1.6m Direct Attach Copper Cable

2 HPE Synergy Interconnect CXP to CXP 2.1m Direct Attach Copper Cable

HPE Synergy 480 Gen9 compute module components

24 HPE Synergy 480 Gen9 Configure-to-order compute module

48 HPE Synergy 480 Gen9 Intel Xeon E5-2640 v4 (2.4GHz/10-core/25MB/90W)

144 HPE 16GB (1x16GB) Dual Rank x4 DDR4-2133 CAS-15-15-15 Registered Memory Kit

48 HPE 600GB 12G SAS 10K rpm SFF (2.5-inch) SC Enterprise

24 HPE Smart Array P542D Controller

24 HPE Synergy 3820C 10/20Gb Converged Network Adapter

24 HPE Trusted Platform Module 2.0 Kit

HPE Synergy D3940 storage module components

12 HPE Synergy D3940 storage module

12 HPE Synergy D3940 Redundant I/O Adapter

264 HPE 2TB 12G SAS 7.2K rpm SFF (2.5-inch) SC 512e 1yr Warranty Hard Drive

48 HPE 800GB 12G SAS Write Intensive SFF 2.5-in 3yr Wty Solid State Drive

144 HPE 900GB 12G SAS 10K 2.5in SC ENT HDD

© Copyright 2016 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for HPE products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HPE shall not be liable for technical or editorial errors or omissions contained herein.

Microsoft, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. Intel and Xeon are trademarks of Intel Corporation in the U.S. and other countries. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions.

4AA6-6586ENW, July 2016

Resources and additional links HPE Reference Architectures hpe.com/info/ra

HPE Sizers

• HPE Sizer for SharePoint 2016

• HPE Sizer for Exchange 2016

hpe.com/info/sizers

HPE Synergy hpe.com/info/synergy

HPE composable infrastructure hpe.com/info/composableinfrastructure

HPE Servers hpe.com/servers

HPE Storage hpe.com/storage

HPE Technology Consulting Services hpe.com/us/en/services/consulting.html

To help us improve our documents, please provide feedback at hpe.com/contact/feedback.

Learn more at hpe.com/info/synergy