
Planning a Private Enterprise Cloud?
What you should know about SDN and Load Balancing from Gartner and KEMP Technologies

Featuring research from Gartner

Contents

The KEMP LoadMaster Application Delivery Controller
Free Trial KEMP Virtual Load Balancer/ADC
Texas A&M and Cloud Services Provider Peak 10 Choose KEMP ADCs for High Availability, Scalability and Performance
From the Gartner Files: Enterprises Planning Private Clouds Should Include Software-Defined Networking With Major Network Changes
About Kemp Technologies

Welcome

The ability to bring up new applications that enable new virtual workloads in seconds is not easily supported today by the highly static nature of provisioning and configuration available in traditional networks.

Virtual Machine (VM) mobility is a key function for server virtualization and resource management, but dynamically shifting application workloads creates high-volume traffic between servers. Existing data center networks were not designed with this flexibility in mind, and thus often lead to network congestion and performance issues.

Virtualization creates a pool of manageable and flexible server capacity that makes it possible to optimize resource utilization by moving workloads between physical resources. The management of this server resource pool can be highly automated, and thus become an enabler of agile private cloud solutions.

That’s where virtual ADCs (Application Delivery Controllers) from KEMP Technologies can make a difference, which is why I am pleased to offer you this complimentary, high-value research report from Gartner, “Enterprises Planning Private Clouds Should Include Software-Defined Networking With Major Network Changes,” published 7 June 2013 by analyst Bjarne Munch.


KEMP Technologies has long supported the undeniable trend of application virtualization. Since launching the first virtual ADC in 2009, we have been committed to expanding the options enterprises have when deploying ADCs and enabling cloud-based workloads. Virtualization, and the ease with which application environments can be delivered, have accelerated this trend, evidenced by KEMP’s rapid growth to the #3 ADC player in the market as measured by units shipped, and by the share of that growth propelled by virtual ADCs.

KEMP will continue the push toward ADCs that run on every platform and in every environment, and that truly enable our customers to easily deliver application availability, optimal performance and visibility assurance to end users. I believe that enabling line-of-business purchasing and agile deployment via virtual ADCs confidently cements KEMP’s place in Gartner’s Magic Quadrant for Application Delivery Controllers¹ for years to come.



Ray Downes, CEO, KEMP Technologies

¹ Source: Gartner Research, Magic Quadrant for Application Delivery Controllers, G00253420, M. Fabbi, N. Rickard, B. Munch, A. Lerner, 7 June 2013


The KEMP LoadMaster Application Delivery Controller

Since 2000, KEMP Technologies has been a leader in Load Balancers and Application Delivery Controllers. KEMP has a global presence with 17,000 worldwide customer deployments and regional offices in the Americas, Europe, Asia and South America. To meet the demanding needs of its enterprise customer base, KEMP delivers its ADC solutions optimized for a variety of platforms including traditional hardware, virtualized environments, and public cloud infrastructures as well as specialized ADC software that takes advantage of native operation on high-performing computing platforms such as Cisco UCS.

All ADCs in KEMP’s award-winning LoadMaster line of products deliver full-featured Layer 7 content switching, SSL acceleration and security services. KEMP has created an ideal family of products for enterprises looking for optimum performance and the flexibility needed to enable agile deployment by departments and lines of business.
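For readers who automate their deployments, the sketch below shows what driving a LoadMaster-style management API from a script might look like: defining a Layer 7 virtual service and placing two web servers behind it. The endpoint paths, parameters and credentials are illustrative assumptions rather than excerpts from KEMP’s API documentation, which should be consulted for the actual interface.

# Hypothetical sketch: creating a virtual service and adding real servers
# through a LoadMaster-style HTTPS management API. Endpoint paths and
# parameter names are illustrative assumptions; verify them against the
# current KEMP API reference before use.
import requests

LOADMASTER = "https://loadmaster.example.com"   # management address (hypothetical)
AUTH = ("bal", "example-password")              # admin credentials (hypothetical)

def api(cmd, **params):
    """Call one management API command and return the raw response text."""
    resp = requests.get(f"{LOADMASTER}/access/{cmd}",
                        params=params, auth=AUTH, verify=False)
    resp.raise_for_status()
    return resp.text

# Define a Layer 7 HTTP virtual service on the LoadMaster...
api("addvs", vs="192.0.2.10", port="80", prot="tcp")

# ...and place two web servers behind it.
for real_server in ("10.0.0.11", "10.0.0.12"):
    api("addrs", vs="192.0.2.10", port="80", prot="tcp", rs=real_server, rsport="80")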

Free Trial KEMP Virtual Load Balancer/ADC

The Most Powerful Virtual ADC

Source: Kemp Technologies

Download a free trial of the KEMP virtual LoadMaster at kemptechnologies.com.


Texas A&M and Cloud Services Provider Peak 10 Choose KEMP ADCs for High Availability, Scalability and Performance

Texas A&M University Chooses KEMP for High Availability & Traffic Management for Web Server Farms

Texas A&M University is the sixth-largest university in the country with 50,000 students, and its Provost IT Office (PITO) maintains IT assets and services for approximately 40 different departments that include Texas A&M’s admissions, financial aid, registrar, career placement, and study abroad student services. Because these departments represent many of the core services required to keep the University’s student-facing services operational, PITO needed to ensure that these services could continue to operate with minimal disruption in the event of hardware or software failures, security incidents, or routine maintenance needs. To meet this need, PITO has turned to load balancing web services across web server farms that help eliminate any single point of failure.
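To make the single-point-of-failure idea concrete, here is a minimal sketch of the core mechanism PITO relies on: rotate requests across a farm of web servers and skip any member that fails a health check. The host names are hypothetical, and a production ADC layers session persistence, SSL offload and Layer 7 rules on top of this basic loop.

# Minimal sketch of health-checked round-robin load balancing across a
# web server farm. Host names are hypothetical; real ADCs add persistence,
# SSL offload and Layer 7 content rules on top of this.
import itertools
import socket

FARM = ["web1.example.edu", "web2.example.edu", "web3.example.edu"]
_rotation = itertools.cycle(FARM)

def is_healthy(host, port=80, timeout=1.0):
    """Treat a successful TCP connect as a passing health check."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def next_backend():
    """Return the next healthy server, skipping members that fail their check."""
    for _ in range(len(FARM)):
        host = next(_rotation)
        if is_healthy(host):
            return host
    raise RuntimeError("no healthy servers left in the farm")

# Each incoming request would then be proxied to next_backend().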

Problem: Security and Day to Day Operations

Prior to using KEMP’s load balancing solutions, PITO had deployed web server farms behind another vendor’s load balancing product. While this product met some of the office’s needs, several deficiencies caused efficiency issues, and other needs could not be met by its available features. As a result, PITO began searching for a price-conscious load balancer replacement that delivered a responsive interface, operated on Microsoft’s Hyper-V hypervisor platform, and provided high availability, security, and simplicity without sacrificing performance.

Case Study Key Highlights:

• A very large environment in the higher-education vertical where the application use-case is a custom web-based client services app that serves 50,000 users across 40 different departments.

• Use case for KEMP’s IPS security functionality

• Use of KEMP’s web administration UI, which is not Java-based

• Centralized and optimized SSL certificate administration

Click here to Read the Complete Case Study

Cloud Services Provider PEAK 10 Delivers Reliability with Cisco and KEMP

Peak 10 leverages LoadMaster Software on Cisco UCS to maximize the availability and performance of its unified data center environment.

As demand for services grew, Peak 10 found that its existing infrastructure could not keep up with demand. The company’s multiple cloud clusters were difficult to scale and manage, with labor-intensive cabling and large clusters of disparate rack-mount servers. “The physical infrastructure associated with cloud became our biggest obstacle to growth,” says Ken Seitz, director of product strategy at Peak 10. “We chose the Cisco Unified Computing System (UCS) environment as our new platform for its unified fabric that allows us to scale much more efficiently.”

A leader in the application delivery controller (ADC) space, KEMP Technologies offered the final piece of the puzzle with its innovative solution, LoadMaster Software. Working from within the Cisco Unified Fabric, LoadMaster Software offers load balancing with the highest possible performance and responsiveness. “We pride ourselves on customer relationships, particularly when it comes to helping our customers meet their needs,” says Seitz.

Click here to Read the Complete Case Study

Source: Kemp Technologies


From the Gartner Files

Enterprises Planning Private Clouds Should Include Software-Defined Networking With Major Network Changes


Analysis

The introduction of server virtualization, dynamic server resource management via VM mobility and the emergence of private cloud solutions are, at best, difficult to manage with traditional network solutions. Server virtualization has already been the source of the most significant impacts on the data center network in recent time, but it is only the starting point. As outlined in “The Road Map From Virtualization to Cloud Computing,” server administrators typically introduce server virtualization to cost optimize their compute resources. Virtualization creates a pool of manageable and flexible server capacity that makes it possible to optimize resource utilization by moving workloads between physical resources. The management of this server resource pool can be highly automated, and thus become an enabler of agile private cloud solutions.

The key problem for network architects is that existing data center networks can’t easily support the evolution to such private cloud solutions, especially in three key areas:

• Virtualization views physical servers as one very large virtualized resource pool where workloads can be moved freely to enable efficient server resource management and resilience, but it requires very large virtual LANs (VLANs), leading to network instability.

• VM mobility is a key function for server resource management, but moving workloads creates high-volume traffic between servers. Existing data center networks are not designed for this, and thus lead to network congestion and performance issues.

• The ability to provision new applications and new virtual workloads in seconds is not easily supported by the highly static provisioning and configuration available in current networks.

The migration to server virtualization and the evolution to private clouds require agile workload provisioning and large virtualized server resource pools for workload mobility. Traditional network solutions will inhibit private clouds if network architects are unable to meet these requirements.

Impacts

• Private cloud solutions require network architectures and operational agility that network architects can’t deliver via traditional data center networks.

• Network fabrics have been introduced to improve the network architecture, but they constrain virtual server resource pooling, and they do not offer network managers sufficiently improved operational agility.

• Software-defined networking (SDN) is emerging with the promise of flexible workload mobility as well as programmatic configuration and control, which will assist network architects in meeting the needs of private clouds better than traditional alternatives.

Recommendations

• Network architects should design the data center network architecture based on the planned use of live virtual machine (VM) mobility and the number of servers within a mobility domain.

• Network architects that need to support private cloud solutions must move to new network architectures that support VM mobility and orchestration, which may require new software, new hardware, or a new vendor.

• Network architects should plan to use switch clustering and network fabrics to enhance traditional hierarchical designs for any short-term need to improve network support of VM mobility within the mobility domain.

• Network architects should begin to acquire knowledge about SDN and vendors’ road maps, and should plan for SDN to be a vital component of the enterprise private cloud solution, with mature products available within the next three years.



Impacts and Recommendations

Private cloud solutions require network architectures and operational agility that network architects can’t deliver via traditional data center networks

Data center networks continue to be built on decade-old technology and only incrementally improved design principles. Essentially, networks are still designed based on a tiered hierarchical architecture, which is designed for “north-south” traffic flows, as illustrated in Figure 2.

These traditional networks make it difficult for network architects to adapt their networks to emerging needs, especially in three areas:

1. The increased use of server virtualization requires more VM mobility between servers, and as this moves applications around, it also leads to more application communication between servers. This creates congestion in the network, as well as issues adjusting services such as application delivery controllers (ADCs) and firewalls (see Note 1). As network traffic between top-of-rack switches increases, new network architectures will be needed.

2. The desire to increase virtualized server resource pools is leading to an increased number of servers per VLAN because live VM mobility must be done within an IP subnet, which requires large Layer 2 domains (see Note 2). This creates network issues due to too-large VLANs, which can’t be easily solved with traditional network solutions; new architectures are needed.

3. The need to provision server workloads fast requires faster network provisioning. Historically, all network devices from all vendors have been configured via command line interfaces (CLIs), which is time-consuming and fault-prone (see Note 3). This makes it difficult to improve the agility of the network, and network architects that need to improve network agility are then forced to look for new solutions.
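To see why point 3 matters, consider what per-switch provisioning actually involves: the same VLAN-and-port intent has to be rendered into device commands and applied, line by line, on every switch it touches. The sketch below generates IOS-style commands from a single intent definition; the command syntax is shown purely for illustration and would differ by vendor.

# Sketch: rendering one "intent" (a VLAN plus its member ports) into
# per-switch CLI, to illustrate why manual, per-device configuration is
# slow and error-prone. IOS-style syntax is used purely for illustration.
intent = {
    "vlan_id": 210,
    "vlan_name": "vm-pool-a",
    "members": {                       # switch name -> access ports (hypothetical)
        "tor-sw01": ["Gi1/0/1", "Gi1/0/2"],
        "tor-sw02": ["Gi1/0/1"],
    },
}

def render_cli(spec):
    """Return the CLI lines that would have to be applied on each switch."""
    per_switch = {}
    for switch, ports in spec["members"].items():
        lines = [f"vlan {spec['vlan_id']}", f" name {spec['vlan_name']}"]
        for port in ports:
            lines += [f"interface {port}",
                      " switchport mode access",
                      f" switchport access vlan {spec['vlan_id']}"]
        per_switch[switch] = lines
    return per_switch

for switch, lines in render_cli(intent).items():
    print(f"! {switch}")
    print("\n".join(lines))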

Source: Gartner (June 2013)

FIGURE 1 Impacts and Top Recommendations for Software-Defined Networking


Source: Gartner (June 2013)

FIGURE 2 Traditional Hierarchical Network Architecture

Recommendations:

• Network planners and architects that need to support private cloud solutions must move to new network architectures that support VM mobility and orchestration, which may require new software, new hardware, or a new vendor.

• Network architects need to design their data center network architecture based on the use of live VM mobility and the number of servers within a mobility domain.

Network fabrics have been introduced to improve the network architecture, but they constrain virtual server resource pooling, and they do not offer network managers sufficiently improved operational agility

Two network solutions have emerged as workarounds for the constraints of Spanning Tree Protocol (STP) topology design and VLAN resource limitations: switch clusters and network fabrics.

A switch cluster is a group of switches interconnected such that they operationally appear as one switch, which interconnects with a spine switch via a multichassis link aggregation group (MCLAG), as illustrated in Figure 3. Within the cluster, the switches appear to share a common backplane, and there is no spanning tree or broadcast traffic. Depending on the vendor design, there is typically shared configuration and traffic control. This increases the size of the virtualized server pool within a VLAN domain, and also enables VM mobility within the cluster with mobility of all port configurations. Vendor solutions available today have a scalability limit of up to 10 switches, depending on the vendor.


A network fabric is a group of switches that are interconnected in a fabric-like design. There is no spanning tree within the fabric; instead, logical paths are established between switches via various protocols, such as Transparent Interconnection of Lots of Links (TRILL), Shortest Path Bridging (SPB) and MCLAG, enabling flexible topology design where multiple paths are simultaneously active. Because fabrics may have shared configuration and control, they can enable VM mobility within the fabric with mobility of port configurations. However, VLANs are configured at the edge of each switch in the fabric, leading to broadcast traffic within the fabric. This may limit the scalability of VLANs, and thus the server resource pool, depending on the vendor’s specific fabric design (see Figure 4). For example, TRILL (RFC 5556) states that the size of a server pool is “around 1,000 end-hosts in a single bridged LAN of 100 bridges.”
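One reason fabrics scale better than spanning tree designs is that all paths between edge switches stay active at once. A common way to use them without reordering a flow's packets is to hash each flow's 5-tuple onto one of the available paths, as the sketch below illustrates; the path names and addresses are invented for illustration.

# Sketch: distributing flows over multiple simultaneously active paths by
# hashing the 5-tuple, the usual way fabrics keep a flow's packets in order
# while still using every path. Paths and flows are invented.
import hashlib

PATHS = ["via-spine1", "via-spine2", "via-spine3", "via-spine4"]

def pick_path(src_ip, dst_ip, proto, src_port, dst_port):
    """Map one flow (5-tuple) deterministically onto one active path."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return PATHS[digest % len(PATHS)]

print(pick_path("10.0.1.5", "10.0.2.9", "tcp", 49152, 443))
print(pick_path("10.0.1.5", "10.0.2.9", "tcp", 49153, 443))  # may land on another path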

Both switch cluster and network fabric solutions solve some of the issues related to STP and VM mobility; however, for both solutions, the key issue of limited operational agility remains. Both solutions are essentially incremental enhancements to existing switches, and simply enhancing a traditional switch does not fundamentally change the configuration challenges inherent with CLIs. This means that for both solutions the network architect will need to manually design the logical network and all associated policies, and subsequently configure the switches via CLI, as described earlier. This also means that these network fabric constructs do not directly offer a good network solution for a private cloud deployment where configuration needs to be highly agile.

Recommendations:

• Network architects should plan to use switch clustering and network fabrics to enhance traditional hierarchical designs for any short-term need to improve network support of VM mobility within the mobility domain.

• Network architects need to evaluate the required scalability of the server resource pool as a key criterion of network fabric selection.

• Network architects that need to support private cloud solutions must focus on orchestration capabilities, in particular the degree of customization needed to automate network configuration and integrate with data center orchestration.

SDN is emerging with the promise of flexible workload mobility as well as programmatic configuration and control, which will assist network architects in meeting the needs of private clouds better than traditional alternatives

Source: Gartner (June 2013)

FIGURE 3 Illustration of Two Switch Clusters Interconnected via Two Spine Switches


Source: Gartner (June 2013)

FIGURE 4 Illustration of Switches Interconnected in a Meshed Fabric Topology

SDN has emerged as a highly flexible and agile way to design and operate networks. SDN is still emerging and value propositions are still unproven, but the potential benefits are significant and network planners should explore SDN in the context of virtualization and private cloud plans. The SDN concept is based on moving most control logic away from each individual switch and placing all control logic in a centralized controller that has a holistic view of the network, as illustrated in Figure 5.

From a virtualization point of view, SDN has the potential to remove the existing VM mobility constraints in the network. With SDN, STP and VLAN broadcast domains can be eliminated and will no longer limit the size of server resource pools or constrain VM mobility. Instead, a centralized controller can establish end-to-end paths based on workload-specific policies, such as latency, hop count or security boundaries. Network architects should therefore view SDN as an opportunity to create a network that offers full flexibility for how server administrators want to manage their server resources, that is, support uninhibited VM mobility within highly scalable resource pools.
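The "end-to-end paths based on workload-specific policies" idea can be pictured as a controller that holds the full topology and runs a constrained shortest-path computation per workload. The toy sketch below weights links by latency and lets a policy exclude links that cross a security boundary; the topology and latency figures are invented for illustration only.

# Toy illustration of a centralized controller computing an end-to-end path
# from a global topology view, weighting links by latency and excluding links
# tagged as crossing a security boundary. Topology and latencies are invented.
import heapq

# (node_a, node_b, latency_ms, crosses_security_boundary)
LINKS = [
    ("leaf1", "spine1", 0.2, False),
    ("leaf1", "spine2", 0.2, False),
    ("leaf2", "spine1", 0.3, False),
    ("leaf2", "spine2", 0.2, False),
    ("leaf1", "leaf2", 0.1, True),    # direct link, but in another security zone
]

def best_path(src, dst, allow_boundary=False):
    """Dijkstra over latency; optionally skip boundary-crossing links."""
    graph = {}
    for a, b, lat, crosses in LINKS:
        if crosses and not allow_boundary:
            continue
        graph.setdefault(a, []).append((b, lat))
        graph.setdefault(b, []).append((a, lat))
    queue, seen = [(0.0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, lat in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + lat, nxt, path + [nxt]))
    return None

print(best_path("leaf1", "leaf2"))          # routed via a spine, respecting the boundary
print(best_path("leaf1", "leaf2", True))    # policy relaxed: the direct link wins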

From a private cloud point of view, SDN has the potential to remove the static nature of existing network technologies. With SDN, there is no local control logic in each switch that needs to be configured via CLIs. Instead, configuration can become programmable, simplifying network provisioning as well as integration with an end-to-end orchestration system. SDN solutions such as HP Virtual Application Networking have emerged with APIs that interconnect network configuration with virtualization orchestration systems and SDN controllers. Network architects should therefore view SDN as a realistic opportunity to create a network that can enable private cloud solutions, that is, create a highly agile network.
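The operational difference is that the kind of per-switch CLI sketched earlier collapses into a single call against the controller's northbound API. The endpoint and JSON schema below are purely hypothetical (no specific controller product is implied); the point is that intent is expressed once, as data, and the controller derives the device-level state.

# Hypothetical sketch: expressing connectivity intent as one call to an SDN
# controller's northbound REST API. The URL and JSON schema are invented for
# illustration; real controllers each define their own northbound interfaces.
import requests

CONTROLLER = "https://sdn-controller.example.com/api/v1"   # hypothetical endpoint

intent = {
    "name": "vm-pool-a",
    "segment": 210,
    "endpoints": ["vm-7f3a", "vm-9c21"],          # workloads, not switch ports
    "policy": {"max_latency_ms": 1.0, "isolation": "tenant-a"},
}

resp = requests.post(f"{CONTROLLER}/intents", json=intent,
                     auth=("admin", "example-password"), timeout=5)
resp.raise_for_status()
print("controller accepted intent:", resp.json())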

Recommendations:

• Network architects should begin acquiring knowledge about SDN and vendors’ road maps, and should plan for SDN to be a vital component of the enterprise private cloud solution, with mature products available within the next three years.

• Evaluate SDN solutions based on their functional ability to support virtualization, including:

  • Highly scalable server resource pools

  • Uninhibited live VM mobility

  • Highly agile network operations and integration with data center orchestration systems


Source: Gartner (June 2013)

FIGURE 5 Illustration of Switches Interconnected in an Open Topology Defined Dynamically by a Controller

Evidence

1. Network Configuration Is Becoming More Complicated: According to the 2012 Network Barometer Report from Dimension Data, the total number of configuration mistakes per networking device increased from 29 to 43, which is a regression to 2009 levels. Compounding the problem is the fact that networking equipment is becoming more complicated due to an increased level of functionality, as well as increased interdependency on other devices in solution design (see www.dimensiondata.com/networkbarometer).

2. “The Evolution of Network Configuration: A Tale of Two Campuses.” Hyojoon Kim, Theophilus Benson, Aditya Akella and Nick Feamster.


Note 1. Network Traffic Patterns Are Changing and Leading to Performance Issues

The traditional hierarchical architecture was designed for a usage scenario where users consumed applications running on dedicated physical servers and network traffic typically flowed from the client through the data center network tiers to the application and back. This pattern is often described as “north-south.” However, server virtualization requires VM mobility between physical servers, and many applications now also require server-to-server communications. This pattern is often described as “east-west,” and existing networks and service appliances, such as ADCs and firewalls, can’t easily adapt to support these new traffic needs, leading to congestion and latency issues.

A key problem with a traditional Layer 2 network is that the logical network topology is based on STP, which only calculates a single forwarding tree for all connected servers of a particular VLAN. This means not only that the network is underutilized, but also that all traffic is routed through higher-tier switches, creating congestion and increasing latency. The lack of multipathing and traffic management prevents optimized traffic flow between servers and leads to traffic bottlenecks and congestion.
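The single-tree behavior is easy to demonstrate on a toy topology: even if every switch is meshed to every other, a spanning tree keeps only N-1 links forwarding and blocks the rest, so east-west traffic funnels through the tree. The six-switch mesh below is invented, and the tree is computed with a simple breadth-first search rather than STP's priority-based election.

# Toy illustration of why STP under-utilizes a network: a spanning tree over a
# meshed switch topology leaves only N-1 links forwarding and blocks all others.
import itertools

SWITCHES = ["core", "agg1", "agg2", "tor1", "tor2", "tor3"]
LINKS = set(itertools.combinations(SWITCHES, 2))   # full mesh for the example

def spanning_tree(root="core"):
    """Breadth-first tree from the root: the links left forwarding."""
    tree, visited, frontier = set(), {root}, [root]
    while frontier:
        node = frontier.pop(0)
        for a, b in LINKS:
            if node in (a, b):
                other = b if node == a else a
                if other not in visited:
                    visited.add(other)
                    tree.add((a, b))
                    frontier.append(other)
    return tree

forwarding = spanning_tree()
blocked = LINKS - forwarding
print(f"{len(forwarding)} forwarding links, {len(blocked)} blocked of {len(LINKS)}")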

Note 2. The Number of Servers per VLAN Is Increasing Beyond Control

The need for server administrators to increase server resource pools is leading to an increased number of servers per VLAN, because live VM mobility must be done within an IP subnet, which requires large Layer 2 domains. This is because the IP address of the VM can’t change during a live migration, and the physical switch port configurations need to maintain VLAN identities and other policies for the VM after migration. In traditional network designs, one can only move a VM across a relatively small number of physical servers. Traditional network design guidelines from network vendors stipulate a few hundred (250 to 500) servers per VLAN domain, depending on the amount of application broadcast traffic. Many network designers have “flattened” their network by stretching their VLANs to increase the size of the physical server pool that can be used for live VM motion. This can work well for applications with more limited nonbroadcast/multicast traffic, but it risks unpredictable and unreliable network behavior.
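The subnet constraint is mechanical: a live-migrated VM keeps its IP address, so the destination host must sit in the same subnet (and broadcast domain), which is exactly why server pools grow by stretching VLANs. The short check below, with invented addresses, shows the test an administrator is implicitly bound by.

# Small illustration of the Note 2 constraint: a live-migrated VM keeps its IP
# address, so source and destination hosts must share a subnet. Addresses and
# prefix lengths are invented for illustration.
import ipaddress

SUBNET = ipaddress.ip_network("10.20.0.0/22")   # one stretched VLAN, roughly 1,000 hosts

vm_ip = ipaddress.ip_address("10.20.1.37")
candidate_hosts = {
    "esx-rack1-07": ipaddress.ip_address("10.20.2.14"),   # same subnet: allowed
    "esx-rack9-02": ipaddress.ip_address("10.30.0.5"),    # different subnet: blocked
}

for host, addr in candidate_hosts.items():
    ok = addr in SUBNET and vm_ip in SUBNET
    print(f"live migration to {host}: {'possible' if ok else 'requires re-addressing'}")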

Network architects have, for many years, been working around data center network issues by placing Layer 3 routing as an overlay on the data center network. Routing is thus used instead of STP, and IP subnets are used to subdivide bloated VLANs to contain broadcast traffic. However, because VM mobility traffic can’t be routed, at least part of the network must remain Layer 2.

Note 3. Network Configuration Prevents Network Agility

Network configuration and provisioning are still difficult, labor-intensive and time-consuming activities that prevent the network from supporting the agility needed from applications and servers. Historically, all network devices from all vendors have been configured via CLIs. This is not designed for dynamic and programmable implementations but for static configuration of low-level parameters per port and per switch. Basic forwarding has been enhanced over the years with a significant degree of control features that not only need to be configured within each switch individually, but also designed and consistently configured across all switches to ensure end-to-end forwarding controls for functions such as IP address, subnet, VLAN, access control list (ACL), and quality of service (QoS). Both the design and the configuration are difficult, time-consuming and fault-prone.
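The consistency burden becomes visible when you diff what each switch actually carries against the intended end-to-end state; NCCM tools perform a much richer version of this audit. The inventory data in the sketch below is invented for illustration.

# Sketch of the consistency problem in Note 3: the same VLANs must exist on
# every switch in the forwarding path, so drift on any one device silently
# breaks end-to-end connectivity. Inventory data is invented.
INTENDED_VLANS = {100, 210, 300}

deployed = {   # VLANs actually configured per switch (hypothetical audit data)
    "tor-sw01": {100, 210, 300},
    "tor-sw02": {100, 300},            # VLAN 210 missing: classic drift
    "agg-sw01": {100, 210, 300, 999},  # stale VLAN left behind
}

for switch, vlans in deployed.items():
    missing, extra = INTENDED_VLANS - vlans, vlans - INTENDED_VLANS
    if missing or extra:
        print(f"{switch}: missing {sorted(missing)}, unexpected {sorted(extra)}")
    else:
        print(f"{switch}: consistent")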

While there are network configuration systems available to assist in this process, these systems do not remove complexity in the actual design phase, and they do not easily integrate with cloud orchestration systems. They may guide the network designer through changes to network devices, automatically generate templates, and execute the resulting configuration. However, every network change or reconfiguration still requires each CLI template to be manually reworked. While network configuration systems have improved the ratio of full-time equivalents to managed network devices by a factor of two to as much as 10, the time required to configure the network is still measured in hours or days. Network configuration and change management (NCCM) tools are still at an adolescent maturity stage, adopted by less than 20% of enterprises.

Source: Gartner Research, G00251641, T. Applebaum, Bjarne Munch, 7 June 2013


Planning a Private Enterprise Cloud? is published by Kemp Technologies. Editorial content supplied by Kemp Technologies is independent of Gartner analysis. All Gartner research is used with Gartner’s permission, and was originally published as part of Gartner’s syndicated research service available to all entitled Gartner clients. © 2013 Gartner, Inc. and/or its affiliates. All rights reserved. The use of Gartner research in this publication does not indicate Gartner’s endorsement of Kemp Technologies’s products and/or strategies. Reproduction or distribution of this publication in any form without Gartner’s prior written permission is forbidden. The information contained herein has been obtained from sources believed to be reliable. Gartner disclaims all warranties as to the accuracy, completeness or adequacy of such information. The opinions expressed herein are subject to change without notice. Although Gartner research may include a discussion of related legal issues, Gartner does not provide legal advice or services and its research should not be construed or used as such. Gartner is a public company, and its shareholders may include firms and funds that have financial interests in entities covered in Gartner research. Gartner’s Board of Directors may include senior managers of these firms or funds. Gartner research is produced independently by its research organization without input or influence from these firms, funds or their managers. For further information on the independence and integrity of Gartner research, see “Guiding Principles on Independence and Objectivity” on its website, http://www.gartner.com/technology/about/ombudsman/omb_guide2.jsp.

About Kemp Technologies

Since 2000, KEMP Technologies has been a leader in the application delivery controller and load balancer market; its solutions are used by thousands of enterprises that consider IT, e-commerce, web and business applications mission-critical to their long-term success. KEMP helps enterprises rapidly grow their business by providing 24/7 infrastructure availability, scalability, better web performance and secure operations.

KEMP’s LoadMaster products include Layer 4-7 load balancing, content switching, session persistence, SSL offload/acceleration and web front-end capabilities (caching, compression, intrusion prevention system), plus one full year of product support, delivering industry-leading value and deployment flexibility in the ADC market.

Contact KEMP
U.S. & North America HQ: +1 631-345-5292
European HQ: +353 61-260-101
KEMP Technologies APAC: +65 62222429
kemptechnologies.com