A White Paper from the Experts In Business-Critical Continuity™.

Comparing Servers The role of communications servers and enterprise servers in telecom network architectures

John Fryer, Director of Technology Marketing

Rob Pettigrew, Manager, Technical Field Operations


Table of Contents

Introduction

Communications Servers and Enterprise Servers

Blades and Rack-based Servers

Server Requirements

Environmental and Regulatory

Availability and Serviceability

Feature, Performance and Capacity

Communications Network Lifecycle

Conclusions


What are the most appropriate roles for communications servers and enterprise servers in creating next generation networks? The answer depends on multiple factors including price, reliability, implications of outages and expected growth. This paper discusses the differences between these two classes of computing platforms to help define the suitability of each for particular network functions.

Introduction

The competitive nature of the communications marketplace is forcing network equipment providers (NEPs) to re-examine how they architect and deploy new services, and the equipment that enables those services, on behalf of their customers, the network service providers. The challenge is to deploy new services quickly and efficiently while maintaining the level of service that customers expect and that may be dictated by regulation.

To address this challenge, many NEPs have adopted the concept of a common platform. This involves building a single platform that can be used to deploy many applications. To meet the requirements of platform flexibility, convergence of communications and computing, and rapid application deployment, network service providers are increasingly demanding—and NEPs are responding with—commercial off-the-shelf (COTS) platforms.

Communications Servers and Enterprise Servers

There are two types of platforms available today—communications servers and enterprise servers.

Communications servers, based on open industry standards (such as AdvancedTCA®, MicroTCA™, Carrier Grade Linux and the Service Availability Forum™ high availability specifications), operate as a carrier-grade common platform for a wide range of communications applications and allow for value-add at many levels of the system architecture.

Enterprise servers have traditionally been used to provide communications back-office and management functionality, such as billing, repair and data centers. These are functions where the storage strengths of enterprise servers can be effectively used, availability requirements are not as stringent, and service availability is generally not affected by system outages.

Blades and Rack-based Servers

Communications servers are, by definition and by the requirements of the applications they run, blade-based. Enterprise servers are available in either a blade-based or a rack-based architecture, the latter sometimes referred to as “pizza boxes”.

Using a blade-based architecture could offer double or triple the compute density within a rack versus a “pizza box” approach. Blade servers have an integrated system management backplane that all blades plug into, and internalized switches to outside networks and storage, all of which cut down substantially on wiring. This saves money on system administration and real estate.

By having an integrated backplane, the blade server chassis also allows something not available with rack-based servers: account control, since there is still no industry-managed standard form factor for commercial or enterprise blade servers.

Pseudo-standards, which seek to make a particular enterprise server’s bladed-architecture chassis the standard to which other vendors have to adhere, do not have the power to level the playing field and deliver economies of scale through wide industry participation.

Server Requirements

As Figure 1 illustrates, in some areas the distinction between the roles of communications servers and enterprise servers is clear. However, in areas such as the higher layer functions of the control plane and services plane, there is confusion about which platform is most suitable. The answer depends on multiple factors, including price, network reliability, implications of network outages and expected network growth.

A fundamental question that is facing the communications industry today is this: “What are the most appropriate roles for communications servers and enterprise servers in creating the next generation of networks and their rich multimedia content?”

The communications server is primarily designed to meet the requirements of core network computing platforms. These clearly contrast with the requirements of the enterprise server, which is optimized for the needs of the commercial data center. These requirements can be divided into the following categories:

• Environmental and regulatory

• Availability and serviceability

• Feature, performance and capacity

ENVIRONMENTAL AND REGULATORY

The applications typically deployed on communications servers must meet strict standards such as those imposed by the Network Equipment Building System (NEBS) or European Telecommunications Standards Institute (ETSI). These cover requirements such as emissions, earthquake survivability, extended temperature operation and acoustic levels.

These standards significantly exceed those imposed by the standards bodies that govern the commercial computing world. In fact, in order to deploy some classes of commercial equipment such as routers, special exemptions to these standards must be negotiated.

Service outages in the core network are not tolerated. Equipment is designed to operate throughout a power failure and to offer superior earthquake and fire survivability. Power is provided via redundant 48V battery supplies, rather than AC power, requiring all equipment to be able to run off these DC power sources. Although central offices are usually air conditioned, this air conditioning will fail in the event of a power outage. Therefore, equipment must be capable of withstanding temperatures as high as 55 degrees Celsius (131 degrees Fahrenheit) for limited periods of time.

By contrast, enterprise servers are designed to be deployed in the enterprise data center. This is typically the computing room in an office building, which is well air conditioned and supplied with AC power. The survivability of this equipment in the event of an earthquake or cooling failure is not governed by government regulation, but rather by the needs of the enterprise. These needs can often be met by dispersing servers across multiple locations.

Commercial server vendors often address the communications server market by modifying the design of their enterprise servers to produce telecom variants. However, unless a system is designed from the ground up to meet these requirements, they are very difficult, if not impossible, to achieve. This is analogous to the car manufacturer who designs a convertible by cutting the roof off a coupe. It may look good, but it lacks the structural rigidity of a true convertible design.

AVAILABILITY AND SERVICEABILITY

The applications that are deployed on communications servers typically must meet very strict availability metrics—essentially they must never fail. This is achieved by a process of very thorough availability engineering that considers every fault that could possibly occur, how that fault could be detected, and a procedure to isolate and recover from the fault without loss of service.

In addition, once a fault has been isolated, the system must allow failed components to be replaced while the system is in service. Availability budgets do not allow for downtime for scheduled maintenance or software upgrades. The entire system must be designed for:

• In-service upgrades

• Testing updated systems and software before they are brought into service

• Recovery from failed upgrades or maintenance

This conflicts with the enterprise computing world, where scheduled downtime is quite normal. How often do we receive an e-mail from the IT department telling us that a service will be unavailable for an extended period of time—often an entire weekend? How would we react if our cell phone carrier tried the same approach? Not only would they lose customers and revenue, they would also incur the wrath of the appropriate government regulatory agency, such as the U.S. Federal Communications Commission (FCC).

This is not to say that enterprise servers do not need to be highly available. They are typically designed to meet targets of approximately 3NINES, or 99.9%, availability. This equates to approximately nine hours of downtime per year, often excluding scheduled downtime such as operating system upgrades. These targets are often met on a best-effort basis.

Unlike the availability requirements of the commercial computing environment, the availability requirements of communications servers are not ambiguous—they are very strictly defined by Telcordia and TL 9000 metrics—and are as high as 6NINES, or 99.9999%. This equates to roughly 30 seconds of downtime per year, including scheduled downtime. If somebody tells you that these requirements can be met by cutting the roof off a coupe, ask them to commit to it in writing!
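
The arithmetic behind these figures is simple: the annual downtime budget is (1 − availability) multiplied by the number of seconds in a year. The short sketch below is purely illustrative (the constant and function names are ours, not the paper's) and reproduces the rounded figures quoted above.

```python
# Illustrative sketch: annual downtime budget implied by an availability target.
# Names and structure are our own; the paper only quotes the rounded results.

SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~31.6 million seconds


def downtime_per_year(availability: float) -> float:
    """Allowed downtime, in seconds per year, for a given availability fraction."""
    return (1.0 - availability) * SECONDS_PER_YEAR


for label, target in [("3NINES (99.9%)", 0.999), ("6NINES (99.9999%)", 0.999999)]:
    seconds = downtime_per_year(target)
    print(f"{label}: {seconds / 3600:.1f} hours = {seconds:.0f} seconds per year")

# 3NINES -> ~8.8 hours per year (the paper's "approximately nine hours")
# 6NINES -> ~32 seconds per year (the paper's "roughly 30 seconds")
```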

Communications servers must be designed from the inception of the design cycle to meet these very specific metrics. This includes thorough availability modeling and analysis very early in the system architecture definition process.

In addition, a Failure Mode, Effects and Criticality Analysis (FMECA) must be performed, which examines every potential system fault, determines how that fault can be identified, and assesses its effect on system operation if it occurs.

The system design must include a combination of redundant architectures, hardware sensors, a carrier-grade operating system and appropriate fault management software. Examples include:

• Full hardware redundancy, such that no single point of failure in the system architecture can bring the system down

• Hardware designed to optimize the Mean Time Between Failures (MTBF), a statistical measure of how long a component can be in service before a failure is expected

• Diagnostic software and fault tolerant drivers to identify faults as soon as they occur

• Fault management software to fail service over from failed to standby components and to trigger alarms informing field maintenance personnel that the system must be serviced

• Systems, procedures and documentation to minimize the Mean Time To Repair (MTTR), so that once a fault has been isolated and the system has recovered, the failed component can be replaced without taking the system out of service (the relationship between MTBF, MTTR and availability is sketched after this list)

• Hardware design features such as redundant boot banks of flash memory to accommodate in-service upgrades, fallback and recovery from corrupt images

• Point-to-point rather than bussed interconnects, which allow reliable communication and fault isolation
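
As noted in the MTTR item above, steady-state availability depends on both how often a component fails and how quickly service is restored: A = MTBF / (MTBF + MTTR). The sketch below only illustrates that relationship; the MTBF and MTTR figures in it are hypothetical and do not come from the paper.

```python
# Illustrative only: steady-state availability from MTBF and MTTR.
# All figures below are hypothetical examples, not values from the paper.

HOURS_PER_YEAR = 365.25 * 24


def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Classic steady-state availability: fraction of time the service is up."""
    return mtbf_hours / (mtbf_hours + mttr_hours)


# Same (hypothetical) MTBF, very different repair times:
for mttr in (8.0, 0.5):  # e.g. off-site repair vs. hot-swapping a redundant blade
    a = availability(mtbf_hours=50_000, mttr_hours=mttr)
    downtime_min = (1 - a) * HOURS_PER_YEAR * 60
    print(f"MTTR {mttr:>4} h -> availability {a:.6f}, ~{downtime_min:.1f} min down/year")
```

With identical hardware reliability, cutting the repair time from hours to minutes is what moves a design toward the stricter downtime budgets discussed above, which is why hot-swappable components, alarms and documented repair procedures appear alongside redundancy in the list.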

FEATURE, PERFORMANCE AND CAPACITY

This category encompasses the processing, storage and external interface requirements of the platform. It includes perhaps the most obvious differences between communications and enterprise servers. In a broad sense, the processing requirements of telecom applications can be divided into the following three categories:

• Server Class

• Control Plane

• Data (or Transport) Plane

Processing optimized for each class requires different architectures and fundamental technologies.

Enterprise servers, traditionally deployed in data center environments, are clearly suitable for server class processing or compute-intensive tasks. They are optimized for computing performance, typically featuring multi-way symmetric multiprocessing (SMP) architectures, large amounts of memory and sizeable external disk storage arrays. They are optimized for high throughput, often processing and transferring large amounts of data in each transaction.

Telecom network applications requiring this type of processing include authentication applications such as Home Location Register (HLR), Home Subscriber Server (HSS) and IPTV middleware.

Communications servers can also be configured to meet the needs of server class applications, although there are power and performance limitations resulting from the regulatory environment and the need for longevity—covered elsewhere in this paper.

Where communications servers excel is in being flexible enough to also address the control plane and data plane classes with the same core infrastructure. Adopting communications servers therefore means that the same platform can be configured for a wide range of applications, delivering the benefits of a common platform.

Control plane applications, such as signaling gateways and call control servers (e.g., x-CSCF), are typically characterized by transactional processing of small messages. The optimal architecture for such transactions is a large number of loosely coupled processing cores operating in parallel. Because there are a large number of messages, these applications are more sensitive to memory latency than to memory throughput. Memory latency is the time required to perform a single memory access. Enterprise servers are typically designed to optimize throughput over latency and do not typically feature a large number of processing cores.

Communications servers can also be configured to process the traffic that flows across a telecom network—referred to as data plane processing. This processing could include digitally encoding and decoding speech and video channels, transmitting both packet and circuit-switched data, and performing deep packet inspection processing. The processors traditionally used in enterprise servers are not optimized for this type of computing.

The performance of a processor in an enterprise server does not compare well, in terms of raw performance, throughput and power consumption, with specialist processors such as digital signal processors, network processors and security processors. In contrast, communications server payload blades are available with these specialized processors.

Finally, telecom networks typically have legacy interfaces such as T1/E1 and OC-3 that must be accommodated. Communications servers can deploy these interfaces using open standard mezzanine modules based on the PCI Mezzanine Card (PMC) or Advanced Mezzanine Card (AdvancedMC™) specifications.

Enterprise servers typically only deploy standard computing interfaces such as Ethernet. Adding telecom interfaces to an enterprise server requires using a standard PCI slot card, which typically will not meet telecom reliability or serviceability standards.

Since the AdvancedMC specification was optimized for the open standard form factors used in communications servers, AdvancedMC modules deployed in enterprise servers typically require a less than ideal carrier that restricts airflow or module accessibility.

Communications Network Lifecycle

A fundamental requirement for systems based on communications servers is that the product lifecycle must be much longer than that of commercial enterprise servers. This is driven by some very fundamental market and customer business factors.

Enterprise servers are primarily driven by performance and cost, and they therefore become obsolete quite rapidly. End users can often build a business case to upgrade their servers as often as every 18 months, and can afford the downtime required for this frequent maintenance.

Communications servers are primarily driven by availability and regulatory requirements. This is not to say that performance is not important. However, once a product is deployed, network service providers typically demand that it remain in service for as long as ten years.

During this interval, it must be possible to purchase new equipment and spares, as well as get the equipment repaired. Performance upgrades and new functionality must be added without removing the equipment from service. This requires the communications server vendor to have a different business model from that typically adopted by the enterprise server vendor.

The ability to achieve this longevity of supply and repair impacts all aspects of the communications server business, from system architecture and design to component selection, stocking, inventory and repair procedures. This is not the typical business model of an enterprise server vendor.

Product longevity is achieved by three means: shipping the same product for extended periods, designing new products with upward and backward compatibility, and architecting the product such that it can be upgraded in the field without loss of service.

Communications server advantages in compatibility and in-service upgradeability begin to become evident between 18 and 30 months of use, generally about the time systems move into initial service deployment.


The first major inflection point comes after around three years of use. This is generally the lifetime limit of enterprise technology, at which point continued deployments, maintenance or in-service upgrades (technology refreshes) require the complete replacement of enterprise equipment, often referred to as a forklift upgrade.

This is costly not only from an equipment (i.e., capital expenditure) perspective, but also from an operating expense viewpoint. Removing equipment from a central office, ensuring service is maintained, recabling, reconfiguring and testing are very time consuming and labor intensive.

Communications servers, because they are designed for communications networks, exhibit none of these costly characteristics. The long lifecycle design requires that seamless field upgradeability be supported and that new generations of technology be backwards compatible with existing deployments. This requires the network service providers and NEPs to work together to determine the best way to continue to expand a network, and enables technology refreshes to be introduced when appropriate for the network—not dictated by the availability of equipment. This enables a linear cost of expansion throughout the growth of a network service.

Figure 2 illustrates the different lifecycle economics that should be considered when comparing enterprise and communications servers. While the initial capital cost of the enterprise server may be lower, the cost of disruption and forklift upgrades required by its relatively short lifecycle results in a much higher total cost of ownership.

Over the 10-plus year life of network elements, enterprise server deployments will continue to have discontinuities and associated cost spikes every three years. Assuming enterprise technology becomes available at the point a new service is developed, this means three compulsory technology refreshes over a 10-year lifecycle as illustrated in Figure 3.
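
The cost behavior described here and sketched in Figures 2 and 3 can be illustrated with a toy model. The figures in the sketch below are entirely hypothetical (the paper quotes no cost data); it shows only the shape of the two curves: step-wise cost spikes at each three-year forklift refresh versus roughly linear, incremental expansion of a long-lived platform.

```python
# Toy lifecycle-cost comparison; every cost figure here is hypothetical and
# serves only to illustrate the "cost spike vs. linear growth" argument.

LIFECYCLE_YEARS = 10
REFRESH_INTERVAL = 3                    # forklift refresh cadence for enterprise gear
ENT_INITIAL, ENT_REFRESH = 1.0, 1.2     # refresh cost includes recabling/retest labor
COMM_INITIAL, COMM_ANNUAL = 1.5, 0.15   # higher entry cost, incremental blade additions

ent_total, comm_total = ENT_INITIAL, COMM_INITIAL
for year in range(1, LIFECYCLE_YEARS + 1):
    if year % REFRESH_INTERVAL == 0:    # years 3, 6 and 9: three compulsory refreshes
        ent_total += ENT_REFRESH
    comm_total += COMM_ANNUAL           # steady, in-service expansion
    print(f"year {year:2d}: enterprise {ent_total:.2f}  communications {comm_total:.2f}")
```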

In contrast, communications servers remain in service with the same shelf over the entire period of deployment, and new technology can be introduced seamlessly as appropriate. For example, the introduction of new payload blades, with significantly improved price/performance, can be qualified into and reside within the existing shelf with much less disruption and risk than a forklift upgrade of an enterprise server.

Conclusions

From a simple compute-centric application perspective, it is easy to believe that there is little difference between using a communications server and an enterprise server. Indeed, when compared with ATCA-based communications servers, enterprise servers may appear to be the cheaper solution. Where the application is primarily content-related or traditional back-office processing, such as billing or network management, this may indeed be true.

However, as equipment is deployed in many locations throughout a network and where high availability, long lifecycle deployment and platform flexibility are required, communications servers may be the more economic choice.

Communications servers are not only designed to meet the traditional lifecycle requirements of communications networks, but they have inherent flexibility derived from their open standards heritage, which enables the same platform to be deployed for many applications, and also offers the ability to combine multiple applications in a single server.

The result is that over the lifecycle of a deployed communications service, in virtually all applications, communications servers provide both the optimal technical solution and business investment.

Emerson Network Power. The global leader in enabling Business-Critical Continuity™. EmersonNetworkPower.com/EmbeddedComputing

Emerson is a registered trademark, Business-Critical Continuity, Emerson Climate Technologies and Emerson Network Power are trademarks and service marks of Emerson Electric Co. AdvancedTCA, ATCA, AdvancedMC, CompactPCI and MicroTCA are trademarks of PICMG. Service Availability Forum is a proprietary trademark used under license. All other trademarks are the property of their respective owners. ©2007 Emerson Electric Co.

COMPARINGSERVERS-WP1 01/08

About Emerson Network Power

Emerson Network Power, a business of Emerson (NYSE:EMR), is the global leader in enabling Business-Critical Continuity™. The company is the trusted source for adaptive and ultra-reliable solutions that enable and protect its customers’ business-critical technology infrastructures.

Through its Embedded Computing business, Emerson Network Power enables original equipment manufacturers (OEMs) and systems integrators to develop better products quickly, cost-effectively and with less risk. Our business was strengthened by the acquisition of Motorola's Embedded Communications Computing group, which has driven open standards and pioneered technologies based on them for more than 25 years.

This positions Emerson Network Power as the recognized leading provider of products and services based on open standards such as AdvancedTCA®, MicroTCA™, AdvancedMC™, CompactPCI®, Processor PMC, VMEbus and OpenSAF™. Our broad product portfolio, ranging from communications servers, application-ready platforms, blades and modules to enabling software and professional services, enables OEMs to focus on staying ahead of the competition.

Manufacturers of equipment for telecommunications, defense, aerospace, medical and industrial automation markets can trust Emerson’s proven track record of business stability and technology innovation. Working with Emerson helps them shift more of their development efforts to the deployment of new, value-add features and services that create competitive advantage and build market share.

Emerson’s commitment to open, standards-based solutions and our deep understanding of the embedded computing needs of OEMs provide the foundation for the market to look to us for leadership and innovation.

Regional Offices: Tempe, AZ U.S.A. +1 (800) 759 1107 or +1 (602) 438 5720; Shanghai, China +86 21 5292 5693; Paris, France +33 1 69 35 77 00; Tokyo, Japan +81 3 5424 3101; Munich, Germany +49 (0) 89 9608 2333; Hong Kong, China +852 2966 3210; and Tel Aviv, Israel +972 3 568 4387

AC Power Systems, Connectivity, DC Power Systems, Embedded Computing, Embedded Power, Integrated Cabinet Solutions, Outside Plant, Power Switching & Control, Precision Cooling, Services, Site Monitoring, Surge & Signal Protection