

metagroup.com • 800-945-META [6382]

March 2003

The Data Center of the Future

A META Group White Paper


208 Harbor Drive • Stamford, CT 06912-0061 • (203) 973-6700 • Fax (203) 359-8066 • metagroup.com
Copyright © 2003 META Group, Inc. All rights reserved.


Contents

Executive Summary
Introduction
    Four Evolutionary Forces
Commoditizing the Data Center of the Future
Virtualizing the Data Center of the Future
    Scale-Out Design
    Miniaturization: From Refrigerators to Pizza Boxes and Beyond
    Grid Computing
Integrating the Data Center of the Future
    Web Integration
    From Client/Server Architecture to Service-Oriented Architecture
    Stateless Integration Enables Scale-Out Applications
    Storage and Server Integration
Innovating the Data Center of the Future
    Virtualizing Provisioning and Change Management
    Virtualizing the Relationships Among Systems
    Standardization Versus Innovation
    Sustainable Rate of Innovation: The Virtuous Lock-In?
Conclusion


The Data Center of the Future
Integration & Development Strategies
Nick Gall

Executive Summary

The data center of the future will be shaped by four forces: commoditization, virtualization, integration, and innovation. We are witnessing a long-term shift in scope from assembling and configuring standard client/server systems out of standard components based on standard designs, to assembling and configuring standard data centers out of virtual components based on standard designs. As a result, automated, virtualized operational services (e.g., design, assembly, provisioning, monitoring, change management) become essential for managing the complex dynamic relationships among the components.

Introduction

This White Paper discusses the complexity challenges currently facing large-scale data centers and how the Internet, the Web, and especially Web services are not only creating those challenges but also offering the solution to them during the next 5-10 years. The central focus of this White Paper is to describe the evolution of the data center of the future (DCFuture), the forces driving that evolution, and its impact on products and markets.

Four Evolutionary Forces

If we look inside any large-scale, contemporary, corporate data center, there is no question that the systems inhabiting it are very different from those of 20, 10, or even five years ago. Certainly, many of the product and vendor logos adorning the hardware are different. But the changing fortunes of products and vendors are merely a symptom of more fundamental long-term changes. Four fundamental economic forces have shaped the data center up until now and will continue to shape the systems inhabiting the data center of the future:1

• Commoditization2: The standardization and specialization of function, with a progressive shift in focus toward quality, complementary products and services, and price.
• Virtualization: Systems are composed of increasing numbers of functionally identical subsystems of decreasing size and complexity to increase utilization.
• Integration: The network-centric interconnectedness of the systems is far greater — the data center becomes the network.
• Innovation: The rate of change of the integrated systems and their interconnections is accelerating — from network topology changes, to software configuration changes, to application integration changes.

Simply put, during the next five years, the DCFuture will contain ever larger numbers of ever smaller components offered by diverse vendors, but based on common standards. Although the complexity of the basic building blocks will decrease, the complexity of integrating them into services and solutions grows. This will be especially true of software components, which are innovating much more quickly than hardware components. Thus, the forces of commoditization and virtualization are driving down data center costs: it is cheaper to produce software than hardware, cheaper to produce mass quantities of standard systems than a few custom systems, and cheaper to build small things than big things. However, the forces of integration and innovation are driving up data center costs, due to the increased complexity of managing more interdependent integrated systems that are changing at an ever-increasing rate. This complexity is typically dealt with by people — the most flexible, but therefore most expensive, resource for managing complex change.

1 However, it is important to emphasize that these four economic forces apply not only to technology, but also to business, because the dichotomy between business and technology is a false one: each business is itself a set of technologies — a set of technologies for organizing resources for the production of products and services. Such technologies range from tangible manufacturing technologies to intangible (but patentable) business processes, such as credit scoring, risk analysis, and financial derivatives. More fundamentally, the organization of the economy into firms and markets is itself a technology for producing and consuming goods and services. In other words, it's technology all the way up, and business all the way down.
2 Technically, "commoditize" is not a word. Economists use the term "commodify" — to turn (as an intrinsic value or a work of art) into a commodity [Merriam-Webster Dictionary Online]. However, the IT industry uses "commoditize," so this paper shall do so as well.


Commoditizing the Data Center of the Future

It is unfortunate that the term "commodity" has a negative connotation in the minds of some, for it is the evolution of handcrafted products into commodities that signals the use of advanced production technology and the existence of liquid markets — in short, economic progress. Commoditization depends on a vast underlying network infrastructure of buyers, distributors, suppliers, and assemblers. Wherever you see a commodity, you'll see a complex network underneath it — producing it, distributing it, supporting it, and leveraging it. The process of commoditization in a maturing market gradually shifts the focus of competition from functionality to quality, to complementary services, and ultimately to price. Commoditization can be broken down into two complementary forces: standardization and specialization.

Twenty years ago, in the mainframe/minicomputer era, users were essentially required to buy their servers, terminals, storage, and networks from one vendor. Today, users have the flexibility to mix and match networks from one vendor with servers from another, storage from a third, and clients from yet another. All this is due to the emergence of standards and the increasing use of networks among the various tiers.

While the shift from the mainframe to the client/server era during the past 20 years certainly changed many aspects of the data center, it did not fundamentally change the proprietary nature of data center resources. Each of the major systems vendors sold a fairly proprietary vertical stack of software, hardware, storage, and even network components. Only in the last decade have we seen these stacks broken up into distinct standards-based horizontal layers and markets, each with different leading vendors. That evolution continues to accelerate rapidly.

Hence, in the DCFuture, systems' interfaces will be far more standardized — TCP/IP, SCSI, IA-32 (Intel's 32-bit architecture), J2EE (Java 2 Enterprise Edition), Windows .Net, Linux. Yet each system will be far more specialized — app server, integration server, database server, directory server, firewall, load balancer, Web server, portal, and desktop. Although built on common platforms, they are specialized primarily by their software, not their hardware.

In the DCFuture, we are likely to see different rates of standardization within each tier of an n-tier architecture (roughly presentation, application, and data). Although the data tier will likely retain a diverse mix of OS/CPU combinations (e.g., Windows/IA-32, Solaris/SPARC, AIX/PowerPC, HP-UX/PA-RISC, z/OS/zSeries), the presentation and application tiers will be predominantly Windows or Linux on IA-32.
As discussed below, we expect the footprint of IA-32 to continue its expansion from the presentation tier up through the data tier during the next five years. This will be part of a gradual shift from tiers being distinguished by different hardware (e.g., a Windows/IA-32 presentation tier and HP-UX/PA-RISC data tier) to tiers being distinguished by different numbers of standard hardware configurations with different software (e.g., four 1-way IA-32 servers running Linux and Apache in the presentation tier and two 4-way IA-32 servers running Linux and Oracle).
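
To make that shift concrete, the short sketch below expresses such a tier definition as data: each tier becomes a count of a standard hardware configuration plus a software stack, rather than a distinct hardware platform. It is purely illustrative; the configuration names and structure are our own invention, not a schema from any vendor, and the counts and software are taken from the example above.

```python
# Hypothetical illustration: tiers defined as counts of standard IA-32
# configurations plus software stacks, per the example in the text.
# All names and structures here are illustrative, not a vendor schema.

STANDARD_CONFIGS = {
    "ia32-1way": {"cpus": 1, "arch": "IA-32"},
    "ia32-4way": {"cpus": 4, "arch": "IA-32"},
}

TIERS = [
    {"tier": "presentation", "config": "ia32-1way", "count": 4,
     "software": ["Linux", "Apache"]},
    {"tier": "data", "config": "ia32-4way", "count": 2,
     "software": ["Linux", "Oracle"]},
]

def describe(tiers):
    """Print each tier as 'N x <n>-way IA-32 running <software stack>'."""
    for t in tiers:
        cpus = STANDARD_CONFIGS[t["config"]]["cpus"]
        stack = " + ".join(t["software"])
        print(f"{t['tier']}: {t['count']} x {cpus}-way IA-32 running {stack}")

if __name__ == "__main__":
    describe(TIERS)
```

The hardware column is now uniform; only the counts and the software differ from tier to tier, which is the essence of the shift described above.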


Network: The data network layer is clearly standardized on TCP/IP. The DCFuture will extend standardization of the network stack up through the middleware layers via Web services, as discussed below.

Storage: In storage, the trend is toward storage-area networks (SANs), clearly standardized on SCSI as the interface between servers and SANs. In the DCFuture, SAN technology will begin to converge significantly with IP-network technology by 2004 and will have substantially converged via the iSCSI standard (the SCSI protocol over IP instead of Fibre Channel) by 2007. However, through 2005, 2Gb Fibre Channel (FC) will remain the dominant SAN topology for Global 2000 data centers. Beginning in 2004, emerging 10Gb Ethernet, iSCSI, and InfiniBand-based I/O subsystems will interconnect and coexist with FC in SAN backbones.

Server Hardware: After many years of diverging standards, the server market is standardizing on IA-32 running Linux and Windows. IA-32 will dominate the DCFuture by 2007.

Server OS: With distributed n-tier server hardware standardizing on IA-32, proprietary Unix (Solaris, HP-UX, AIX) will recede, joining z/OS and OS/400 in high-end, low-unit-volume, legacy-platform status by 2005/06, displaced by commodity OSs designed for commodity hardware: Windows and Linux. By 2007, z/OS software costs (currently 2x-5x those of high-end Unix and about 10x those of Windows) and 15%-20% annual hardware price/performance improvement will slow the annual net capacity growth of zSeries to 10% (versus 55% for Windows, 75%+ for Linux, and 30% for proprietary Unix). Windows data center capacity will surpass proprietary Unix by 2007; Linux data center capacity will surpass proprietary Unix a few years later (a rough compound-growth sketch follows the Middleware discussion below). By 2012, the capacity composition of enterprise data centers will be dominated by Windows (45%), followed by Linux (35%), proprietary Unix (15%), and legacy z/OS (a minor 5%).

Linux: Linux will rapidly mature and continue gaining momentum as an ISV reference platform, moving beyond high-volume Web, technical computing, and appliance server environments into mainstream application and DBMS server roles by 2004/05. Linux server growth will initially come at the expense of Unix (2003/04), but Linux will eventually vie for dominance with Windows (2006/07). Although estimates vary, we believe Linux currently comprises roughly 15%-20% of new server OS shipments. Our research indicates that Linux penetration will double by YE03, though small and medium businesses (rather than large data center environments) will lead the charge. By 2006/07, we project that Linux on IA-32 will increase to about 40%-45% of new OS shipments (approximately 25%-30% CAGR). However, Windows will still enjoy a dominant market position with an estimated 50%-55% share. The remaining 5% will be composed of "legacy" Unix (e.g., Solaris, AIX, HP-UX) and other "heritage" OSs (e.g., NetWare). Nonetheless, we expect IA-32-based servers to account for 95%+ of all new servers, up from 85%+ currently.

Middleware: Going forward, middleware will begin to standardize around Web services interoperability standards, as well as the J2EE and .Net implementation standards. Web services standards and their inherent network-centric architecture will enable the refactoring of today's single-vendor monolithic applications into standards-based application services sourced from diverse vendors. J2EE and .Net application servers will be key business application implementation platforms through 2007. In addition, most new business applications (whether bought or built) will be assembled from components based on or integrated with J2EE or .Net application servers.
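
The capacity crossovers projected in the Server OS discussion above follow from simple compound growth. The sketch below applies the annual growth rates cited there (55% for Windows, 75% for Linux, 30% for proprietary Unix) to assumed 2003 starting capacities; the starting figures are illustrative assumptions chosen only to show how the faster growth rates produce a crossover within a few years, not data from this paper.

```python
# Illustrative compound-growth sketch. The growth rates come from the Server OS
# discussion above; the 2003 starting capacities are assumptions chosen only
# to illustrate the crossover mechanics.

GROWTH = {"Windows": 0.55, "Linux": 0.75, "Proprietary Unix": 0.30}
CAPACITY_2003 = {"Windows": 50.0, "Linux": 20.0, "Proprietary Unix": 100.0}  # arbitrary units

def crossover_year(platform, baseline="Proprietary Unix", start_year=2003, horizon=12):
    """Return the first year the platform's capacity exceeds the baseline's."""
    capacity = dict(CAPACITY_2003)
    for year in range(start_year + 1, start_year + horizon + 1):
        for name, rate in GROWTH.items():
            capacity[name] *= 1.0 + rate
        if capacity[platform] > capacity[baseline]:
            return year
    return None

if __name__ == "__main__":
    for platform in ("Windows", "Linux"):
        print(f"{platform} overtakes proprietary Unix around {crossover_year(platform)}")
```

With these assumed starting points, Windows overtakes proprietary Unix around 2007 and Linux a couple of years later, which is the shape of the projection described above.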


Open Source: Just as the evolution of software portability has virtualized and commoditized hardware during the past 20 years, the evolution of XML Web services network interoperability will virtualize and commoditize software during the next 20 years. This is one of the reasons that open source software, such as the Apache Web server, the Linux OS, and the JBoss application server, is so successful. Such software implements Web standards in ways that are functionally identical to commercial implementations, with non-functional capabilities (e.g., scalability, availability, manageability) that are on par as well. Open source software will have a substantial presence (25% of infrastructure software capacity) in the DCFuture by 2007.

Bottom Line on Commoditization: The DCFuture will be populated by storage, server, and network elements from diverse vendors, defined by increasingly standardized interoperable interfaces. This benefits the end user due to the competitive landscape of hardware and software vendors.

Virtualizing the Data Center of the Future

Another unmistakable DCFuture architectural trend is systems composed of increasing numbers of functionally identical subsystems of decreasing size. Two obvious examples are RAID (redundant arrays of inexpensive disks) and Web server farms. The design principle underlying these diverse examples is to build a highly reliable and highly scalable virtual system out of large numbers of moderately reliable and moderately scalable subsystems. Virtualization3 can be broken down into two complementary forces:

• Miniaturization: The system's size and complexity, relative to its performance, decreases over time — whether at the chip, disk, or port level, the board level, or even the server level.
• Massification4: The number of subsystems composing a system's instances grows over time — thousands of processors, servers, spindles.

For example, Moore's Law is a combination of the miniaturization and massification forces of virtualization applied to chips: the size of a transistor shrinks by half every 18 months (miniaturization), which enables double the number of transistors to be put on a chip (massification), which in turn roughly doubles the chip's scalability (virtualization).

Virtualization is the answer to the question on the mind of every CIO: What is the right approach to simplifying data center complexity to cut costs? One of the primary benefits of virtualization is that it can increase utilization beyond the traditionally low utilization rates (<25%) for Unix and Windows servers. By more dynamically distributing workloads across server, storage, and communications resources, utilization rates can begin to increase to 50%-75%, slowing the need to buy additional resources (a rough arithmetic sketch appears at the end of this subsection). However, the benefits of virtualization go far beyond increased utilization. Other benefits include:

• Increased performance
• Increased availability (rolling upgrades and shared/spared resource pools)
• Increased innovation
• Increased commoditization of underlying resources
• Increased commoditization of management skills

In short, virtualization is the key to providing utility- or carrier-grade IT services as those capabilities become mainstream.

3 Generally speaking, to virtualize a set of diverse concrete resources is to access them through a single uniform interface that, from the users' perspective, enables them to behave as one unified resource that can be shared with varying degrees of dynamic behavior. In effect, virtualization creates a single-system illusion.
4 Massification refers to the combination of multiplexing and multiprocessing: multiplexing of ever larger numbers of users (whether those users are people or systems) onto ever larger numbers of uniform processors (whether those processors are transistors, CPUs, or disk drives). It is derived from the use of the term in the context of mass production.
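
As a rough illustration of the utilization arithmetic behind these claims, the sketch below compares a dedicated-server model with a pooled (virtualized) model. All workload figures, including the peak-overlap factor, are assumptions made for illustration only.

```python
# Rough utilization arithmetic; all workload figures are assumed for illustration.
import math

N_WORKLOADS = 8
AVG_DEMAND = 0.25   # average demand per workload, as a fraction of one server
PEAK_DEMAND = 0.5   # peak demand per workload
OVERLAP = 0.7       # assumed fraction of individual peaks that coincide

total_avg = N_WORKLOADS * AVG_DEMAND

# Dedicated model: one server per workload, each sized for that workload's own peak.
dedicated_servers = N_WORKLOADS
dedicated_util = total_avg / dedicated_servers

# Pooled model: shared servers sized for the combined peak, which is smaller
# than the sum of individual peaks because peaks rarely all coincide.
combined_peak = OVERLAP * N_WORKLOADS * PEAK_DEMAND
pooled_servers = math.ceil(combined_peak)      # each server has capacity 1.0
pooled_util = total_avg / pooled_servers

print(f"dedicated: {dedicated_servers} servers, {dedicated_util:.0%} average utilization")
print(f"pooled:    {pooled_servers} servers, {pooled_util:.0%} average utilization")
```

Under these assumptions, eight dedicated servers sit at about 25% average utilization, while three pooled servers carry the same work at roughly two-thirds utilization, which is the general effect described above.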


Scale-Out Design

However, while virtualization is fairly mature at the network layer (e.g., VLANs — virtual LANs) and storage virtualization is well underway, the foundations of server virtualization are just beginning to be laid. In fact, server virtualization refers to different technologies in different tiers. In the presentation tier, it typically refers to farms of Web servers virtualized by IP load-balancer technology. In the application tier, it typically refers to OS virtualization technology such as VMware and Microsoft's recently acquired Connectix. In the data tier, it typically refers to distributed database technology, such as Oracle9i Real Application Clusters.

What unifies all these examples is the use of high-volume commodity servers (1-way, 2- to 4-way) in an IP-network-based scale-out5 software design composed of an unlimited number of loosely coupled servers. This is in contrast to the conventional wisdom of the past 50 years — symmetric multiprocessing (SMP) — which uses a synchronous-bus-based scale-up hardware design composed of a fixed number of tightly coupled processors. Although the complexity of SMP is no longer a challenge up to four ways, and in fact has rapidly commoditized in the IA-32 server market, the complexity and, therefore, cost of large-scale SMP designs (16-, 24-, 32-, or 64-way) make them inappropriate for partitioning into many smaller systems. The procurement costs for a 32-way server used as 16 2-way partitions are often 2x-3x more than those for 16 2-way servers.

Despite this economic and architectural reality, many IT organizations naively focus on reducing the number of physically distinct servers to reduce operational costs. The problem with this approach is that total operational costs are driven primarily by the number of logically distinct servers. Thus, over the long run, the daily operational costs of N servers, each with M CPUs, are roughly equal to the costs of one NxM server divided into N partitions, each with M CPUs. Hence, the operational cost savings are not enough to justify the much greater price of the large scale-up SMP server. Consequently, IT organizations should transform their data center consolidation initiatives, which mistakenly attempt to reduce complexity by consolidating onto a few expensive resources, into data center unification initiatives, which reduce complexity by standardizing and virtualizing commodity resources.

Miniaturization: From Refrigerators to Pizza Boxes and Beyond

While most of the attention is on the amazing price/performance improvements made possible by the continual miniaturization of the chip, derivative forms of miniaturization are also shaping the DCFuture. For example, the past several years have witnessed the shrinking of the server form factor from standalone/pedestal, to rack-mounted, to the current 1U "pizza boxes," to the emerging blade form factor. Thus, the DCFuture will evolve toward racks upon racks of processor, storage, and communications modules, all with roughly the same form factor, power, interconnect, and environmental designs, so that it is difficult to tell which is doing what. This uniformity will enable both greater density (saving precious data center floor space) and more balanced power and cooling management, simplifying the increasingly complex rack-space arrangements required by diverse components.

Grid Computing

The overarching virtualization trend that will significantly shape the DCFuture, albeit a bit further out (2005-10), is grid computing. Basically, grid computing is the application of concepts developed over the past decade in technical/scientific supercomputing for scaling out workloads (across hundreds or thousands of servers and storage units, sometimes across geographical distances) to commercial computing. In other words, given that we can simulate the weather on a grid, how can we run SAP on it?

5 The more common terms are scale out or horizontal scaling, in contrast to scale up or vertical scaling. However, up and out do not usefully distinguish the two designs: an unlimited number of asynchronous processors versus a fixed number of synchronous processors. Scale free and fixed scale clearly highlight this essential difference. Scale free also emphasizes that such designs share a robust design principle common to the Internet, the Web, ecosystems, biological systems, etc. See Linked: The New Science of Networks.


While grid is a longer-term technology, it is an absolutely essential one, because it is the only approach that offers some hope of simplifying the rapidly growing complexity and inefficiency of managing large numbers of small servers dedicated to specific applications. The current practice of throwing increasing numbers of underutilized processors and storage at applications cannot continue indefinitely. The emerging standard in commercial grid computing is OGSA (Open Grid Services Architecture), the next-generation, Web-services-based version of the Globus Toolkit, a de facto standard in the technical grid computing community. It is important to note that, while the industry will move this way over the long term, IT organizations should take care to separate hype from operational reality.

Bottom Line on Virtualization: Resource architectures for storage, computation, and communication will trend toward dynamically adjustable, IP-network-based, scale-out software designs.

Integrating the Data Center of the Future

Although the forces of commoditization and virtualization will dramatically drive down the costs of the individual components inhabiting the DCFuture, the explosion in the number of components in the largest data centers from thousands (2003-05) to tens of thousands (2005+) could drive the coordination costs of integrating so many components so high that they completely overwhelm such cost savings. Fundamentally new approaches to integrating such components must be found. The various approaches to loose coupling within and among the various domains (middleware, storage, servers) are unified by one relentless trend — integration via asynchronous, message-based, routable, hot-pluggable, standard networks. The DCFuture is in transition from a proprietary-platform-centric world to a standard-network-centric world. Clearly, traditional approaches to integrating systems — parallel buses, server partitioning, clustering, distributed objects, TP monitors — will not suffice. The only approach proven to successfully integrate millions of servers on a global scale is the worldwide Web.

Web Integration

Although it has been 10 years since the dawn of the Web era in 1993, the transformation of the DCFuture is still accelerating due to major extensions to the Web, such as XML Web services. The fact that virtually all new applications are intended to be accessed via the Web and Web services is causing a fundamental redesign of the entire IT stack, from applications to the data center, during the next 10-20 years. This new network-centric and service-centric Web era stands alongside the mainframe and client/server eras as a major shift in the role of the data center.

From Client/Server Architecture to Service-Oriented Architecture

This fundamental shift is exemplified by XML Web services standards, which are transforming and standardizing software architecture at every layer above the IP stack. From grids, to J2EE, to .Net, to portals, to EAI, Web services standards with cryptic names (e.g., WSDL, SOAP, UDDI, OGSA, BPEL4WS, WS-RP)6 are redefining these software technologies around ubiquitous, interoperable standards.

6 WSDL (Web Services Description Language), SOAP (originally Simple Object Access Protocol; the expansion has since been dropped), UDDI (Universal Description, Discovery and Integration), BPEL4WS (Business Process Execution Language for Web Services), WS-RP (Web Services for Remote Portlets).


The underlying paradigm shift that is important from a data center perspective is that XML Web services make network-based identifiers (e.g., URLs), formats (e.g., HTML, XML), and protocols (e.g., HTTP, SOAP) — collectively referred to as IFaPs — the sole focus of its service-oriented architecture.7 To invoke a service, one needs to know everything about the service's behavior over the wire as defined by its IFaPs, and absolutely nothing about the software underlying such behavior (a minimal invocation sketch appears at the end of this section). This is a complete inversion of the client/server paradigm. Client/server attempted to virtualize diverse networks by requiring a standard software model on every client and every server (e.g., DCE, CORBA, COM, J2EE) — and failed. The Web is virtualizing these diverse software models by requiring a standard IFaP model across the network — and succeeding. No one knows, and few care, what software a client or server is running.

Stateless Integration Enables Scale-Out Applications

Not only has the Web transcended the limitations of the client/server paradigm, but it has also accelerated the shift toward scale-out applications due to what is referred to as the "stateless design of the Web."8 It is this stateless design that has enabled Web sites to be deployed on racks filled with high-volume servers instead of a mainframe filled with low-volume processors. In the DCFuture, the Web's stateless design practices will be standardized (and commoditized) by evolving Web services standards such as BPEL4WS, WS-Coordination, WS-Transaction, WS-ReliableMessaging, and their successors. The DCFuture becomes more of a network center as more of the scaled-up servers become scaled-out networks of commodity servers — just as vertically integrated firms are slowly replaced by horizontally integrated markets. Put another way, as the thickness of the network increases, the size of servers decreases.

Storage and Server Integration

While the impact of Web standards on integration will be felt primarily at the application and middleware layers, other integration trends at the hardware and storage layers are moving in the same scale-out, network-centric direction. There is no question that the storage domain has been transformed by the fundamental shift from parallel-bus-based, direct-attached storage, to FC-based SCSI-networked storage, and eventually to IP-based SCSI-networked storage (iSCSI IFaPs), as discussed above. The server hardware itself is the last bastion of the traditional synchronous parallel hardware bus. Although the specific standards have yet to emerge, during the next five years the clear trend is away from customized parallel synchronous buses toward standardized asynchronous serial networks, and away from point-to-point signal lines toward routable packets (e.g., InfiniBand9, PCI Express).

Bottom Line on Integration: XML Web services will standardize the integrated software infrastructure stack, and IP-network designs will standardize the integrated hardware interconnects of the DCFuture.

7 See Architecture of the World Wide Web, W3C Working Draft, 15 November 2002 (http://www.w3.org/TR/2002/WD-webarch-20021115/). The W3C now refers to identifiers, formats, and protocols as identifiers, representations, and interactions.
8 This name is somewhat of a misnomer, as anyone who has filled a shopping cart with electronic purchases can attest. It is more accurate to describe the state-handling design of the Web as a general-purpose distributed state-management architecture based on explicit representations of soft state that are exchanged via stateless network-level interactions. For example, the use of cookies to keep track of a shopping session is an explicit representation of state, and the use of timeouts to terminate abandoned shopping sessions is an example of soft state.
9 In fact, InfiniBand is based on IPv6 packet headers and addressing (IFaPs).
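
To illustrate the IFaP point made earlier in this section, the sketch below invokes a hypothetical SOAP service knowing only its identifier (a URL), its format (an XML envelope), and its protocol (HTTP POST). The endpoint, namespace, and operation name are invented for the example; nothing about the software behind the service is assumed, which is precisely the point.

```python
# Minimal sketch of IFaP-only service invocation (Python standard library).
# The endpoint, namespace, and operation are hypothetical; the caller needs
# only the identifier (URL), format (SOAP/XML), and protocol (HTTP) --
# nothing about the software implementing the service.
import urllib.request

ENDPOINT = "http://example.com/services/quote"   # identifier (hypothetical URL)

ENVELOPE = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetQuote xmlns="http://example.com/ns/quote">
      <Symbol>DELL</Symbol>
    </GetQuote>
  </soap:Body>
</soap:Envelope>"""                               # format (XML/SOAP)

request = urllib.request.Request(                 # protocol (HTTP POST)
    ENDPOINT,
    data=ENVELOPE.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "http://example.com/ns/quote/GetQuote"},
)

with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))        # whatever XML the service returns
```

Whether the service behind that URL runs on Java, .Net, or a mainframe transaction monitor is invisible to, and irrelevant for, the caller.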


Innovating the Data Center of the Future

The exploding complexity of the DCFuture is not only driven by the degree of integration among all systems, but is also compounded by the fact that the rate of change of those integrated systems and their interconnections is far faster — from network topology changes, to software configuration changes, to application integration changes. For example, eBay adds up to 2 million new items a day, registers up to 40,000 new users a day, and changes up to 30,000 lines of code per week — all while operating continuously.

One strategy is simply to slow the pace of data center change to reduce the complexity. For example, eBay's IT organization could ask the business to limit the creation of new auctions to once every three days, or to allow 24 hours of downtime every quarter for system upgrades, to cut IT costs dramatically. Unfortunately, slowing the pace of data center change has one potentially catastrophic side effect: it could slow the pace of the business. In those businesses in which IT innovation is essential to business innovation, which includes more businesses every day, some other way of managing the complexity of rapid innovation must be found. To prevent the data center from eventually consuming the entire IT budget, increased manageability and resource utilization through standardization and automation are essential. However, while virtualizing the use of diverse resources is straightforward, virtualizing their management is not.

Virtualizing Provisioning and Change Management

First, to deal with the complexity generated by increasing massification, the DCFuture must shift moves/adds/changes from ad hoc manual processes in the hardware domain, to standardized automated processes in the software domain, and ultimately to the metadata (configuration) domain. During the next five years, the DCFuture will dynamically provision and change-manage specialized software stacks (OS, middleware, application) across relatively uniform server farms. For example, 16 identical servers (2-way, 512MB memory, SAN storage) connected by a VLAN will be automatically software-provisioned into 10 Web servers, four app servers, and two database servers, without physically touching a cable or a box. The Google data center is an extreme example of this approach: Google automatically deploys software stacks to over 14,000 identical servers (Linux on IA-32) via rolling upgrades for continuous operations. A few data centers have already taken this further and created J2EE grids, in which the only software components automatically provisioned across the servers are J2EE configuration and deployment files (e.g., EAR, WAR, and JAR files). Such practices will become mainstream in the DCFuture.

Eventually, 5-10 years out, each server will include a standard XML parser, generator, execution engine, and store. Consequently, all of what makes each server unique will be represented simply as standard XML Web services configuration documents (e.g., XSLT, XQuery, WS-RP [remote portlets], BPEL4WS). We are witnessing a long-term shift in scope from assembling and configuring diverse physical server boxes out of standard subcomponents based on standard designs, to assembling and configuring diverse virtual data centers out of virtual components deployed across arrays of standard server boxes. Such new configuration management software will eventually transform the DCFuture into a virtualized assembly plant.
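
The sketch below illustrates the kind of declarative role assignment such provisioning automation works from, using the 16-server example above. The host names, software stacks, and the deploy() placeholder are our own illustrative inventions, not any product's API.

```python
# Hypothetical sketch of software-provisioning a uniform server farm into
# roles, per the 16-server example above. Host names, stacks, and the
# deploy() helper are illustrative inventions, not a product API.

SERVERS = [f"node{n:02d}" for n in range(1, 17)]   # 16 identical 2-way servers

ROLE_PLAN = [                                       # role, count, software stack
    ("web", 10, ["Linux", "Apache"]),
    ("app",  4, ["Linux", "J2EE app server"]),
    ("db",   2, ["Linux", "Oracle"]),
]

def build_assignments(servers, plan):
    """Map each server to a role and stack, without touching any hardware."""
    assignments, pool = [], list(servers)
    for role, count, stack in plan:
        for _ in range(count):
            assignments.append({"host": pool.pop(0), "role": role, "stack": stack})
    return assignments

def deploy(assignment):
    # Placeholder for the real work: pushing the OS image, middleware, and
    # application configuration to the node over the network.
    print(f"{assignment['host']}: {assignment['role']} -> {', '.join(assignment['stack'])}")

if __name__ == "__main__":
    for assignment in build_assignments(SERVERS, ROLE_PLAN):
        deploy(assignment)
```

Changing the role mix then becomes an edit to the plan followed by an automated redeployment, rather than a physical reconfiguration.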

Virtualizing the Relationships Among Systems

Second, to deal with the complexity generated by increasing innovation, the DCFuture must virtualize the system operational life cycle, especially the evolution of relationships among the components inhabiting it.


The trend in data center complexity over the past decade or so has been a shift from a few complex systems (mainframes) with simple relationships among them (e.g., RJE batch links) to many simple systems (commodity IA-32 servers) with complex relationships (e.g., Web services). Accordingly, the emphasis in DCFuture operations will shift from managing complex elements to managing complex relationships. Such relationships will be represented by metadata models we call technology relationship maps (TRMs). Relationships are now largely static, but they will become increasingly dynamic with the growth of advanced technologies and services such as grid computing, Web services, and IP telephony. Accurately modeling such rapidly changing relationships will require both automated discovery and explicit publication of changes by the components involved in the relationships. A standardized TRM framework will be unrealistic before 2008, so in the meantime IT organizations will need to integrate multiple maps. Map components can be reused, but because relationship maps are multidimensional, the various arrangements and domains will use different map subsets. Vendors such as SMARTS and Aprisma are already providing basic TRMs for network management.

Standardization Versus Innovation

With all the emphasis on standardization in the DCFuture, many end users and vendors express concern that so much standardization will impede innovation. However, the apparent paradox of standards is that they both impede and enable innovation. Network interoperability standards provide a number of compelling examples of this. Successful network standards change very slowly because they are embedded in every endpoint and network element. For example, TCP/IPv4 has not substantially changed for 20 years.10 Likewise, IEEE 802.3 (Ethernet) is now 23 years old, and the SCSI standard, originally the Small Computer Systems Interface, is 17 years old. However, while such interface stability impedes innovation of the interface itself, it enables rapid innovation on both sides of the interface — implementation and usage. Thus TCP/IP is now implemented on virtually every type of subnet and has enabled myriad uses, from FTP to telephony. The IEEE 802 family of standards has been implemented on dozens of underlying technologies — from coax Ethernet buses (802.3) to the latest Wi-Fi wireless LANs (802.11). Innovation has driven implementation speeds from 1Mb/sec to 10Gb/sec. As a result, these standards have displaced almost every other Layer 2 data communication standard, and they are beginning to displace Fibre Channel in the storage domain and PBXs in the telephony domain.

The secret behind such successful standards is their average, general-purpose architecture, which enables greater adaptability to new uses and new implementations. Over the long run, an average architecture with above-average adaptability will displace best-of-breed architectures with only average adaptability. The Internet architecture has never been best of breed; neither has the IA-32, Ethernet, or SCSI architecture. But each has proven to be incredibly adaptable to implementation and usage innovations and, therefore, has been successful at displacing competitors. In large part, this is because the collective innovation of the industry is made available to all players as systems standardize. Customers benefit from the leveraged R&D of the entire industry, not just a single vendor. This approach frees the customer from being locked in to a single vendor's proprietary solution, again enabling competition.

Sustainable Rate of Innovation: The Virtuous Lock-In?

Vendors have one other problem with standards: standards make it more challenging for them to differentiate their products and services and to lock in user loyalty. They fear that, in a commodity market, they face a race to the bottom on price. Although price competition certainly becomes more important in a commodity market, vendors must also focus their innovation on other product aspects, such as quality (including performance and reliability) and complementary products and services (including sales and support), to improve the overall customer experience.

10 The entire Internet was rebooted to cut over to IPv4 on January 1, 1983. Its successor, IPv6, has been a ratified standard for several years.


But regardless of how they feel about commoditization, vendors can no longer compete effectively using proprietary features and functions that interfere with interoperability. This is due to the overwhelming economic impact of the Internet's network effect. The value of an interoperability standard increases with the number of vendors supporting it and the number of users using it. In economics, this is known as the "network effect," because the increase in value comes not from the unique features of the product embodying the standard (or the economies of scale of its production), but from the value of the standards-based service that spans the network of users and providers.11 The ultimate goal of any standard is not to achieve unique technical elegance for its own sake (or for the sake of lock-in), but to be a means of maximizing its ubiquity and thereby the value of the network effect. Given the overpowering value of open Internet standards, combined with the trend toward all systems being integrated via Web services standards, we believe that vendors whose business model is to innovate via proprietary standards with limited network effects will face an inevitable erosion of their competitiveness. To thrive, vendors must innovate through ubiquitous interoperable standards with global network effects, and drive the commoditization enabled by such standards.

Bottom Line on Innovation: Given the economic network effect of the Internet, vendor innovation will be channeled toward driving commoditization through ubiquitous interoperable standards, not resisting commoditization through proprietary alternatives.

Conclusion

The data center of the future will be a simplified, commoditized, virtualized set of computational, storage, and network resources enabled by ubiquitous, interoperable standards. Such standardization of components and interfaces will enable the transformation of what is essentially a craft industry, manufacturing hand-crafted infrastructure configurations for one or a few applications, into a standardized industry, assembling machine-generated infrastructure configurations for many applications.

The path of the four forces that has shaped the data center throughout its history is not that of a pendulum that reverses direction periodically; rather, it is the path of a ratchet that is inexorably evolving the data center in a single direction: toward larger numbers of smaller, simpler, more specialized component systems that are more dynamically composed into solution systems via increasingly powerful networks. This is the data center of the future.

Nick Gall is senior vice president and principal analyst for Integration & Development Strategies at META Group, Inc. For additional information on this topic or other METAmeasurement services, contact [email protected].

This META Group White Paper was prepared on behalf of Dell.

11 This is also known within the IT community as Metcalfe's Law: the value of a network grows as the square of the number of its users.


About META Group

Return On Intelligence (SM)

META Group is a leading provider of information technology research, advisory services, and strategic consulting. Delivering objective and actionable guidance, META Group’s experienced analysts and consultants are trusted advisors to IT and business executives around the world. Our unique collaborative models and dedicated customer service help clients be more efficient, effective, and timely in their use of IT to achieve their business goals. Visit metagroup.com for more details on our high-value approach.