EASING THE SPACE, POWER, AND COOLING CRUNCH WITH GREEN INTERNET DATACENTER RE-DESIGN

Bruce Baikie
Steve Gaede

White Paper, Second Edition, January 2008
Table of Contents

Introduction
Power Down
    Increasing Server Power Efficiency
        Sun x64 Servers
        Sun UltraSPARC T1 and T2 Processor-Powered Servers
    Increasing Storage Power Efficiency
    Increasing Power and Cooling Efficiency
        Power Efficiency Improvement Techniques
        Cooling System Efficiency Improvement Techniques
    Beyond the Datacenter
Consolidate
    Solaris Containers Technology
    Sun Logical Domains
    Sun xVM Hypervisor Software
    VMware Virtual Infrastructure Software
    The Basic Equation
Use Alternative Energy Systems
    Alternate Power Sources
        Cogeneration
        Solar Cells
        Fuel Cells
        Wind Power
    The Role of DC Power
        Sun's DC-Powered Datacenter Demonstration
        Commercial DC Power Distribution Products
Sun's Energy-Efficient Datacenters
    Addressing Inefficiency
    Efficiency Through Density
    Efficient, Pod-Based Design
        Power
        Cooling
    Lessons for Communication Carriers
Conclusion
For More Information
About the Authors
Acknowledgments
Chapter 1
Introduction
Most large corporations, including communication carriers, are evaluating how to become
environmentally friendly, both as a public-relations gesture and as a business decision
that directly affects the bottom line. Until recently, even the growing evidence of global
climate change had not spurred many governmental or business organizations to take
dramatic steps to reduce carbon dioxide emissions by cutting energy consumption
and adopting alternative energy sources. That is now changing.
Datacenter operators everywhere are finding themselves in a space, power, and cooling
crunch that “green” technology can help them alleviate. The crunch is born of
the need for IT organizations to continually deploy new applications and support larger
numbers of subscribers, all within the constraints of an aging datacenter infrastructure
that was supposed to last for up to two decades. IT organizations are saddled with silo-
architected applications with no resource sharing between silos. These silos are
powered by legacy, energy-inefficient servers, all sized to handle the maximum-possible
workload. And in the remaining space, datacenter operators must squeeze in new
applications that consume even more energy and produce even more heat. Consider:
• The U.S. Environmental Protection Agency (EPA) estimates that datacenters consume
nearly 1.5 percent of the United States’ total electricity, at a cost of approximately
USD $4.5 billion per year. Given rising energy costs, analysts widely project that, within a
relatively short time, the cost of electricity to run a low-cost server may exceed its
acquisition cost. This is leading organizations to recognize that energy efficiency affects
their bottom line, and that upgrading their server infrastructure and preparing their
datacenters for higher densities offers a short path to increased energy efficiency.
• For every watt of power that IT equipment consumes, a datacenter requires roughly the
same amount of power again, or more, to remove the heat generated. According to a
Computerworld article (“Sidebar: Doing the Math,” April 3, 2006), this “burden factor”
ranges from 1.8 to 2.5. The article notes that, at a conservative estimate of 6 cents per
kilowatt-hour, a datacenter with a 3 megawatt IT load spends roughly $3.15 million on
electricity per year once that burden is included. Even a 10 percent power savings would
save such a company about $315,000 per year. (The short calculation after this list
reproduces this arithmetic.)
• Increased power density leads to more rapid temperature rise when a power failure
interrupts datacenter cooling. A 40 W per square foot datacenter has about 10 minutes
without cooling before the temperature rises 25 degrees Fahrenheit; a 300 W per square
foot datacenter has less than a minute. More rapid heat rise forces datacenters to deploy
even more costly, redundant, uninterruptible cooling systems (“Power and Cooling in the
Datacenter,” Advanced Micro Devices white paper, 2005).
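As a rough illustration of how these figures combine, the following back-of-envelope sketch reproduces the Computerworld arithmetic in Python. The 3 MW IT load, 2.0 burden factor, and 6 cents per kilowatt-hour are the example figures cited above, not measurements from any particular datacenter.

    # Back-of-envelope reproduction of the Computerworld example cited above.
    it_load_kw = 3_000        # IT equipment load in kilowatts (example figure)
    burden_factor = 2.0       # total facility power / IT power (within the 1.8-2.5 range)
    price_per_kwh = 0.06      # US dollars per kilowatt-hour (conservative estimate)
    hours_per_year = 24 * 365

    total_kw = it_load_kw * burden_factor
    annual_cost = total_kw * hours_per_year * price_per_kwh
    print(f"Annual electricity cost: ${annual_cost:,.0f}")               # about $3.15 million
    print(f"A 10% saving is worth ${annual_cost * 0.10:,.0f} per year")  # about $315,000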
Fortunately, the same “green” technology that can help datacenters reduce power
consumption can also help them save money. Sun Microsystems has technology to help
communication carriers reduce their power, cooling, and space requirements while
making it straightforward to use alternative energy sources now and in the future. This
white paper describes three simple steps that IT organizations can take today:
• Power down by using more energy-efficient servers including Sun x64 servers and
Sun Fire™ servers powered by the UltraSPARC® T1 and T2 processors with
CoolThreads™ technology. Today’s more powerful servers can be used to create more
dense datacenters that can be powered and cooled more efficiently with upgraded
infrastructure.
• Consolidate workloads to increase overall utilization levels and increase efficiency
using virtualization technology available from Sun.
• Use alternate power sources to help decrease the production of greenhouse gases,
and consider using DC power, long a favorite of communication carriers due to its
efficiency and reliability.
Sun’s new energy-efficient datacenters provide a real-world example of how Sun has
saved on both its real estate and power bills. By replacing aging, inefficient servers and
upgrading its power and cooling systems, Sun has reduced its Santa Clara, California
datacenter space by more than 80 percent, and has cut its power consumption by more
than 60 percent.
Chapter 2
Power Down
Today’s communication carrier datacenters are running at capacity in terms of available
space, power, and cooling resources. As new applications are deployed, they put even
greater strain on existing infrastructure, forcing carriers to address the larger issue of
how to reduce their overall power consumption.
Clearly, carriers can continue to build more and more datacenters, consuming more
and more power, but this is not a responsible choice from either a financial or an
ecological perspective. In the United States, the Environmental Protection Agency (EPA)
reports that datacenters consume roughly 1.5 percent of the nation’s electricity, a share
topped only by heavy industry. As the Uptime Institute observes (“Institute Opinion on EPA
Report,” Uptime Institute, August 6, 2007), this statistic suggests the degree to which
datacenters contribute to the carbon emissions that result from burning fossil fuels. From a
purely economic standpoint, the three-year capital and operating cost of a server is already
approximately 1.5 times its acquisition cost, and the ratio is expected to worsen as server
prices decline and other costs, including electricity, rise (“The Invisible Crisis in the
Datacenter: The Economic Meltdown of Moore’s Law,” Uptime Institute, August 2007).
Rather than building more datacenters, the solution is to “power down” by working
within, or even improving upon, existing power, space, and cooling envelopes. This
solution requires a two-pronged approach of improving server efficiency and datacenter
power and cooling efficiency. One step is to replace aging, energy-inefficient servers
with current, more powerful, yet energy-efficient systems. This helps to reduce overall
datacenter space requirements, while reducing the energy required to deliver a given
unit of compute power. The companion step is to increase the overall power efficiency
of the datacenter, by improving its power distribution and cooling systems.
Because server replacement results in higher density and higher spot-cooling
requirements, both of the above steps need to be taken in a coordinated fashion. The
good news is that server power consumption reductions are amplified by savings in
datacenter power and cooling requirements. Estimates of this impact vary, but they all
agree that every watt delivered to IT equipment requires at least one additional watt of
overhead for power distribution and cooling. The Uptime Institute paper cited above finds
that 2.2 kW of power enter the datacenter for every 1 kW delivered to servers, meaning
that only approximately 45 percent of datacenter power actually drives the IT load.
Estimates by American Power Conversion (APC), illustrated in Figure 1, suggest that the
number is even lower, approximately 30 percent. Regardless of which numbers are
correct, it is clear that reducing the IT load and increasing the efficiency of power and
cooling systems can together have a dramatic impact on overall datacenter power consumption.
Figure 1. Only a fraction of incoming power actually powers IT equipment in a datacenter, suggesting that both server and infrastructure efficiency improvements are needed. (APC estimate: chiller 33%, IT load 30%, UPS 18%, CRAC 9%, PDU 5%, humidifier 3%, switchgear/generator/lighting 2%.)
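The relationship between distribution and cooling overhead and the fraction of power that reaches IT equipment can be stated directly. The short sketch below simply restates the Uptime Institute and APC figures quoted above; for the amplification estimate it assumes that overhead scales roughly in proportion to the IT load.

    # Fraction of facility power that reaches the IT load, per the two estimates above.
    uptime_total_per_it_kw = 2.2   # Uptime Institute: 2.2 kW drawn per 1 kW delivered to servers
    apc_it_fraction = 0.30         # APC estimate (Figure 1): ~30% of power reaches the IT load

    print(f"Uptime Institute: {1 / uptime_total_per_it_kw:.0%} of power reaches IT equipment")  # ~45%
    print(f"APC estimate:     {apc_it_fraction:.0%} of power reaches IT equipment")

    # If overhead scales with IT load, each watt saved at the server is amplified at the meter.
    for it_fraction in (1 / uptime_total_per_it_kw, apc_it_fraction):
        print(f"1 W saved at the server avoids about {1 / it_fraction:.1f} W of facility power")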
Increasing Server Power Efficiency

Sun’s energy-efficient server product line can help carriers reduce their power
requirements for both new and existing applications:
• New applications. Carriers can deploy new applications onto a smaller number of
high-performance servers from Sun. These servers consume less power, and use less
datacenter real estate than older, more power-hungry servers.
• Existing applications. Carriers can migrate applications from older, less power-efficient
servers to state-of-the-art platforms that conserve both energy and space. Using the
virtualization technology discussed in the following chapter, multiple servers can be
consolidated onto a smaller number of more powerful, energy-efficient servers.
Setting aside I/O subsystems, there are three aspects of server power efficiency to
consider:
• Processors. The CPU is one of the largest power consumers in today’s servers, and the
choice of processor has a significant impact on the server’s power consumption. The
UltraSPARC T1 and T2 processors consume less power per core and thread than any
other processors in their class. Sun’s x64 server product line is powered by AMD
Opteron™ and Intel® Xeon® processors, each of which provides advantages in power
consumption depending on the workload.
• Memory. As processor speeds have increased, manufacturers have begun moving to
faster, higher-power DIMMs. For applications requiring a large memory
footprint, such as virtualization applications, there is often no choice but to configure
the most memory possible in a server, and this is often the best choice because a
large amount of memory is needed to support applications that might each have
required a dedicated server in the past. For applications with modest memory
requirements, some Sun servers have an advantage because they are designed with a
comparatively larger number of DIMM sockets per processor. When less memory is
required, the servers’s DIMM slots can be populated with less dense, less expensive
memory that consumes less power overall than the most high-density memory
available.
• Power and Cooling. In an era where many server manufacturers design their systems
using similar parts, one differentiating factor is the efficiency with which their power
and cooling systems are designed. Sun’s Sun Blade™ product line offers the efficiency
benefit of a shared power and cooling system amortized across multiple server
modules, allowing carriers to take advantage of both the density improvements and
the power savings they offer. The Sun Blade 6000 Modular System discussed below is
one such product that offers server modules with each of Sun’s three processor
choices, the UltraSPARC T1, AMD Opteron, and Intel Xeon processors.
Sun x64 Servers

Heterogeneity is the norm in today’s datacenters, and Sun x64 servers give carriers the
opportunity to standardize on a single platform that is capable of running the Solaris™
Operating System, Microsoft Windows, and Linux. This means investment protection for
carriers that can purchase one set of datacenter-ready servers today, and then redeploy
the same hardware with a different operating system choice the moment their needs
change. AMD Opteron and Intel Xeon processors can support both 32- and 64-bit
applications, and the low pricing of this server family makes it an attractive choice as a
standard carrier platform.
Sun’s x64 server product line includes both Sun Fire rack-optimized servers and the Sun
Blade 8000 and 6000 Modular Systems. The Sun Fire server product line includes
products with two, four, or eight processor sockets, each capable of supporting single
and dual-core AMD Opteron processors, and maximum memory configurations ranging
from 32 GB to 128 GB per server. Most of Sun’s x64 server products have standard or optional
redundant, hot-swappable power and cooling that help carriers maximize uptime.
Sun Blade modular systems provide a flexible platform that uses high-efficiency power
supplies and shared cooling systems. The new Sun Blade 6000 Modular System has
been designed from the ground up to deliver greater efficiency than the corresponding
number of rack-mount servers. This system is designed to host 10 server modules in 10
rack units, with options for a single UltraSPARC T1 processor, two dual-core AMD
Opteron processors (with options for quad-core processors when available), and two
dual or quad-core Intel Xeon processors, for up to a total of up to 80 processor cores in
10 rack units.
![Page 8: Easing the Space, Power, and Cooling Crunch with Green ...green-wifi.org/wp-content/uploads/2015/11/GreenIDC07-Final2.pdf · dramatic steps to reduce carbon dioxide emissions by reducing](https://reader034.vdocuments.us/reader034/viewer/2022052001/6013e0d6b9861a111f4ceae7/html5/thumbnails/8.jpg)
6 Power Down Sun Microsystems, Inc.
Power-Efficient Processors
What makes Sun x64 servers and blade systems attractive to carriers wishing to reduce
their power requirements is the energy efficiency of the processors Sun offers with its
products.
Designed to maximize computational power per watt, AMD’s dual-core processors work
in the same thermal envelope and have the same form factor as their single-core
processors, allowing carriers to nearly double performance with virtually the same
power requirements. Part of this power efficiency comes from the sharing of power-
hungry ‘edge’ components including drivers, voltage regulators, and bus interfaces,
between multiple cores on the same chip. Further energy savings come from
integrating memory-management functions onto the chip, which increases performance
and reduces power consumption compared with the traditional model of an external memory controller.
While AMD Opteron processors have enjoyed a power efficiency advantage over Intel
Xeon processors, there are two factors giving Intel Xeon processors the advantage
depending on the application and workload. First, Intel Xeon processors have a
significantly different cache structure than AMD Opteron processors. Depending on the
application and workload, Intel Xeon processors can outperform AMD Opteron
processors, potentially delivering better performance per watt. Second, the two
processor families use a significantly different main memory subsystem. AMD Opteron
processors have memory associated with each processor, while Intel Xeon processors
use a flat memory structure. With AMD Opteron processors, increasing memory beyond
the number of DIMMs attached to a single processor requires an additional processor.
With Intel Xeon processors, all memory is equivalent, so memory can be increased to
the maximum number of DIMMs in the server or server module with only a single
processor present. For applications and workloads requiring large amounts of memory
but comparatively less CPU power, the Intel Xeon architecture can have the advantage
because one less CPU per system needs to be powered.
Sun UltraSPARC T1 and T2 Processor-Powered Servers

Figure 2. The UltraSPARC T1 and T2 processors.
The UltraSPARC T1 and T2 processors with CoolThreads technology are the highest
throughput and most eco-responsible processors ever created. Designed for highly
multi-threaded applications such as those deployed by communication carriers, Sun’s
Chip Multithreading Technology (CMT) enables each processor core to handle multiple
threads concurrently. The UltraSPARC T1 processor can handle up to four threads per
processor core for up to 32 concurrent threads. The UltraSPARC T2 processor can handle
up to eight threads per processor core for up to 64 concurrent threads. An eight-core
UltraSPARC T1 processor consumes approximately 72 watts, while the UltraSPARC T2
processor consumes between 60 and 123 watts.
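Using only the thread counts and power figures quoted above, a quick calculation shows how little power each hardware thread consumes. The sketch below assumes the nominal 72-watt figure for an eight-core UltraSPARC T1 and, conservatively, the upper 123-watt figure for an eight-core UltraSPARC T2.

    # Power per hardware thread, using the figures quoted in the text.
    t1_watts, t1_threads = 72, 8 * 4     # eight cores, four threads per core
    t2_watts, t2_threads = 123, 8 * 8    # eight cores, eight threads per core (upper power figure)

    print(f"UltraSPARC T1: {t1_watts / t1_threads:.2f} W per thread")   # about 2.25 W
    print(f"UltraSPARC T2: {t2_watts / t2_threads:.2f} W per thread")   # under 2 W even at 123 W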
Power efficiency in the UltraSPARC T1 and T2 processors begins with their more efficient
use of each CPU cycle. For each watt of energy that the processor consumes, more
useful processing can be completed. This inherent benefit is amplified by the
processor’s balance of performance and power-conserving design:
• Each processor core has a simple design that uses less energy per core. The processor
is fully SPARC® V9 compatible and is binary compatible with earlier SPARC processors.
The processor is simplified by factoring out some functional units, such as floating
point and cryptographic accelerators. The UltraSPARC T1 processor has one of each
unit per processor; the UltraSPARC T2 processor includes one of each unit per
processor core.
• A high-speed, on-chip crossbar interconnect between processor cores implements a
multiprocessor environment without the need for power-hungry external switch
ASICs that would otherwise be necessary to interconnect individual processors.
• A high-bandwidth, shared L2 cache provides an optimum size for multi-threaded
processors, achieving a balance between high throughput, low complexity, and low
power consumption.
• On-chip memory controllers deliver high processor-memory bandwidth, giving direct
access from processor to memory with low latency and low power consumption.
• While both processors utilize shared floating-point and cryptographic processors, the
UltraSPARC T2 processor is designed as a system on a chip, with the addition of two
on-chip Gigabit Ethernet interfaces and PCI expansion interfaces.
The UltraSPARC T1 processor is available in Sun Fire T1000 and T2000 servers, with up to
16 GB of main memory and 1 or 4 internal drives, respectively. The UltraSPARC T2
processor is available in the Sun Fire T5120 and T5220 servers, with up to four or eight
internal disk drives, respectively, and up to 64 GB of main memory. The Sun Blade T6300
Server Module, built for the Sun Blade 6000 Modular System, is powered by an
UltraSPARC T1 processor, and supports up to four internal disk drives and up to 32 GB of
main memory.
The servers’ power requirements are so low that in the State of California, Pacific Gas &
Electric now offers rebates for customers purchasing Sun’s UltraSPARC T1 processor-
powered servers as part of its Non-Residential Retrofit Program. As with all of Sun’s
datacenter-ready servers, Advanced Lights-Out Management allows carriers to manage
servers in the field from centralized locations.
The UltraSPARC T1 Processor at Work
The Sun BluePrints™ article “Consolidating the Sun Store onto Sun Fire T2000 Servers”
describes Sun’s own experience re-hosting the Java™ technology-powered business tier
of its online store onto UltraSPARC T1 processor-powered servers. Hosted in four
separate domains on Sun Enterprise 6500 and 10000 servers, the middle tier of the store
used 38 UltraSPARC II processors running at 400 MHz. Although the store could have been hosted
on a single Sun Fire T2000 server, a pair of servers was deployed for increased
availability. Even considering this doubling of capacity, power consumption plummeted
from an estimated 6728 watts to a mere 688 watts, and heat generation was reduced
from an estimated 26,000 BTU/hr to only 2358 BTU/hr. The result was an overall
reduction of approximately 90 percent in both input power and heat output.
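The 90 percent figure follows directly from the before-and-after numbers quoted above; the short sketch below rechecks it, along with the common conversion of 1 watt to roughly 3.412 BTU/hr of heat.

    # Reduction in power and heat from the Sun Store consolidation example above.
    before_watts, after_watts = 6728, 688
    before_btu_hr, after_btu_hr = 26_000, 2358

    print(f"Power reduction: {1 - after_watts / before_watts:.0%}")    # about 90%
    print(f"Heat reduction:  {1 - after_btu_hr / before_btu_hr:.0%}")  # about 91%

    # Sanity check: electrical power converts to heat at roughly 3.412 BTU/hr per watt.
    print(f"688 W is about {688 * 3.412:.0f} BTU/hr")                  # ~2347, close to the 2358 quoted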
Increasing Storage Power Efficiency

Often overlooked are the ways in which organizations can improve the power efficiency of
their storage. In general, centralizing storage allows organizations to manage it as a
single pool of resources. When virtual volumes can be carved out of a global pool and
allocated to the servers and applications that need them, overall utilization levels can
be raised, and less energy is spent on spinning disks that are only partially utilized.
Tape storage is the ultimate in energy efficiency, as it takes zero electricity to power a
cartridge tape when it is not in use. When organizations pay attention to their
information lifecycles, they can optimize which data is stored online and which is stored
offline, leading to further energy cost reductions.
Increasing Power and Cooling Efficiency

Given that, according to Figure 1, almost two thirds of a datacenter’s power is lost due
to inefficiencies in power distribution and in the energy required to cool its IT
infrastructure, a communication carrier’s “power down” plans must include a strategy
for increasing power and cooling efficiency.
Power Efficiency Improvement Techniques

There are several ways in which carriers can improve their datacenter power
distribution systems and raise overall efficiency. Several of these techniques are
discussed further in Chapter 4, “Use Alternative Energy Systems.” They include:
• Reduce power conversions. A typical carrier datacenter converts power many times
before it reaches the servers, each step introducing another degree of inefficiency.
Utility feeds are often medium-voltage, three-phase power which is first reduced to
480 V before entering the UPS. The UPS converts AC power to DC to charge the UPS
batteries. This power is then converted back to 480 VAC and sent to Power Distribution
Units (PDUs), which convert it to 120 or 208 VAC to power the actual server racks.
For more information on this topic, please refer to American Power Conversion’s white
paper 128, “Increasing Data Center Efficiency by Using Improved High Density Power
Distribution,” at www.apcc.com/wp/?wp=128.
One way to economize is to make these power conversions more efficient. “World
power” distribution at 400/230 VAC allows the use of widely available, more efficient
transformers and eliminates a layer of PDUs. Instead of converting 480 VAC down to
120/208 VAC at a PDU, a simpler conversion is performed at the output of the
UPS to provide 400/230 VAC directly to racks. Since virtually all computer equipment is
manufactured to handle the worldwide 230 VAC standard, this approach to power can
be used in most datacenters.
Another possibility is to deliver high-voltage DC power from the UPS directly to servers
equipped to accept it; a related strategy is to distribute high-voltage DC power to a
cluster of racks and then step it down to -48 VDC to power telco-standard servers, such
as Sun Netra™ servers, over a short -48 VDC bus. (A brief illustration of how per-stage
conversion losses compound appears at the end of this list.)
• Remove PDUs from the datacenter. Most datacenters place their PDUs on the
datacenter floor, adding to the cooling load. Sun’s energy-efficient datacenters
(described in the chapter “Sun’s Energy-Efficient Datacenters”) instead use transformers
located outside the datacenter to feed a modular busway inside it. Because these
transformers sit outside the room, they do not add to the cooling load the way
in-room PDUs do.
• Use alternate power sources. Alternate power sources including hydro-electric, wind,
solar, and even cogeneration can reduce a datacenter’s overall carbon footprint.
Cogeneration systems, even though they burn fossil fuels including natural gas and
propane, can increase efficiency because they eliminate the power lost between the
utility’s generation plant and the datacenter. Their waste steam can be put to use in
driving cooling-system turbines and heating office space. Some large organizations
are turning to hydro-electric power, re-locating datacenter space to areas where this
renewable power source is plentiful and inexpensive.
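To see why each conversion step matters, note that end-to-end distribution efficiency is the product of the per-stage efficiencies, so removing a stage raises the whole product. The per-stage numbers in the sketch below are hypothetical, illustrative values, not figures from this paper or from APC.

    # End-to-end distribution efficiency as a product of per-stage efficiencies.
    # The stage efficiencies below are hypothetical illustrative values only.
    stages = {
        "transformer (utility feed to 480 VAC)": 0.98,
        "UPS (AC-DC-AC double conversion)":      0.92,
        "PDU (480 VAC to 120/208 VAC)":          0.97,
        "server power supply (AC to DC)":        0.88,
    }

    overall = 1.0
    for name, efficiency in stages.items():
        overall *= efficiency
        print(f"{name}: {efficiency:.0%}")

    # Removing or improving any stage raises the product, which is the rationale
    # behind 400/230 VAC distribution and high-voltage DC approaches.
    print(f"Overall: {overall:.0%} of utility power reaches the servers' internal rails")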
Cooling System Efficiency Improvement Techniques

With the cooling system typically requiring more power than even the IT infrastructure
itself, improvements in the cooling system can have a high impact on overall power
consumption. To some degree, cooling system efficiency can be improved in much the
same way as server power consumption can be reduced: by replacing old, inefficient
infrastructure with state-of-the-art, energy-efficient systems. Improved cooling system
efficiency is a requirement for high-density datacenters, and increased density and
cooling-system efficiency go hand in hand.
Replacing the chiller by itself, however, is likely to have only a limited impact on
cooling-system overhead. Cooling-system improvements need to be part of an end-to-
end strategy that more efficiently moves heat from servers to the outdoor cooling
tower. This transport mechanism can be improved at every step, from the choice of
servers to the choice of chilled water systems. Some options for improving efficiency,
ordered from servers to the chiller, include the following:
More information on the Sun Blade 6000
Modular System’s cooling system can be
found in the Sun white paper titled: “Sun
Blade 6000 Modular System Power and
Cooling Efficiency.”
• Strict, front-to-back airflow. Choosing servers with a strict, front-to-back airflow is a
first step in a coherent cooling strategy within the datacenter. One Sun product that
excels in this regard is the Sun Blade 6000 Modular System. While all of Sun’s rack-
mount servers and blade systems use front-to-back cooling, the Sun Blade 6000
Modular System uses high-efficiency fans whose speed is controlled based on
measured internal temperatures. The server’s straight airflow reduces turbulence,
and even its modules are designed for adequate cooling at low airflow volumes.
Figure 3. The Sun Blade 6000 Modular System uses a highly efficient, straight, front-to-back airflow.
• Block unused rack spaces. Simply closing off unused rack spaces can increase
efficiency by reducing the mixing of hot and cold air within the datacenter.
• Use a hot/cold aisle configuration. Most datacenters already use a hot/cold aisle
configuration, where overhead or under-floor cold air is fed to the front sides of two
rows of racks, with hot air exiting into a “hot” aisle from which air is recirculated to
the Computer Room Air Conditioner (CRAC) units.
• Contain hot aisles. Efficiency can be increased further by fully containing hot aisles
with a ceiling that floats above equipment racks (while still allowing for cabling), and
doors that close off the ends of the aisle. In-row cooling units can be used to cool the
hot air as it is drawn out of the hot aisle and recirculated to the cold aisles. These in-
row cooling units can use chilled water and nearly eliminate the use of CRAC units.
Variable-speed fans provide only the cooling that is necessary.
• Efficient spot cooling. Virtually every datacenter ends up with hot spots due to the
inability to deliver exactly the right amount of cooling to every rack — especially in
datacenters where server density varies significantly. In datacenters without hot-aisle
containment in place, overhead, refrigerant-based spot-cooling systems can chill air
from the hot aisle as it moves it to the cold aisle, feeding specific racks. These cooling
systems exchange heat with the chilled-water system at the datacenter’s perimeter,
avoiding the use of water on the datacenter floor itself.
• Increase air temperature. The legacy of cold air in datacenters arises from
the inefficiency of mixing hot and cold air in uncontrolled ways. As long as hot and
cold air can mix on the datacenter floor, supply air temperatures need to be kept low.
If techniques including hot-aisle containment and spot cooling are able to minimize
this mixing, then datacenter air temperatures can be higher, placing less of a load on
the cooling system.
• Install Variable-Frequency Drives (VFDs). Datacenters use industrial motors
throughout their cooling infrastructure. Using demand-driven, variable-frequency
drives allows the cooling system to adjust its output to closely match the load,
increasing efficiency by providing only the cooling that is needed. VFDs are available
in CRAC units, chilled-water pumps, chillers, and cooling tower fans.
Beyond the Datacenter

More information on the LEED program and the U.S. Green Building Council can be found at http://www.usgbc.org.
Increasing the efficiency of a datacenter’s IT infrastructure, power distribution, and
cooling systems is an important step that addresses the low-hanging fruit and helps to
reduce both real estate and power costs. For communication carriers wishing to go
further in adopting ‘green’ building practices, the Leadership in Energy and
Environmental Design (LEED) certification program developed by the U.S. Green Building
Council is a holistic design process that addresses the power and cooling issues above,
plus overall ways to reduce energy consumption, make responsible use of materials,
reduce pollution, and improve employee health. It defines a point system for building
construction or remodeling that can earn increasingly higher certification levels
depending on the green building steps that are taken.
Although a LEED certification specifically for datacenters has not yet been completed,
the general LEED point system can be used as a guide for communication carriers
wishing to further reduce costs and their impact on the environment.
Chapter 3
Consolidate
Traditional, silo-oriented architectures use non-interchangeable server, storage, and
network elements, with each application component hosted on its own set of
components. Each set of servers, storage, and network elements was sized to handle
the worst-case workload with no possibility of resource sharing between silos. The
result is low utilization rates that are no longer affordable in terms of the excess power
they use and heat they generate.
Virtualization technologies allow carriers to consolidate multiple workloads onto a
single server, resulting in more efficient use of each server, and a reduced overall power
demand. Virtualization is a technique that has been available for years, and it can take
place at many levels. For example, the Apache Web server can host multiple sites
within a single instance, a feature that service providers have long used in preference
to dedicating a server to each Web site. If a customer needs its own instance of
Apache, including its own set of configuration files, virtualization must happen at
a lower level for multiple customers to share a single server.
Figure 4. Virtualization technologies give the illusion of a dedicated environment to the client in the
layer above.
Virtualization begins with a single environment and creates the illusion of multiple
ones to the client logically above the virtualization layer (Figure 4). Regardless of layer,
the application or operating system has the illusion that it owns its environment.
Virtualization technologies used in products from Sun and its partners include the
following:
• Hardware partitioning uses multiple, secure, electrically-isolated domains on a
single server platform. Sun’s Dynamic System Domains technology is available on
mid-range and high-end UltraSPARC processor-powered servers.
• Hypervisors create an illusion that each operating system has its own, dedicated
hardware, despite the fact that each operating system only ‘owns’ a part of the
hardware platform. Sun Logical Domains, available on UltraSPARC T1 and T2
processor-powered servers, allows server resources to be partitioned between guest
domains down to the granularity of a single processor thread. Sun xVM hypervisor,
virtualization software for Sun x64 servers, runs multiple virtual machines on a single
server. VMware Virtual Infrastructure software, available for x64 servers from Sun,
creates a virtualized environment for each guest operating system so that each
operating system instance believes that it is running on the bare hardware.
• Containers partition a single operating system instance to give each application the
illusion that it has its own environment and its own dedicated set of resources.
Solaris Containers, which run on UltraSPARC, AMD Opteron, or Intel Xeon processor-
powered servers, combine partitioning and resource management to allow multiple
applications to use the same operating system instance, while believing that they
have the operating system to themselves. Multiple instances of the Apache Web
server, for example, could be deployed into separate Solaris Containers, allowing
each one to have full access to the peripherals, network interfaces, and system
resources that are allocated to it.
• Application Virtualization refers to the use of applications such as Web and Mail
servers that can support many different clients, each with its own distinct environment.
Virtualization technologies help carriers economize in two ways. They can be used to
consolidate multiple workloads onto a smaller number of servers, raising utilization
levels and increasing energy efficiency. Once a carrier invests in a virtualized
datacenter, the choice of which workload to run with which resources becomes a much
more dynamic one, as virtualization technologies facilitate dynamic resource allocation
within a single server, and the ability to move workloads from server to server.
Solaris Containers Technology

Solaris Containers provide a set of up to 8191 virtualized environments per Solaris 10 OS
instance, each container appearing to users, administrators, and applications as an
independent, isolated system (Figure 5). A global administrator can create containers,
allocate resources to them, and boot them as if they were an operating system
instance. Once booted, Solaris Containers provide a secure environment that includes:
• A virtual platform including a unique root, shared user, and other administrator-
configured file systems — plus network interfaces, inter-process communication
objects, console devices, and sub-container resource management facilities;
• Independent system identity settings;
• Secure isolation from other containers;
• Fault isolation that restricts the propagation of faults to a single container.
Figure 5. Solaris Containers give the illusion of a dedicated operating system instance to each hosted
container.
Sun Logical Domains

Sun Logical Domains (LDoms) is a virtualization technology available for UltraSPARC T1
and T2 processor-powered servers. LDoms allow the server hardware to be partitioned
between logical domains through a thin layer of hypervisor software (Figure 6). Each
logical domain has its own dedicated processor, memory, and I/O resources. I/O is
handled by the Logical Domain Manager, an instance of the Solaris OS contained in
domain 0. The hypervisor can support as many as one LDom per processor thread,
allowing up to 32 LDoms on UltraSPARC T1 processor-powered servers, or 64 LDoms on
UltraSPARC T2 processor-powered servers.
LDoms provide an isolated environment for each logical domain, allowing a single
server to host multiple Solaris OS instances, including multiple Solaris OS versions, on
the same server. LDoms are more flexible than Dynamic System Domains because they
can allocate resources at sub-CPU granularity. Similar to Dynamic System Domains,
LDoms can re-allocate threads dynamically, with the Solaris OS adapting to changing
processor and memory resources. LDoms are more efficient than virtual machines
because processor emulation is not required, reducing the number of context switches
the hypervisor must manage. Logical Domains are a complementary technology to
Solaris Containers, and each logical domain can support many lightweight containers.
Figure 6. Logical Domains support multiple OS instances through resource partitioning on UltraSPARC
T1 and T2 processor-powered servers.
Sun xVM Hypervisor Software

Sun xVM hypervisor is based on the Solaris OS. It runs on Sun x64 servers and combines
two technologies: virtualization and paravirtualization. This combination of
technologies helps it to flexibly support a heterogeneous set of operating system
instances, including the Solaris OS, Microsoft Windows, and Linux, all sharing the same
server hardware, and all with kernel-level isolation.
• Virtualization. Sun xVM hypervisor provides a layer of abstraction between the server
hardware and unmodified operating system instances that run on it. With this model,
each guest operating system is given the illusion that it owns the server itself. This
helps Sun xVM hypervisor to support a variety of operating systems, including
Microsoft Windows XP, 2003, and Vista, in such a way that the operating systems can
run unmodified.
• Paravirtualization. Paravirtualization incorporates knowledge of the hypervisor in the
guest operating system so that it can make requests of the virtualization layer
directly rather than relying on traps and instruction re-writing, as is the case with
full virtualization. Operating systems that can run in the more efficient
paravirtualized mode include the Solaris OS and Linux.
As Figure 7 illustrates, Sun xVM hypervisor software is layered on top of a Sun x64
server. The hypervisor layer supports both unmodified and paravirtualized operating
system instances. An instance of the Solaris OS, known as the control domain, or
‘dom0,’ handles I/O for virtualized environments. Sun xVM hypervisor software is
currently available with the OpenSolaris™ Operating System.
Figure 7. Sun xVM hypervisor supports both unmodified and paravirtualized operating system
instances, providing an optimal mix of flexibility and performance.
VMware Virtual Infrastructure Software

VMware Virtual Infrastructure is commercially available hypervisor software that
provides a layer of abstraction between the server hardware and the multiple operating
system instances that run on it. This, like Sun xVM hypervisor, allows it to support a
different set of consolidation requirements than Solaris Containers — namely support
for applications requiring kernel-level isolation and support for multiple operating
system types and instances on the same server.
By virtualizing the hardware, a single server running VMware Virtual Infrastructure
software can support a heterogeneous environment including multiple instances of the
Solaris 10 OS, and different instances and versions of Linux, FreeBSD, Novell Netware,
and Microsoft Windows (Figure 8). VMware Virtual Infrastructure runs directly on
supported Sun x64 servers to provide a secure, uniform platform for deploying,
managing, and remotely controlling multiple operating system instances. The
virtualization layer provides an idealized hardware environment that virtualizes
physical resources. This makes it easy to move virtual environments from server to
server without having to exactly match CPUs, disk drivers, and network interfaces.
Platform-independent virtual environments also make it possible to support operating
systems and applications no longer supported by hardware or software vendors, freeing
carriers from vendor-imposed upgrade cycles. For example, VMware Virtual
Infrastructure supports Microsoft Windows NT guest operating systems, helping IT
organizations to increase power efficiency by migrating existing applications from
obsolete hardware platforms to state-of-the-art Sun x64 servers. Indeed, VMware offers
physical-to-virtual software tools that migrate such applications to Sun servers without
even requiring the original operating system or application to be re-installed.
Figure 8. VMware Virtual Infrastructure software presents the illusion of a dedicated hardware
platform to each guest operating system, including the Solaris 10 OS, Linux, and Microsoft Windows.
The Basic Equation

The basic equation is this: more power in today’s servers means that multiple
applications hosted on less powerful and less power-efficient servers can be
consolidated onto a smaller number of servers using virtualization technology.
Powering down with Sun Fire UltraSPARC T1 and T2 processor-powered servers plus
consolidating multiple applications onto Sun x64 servers nets greater efficiencies in
communication carrier datacenters. In general, any new hardware deployed in a
datacenter should be deployed with a virtualization layer, if only to prepare for future
consolidations.
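As a concrete, purely hypothetical illustration of this equation, the sketch below estimates how many lightly utilized legacy servers could be consolidated onto modern virtualized servers. Every figure in it (server counts, utilization levels, the relative-performance ratio, and per-server power draws) is an assumption chosen for illustration, not Sun benchmark data.

    # Hypothetical consolidation estimate; all inputs are illustrative assumptions.
    import math

    legacy_servers = 20
    legacy_avg_utilization = 0.10      # silo servers typically run far below capacity
    new_server_relative_perf = 4.0     # one new server assumed ~4x one legacy server
    target_utilization = 0.60          # headroom kept on the consolidated hosts

    legacy_work = legacy_servers * legacy_avg_utilization          # in legacy-server units
    capacity_per_new_server = new_server_relative_perf * target_utilization
    new_servers_needed = math.ceil(legacy_work / capacity_per_new_server)

    legacy_power_w, new_power_w = 400, 450                         # assumed per-server draw
    print(f"New servers needed: {new_servers_needed}")
    print(f"Power before: {legacy_servers * legacy_power_w} W, "
          f"after: {new_servers_needed * new_power_w} W")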
Chapter 4
Use Alternative Energy Systems
There are a variety of ways in which communication carriers can utilize alternative
energy sources, each of which can have an impact on the overall cost of power and the
environmental impact of producing that power. In addition, using DC power within the
datacenter can increase the efficiency of the power distribution system and make it
easier to use sources that naturally produce DC power, including solar panels and fuel cells.
Alternate Power Sources

One way in which carriers can reduce both their energy costs and their carbon footprint
is to relocate near plentiful sources of renewable energy such as hydroelectric power. In
the United States, the Pacific Northwest has been attracting large datacenters because of
the low cost and low environmental impact of electric power there.

For communication carriers, relocation is often not an option. Instead, carriers can
choose to convert to, or supplement with, a variety of alternative power sources.
Cogeneration

Cogeneration typically uses natural gas-powered turbines to generate electricity for a
datacenter. In contrast to backup generators, cogeneration systems are designed for
continuous operation. One of the arguments for cogeneration is that, despite the fact
that it uses fossil fuel, it uses it more efficiently because local power generation
eliminates the transmission loss between a utility’s power plant and the datacenter.
The U.S. Environmental Protection Agency’s
August 2007 report to Congress on Server and
Data Center Efficiency discusses cogeneration
and Combined Heat and Power (CHP) systems in
Chapter 6. Please refer to:
www.energystar.gov/index.cfm?c=
prod_development.server_efficiency_study
Cogeneration produces hot water or steam as a by-product, which can be used to drive
steam-absorption cooling systems for the datacenter and to provide hot-water heat for
building occupants.
With cogeneration as a primary power source, carriers can use both diesel generators
and utility power as backup power sources. In order to continue power generation
through a failure of the gas supply system, on-site propane storage can be used to
power the cogeneration system.
Solar Cells

Solar panels are quickly becoming a viable alternate power source, with a small
number of datacenters powered entirely on solar power. One small Web hosting facility,
Affordable Internet Services Online (http://www.aiso.net), is able to generate all of its
power on site using solar panels. It reduces its overall cooling power load by using
outside air whenever possible. The hosting service uses Sun Fire servers powered by
AMD Opteron processors. Whether a carrier datacenter is able to generate all of its
power through solar energy or not, using rooftop solar cells to augment the facility
power is a way to reduce the datacenter’s carbon footprint.
Fuel Cells

Fuel cells are becoming an interesting power-generation technology because different
fuel cell types can utilize a range of different fuels. While hydrogen is the simplest fuel,
methanol, methane, and ethanol can also power fuel cells. The ability to convert
methane into power is particularly significant, as methane is produced by biomass
and landfill decomposition. Harvesting this gas for power generation helps dispose of a
greenhouse gas more potent than carbon dioxide.
Wind Power

As with rooftop solar panels, onsite vertical wind power generators can be used as a
supplemental source of power. Where utilities allow customers to purchase their
electricity specifically from wind generators, communication carriers can use this
option to reduce their carbon footprint and insulate themselves somewhat from the
increasing price of power generated from non-renewable sources. Sites with their own
electric substations (typically 14 kV) can buy directly from green energy sources and have
power delivered to them through the grid.
The Role of DC Power

DC power has a potentially important role in communication carrier datacenters.
Besides the fact that many alternative energy sources — such as solar and fuel cells —
naturally produce DC power, communication carriers are in the best position to
understand the benefits of using DC power in their datacenters. With years of
experience powering central offices using DC power, carriers understand the reliability
and efficiency of their power-distribution systems and servers from Sun’s Netra server
product line. DC power can significantly simplify datacenter power distribution.
Because multiple DC power sources can be joined using only a diode, expensive
technology such as static-transfer switches can be eliminated.
Traditional datacenter power architectures convert power back and forth between
alternating and direct current at least three times (Figure 9): incoming AC power is
converted to DC power to charge the uninterruptible power supply (UPS) batteries. DC
power from the batteries is converted to AC, distributed to racks and then to servers.
Finally, AC power is converted in each server’s power supply first typically to 380 VDC
and then to the various voltages needed within the server. Each conversion between AC
and DC not only introduces inefficiencies, it also generates heat, which then requires even
more cooling capacity to remove.
Figure 9. Typical AC-powered datacenters convert between AC and DC power three times, introducing
inefficiencies in each conversion.
By converting their datacenters to run on DC power, carriers can reduce their server
power consumption by 5-7 percent, plus the corresponding savings in cooling
depending on the datacenter’s own load factor. Not only can carriers save on their
energy expenses; they can reduce complexity (and increase reliability) by eliminating a
layer of components that can fail. Indeed, communication carriers are already familiar
with the high reliability and economy of running Sun Netra™ servers on their 48 V
infrastructure, and are typically the most experienced with and receptive to this
approach.
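To make these percentages concrete, the following minimal sketch shows how conversion-stage efficiencies compound. The stage efficiencies used here are illustrative assumptions chosen to fall within the ranges discussed in this paper; they are not measured values from Sun’s demonstration or from any particular datacenter.

```python
# A minimal sketch, assuming illustrative stage efficiencies (not measured
# values), of how AC/DC conversion losses compound in a datacenter.

UPS_RECTIFIER_EFF = 0.97   # assumed AC-to-DC efficiency at the UPS rectifier
UPS_INVERTER_EFF  = 0.95   # assumed DC-to-AC efficiency at the UPS inverter
PSU_ACDC_EFF      = 0.94   # assumed AC-to-380 VDC efficiency in the server PSU
PSU_DCDC_EFF      = 0.93   # assumed DC-to-DC efficiency (present in both designs)

def delivered_fraction(stage_efficiencies):
    """Fraction of facility power that reaches the server's internal voltage rails."""
    fraction = 1.0
    for efficiency in stage_efficiencies:
        fraction *= efficiency
    return fraction

# Traditional AC distribution: AC/DC and DC/AC at the UPS, then AC/DC and DC/DC in the PSU.
ac_path = delivered_fraction(
    [UPS_RECTIFIER_EFF, UPS_INVERTER_EFF, PSU_ACDC_EFF, PSU_DCDC_EFF])

# 380 VDC distribution: one facility-level rectification, then DC/DC in the server.
dc_path = delivered_fraction([UPS_RECTIFIER_EFF, PSU_DCDC_EFF])

# For the same IT load, input power scales as 1/delivered_fraction, so the
# saving from switching to DC distribution is 1 - (ac_path / dc_path).
total_saving = 1.0 - ac_path / dc_path

# Removing only the PSU's AC-to-DC stage saves (1 - stage efficiency) at the server.
server_saving = 1.0 - PSU_ACDC_EFF

print(f"AC path delivers {ac_path:.1%} of input power to the server's rails")
print(f"DC path delivers {dc_path:.1%} of input power to the server's rails")
print(f"Server-level saving from removing the PSU AC/DC stage: {server_saving:.0%}")
print(f"End-to-end saving including the UPS stages: {total_saving:.1%}")
```

With these assumed efficiencies, removing the server power supply’s AC-to-DC stage alone saves roughly 6 percent at the server, and eliminating the UPS conversions as well brings the end-to-end saving to about 11 percent, consistent with the ranges cited in this chapter.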
Sun’s DC-Powered Datacenter Demonstration
In conjunction with a California Energy Commission-sponsored project, Sun
implemented a proof-of-concept that demonstrated the power savings of using DC in
the datacenter environment. Sun created two identical datacenters, one with AC and
one with DC power distribution. Off-the-shelf equipment from Sun, Cisco, and Intel was
used, with the power supplies modified by hand to eliminate the power conversion
circuitry that converts alternating to direct current. Incoming power was rectified,
distributed at 380 VDC, and fed directly into the modified power supplies. Instead of
using lead-acid batteries to provide power during the time it takes for diesel generators
to respond to a power failure, the DC datacenter used a high-speed flywheel power-
storage device. This was a proof-of-concept demonstration only: the modified power
supplies are not available as products from Sun and have not been re-certified to
international safety and electromagnetic compatibility regulations.
The results demonstrated that changing power distribution to 380 VDC can eliminate
2-5 percent of inefficiency in the power conversion circuitry in each server. Eliminating
the inefficient power conversion in the UPS systems brings the total to 10-20 percent.
Sun estimates a range of savings because every datacenter is different — the older the
equipment, the less energy efficient it tends to be, and the more energy can be saved.
Commercial DC Power Distribution Products
One of the factors limiting the use of DC power in communication carrier datacenters is
the large amount of copper needed to distribute low-voltage DC power throughout the
datacenter. Sun’s DC datacenter demonstration illustrated that high-voltage DC power
distribution is one way to solve this problem, but 380 VDC server power supplies are not
a commercially available option today.
Information on Validus DC Systems can be
obtained at: http://www.validusdc.com.
Sun has not tested or endorsed these
products.
One solution to this problem is offered by Validus DC Systems. The company’s approach
is to distribute high-voltage DC power to in-row converters which then supply -48 VDC
over relatively short buses to racks. Each rack has a plug and breaker panel that can be
used to distribute power to off-the-shelf DC-powered equipment such as Sun Netra
servers.
The use of high-voltage DC power distribution eliminates the need for heavy-gauge
cabling throughout the datacenter, and the reduction in the number of power
conversions helps to increase power distribution efficiency in the datacenter. With UPS
systems typically based on DC power, backup power can be ‘or-ed’ into the power
distribution system with diodes, eliminating the need for expensive static transfer
switches to switch between line and backup power.
The company’s solution is to rectify incoming AC power (typically 13.8 kV) into -575 VDC
using its Power Quality Module (PQM). This power can be fed through the company’s
Power Distribution Module (PDM), a switchboard that integrates other power sources
(such as battery backup) into the distribution system. Power Converter Units (PCU)
regulate -575 VDC to -48 VDC through a set of four modular, hot-swappable conversion
units that can provide a total of 120 kW per PCU.
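The benefit of distributing DC at higher voltage can be illustrated with a simple current calculation. The sketch below uses the 120 kW PCU rating mentioned above; the conductor-size comparison is a first-order estimate based on current alone, not a figure from any vendor’s cable specifications.

```python
# Illustrative sketch only: shows why distributing DC at higher voltage
# reduces the conductor size needed. Conductor sizing here is a simple
# current comparison at an assumed fixed current density, not a vendor spec.

LOAD_KW = 120.0           # one fully loaded PCU, per the text above
LOW_VOLTAGE = 48.0        # -48 VDC delivered to the racks
HIGH_VOLTAGE = 575.0      # -575 VDC distributed from the PQM

def current_amps(load_kw, volts):
    """DC current required to deliver load_kw at the given voltage."""
    return load_kw * 1000.0 / volts

i_low = current_amps(LOAD_KW, LOW_VOLTAGE)    # roughly 2,500 A
i_high = current_amps(LOAD_KW, HIGH_VOLTAGE)  # roughly 209 A

# For a fixed allowable current density, copper cross-section scales directly
# with current, so the high-voltage feed needs only about 1/12 the copper per run.
print(f"Current at {LOW_VOLTAGE:.0f} V:  {i_low:,.0f} A")
print(f"Current at {HIGH_VOLTAGE:.0f} V: {i_high:,.0f} A")
print(f"Copper cross-section ratio (high-voltage vs. low-voltage): {i_high / i_low:.2f}")
```

The same load that requires roughly 2,500 A at -48 VDC can be carried at about 209 A at -575 VDC, which is why confining the -48 VDC runs to short in-row buses dramatically reduces the copper required.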
Chapter 5
Sun’s Energy-Efficient Datacenters
For more information about Sun’s energy-
efficient datacenters, please visit: http://
www.sun.com/eco. Many of the topics
discussed in this chapter are discussed in
detail in briefs available at this site.
Sun is not unlike many communication carriers in the sense that its own business cycles
of expansion, contraction, mergers, and acquisitions have resulted in a patchwork of both
energy and real estate inefficiency. When Sun addressed its own global IT portfolio, it
found that it was supporting datacenters in 1588 separate rooms consuming 1.3 million
square feet of space. The majority of these rooms were running old, inefficient
hardware that was still in use due to limited budgets and a culture that encourages
groups to retain capital equipment.
Addressing Inefficiency
Inefficiency such as this is difficult to address because the facilities organization doesn’t
always understand or even believe the ever-increasing power requirements of the IT
organization. Conversely, IT organizations typically don’t understand the time and
expense required to implement power and cooling infrastructure to support higher
density datacenters. This lack of visibility, combined with the inherent inefficiency of small
datacenters scattered throughout a large organization, compounded the problem.
Sun created an independent organization with both the authority and financing to
bridge the gap between IT and Facilities with a global technical infrastructure strategy.
Sun’s centralized Global Lab & Datacenter Design Services (GDS) group set out to
achieve several goals, many of which resonate with communication carriers:
• Consolidate datacenter space to free up valuable real estate.
• Improve the ability to accommodate current and future high-density servers.
• Replace aging, inefficient equipment with current, more efficient systems.
• Reduce the overall number of servers required to support existing activities.
• Increase flexibility so that equipment can be changed to support new business and
product development activities — including further increases in density — without
significant changes to datacenter infrastructure.
• Build an electrical and cooling plant that could scale as needed.
Efficiency Through Density
It might seem counterintuitive that moving to higher-density equipment, which
generates its heat in a smaller space, could actually increase efficiency. But a
combination of server and space consolidation results in higher performance per watt,
and space consolidation allows more efficient power, cooling, and cabling systems to
be deployed. Indeed, the energy savings of newer equipment are so substantial that the
GDS group itself bought new hardware for Sun’s internal organizations, using only 10
percent of its budget to do so.
The results of Sun’s California datacenter restructuring are astonishing: Sun was able to
reclaim 88 percent of its datacenter floor space, reduce power consumption by 61
percent, increase server performance by 465 percent, and increase storage capacity by
244 percent — all while avoiding more than $9 million (USD) in construction costs. From
an energy cost and environmental impact perspective, the program was a huge
success, saving more than $500,000 in utility costs and eliminating more than 3,000 tons
of carbon dioxide emissions annually.
Efficient, Pod-Based Design
Sun’s GDS organization created a set of generic design and construction standards that
can be used to speed the process of building new, efficient, and up-to-date
environments. The standards are generic so that they can adapt to Sun’s various spaces
worldwide — some with raised floors, some with concrete slabs — and all of varying
sizes. Though generic, the standards support scalability and mobility so that the normal
churn of equipment and space reconfiguration can take place much more easily than before.
Hardware replacement is integral to Sun’s plan, because increasing density is what
allows for the use of newer, high-efficiency power, cooling, and cabling techniques.
The need to create datacenter spaces of varying sizes and to satisfy varying needs
pointed to a ‘pod’ design, a self-contained group of racks that optimizes power, cooling,
and cabling efficiencies (Figure 10). The pod design can be scaled up or down as
needed, and replicated easily. Base power and cooling parameters can be adjusted up
and down in 4 kW-per-rack increments up to 30 kW per rack. This is a key feature
because it allows one pod to contain different density equipment than another pod,
and it allows a single pod to support racks of varying density. Sun’s pod design includes
power, cooling, and networking innovations.
Figure 10. Sun’s datacenter pod design defines flexible, modular spaces that can be replicated in
datacenters around the world.
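As a simple illustration of the 4 kW-per-rack provisioning rule described above, the following sketch rounds a set of hypothetical rack loads up to the next increment and totals the pod. The rack loads are invented for the example and do not describe any actual Sun pod.

```python
# Illustrative sketch only: models the "4 kW-per-rack increments up to
# 30 kW per rack" provisioning rule. Rack loads below are hypothetical.

import math

INCREMENT_KW = 4.0
MAX_PER_RACK_KW = 30.0

def provisioned_kw(rack_load_kw):
    """Round a rack's expected load up to the next 4 kW increment."""
    if rack_load_kw > MAX_PER_RACK_KW:
        raise ValueError("load exceeds the 30 kW-per-rack design limit")
    return math.ceil(rack_load_kw / INCREMENT_KW) * INCREMENT_KW

# A single pod can mix low- and high-density racks.
rack_loads = [3.5, 6.2, 11.8, 22.0, 26.0]   # hypothetical kW per rack
allocations = [provisioned_kw(load) for load in rack_loads]

print("Per-rack allocations (kW):", allocations)   # [4.0, 8.0, 12.0, 24.0, 28.0]
print("Pod total (kW):", sum(allocations))         # 76.0
```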
Power
To power its energy-efficient datacenters, Sun created a new electrical yard containing
power and cooling infrastructure that is more than adequate for today’s needs, with
plumbing and transformer pads that can more than double the yard’s capacity in the
future. The cooling system uses variable frequency drives throughout, from the
evaporative cooling towers to the chilled water circulator pumps.
Sun used a similar approach for its UPS systems, sizing them so that they could scale to
handle double today’s capacity, with the ability to add capacity one module at a
time with no downtime.
Inside the datacenters, Sun standardized on a hot-pluggable overhead busway to distribute
power to each rack. With the busway, there is no need to engage an electrician to
change breakers and receptacles or pull new cable when rack configurations change.
The modular busway allows power drops to be installed and moved on demand,
making the space much more flexible, while eliminating large amounts of copper and
waste.
The busway eliminates the need for on-floor UPS systems, which consume costly floor
space and generate additional heat. Instead, Sun uses standard transformers whose
cooling requirements are much less stringent than those of the equipment on the datacenter
floor.
Cooling
As rack densities increase, standard computer room air conditioner (CRAC) units increasingly fail to provide sufficient
cooling. The use of perforated floor tiles provides some degree of control over where
chilled air is delivered, but the general mixing of hot and cold air that occurs in
datacenters cooled this way requires lower chilled water temperatures and causes
greater inefficiency. Sun’s approach is to deliver more cooling capacity exactly where it
is needed through in-row and overhead cooling units, choosing between the two
depending on the specific datacenter’s requirements.
In-Row Cooling and Hot-Aisle Containment
One way to virtually eliminate the inefficient mixing of hot and cold air is to completely
separate hot and cold areas through hot-aisle containment. For this approach, Sun
deployed APC InfraStruXure InRow Cooling (RC) with hot-aisle containment. This
mechanism completely contains the hot aisle, eliminating hot air recirculation. Hot air
passes through the RC units, which use chilled water to cool the air, and variable-speed
fans to provide exactly the amount of cooling required based on temperature sensors.
Sun installed double the capacity needed today, with every other RC unit blocked off
until its cooling capacity is needed.
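The following sketch illustrates, in simplified form, how a variable-speed fan might be driven from a return-air temperature sensor. It is not APC’s actual control algorithm; the setpoint, gain, and speed limits are assumptions chosen purely for illustration.

```python
# Illustrative sketch only: a simplified proportional control loop of the
# kind used to drive variable-speed fans from temperature sensors. The
# setpoint, gain, and speed limits below are assumed values.

SETPOINT_C = 27.0                    # assumed target hot-aisle return temperature
GAIN = 0.12                          # assumed fan-speed increase per degree of error
MIN_SPEED, MAX_SPEED = 0.20, 1.00    # fan speed as a fraction of maximum

def fan_speed(return_temp_c):
    """Scale fan speed with how far the return air is above the setpoint."""
    error = return_temp_c - SETPOINT_C
    speed = MIN_SPEED + GAIN * max(error, 0.0)
    return min(max(speed, MIN_SPEED), MAX_SPEED)

for temp in (25.0, 28.0, 31.0, 35.0):
    print(f"return air {temp:.0f} C -> fan at {fan_speed(temp):.0%}")
```

The point of the design is that fan power (and therefore cooling energy) tracks the actual heat load in the contained aisle rather than running flat out regardless of demand.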
Figure 11 shows a portion of Sun’s Bangalore datacenter, where each pod’s hot aisle is
contained with a door at the end of the aisle. The narrow racks in each row (also shown
in Figure 10) are the in-row cooling systems.
Figure 11. The use of hot-aisle containment with in-row cooling allows efficient cooling without the
use of CRAC units.
Overhead Cooling for High-Density Equipment
In open datacenter designs, such as those where equipment is rolled in and out
frequently, and in high-density situations, overhead cooling systems are used to deliver
chilled air directly to the racks where it is required, augmenting room air conditioning
or CRAC units (Figure 12).
For these situations, Sun used Liebert X-treme Density (XD) heat removal systems that
were developed with Emerson Network Power. The XD systems are refrigerant-based
chillers that are especially effective in high-density situations. They are relatively simple
devices that contain an evaporator coil and a fan. They circulate refrigerant back to an
in-room coolant chiller through pre-plumbed and tapped refrigerant pipes that allow
units to be installed exactly where they are needed. The coolant chiller moves heat
from the refrigerant into the datacenter’s chilled water system, efficiently removing
heat from the datacenter.
Figure 12. Overhead cooling units can effectively cool high-density racks in raised-floor or slab
configurations.
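To give a sense of the heat loads involved, the following back-of-the-envelope calculation estimates the chilled-water flow needed to carry away a rack’s heat using the standard relationship between heat, flow, and temperature rise. The 20 kW load and 6 °C temperature difference are assumed values, not figures from the Liebert XD documentation.

```python
# Illustrative sketch only: back-of-the-envelope chilled-water flow needed
# to carry away a rack's heat load. The load and the supply/return
# temperature difference are assumptions chosen for the example.

WATER_SPECIFIC_HEAT = 4.186   # kJ/(kg*K)
WATER_DENSITY = 1.0           # kg/L (approximately, for chilled water)

def chilled_water_flow_lps(heat_kw, delta_t_c):
    """Litres per second of water needed to absorb heat_kw with a delta_t_c rise."""
    mass_flow = heat_kw / (WATER_SPECIFIC_HEAT * delta_t_c)   # kg/s
    return mass_flow / WATER_DENSITY

# Hypothetical example: a 20 kW high-density rack with a 6 degree C water delta-T.
flow = chilled_water_flow_lps(heat_kw=20.0, delta_t_c=6.0)
print(f"Approximate chilled-water flow: {flow:.2f} L/s ({flow * 60:.0f} L/min)")
```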
Cabling
Increased density means more cables per rack, more use of copper, and a greater
impact on airflow as racks become stuffed with cables. Sun’s strategy is to move network
racks, cabling, and equipment to the pod itself, increasing flexibility and reducing the
length of cables. Each pod is considered its own room, with high-speed uplinks from
each pod to core switching apparatus. Sun’s approach cuts cable costs by more than 50
percent, saving copper, improving airflow, and simplifying reconfiguration for new,
high-density equipment.
Lessons for Communication Carriers
The space, power, and cooling crunch is a perceived liability that can be turned into an
advantage by communication carriers. When the cost of powering aging, inefficient
equipment is contrasted with the benefits of using modern, energy-efficient systems,
carriers may make the same choice as Sun has, and learn from its experience. The cost
of replacing aging hardware is small compared to the long-term value of having
smaller, denser, more efficient, and more flexible datacenters.
Chapter 6
Conclusion
The convergence of two trends heralds a new, more efficient and environmentally
friendly approach for communication carrier datacenters. First, one key solution to the
space, power, and cooling crunch is to upgrade server infrastructure and consolidate
applications, steps that lead to using less space through higher densities. Second, the
use of energy-efficient power and cooling technologies to handle increased density can
save additional power.
Sun has the technology and experience that can help carrier datacenters reduce their
power consumption and increase their power and cooling infrastructure efficiency.
Carriers can take any or all of the following recommended steps:
• Power down by converting to energy-efficient Sun Fire servers powered by the
UltraSPARC T1 processor with CoolThreads technology, or Sun x64 servers powered by
AMD Opteron and Intel Xeon processors. Power down further through the use of
energy-efficient power and cooling techniques.
• Consolidate multiple workloads onto a smaller number of state-of-the-art systems,
reducing the number of servers required, and raising utilization levels, both of which
contribute to increased power efficiency. Even refreshing server technology alone can
yield performance increases along with a reduction in power consumption.
• Use alternative energy sources to reduce carbon footprint and increase efficiency.
Consider converting to DC power, saving 10-20 percent by eliminating multiple,
inefficient, AC-DC and DC-AC power conversions in the datacenter.
Communication carrier datacenters are in a power, space, and cooling crunch. This
crunch is in part aggravated by historical silo-architected applications running on
incompatible and aging server technologies that are becoming harder to maintain and that
consume more power than applications hosted on more modern, energy-efficient
servers. What better approach for a carrier datacenter than to standardize on a small
number of industry-standard platforms that can run all three of today’s major operating
systems: the Solaris OS, Linux, and Microsoft Windows? Sun platforms, both natively
and through the use of virtualization technologies, can support a range of applications
based on different operating system technologies. With a single set of standard
platforms in their datacenters, carriers can reap the benefits of a standard,
interchangeable, and consistent single-vendor solution for their server and storage
needs.
For More Information
Sun’s Web site hosts several power calculators that can estimate power consumption
for specific configurations of Sun’s energy-efficient servers:
• Power calculators for Sun Fire x64 servers are available on each server’s overview
page. For a list of server offerings, consult http://www.sun.com/x64/offerings.jsp.
• The Sun Fire T1000 server power calculator is available at:
http://www.sun.com/servers/coolthreads/t1000/calc.
• The Sun Fire T2000 server power calculator is available at:
http://www.sun.com/servers/coolthreads/t2000/calc.
• Sun’s datacenter simulator that illustrates how CoolThreads technology can ease
power, cooling, and space concerns is available at: http://www.sun.com/servers/
coolthreads/overview/index.jsp.
Information on Sun’s DC Powered Datacenter Demonstration project is available from
Lawrence Berkeley Labs:
• “Energy-Efficient Direct-Current-Powering Technology Reduces Energy Use in
Datacenters By Up to 20 Percent,” article available at: http://www.lbl.gov/Science-
Articles/Archive/EETD-DC-power.html.
More information on Sun’s energy-efficient datacenters is available at:
• http://www.sun.com/eco.
Chapter 7
About the Authors
Bruce Baikie
Bruce Baikie is responsible for developing Sun’s global telecom strategies and strategic
alliances with an emphasis on wireless services and platform infrastructure. His
position leverages extensive knowledge in wireless architectures, mobile service
delivery environments and next-generation wireless and broadband services. Bruce
delivers field sales and marketing collateral to promote partner solutions developed by
Network Equipment Providers (NEPs), ISVs, IHVs, and consulting organizations. A
key member of Sun’s telecom team, his responsibilities also include partner
contracts and negotiations, solution architectural designs, managing partner
implementations and direct global field sales support. Mr. Baikie holds a B.S. in
Mechanical Engineering from Michigan Technological University and completed studies in
International Business at the University of Wisconsin.
Steve Gaede
Steve Gaede is a freelance technical writer and technology consultant. In his work with
PointSource Communications, he develops numerous documents each year for Sun
Microsystems. His history includes work with Amdahl Corp., Bell Laboratories, Cray
Laboratories, NBI, Inc., and Software Design & Analysis Inc. Steve performs technology
assessments, builds proof-of-concept prototypes, conducts performance evaluation
projects, and develops IT best practices through his consulting company, Lone Eagle
Systems Inc. He is active in the Boulder, Colorado, professional community, and has been
a coordinator of the Front Range UNIX Users Group since 1984. Mr. Gaede received his
B.S.E. in Computer Engineering from the University of Michigan and his M.S. in
Computer Science from the University of California, Berkeley.
Acknowledgments
The authors would like to thank Dennis Symanski of Sun’s Worldwide Compliance Office
for building and describing Sun’s DC-Powered Datacenter demonstration project, Dean
Nelson of Sun’s Global Lab & Datacenter Design Services group for describing the
technology and benefits of Sun’s energy-efficient datacenters, and Mark Monroe, for his
thorough review of the second edition of this paper.
Sun Microsystems, Inc. 4150 Network Circle, Santa Clara, CA 95054 USA Phone 1-650-960-1300 or 1-800-555-9SUN (9786) Web sun.com
Easing the Space, Power and Cooling Crunch with Green Internet Datacenter Re-Design
On the Web: sun.com
© 2006-2008 Sun Microsystems, Inc. All rights reserved. Sun, Sun Microsystems, the Sun logo, BluePrints, CoolThreads, Java, Netra, OpenSolaris, Solaris, Sun Blade, Sun Fire, and “The Network is the Computer” are trademarks or registered trademarks of Sun Microsystems, Inc. in the U.S. and other countries. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. in the U.S. and other countries. Products bearing SPARC trademarks are based upon architecture developed by Sun Microsystems, Inc. AMD, Opteron, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices, Inc. Intel, the Intel logo, and Xeon are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. Information subject to change without notice. Printed in USA