

RESEARCH UNDERWRITER WHITE PAPER

    LEAN, CLEAN & GREEN

    Wright Line

Creating Data Center Efficiencies Using Closed-Loop Design

Brent Goren, Data Center Consultant

Currently 60 percent of the cool air that is supplied from air-conditioning units in a typical data center is wasted. This white paper provides information to help achieve greater efficiencies within the data center by optimizing the physical cooling capacity, while maintaining expected levels of reliability.

Publisher's Note: This cobranded and copyrighted paper is published for inclusion in the formal compilation of papers, presentations, and proceedings known as The Path Forward v4.0: Revolutionizing Data Center Efficiency of the annual Uptime Institute's Green Enterprise IT Symposium, April 13-16, 2009, New York City. It contains significant information value for the data center professional community which the Institute serves. Although written by a research underwriting partner of the Institute, the Institute maintains a vendor-neutral policy and does not endorse any opinion, position, product or service mentioned in this paper. Readers are encouraged to contact this paper's author(s) directly with comments, questions or for any further information.


Data center trends have traditionally been focused on delivery of service and reliability. However, there has been a recent shift in focus to provide greater efficiency in data centers. Up until now, there has been little incentive for data center managers to optimize the efficiency of their data center, and they are still primarily concerned about capital costs related to their data center's capacity and reliability. A study by research analyst firm IDC shows that for every dollar of new server spend in 2005, 48 cents was spent on power and cooling. This is a sharp increase from 2000, when the ratio was 21 cents per dollar of server spend. This ratio is expected to increase even further. Thus the immediate demand to create more efficient data centers will be at the forefront of most companies' cost-saving initiatives. However, efficiency gains must be balanced to ensure there is no compromise in data center reliability and performance.

    Legacy Data Center Design Issues

A legacy data center typically has the following characteristics:

• An open system that delivers cold air at about 55°F via overhead ducting or a raised-floor plenum
• Perforated tiles (in a raised-floor environment) used to channel the cold air from beneath the raised-floor plenum into the data center
• Rows of racks oriented 180 degrees from alternate rows to create hot and cold aisles
• A minimum of four feet of separation between cold aisles and three feet between hot aisles[1]
• Precision air conditioning units located at the ends of each hot aisle.

In practice, the airflow in a legacy data center is very unpredictable and has numerous inefficiencies, which proliferate as power densities increase. This is shown in Figure 1, where bypass air, recirculation, and air stratification are the dominant airflow characteristics throughout the data center.

1. Recommendations as per ANSI/TIA/EIA-942, April 2005.

    Figure 1. Bypass airflow, recirculation, and air stratification

    Bypass Airflow

Bypass airflow is defined as conditioned air that doesn't reach computer equipment.[2] The most common form of bypass air occurs when air supplied from the precision air conditioning units is delivered directly back to the air-conditioner intakes. Examples of this would be leakage areas such as air penetrating through cable cut-outs, holes under cabinets, or misplaced perforated tiles that blow air directly back to the air-conditioner intakes. Other examples of bypass airflow include air that escapes through holes in the computer room perimeter walls and non-sealed doors. In conventional legacy data centers, as little as 40 percent of the air delivered from precision air conditioning units may actually make its way to cool the existing IT equipment, a great waste of energy as well as an excessive and unnecessary operational expense.
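As a rough illustration of the scale of this waste, the bypass fraction can be estimated by comparing the total airflow delivered by the precision air conditioning units with the airflow actually drawn through the IT equipment. The following is a minimal sketch in Python; the CFM figures are assumed for illustration and are not taken from this paper.

```python
# Minimal sketch: estimating bypass airflow as the share of supplied air that
# never passes through IT equipment. The CFM figures are hypothetical, chosen
# only to mirror the roughly 60 percent waste described above.

crac_supply_cfm = 50_000      # total airflow delivered by the precision AC units
it_equipment_cfm = 20_000     # total airflow drawn through server inlets

bypass_cfm = crac_supply_cfm - it_equipment_cfm
bypass_fraction = bypass_cfm / crac_supply_cfm

print(f"Bypass airflow: {bypass_cfm} CFM ({bypass_fraction:.0%} of supply)")
# Output: Bypass airflow: 30000 CFM (60% of supply)
```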

    Recirculation

Recirculation occurs when hot air exhausted from rack-mounted computing devices is fed into device inlets. This principally occurs in servers located at the highest points of a high-density enclosure. This is illustrated in Figure 2 by the large area shown in red. Recirculation can cause overheating damage to computing equipment and disruption to mission-critical services.

2. Reducing Bypass Airflow Is Essential for Eliminating Hotspots, by Robert F. Sullivan, Ph.D.



    Figure 2. Recirculation

    Hot and Cold Remixing and Air Stratification

Air stratification in the data center is the layering effect of temperature gradients from the floor to the ceiling of the computer room.

In a raised-floor environment, air is delivered at approximately 55°F from under the raised floor through perforated tiles. The temperature, as the air first penetrates the perforated tile, remains the same as the supply temperature, but as the air moves vertically up the racks' front face, air temperatures gradually increase. This occurs because insufficient airflow is delivered through the perforated tiles, which allows the hot air exhaust to penetrate the cold-aisle region. In high-density enclosures, it's not uncommon for temperatures to exceed 90°F at the server inlets mounted at the highest point of the enclosure. However, the recommended temperature range for server inlets as stated by the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE) Mission Critical Facilities Technical Committee 9.9 is between 64.4°F and 80.6°F. Thus, in a legacy data center design the computer room is actually being over-cooled, by sending extremely cold air under the raised floor to compensate for the wide range of temperatures at the device inlets.
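To make the stratification effect concrete, the short sketch below checks a set of inlet temperatures, measured at increasing heights on a rack's front face, against the ASHRAE TC 9.9 recommended range of 64.4°F to 80.6°F. The rack positions and temperature readings are hypothetical values chosen to resemble a legacy open room.

```python
# Minimal sketch: flagging server inlet temperatures that fall outside the
# ASHRAE TC 9.9 recommended range (64.4-80.6 F). The readings are hypothetical
# values for one rack, bottom (U5) to top (U42), in a legacy open-room layout.

ASHRAE_MIN_F = 64.4
ASHRAE_MAX_F = 80.6

inlet_temps_f = {"U5": 58.0, "U15": 66.0, "U25": 74.0, "U35": 85.0, "U42": 92.0}

for position, temp in inlet_temps_f.items():
    if temp < ASHRAE_MIN_F:
        status = "below range (over-cooled)"
    elif temp > ASHRAE_MAX_F:
        status = "above range (recirculation/stratification)"
    else:
        status = "within range"
    print(f"{position}: {temp:.1f} F -> {status}")
```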

Data Center Heat Source: Processor Performance and the Need for Speed

In our modern economy, the fact remains that companies need to maintain growth and profitability, which demands delivery of better, faster, richer, and more reliable products and services to remain competitive. Thus, the constant "need for speed" reflects the modern-day business compulsion to consume increasing levels of computing performance to maintain or attain a competitive advantage. However, until recently most IT departments never related this exponential growth of processing power to how it affects power consumption.

The fact is the ratio of processor performance with respect to power has increased significantly over the last several years. In other words, the processor manufacturers have made some significant technology breakthroughs to increase the performance of the processor while consuming less power.

The actual culprit of the power consumption issue is related to the exponential growth in power densities. Processor manufacturers such as Intel and AMD are making the processors smaller and denser, such that server manufacturers can incorporate a greater number of processors in a smaller footprint. Data collected by Intel Corporation has shown that current processor technology consumes 24 percent of the power to execute the same workload in roughly the same time period as processor technology used in 1999. That's less than one-quarter of the power consumption of less than a decade ago. However, the power density (the amount of electric power consumed by the computer chip) has increased by a factor of 16X during the same period, which creates the fundamental cooling problem from the chip throughout the critical computing environment.
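The two trends cited above pull in opposite directions, which a small worked example makes clearer. The baseline wattage and density figures below are assumed values; only the 24 percent and 16X factors come from the Intel data cited in the text.

```python
# Worked example of the two trends above. Only the 24 percent (power per
# workload) and 16x (power density) factors come from the cited Intel data;
# the 1999 baseline figures are assumed for illustration.

power_per_workload_1999_w = 100.0                              # assumed baseline
power_per_workload_now_w = 0.24 * power_per_workload_1999_w    # 24% of the 1999 level

power_density_1999_w_cm2 = 10.0                                # assumed baseline
power_density_now_w_cm2 = 16 * power_density_1999_w_cm2        # 16x increase

print(f"Power per workload: {power_per_workload_1999_w:.0f} W -> {power_per_workload_now_w:.0f} W")
print(f"Power density:      {power_density_1999_w_cm2:.0f} W/cm^2 -> {power_density_now_w_cm2:.0f} W/cm^2")
# Less energy per unit of work, but far more heat concentrated in each square
# centimeter of silicon, and therefore in each rack footprint.
```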

Virtualization software has also significantly reduced power consumption in the data center by taking advantage of underutilized processing power within each server; in effect consolidating many physical servers into one. However, although the total power consumption considerably decreases with virtualization, the power density per physical server increases.

These strategies provide tremendous impact in reducing energy consumption; the challenge is that these technological advances come with a cost. With increasing power densities per cabinet, traditional computer room cooling designs cannot prevent server exhaust recirculation and thus become unreliable as a means of cooling.

Closed-Loop Heat Containment: What Is a Closed-Loop Design?

The legacy data center is an open system where air is allowed to move freely throughout the data center. A closed-loop design is a solution whereby all the air supplied by the computer-room air conditioners is delivered to the intakes of the rack-mounted computing equipment and all the hot air exhaust is delivered directly back to the intake of the air-conditioning system.



There are essentially two current methods available for achieving a closed-loop design.

Cold-air containment. A cold-air containment system is one in which the cold air supply from the computer room air-conditioning unit is isolated, and the hot air is allowed to move freely throughout the room. This can be done by completely isolating the cold aisle in the data center or using a ducted enclosed channel attached to the front of the enclosure that draws cold air directly to the server intakes.

Heat containment. Heat containment is achieved by capturing all the hot air that is exhausted from the rack-mounted computing equipment and directing it to the intake of the computer room air conditioner without any cold air contamination. This can be accomplished by enclosing the hot aisle or enclosures and having a heat-rejection system pump the heat from these contained units out of the data center. Alternatively, a ducting system that directs the hot air from the rear of the rack enclosure to the air-conditioner intakes can also be used.

    Closed-Loop Heat Containment Solutions

Closed-loop design is an adaptive concept built on the premise of providing customers with ease of deployment that integrates with existing infrastructure. Not unlike LEGO building blocks, once the foundation is created, all the other pieces fit together. Once an adaptable enclosure frame is installed, there are several solutions available to the customer. Each solution has its benefits to meet the customer's requirements.

Passive exhaust system. This containment system incorporates a chimney attached to the back of an adaptable frame enclosure. In this case, the chimney is designed over the rear corner of the rack to ensure access to overhead cable management such as ladder trays. The heat containment system relies on all the hot-air exhaust being directed through the chimney, thus much attention has been placed on minimizing any air leaks in the cabinet. The rack must be deployed with a sealed solid back door, and a cover must be used on other exposed areas to ensure the air exhaust does not leak outside the cabinet. The passive system is dependent on computing equipment exhaust fans to deliver enough volume of airflow to pass through the chimney. Thus, in a passive exhaust system, one needs to be cognizant of potential pressurized backflow with low-flow exhaust configurations.

Assisted exhaust system. This heat containment design uses fans within the attached enclosure chimney to assist the airflow through the ducted vent. This system should be used in conjunction with a fan speed controller to optimize the airflow volume within the rack. One of the advantages of the assisted system is the ability to control the flow of air. If the server exhaust is not strong enough, air from the surrounding room or the plenum can enter the rear of the rack, causing remixing. The key strategy in using an assisted exhaust-based system is to control the flow of air such that there is a slight negative pressure at the very top of the enclosure and zero static pressure throughout the rest of the rear portion of the rack. This strategy will optimize airflow performance to ensure the heat is exhausted, eliminating the risk of backflow.
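The fan speed control strategy described above can be pictured as a simple feedback loop: measure the static pressure at the top of the enclosure and nudge the chimney fan speed until the pressure sits at a slightly negative setpoint. The sketch below is a bare-bones proportional loop, not a description of any vendor's controller; the setpoint, gain, and sensor reading are assumptions.

```python
# Minimal sketch of a proportional fan-speed loop for an assisted exhaust
# chimney: hold a slight negative static pressure at the top of the enclosure
# so exhaust is drawn out without pulling room air back in. The setpoint,
# gain, and sensor reading below are hypothetical.

SETPOINT_PA = -2.0   # target static pressure at the top of the rack (slightly negative)
GAIN = 0.5           # proportional gain: percent fan speed per Pa of error

def update_fan_speed(current_speed_pct: float, measured_pressure_pa: float) -> float:
    """Return a new fan speed (0-100%) that pushes the pressure toward the setpoint."""
    error = measured_pressure_pa - SETPOINT_PA      # positive error -> not enough suction
    new_speed = current_speed_pct + GAIN * error
    return max(0.0, min(100.0, new_speed))          # clamp to the fan's usable range

# Example: the top of the rack reads +1.5 Pa (room air is leaking in), so the
# controller raises the fan speed from 40% to 41.75%.
print(update_fan_speed(current_speed_pct=40.0, measured_pressure_pa=1.5))
```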

    Application of Heat-Containment System

Closed-loop heat containment solutions are designed to adapt to existing infrastructures and provide a solution for greenfield applications. The application of heat containment systems increases efficiencies within the data center by reducing bypass airflow and recirculation, thus allowing the heat to flow directly to the air-conditioner intakes.

To achieve heat containment with the active- or passive-ducted exhaust option, the hot air exhaust that flows through the enclosure's chimney attachment must be used in conjunction with other facilities to continue the flow directly back to the precision air-conditioner intakes without remixing. There are effectively two methods to achieve this task.

Extending the adaptable enclosure's rear duct to a plenum ceiling. A closed-loop system can be attained by extending the rear duct from the back of the frame to a drop-ceiling plenum and adding a ducted return from the ceiling plenum back to the air-conditioner intakes. If a drop ceiling is in place, this approach has the advantage of minimizing the costs of building out a dedicated ducted heat return.

Direct-ducted exhaust return. If no plenum ceiling exists, it may be possible to duct the hot air exhaust directly back to the air-conditioner intakes. This has the advantage of providing a more controlled heating, ventilating, and air conditioning (HVAC) environment, since the air path is 100 percent dedicated to heat containment.



    Quantifying Closed-Loop Efficiencies

Recent articles make generalizations about an enclosure's ability to cool based on power densities within the rack. Specifically, the enclosure is essentially a passive device[3] that doesn't provide any cooling. Thermally, the function of the enclosure is to ensure that adequate airflow can be provided to computing equipment intakes and that the heat generated from the equipment is not trapped within the enclosure. However, with the recent increases in power densities and data center energy costs, the enclosure has evolved into a critical piece of the data center and now needs to be a part of an integrated strategy for achieving greater efficiencies.

The foundations of closed-loop design efficiency savings can be established by optimizing four conditions (a minimal check of these conditions is sketched after the list):

1. Provide consistent air temperature between 64.4°F and 80.6°F to all computing equipment (this is a statement of reliability as provided by ASHRAE TC 9.9).
2. Ensure the air temperature leaving the server exhaust matches as closely as possible the intake temperature of the computer room air conditioner.
3. Make certain there is sufficient air flowing to the inlets of all the computing equipment.
4. Ensure the computer room is sealed as much as possible and avoid air leakages wherever they occur.
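As noted above, these four conditions can be expressed as simple checks against monitoring data. The sketch below is illustrative only: apart from the ASHRAE inlet range, the thresholds, field names, and readings are assumptions.

```python
# Minimal sketch: evaluating the four closed-loop conditions from hypothetical
# monitoring data. Only the 64.4-80.6 F inlet range comes from this paper; the
# other thresholds and the data layout are assumptions for illustration.

readings = {
    "inlet_temps_f": [68.0, 71.5, 73.0, 76.0],   # condition 1
    "server_exhaust_f": 95.0,                    # condition 2
    "crac_return_f": 93.0,                       # condition 2
    "supplied_cfm": 21_000,                      # condition 3
    "required_cfm": 20_000,                      # condition 3
    "leakage_fraction": 0.03,                    # condition 4
}

checks = {
    "1. inlets within 64.4-80.6 F": all(64.4 <= t <= 80.6 for t in readings["inlet_temps_f"]),
    "2. server exhaust ~ CRAC return": abs(readings["server_exhaust_f"] - readings["crac_return_f"]) <= 5.0,
    "3. sufficient airflow to inlets": readings["supplied_cfm"] >= readings["required_cfm"],
    "4. room well sealed": readings["leakage_fraction"] <= 0.05,
}

for name, ok in checks.items():
    print(f"{name}: {'OK' if ok else 'needs attention'}")
```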

In open legacy infrastructures, the only way to maintain ASHRAE's recommended temperature range for reliability in high-density environments is to oversupply the amount of cooling in the room. In some cases this can be as much as 50 percent of the necessary airflow required. Therefore, the cost of ensuring reliability comes by reducing the overall efficiency and significantly increasing the amount of bypass airflow in the room. On the other hand, by supplying only the necessary amount of volumetric airflow, the traditional data center cooling design cannot prevent the server exhaust from feeding back to the device inlets, thus reducing the reliability of the IT equipment.

3. With the exception of enclosures that include internal heat exchangers.

In a legacy data center, there is a tradeoff between efficiency and reliability; increasing one negatively affects the other. In a closed-loop heat containment system, because the hot and cold air streams are isolated, there is little effect from recirculation as the cooling supply is optimized to meet demand requirements, and thus there is no reliability penalty when increasing efficiency.

In traditional data centers, cold air is supplied from the precision air conditioners at very cold temperatures (approximately 55°F). The reason the air is supplied at such cold temperatures is to counter the effects of high temperatures detected at the top of many enclosures caused by hot and cold air remixing. However, if the heat can be contained and not remixed with the cold air, there is no reason to supply such cold temperatures under the raised floor. Studies have shown that increasing the chilled water supply temperature from 45°F to 55°F, or raising the air supply temperature to 65°F, will achieve a 16 percent savings in the energy consumed by the chiller.[4]

4. A Strategic Approach to Datacenter Cooling, by Dr. James Fulton, Associate Professor of Mathematics, Suffolk County Community College, Selden, New York, April 2008.
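As a back-of-the-envelope illustration of what the cited 16 percent figure can mean, the sketch below applies it to an assumed annual chiller energy consumption; the baseline kWh and electricity rate are hypothetical, and only the 16 percent factor comes from the study cited above.

```python
# Minimal sketch: applying the cited 16 percent chiller energy savings to an
# assumed baseline. The annual consumption and electricity rate are
# hypothetical; only the 16 percent figure comes from the cited study.

baseline_chiller_kwh_per_year = 1_500_000   # assumed annual chiller consumption
electricity_rate_usd_per_kwh = 0.10         # assumed utility rate
savings_fraction = 0.16                     # chilled water raised 45 F -> 55 F

kwh_saved = baseline_chiller_kwh_per_year * savings_fraction
usd_saved = kwh_saved * electricity_rate_usd_per_kwh

print(f"Energy saved: {kwh_saved:,.0f} kWh/year (about ${usd_saved:,.0f}/year)")
# Output: Energy saved: 240,000 kWh/year (about $24,000/year)
```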

Closed-loop heat containment systems can further increase efficiency when combined with air-side economizers. During the appropriate seasonal conditions, outside air can be used to cool the data center as opposed to consuming large amounts of energy using mechanical cooling. Provided the outdoor environment has favorable humidification conditions, significant energy savings can be attained by taking advantage of the hours during the year that both the supply and return temperatures are higher than the outside air.

In a legacy environment, the typical supply and return temperatures are 54°F and 70°F. In contrast, the closed-loop heat containment system could typically have supply and return temperatures between 68°F and 95°F. Depending on location, this can have a tremendous effect on the number of hours necessary to provide mechanical cooling in the data center. For example, a city such as Los Angeles can use economized cooling 86 percent of the hours in a year if the supply temperature is above 70°F. However, if the supply temperature is below 53°F, it can only use full air-side economization cycles 6 percent of the hours in a year.[5] Thus, substantial savings can be achieved by increasing the supply and return temperatures, and a closed-loop heat containment solution can effectively deliver this result while ensuring the reliability of the IT equipment.
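The dependence of economizer hours on supply temperature can be estimated directly from an hourly outdoor-temperature record: count the hours in which the outdoor dry-bulb temperature is at or below the supply-air setpoint. The sketch below shows that calculation in minimal form; it ignores the humidity constraints mentioned above, and the synthetic temperature series stands in for real hourly weather data (for example, a TMY file).

```python
# Minimal sketch: estimating air-side economizer availability by counting the
# hours per year the outdoor temperature is at or below the supply setpoint.
# The synthetic temperature series is a placeholder for real hourly weather
# data, and humidity limits are ignored for simplicity.

import random

random.seed(0)
outdoor_temps_f = [random.uniform(40.0, 95.0) for _ in range(8760)]  # placeholder year

def economizer_fraction(temps_f, supply_setpoint_f):
    """Fraction of hours in which outside air alone can meet the supply setpoint."""
    usable_hours = sum(1 for t in temps_f if t <= supply_setpoint_f)
    return usable_hours / len(temps_f)

for setpoint_f in (53.0, 70.0):
    frac = economizer_fraction(outdoor_temps_f, setpoint_f)
    print(f"Supply setpoint {setpoint_f:.0f} F: economizer usable about {frac:.0%} of hours")
```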

    About the Author

Brent Goren, PE, is a data center consultant with Wright Line, where he provides technical expertise to assist clients in designing scalable, reliable, and efficient data centers. Brent has in-depth knowledge in both power and cooling in the data center and, most recently, has taken a lead role in building a practice surrounding computational fluid dynamics (CFD) modeling and airflow management. Brent has over 15 years' experience working within IT environments in various roles and capacities, with the last three years prior to working with Wright Line dedicated to data center consolidation and relocation projects. Brent received his BA in Electrical Engineering from the University of Manitoba.

Mr. Goren can be reached at [email protected].

    About Wright Line

Wright Line provides a wide range of innovative data center solutions developed through direct collaboration with its customers. From server enclosures and power distribution units (PDUs) to its new patent-pending heat containment system, Wright Line can help you improve your data center infrastructure efficiency (DCiE) and power usage effectiveness (PUE). Wright Line doesn't advocate a one-size-fits-all product development methodology, but rather a consultative, collaborative approach that maximizes your data center operations. Its industry-leading enclosures, coupled with its broad range of accessories, power distribution, keyboard/video display/mouse (KVM) switches and monitoring products, are designed to store, cool, power, manage and secure your mission-critical equipment.

5. Data from Best Practices for Datacom Facility Energy Efficiency, ASHRAE Series, ISBN 978-1-933742-27-4.

    About the Uptime Institute

Uptime Institute is a leading global authority on data centers. Since 1993, it has provided education, consulting, knowledge networks, and expert advisory for data center Facilities and IT organizations interested in maximizing site infrastructure uptime availability. It has pioneered numerous industry innovations, including the Tier Classification System for data center availability, which serves as a de facto industry standard. Site Uptime Network is a private knowledge network with 100 global corporate and government members, mostly at the scale of Fortune 100-sized organizations in North America and EMEA. In 2008, the Institute launched an individual Institute membership program. For the industry as a whole, the Institute certifies data center Tier level and site resiliency, provides site sustainability assessments, and assists data center owners in planning and justifying data center projects. It publishes papers and reports, offers seminars, and produces an annual Green Enterprise IT Symposium, the premier event in the field focused primarily on improving enterprise IT and data center computing energy efficiency. It also sponsors the annual Green Enterprise IT Awards and the Global Green 100 programs. The Institute conducts custom surveys, research and product certifications for industry manufacturers. All Institute published materials are © 2009 Uptime Institute, Inc., and protected by international copyright law, all rights reserved, for all media and all uses. Written permission is required to reproduce all or any portion of the Institute's literature for any purpose. To download the reprint permission request form, visit uptimeinstitute.org/resources.

Uptime Institute, Inc.
2904 Rodeo Park Drive East
Building 100
Santa Fe, NM 87505-6316
Corporate Offices: 505.986.3900
Fax: 505.982.8484
uptimeinstitute.org

© 2009 Uptime Institute, Inc. and Wright Line

