

Data Center Efficiency and Design

    Volume 13 | November 2009

Green Power Protection spinning into a Data Center near you

Isolated-Parallel UPS Systems: Efficiency and Reliability?

Powering Tomorrow's Data Center: 400V AC versus 600V AC Power Systems


FACILITY CORNER

ELECTRICAL

4 ISOLATED-PARALLEL UPS SYSTEMS: EFFICIENCY AND RELIABILITY?
By Frank Herbener & Andrew Dyke, Piller Group GmbH, Germany
In today's data center world of ever-increasing power demand, the scale of mission-critical business dependent upon uninterruptible power grows ever more. More power means more energy and the battle to reduce running costs is increasingly fierce.

    MECHANICAL

8 OPTIMIZING AIR COOLING USING DYNAMIC TRACKING
By John Peterson, Mission Critical Facility Expert, HP
Dynamic tracking should be considered as a viable method to optimize the effectiveness of cooling resources in a data center. Companies using a dynamic tracking control system benefit from reduced energy consumption and lower data center costs.

    CABLING

11 PREPARE NOW FOR THE NEXT-GENERATION DATA CENTER
By Jaxon Lang, Vice President, Global Connectivity Solutions Americas, ADC
Fueled by applications such as IPTV, Internet gaming, file sharing and mobile broadband, the flood of data surging across the world's networks is rapidly morphing into a massive tidal wave--one that threatens to overwhelm any data center not equipped in advance to handle the onslaught.

SPOT LIGHTS

ENGINEERING AND DESIGN

14 GREEN POWER PROTECTION SPINNING INTO A DATA CENTER NEAR YOU
By Frank DeLattre, President, VYCON
Flywheel energy storage systems are gaining strong traction in data centers, hospitals, industrial and other mission-critical operations where energy efficiency, costs, space and environmental impact are concerns. This green energy storage technology is solving sophisticated power problems that challenge computing operations every day.

18 POWERING TOMORROW'S DATA CENTER: 400V AC VERSUS 600V AC POWER SYSTEMS
By Jim Davis, Business Unit Manager, Eaton Power Quality and Control Operations
While major advancements in electrical design and uninterruptible power system (UPS) technology have provided incremental efficiency improvements, the key to improving system-wide power efficiency within the data center is power distribution.

22 DATA CENTER EFFICIENCY: IT'S IN THE DESIGN
By Lex Coors, Vice President Data Center Technology and Engineering Group, Interxion
Data centers have always been power hogs, but the problem has accelerated in recent years. Ultimately, it boils down to design, equipment selection and operation, of which measurement is an important part.

IT CORNER

25 ONLINE BACKUP OR CLOUD RECOVERY?
By Ian Masters, UK Sales & Marketing Director, Double-Take Software
There is an old saying in the data protection business that the whole point of backing up is preparing to restore. Having a backup copy of your data is important, but it takes more than a pile of tapes (or an on-line account) to restore.

26 FIVE BEST PRACTICES FOR MITIGATING INSIDER BREACHES
By Adam Bosnian, VP Marketing, Cyber-Ark Software
Mismanagement of processes involving privileged access, privileged data, or privileged users poses serious risks to organizations. Such mismanagement is also increasing enterprises' vulnerability to internal threats that can be caused by simple human error or malicious deeds.

All rights reserved. No portion of DATA CENTER Journal may be reproduced without written permission from the Executive Editor. The management of DATA CENTER Journal is not responsible for opinions expressed by its writers or editors. We assume that all rights in communications sent to our editorial staff are unconditionally assigned for publication. All submissions are subject to unrestricted right to edit and/or to comment editorially.

AN EDM2R ENTERPRISES, INC. PUBLICATION | ALPHARETTA, GA 30 | PHONE: 678-762-9366 | FAX: 866-708-3068 | WWW.DATACENTERJOURNAL.COM

    DESIGN : NEATWORKS, INC | TEL: 678-392-2992 | WWW.NEATWORK


IT OPS

28 ENERGY MEASUREMENT METHODS FOR THE DATA CENTER
By Info-Tech Research Group
Ultimately, energy data needs to be collected from two cost buckets: data-serving equipment (servers, storage, networking, UPS) and support equipment (air conditioning, ventilation, lighting, and the like). Changes in one bucket may affect the other bucket, and by tracking both, IT can understand this relationship.

EDUCATION CORNER

30 COMMON MISTAKES IN EXISTING DATA CENTERS & HOW TO CORRECT THEM
By Christopher M. Johnston, PE and Vali Sorell, PE, Syska Hennessy Group, Inc.
After you've visited hundreds of data centers over the last 20+ years (like your authors), you begin to see problems that are common to many of them. We're taking this opportunity to list some of them and to recommend how to correct them.

YOUR TURN

32 TECHNOLOGY AND THE ECONOMY
By Ken Baudry
An article from our Experts' Blog

VENDOR INDEX

Holis-Tech ................................. Inside Front
www.holistechconsulting.com

MovinCool ..................................... pg 1
www.movincool.com

PDU Cables .................................. pg 3
www.pducables.com

Piller ............................................... pg 7
www.piller.com

Server Tech ................................... pg 9
www.servertech.com

Snake Tray .................................... pg 10
www.snaketray.com

Binswanger ................................. pg 13
www.binswanger.com/arlington

Upsite ............................ pgs 19, 21, 23
www.upsite.com

Universal Electric ....................... pg 20
www.uecorp.com

Sealeze .......................................... pg 22
www.coolbalance.biz

AFCOM ........................................... pg 24
www.afcom.com

7x24 Exchange ............................ pg 27
www.7x24exchange.org

Info-Tech Research Group ....... pg 29
www.infotech.com/measureit

Data Aire ....................................... Back
www.dataaire.com

CALENDAR

NOVEMBER
November 15 - November 18, 2009 | 7x24 Exchange International 2009 Fall Conference | www.7x24exchange.org/fall09/index.htm

DECEMBER
December 2 - December 3, 2009 | KyotoCooling Seminar: The Cooling Problem Solved | www.kyotocooling.com/KyotoCooling%20Seminars.html

December 1 - December 10, 2009 | Gartner 28th Annual Data Center Conference 2009 | www.datacenterdynamics.com


FACILITY CORNER: ELECTRICAL

Isolated-Parallel UPS Systems: Efficiency and Reliability?
FRANK HERBENER & ANDREW DYKE, PILLER GROUP GMBH, GERMANY

In today's data centre world of ever-increasing power demand, the scale of mission-critical business dependent upon uninterruptible power grows ever more. More power means more energy and the battle to reduce running costs is increasingly fierce. Optimizing system efficiency without compromise in reliability seems like an impossible task, or is it?

Figure 1: Isolated-Parallel System

                                  Parallel    System      Isolated    Distributed   Isolated-Parallel
                                  Redundant   Redundant   Redundant   Redundant     Redundant
Fault tolerant                    No          Yes         Yes         Yes           Yes
Concurrently maintainable         No          Yes         Yes         Yes           Yes
Load management required          No          No          Yes         Yes           No
Typical UPS module loading (max)  85%         50%         100%*       85%           94%
Reliability order (1 = best)      5           1           4           3             2

* One module is always completely unloaded.

Table 1: Comparison of UPS scheme topologies.

A parallel redundant scheme usually provides N+1 redundancy to boost reliability but suffers from single points of failure, including the output paralleling bus, and the scheme is limited to around 5 or 6 MVA at low voltages. The whole system is not fault tolerant and is difficult to concurrently maintain.

A System + System approach can overcome the maintenance and fault tolerance issues but suffers from a very low operating point on the efficiency curve. Like the parallel redundant scheme, it too is limited in scale at low voltages.

An isolated or distributed redundant scheme can be employed to tackle all these problems, but such schemes introduce additional requirements such as essential load sharing management and static transfer switches for single-corded loads.

The Isolated-Parallel (IP) rotary UPS system eliminates the fundamental drawbacks of conventional approaches to provide a highly reliable, fault tolerant, concurrently maintainable and, yes, highly efficient solution.

IP SYSTEM CONFIGURATION

The idea [1] of an IP system is to use a ring bus structure with individual UPS modules interconnected via 3-phase isolation chokes (IP chokes). Each IP choke is designed to limit fault currents to an acceptable level while allowing sufficient load sharing in the case of a module output failure. Load sharing communications are not required and the scale of low voltage systems can be greatly increased.

LOAD SHARING

In normal operation each critical load is directly supplied from the mains via its associated UPS. In the case that the UPSs are all equally loaded, there is no power transferred through the IP chokes. Each unit independently regulates the voltage on its output bus.

In an unbalanced load condition each UPS still feeds its dedicated load, but the units with resistive loads greater than the average load of the system receive additional active power from the lower loaded UPS via the IP bus (see Figure 2). It is the combination of the relative phase angles of the UPS output busses and the impedance of the IP choke that controls the power flow. The relative phase angles of the UPS must be naturally generated in correlation to the load level in order to provide the ability of natural load sharing among the UPS modules without the necessity of active load sharing controls.

Figure 2: Example of load sharing in an IP system consisting of 16 UPS modules

The influence of the IP choke should also be considered: with all UPS modules having the same output voltage, the impedance of the IP choke inhibits the exchange of reactive current, so that reactive power control is also not necessary.
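The active power exchanged between two UPS output busses through an IP choke follows the standard two-bus power transfer relation: it grows with the phase angle between the busses and shrinks with the choke impedance. A minimal sketch of that relation; the voltage, impedance and angle values below are illustrative assumptions, not Piller figures:

import math

def active_power_transfer(v_ll: float, x_choke: float, delta_deg: float) -> float:
    """Three-phase active power (W) flowing between two busses of equal line-to-line
    voltage v_ll (V) through a series reactance x_choke (ohms) at phase angle delta."""
    return (v_ll ** 2 / x_choke) * math.sin(math.radians(delta_deg))

# Illustrative: two 400 V busses, a 0.05-ohm IP choke, 1 degree of phase displacement.
print(f"{active_power_transfer(400.0, 0.05, 1.0) / 1000:.0f} kW")  # roughly 56 kW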

Looking at the mechanisms of natural load sharing in an IP system, it is obvious that a normal UPS bypass operation would significantly disturb the system. So, if the traditional bypass operation is not allowed in an IP system, what will happen in case of a sudden shutdown of a UPS module? To say absolutely nothing would be slightly exaggerated, but almost nothing is the reality.

Figure 3: Example of redundant load supply in the case that one UPS fails

The associated load is still connected to the IP bus via the IP choke, which now works as a redundant power source. The load will automatically be supplied from the IP bus without interruption. In this mode, each of the remaining UPS modules equally feeds power into the IP bus (Figure 3). There is no switching activity necessary to maintain supply to the load.

An additional breaker between the load and the IP bus allows connection of the load directly to the IP bus, enabling the isolation of the faulty UPS under controlled conditions.

UPS TOPOLOGY

The most suitable UPS topology to achieve the aforementioned load-dependent phase angle in a natural way is a rotary or diesel rotary UPS with an internal coupling choke, as shown in Figure 4.

1. Utility bus
2. IP bus
3. IP bus (return)
4. Rotary UPS with flywheel energy store
5. Load bus
6. IP choke
7. Transfer breaker pair (bypass)
8. IP bus isolation breakers

Note that a UPS module without a bi-directional energy store (e.g. battery or induction coupling) can be used, but the system is likely to exhibit lower stability under transient conditions.

FAULT ISOLATION

There are two fault locations that must be evaluated: a) the IP bus itself and b) the load side downstream of each UPS.

a) A fault on the IP bus is the most critical because it results in the highest local fault currents. The fault is fed in parallel by each UPS connected to the IP bus but limited by the sub-transient reactance of the UPS combined with the impedance of its IP choke. This means that the effect on the individual UPS outputs is minimized and the focal point remaining is the fault withstand of the IP ring itself.

Figure 4: IP system using Piller UNIBLOCK T rotary UPS with bi-directional energy store


b) A fault on the load side of a UPS is mostly fed by the associated UPS, limited by its sub-transient reactance only. A current from each of the non-affected UPSs is fed into the fault too, but because there are two IP chokes in series between the fault and each of the non-affected UPSs, this current contribution is very much smaller. As a result, the disturbance at the non-affected loads is very low. This, in combination with the high fault current capability of rotary UPS, ensures fast clearing of the fault while effectively isolating the fault from the other loads.

Figure 5: Example of fault current distribution in the case of a short circuit on the load side of UPS #2
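To see why the contribution from the non-affected modules stays small, compare the series impedance each source sees: the associated UPS feeds the fault through its sub-transient reactance alone, while every other UPS feeds it through two IP chokes as well. A rough sketch with purely illustrative reactance values (not Piller data):

def fault_current_ka(v_ll: float, x_series: float) -> float:
    """Rough symmetrical fault-current estimate (kA) for a balanced three-phase source:
    line-to-neutral driving voltage divided by the total series reactance (ohms)."""
    return (v_ll / 3 ** 0.5) / x_series / 1000

V_LL = 400.0            # volts, illustrative
X_SUBTRANSIENT = 0.02   # ohms, UPS sub-transient reactance (illustrative)
X_CHOKE = 0.05          # ohms, one IP choke (illustrative)

# Fault on the load bus of one UPS:
print(fault_current_ka(V_LL, X_SUBTRANSIENT))                 # associated UPS: about 11.5 kA
print(fault_current_ka(V_LL, X_SUBTRANSIENT + 2 * X_CHOKE))   # any other UPS: about 1.9 kA each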

    CONTROL

The regulation of voltage, power and frequency plus any synchronization is done by the controls inside each UPS module. The UPS also controls the UPS-related breakers and is able to synchronize itself to different sources. Each system is controlled by a separate system control PLC, which operates the system-related breakers and initializes synchronization processes if necessary. The system control PLC also remotely controls the UPS regarding all operations that are necessary for proper system integration. Redundant Master Control PLCs are used to control the IP system in total. Additional pilot wires interconnecting the system controls allow safe system operation in the improbable case that both master control PLCs fail.

MODES OF OPERATION

In case of a mains failure, each UPS automatically disconnects from the mains and the load is initially supplied from the energy storage device of the UPS. From this moment on, the load sharing between the units is done by a droop function based on a power-frequency characteristic which is implemented in each UPS. No load sharing communication between the units is required. After the Diesel engines are started and engaged, the loads are automatically transferred from the UPS energy storage device to the Diesel engine so the energy storage can be recharged and is then available for further use.

To achieve proper load sharing also in Diesel operation, each Diesel engine is independently controlled by its UPS, whether the engine is mechanically coupled to the generator of the UPS (DRUPS) or an external Diesel generator (standby) is used. A special regulator structure inside the UPS, in combination with the bi-directional energy storage device, allows active frequency and phase stabilization while keeping the load supplied from the Diesel engine.
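The droop function means each unit lets its output frequency sag slightly as its active load rises, so paralleled units converge on equal loading without any load-sharing communication. A minimal sketch of a conventional power-frequency droop characteristic, with illustrative numbers (the actual regulator described in the article is more elaborate):

def droop_frequency(p_load_kw: float, p_rated_kw: float,
                    f_nominal_hz: float = 50.0, droop_pct: float = 2.0) -> float:
    """Frequency setpoint (Hz) of a unit carrying p_load_kw: full rated load pulls
    the frequency down by droop_pct percent of nominal."""
    return f_nominal_hz * (1.0 - droop_pct / 100.0 * p_load_kw / p_rated_kw)

# A unit that picks up more load drifts to a lower frequency, which steers load back
# toward the lighter units until all frequencies (and hence loads) match.
print(droop_frequency(800, 1600))    # 49.50 Hz at 50% load
print(droop_frequency(1200, 1600))   # 49.25 Hz at 75% load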

The retransfer of the system to utility is controlled by the master control. The UPS units are re-transferred one by one, thereby avoiding severe load steps on the utility. After the whole system is synchronized and the first UPS system is reconnected to utility, the load sharing of those UPS systems which are still in Diesel operation cannot be done by the regular droop function. To overcome this, Piller Group GmbH invented and patented Delta-Droop-Control (DD-Control). This allows proper load sharing under this condition without relying on load sharing communications. With the implementation of DD-Control in the UPS modules, all UPS systems can be reconnected to utility step by step until the whole IP system is in mains operation once more. This removes another problem in large-scale systems: that of step-load re-transfer to utility after mains failure.

MAINTAINABILITY

The IP bus system is probably the simplest (high reliability) system to concurrently maintain, because the loads are independently fed by UPS sources and these sources can readily be removed from and returned to the system without load interruption. Not only that, but the ring bus can be maintained, as can the IP chokes, also without load interruption. All the other solutions with similar maintainability (System, Isolated and Distributed redundant) have far greater complexity of infrastructure, leading to more maintenance and increased risk during such operations.

PROJECTS

The first IP system was realized in 2007 for a data center in Ashburn, VA. It consists of two IP systems, each equipped with 16 x Piller UNIBLOCK UB 1670 kVA UPS with flywheel energy storage (total installed capacity > 2 x 20 MW at low voltage). Each of the UPSs is backed up by a separate Diesel generator of 2810 kVA, which can be connected directly to the UPS load bus and which is able to supply both the critical and the essential loads. Since the success of this first installation, three more data centers have been commissioned, of which the first phase of one is complete (a further 20 MW) as of today.

There are further projects planned at medium voltage, and a configuration combining the benefits of the IP system with the energy efficiency of natural gas engines is also planned by consulting engineers.

CONCLUSION

In the form of an IP bus topology, a UPS scheme that combines high reliability with high efficiency is possible.

High reliability is obtained by virtue of the use of rotary UPS (with MTBF values in the region of 3-5 times better than static technology), combined with the elimination of load sharing controls, no mode switching under failure conditions, load fault isolation and simplified maintenance.

High efficiency can be obtained with such a high-reliability system because of the ability to simulate the System + System fault tolerance without the penalty of low operating efficiencies. A 20 MW design load can run with modules that are 94% loaded and yet offer a reliability that is similar to the S+S scheme, which has a maximum module loading of just 50%. That can translate into a difference in UPS electrical efficiency of 3 or 4%. That means a potential waste in operating costs of $750,000 per year (ignoring additional cooling costs).
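As a rough sanity check on that figure: a 3-4% efficiency gap on a 20 MW design load, priced at an assumed commercial electricity rate, lands close to the quoted number (a sketch, not the authors' calculation):

design_load_kw = 20_000    # 20 MW critical load
efficiency_gap = 0.035     # midpoint of the quoted 3-4% difference
price_per_kwh = 0.12       # assumed utility rate in $/kWh (illustrative)
hours_per_year = 8760

annual_cost = design_load_kw * efficiency_gap * hours_per_year * price_per_kwh
print(f"${annual_cost:,.0f} per year")  # about $736,000, in line with the cited $750,000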

What's more, the solution is not only concurrently maintainable and fault tolerant with high reliability and high efficiency, but can also be realized at either low or medium voltages and can be implemented with DRUPS, separate standby diesel engines or even gas engines for super-efficient large-scale facilities.

For complete information on the invention and history of IP systems, refer to the Piller Group GmbH paper by Frank Herbener entitled "Isolated-Parallel UPS Configuration" at www.piller.com.


    www.piller.com

    ROTARY UPS SYSTEMS

    STATIC UPS SYSTEMS

    STATIC TRANSFER SWITCHES

    KINETIC ENERGY STORAGE

    AIRCRAFT GROUND POWER SYSTEMS

    FREQUENCY CONVERTERS

    NAVAL POWER SUPPLIES

    SYSTEM INTEGRATION

Piller Group GmbH | Abgunst 24, 37520 Osterode, Germany | T +49 (0) 5522 311 0 | E [email protected]

Piller Australia Pty. Ltd. | Piller France SAS | Piller Germany GmbH & Co. KG | Piller Italia S.r.l. | Piller Iberica S.L.U | Piller Power Singapore Pte. Ltd. | Piller UK Limited | Piller USA Inc.

What do the following organizations all have in common?

When it comes to power protection, leading organizations don't take chances. Time after time the world's leading organizations select Piller power protection systems to safeguard their data centers.

    Why? Because there is no higher level of data center power protection available!

What's more, Piller offers the most cost-effective and the greenest through-life investment available. So, if you are planning major data center investment and would like to know more about why the world's leading organisations trust their data center power protection to Piller, contact us today.

    [email protected]

    Nothing protects quite like Piller

    A Langley Holdings Company

They all rely on data centers protected by Piller.

    3M | ABB | ABN Amro | Abovenet | ADP | AEG | Airbus | Alcan | Alcatel | Aldi | Allianz | Alstom | Altair | AMD | Anz Bank | AOL | Areva | Astra Zeneca | AT & T | Australian Stock Exchange | Australian Post | Aviva | Bahrain Financial Harbour | Banca d'Italia | Banco Bradesco | Banco Santander | Bank of Am erica | Bank of England | Bank of Hawaii | Bank of Morocco | Bank of Scotland| Bank Paribas | Barclays | BASF | Bayer | BBC (British Broadcasting Corporation) | BP (British Petroleum) | BICC | Black & Decker | BMW | Bosch | Bouygues Telecom | BA (British Airways) | BG(British Gas) | BT (British Telecom) | British Civil Service | British Government | Bull Computer | CAA (Civil Aviation Authority) | Canal+ | Capital One | Channel 4 (USA) | Channel 4 (UK) | Chase |Chevron | Chinese Army | Chinese Navy | Chrysler | Citigroup | Central Intelligence Agency (CIA) | Commerzbank | Conoco | Credit Lyonnais | Credit Mutuel | Credit Suisse | CSC | Daimler Benz |Danish Intelligence Service | Danish Bank | Danish Radio | Dassault | De Beers | Degussa | Dell Computer | Deutsche Bank | Deutsche Bundesbank | Deutsche Post | Disney | Dow Jones | DresdnerBank | DuPont | Dutch Military | EADS | EADS Hamburg | EASYNET | EDF | EDS | Eli Lilly | ESAT Telecom | European Patent Office | European Central Bank | Experian | Federal Reserve Bank |FedEX | First National Bank | First Tennesee Bank | Ford Motor | France Telecom | French Airforce | French Army | Friends Provident | Fujitsu | GCHQ (British Government Communications HeadQuarters) | Girobank | GlaxoSmithKline | GUS (General United Stores) | Heidelberger | Hewlett Packard | Hitachi | HSBC | Hynix | Hyundai | IBM | ING Bank | Intel | IRS | Iscor | J P Morgan | JohnDeere | Knauf | Knorr | Kodak | Lafrage | Linde | Lindsey Oil | Lloyds of London | Lockheed | Los A lamos National Laboratories | Lottery Vienna | Lottery Copenhagen | LSE (London Stock Exchange)| Marks & Spencer | MBNA | Mercedes Benz | Merrill Lynch | MOD (British Ministry of Defence) | Morgan Grenfell | Morgan Stanley | Motorola | NASA | NASDAQ | National Grid (British) | NationalSemiconductor | Natwest Bank | Nestl | Nokia | Nuclear Elektric (Germany) | NYSE (New York Stock Exchange) | NYSE Euronext | Pfizer | Philips | Phillip Morris | Porsche | Proctor & Gamble |Putnam Investments | Qantas | QVC | Rank Xerox | Raytheon | RBS | Reuters | Rolls Royce | Royal Bank of Canada | Royal & Sun Alliance | RWE | Samsung | Scottish Widows | Sharp | Shell |Siemens | Sky | Sony | Sony Ericsson | Sweden Television | TelecityGroup | Thyssen Krupp | T-Mobile | Union Bank of Switzerland | United Biscuit | United Health | Verizon | VISA | VW *

    * The above is an extract of Piller installations and is by no means exhaustive.


FACILITY CORNER: MECHANICAL

Optimizing Air Cooling Using Dynamic Tracking
BY JOHN PETERSON, MISSION CRITICAL FACILITY EXPERT, HP

Dynamic tracking should be considered as a viable method to optimize the effectiveness of cooling resources in a data center. Companies using a dynamic tracking control system benefit from reduced energy consumption and lower data center costs.

    INSIDE THE DATA CENTER

One of the most challenging tasks of running a data center is managing the heat load within it. This requires balancing a number of factors, including equipment location adjacencies, power accessibility and available cooling. As high-density servers continue to grow in popularity along with in-row and in-rack solutions, the need for adequate cooling in the data center will continue to grow at a substantial rate. To meet the need for cooling using a typical underfloor air distribution system, a manager often adjusts perforated floor tiles and lets the nearest Computer Room Air Conditioner (CRAC) unit react as necessary to each new load. However, this may cause a sudden and unpredictable fluctuation in the air distribution system due to changes in static pressure and air rerouting to available outlets, which can have a ripple effect on multiple units. With new outlets available, air, like water, will seek the path of less resistance; the new outlets may starve existing areas of cooling, causing the existing CRAC units to cycle the air faster. This becomes a wasteful use of energy, quite apart from the fluctuations in cooling load energy allocation.

Most managers understand that the air supply plenum needs to be a totally enclosed space to achieve pressurization for air distribution. Oversized or unsealed cutouts allow air to escape the plenum, reducing the static pressure and effectiveness of the air distribution system. Cables, conduits for power, and piping can also clog up the air distribution path, so thoughtful consideration and organization should be an essential part of the data center operations plan. However, even the best-laid plan can still end up with areas that are starved for cooling air.

In a typical layout, there are rows of computer equipment racks that draw cool air from the front and expel hot air at the rear. This requires an overall footprint larger than the rack itself (Figure 1).

Figure 1: Overall footprint needed per rack

When adding new data center equipment, data center managers need to manage unpredictable temperatures and identify a new perfect balance of how many perforated tiles to use and where to locate them. They involve maintenance personnel to adjust CRAC units, assist with tile layouts, and even possibly add or relocate the units as necessary. Due to the predetermined raised floor height, supply air temperature and humidity necessities, the volatile air distribution system becomes an inflexible piece of the overall puzzle, at the expense of energy and possibly performance due to inadequate cooling.

Meanwhile, the CRAC units are operating at variable rates to meet this load, but mostly they are operating at their maximum capacity instead of as needed. Why? One reason is where the air temperature is measured. Each unit is operating on the return air temperature measured at the unit, and all units are sharing the same return air. This means that if the load is irregular in the racks, the units simply cool for the overall required capacity. Apply this across a data center, and the units are generally handling the cooling load without altering their flow based on changes happening in any localized area, which consequently allows that large variance of temperatures in the rows.

Temperature discrepancy is the main concern for most data center managers. They would like the air system not to be the limiting factor when adding new equipment to racks and prefer to remove the variable of fickle air cooling from the equation of equipment management. At the same time, almost behind the scenes, facility costs from cooling are increasing to match the new load, driving the need for more efficient use of existing resources. A Gartner report shows that over 63% of respondents to a recent survey indicated that they rely on air systems to cool their data center over liquid cooling. Of those same respondents, nearly 45% shared that they are facing insufficient power, which will need to be addressed in the near future.[1] As IT managers are able to correct their power constraints, they are able to deploy a more demanding infrastructure and subsequently will require additional power and cooling.

[1] Power & Cooling Remain the Top Data Center Infrastructure Issues, Gartner Research, February 2009


Server Technology, Inc. Sentry is a trademark of Server Technology, Inc.

    Solutions for the Data Center Equipment Cabinet

1040 Sandhill Drive, Reno, NV 89521, USA

    [email protected]

tf +1.800.835.1515 | tel +1.775.284.2000 | fax +1.775.284.2065

How Do You Measure the Energy Efficiency of Your Data Center?

    > Sentry POPSMeasure and monitor power information per

    outlet, device, application, or cabinet using Webbased CDU Interface

    > Sentry Power ManagerSecure software solution to:> Monitor, manage & control multiple CDUs> Alarm management, reports & trending

    of power info> ODBC Compliant Database for integration into

    your Building Management or other systems> kW & kW-h IT power billing and monitoring

    information per outlet, device, application,cabinet or DC location

    BMS

    P R I M A R Y E T H E R N E T P I P E L I N E

    WEB BASED CDU INTERFACE

    WEB BASED SPM INTERFACE

    DATABASE

    SENTRY POWERMANAGER APPLIANCE

    Sentry: POPS Switched CWith Device Monitoring> Rack Level Power Management> Outlet Power Monitoring (POPS)> Input Power Monitoring> Environmental Monitoring> Outlet Groups> Alarms

    Sentry Power Manager> Enterprise Cabinet Power Mn> Reports & Trends> Device Monitoring> Groups & Clusters> Kilowatt Readings for Billing> Auto-Discovery of Sentry CD> Alarms

With Sentry Power Manager (SPM) and Sentry POPS (Per Outlet Power Sensing) CDUs!



    DYNAMIC TRACKING

Although the air flow in a data center is complex, an opportunity now exists to optimize the effectiveness of cooling resources and better manage the air system within the data center. There are ways to monitor air temperatures within each row of cooling, and even the temperature entering a specific rack at a particular height. From these temperatures, an intelligent system can react to meet the need for cooling air at that location, eliminating the work of juggling floor tiles and guessing at the air flow.

How is this done? To begin with, the temperature is measured differently. A number of racks are mounted with sensors that measure the supply air temperature at the front of the rack. This information is relayed to a central monitoring system that responds accordingly by adjusting the CRAC units. The units then function as a team and not independently, meeting specific needs as monitored in real time by the sensors. Since the temperature is tracked from the source and adjusted based on real-time needs, this method of measurement and control is sometimes referred to as dynamic tracking.

In the initial setup of dynamic tracking, the intelligent control system tests and learns which areas of the data center each CRAC unit affects. Then the units are tested together, and the control system modulates them to provide the most uniform distribution within the constraints of the layout and room architecture. This data allows the air system to gather intelligence on how to compensate for leaks and barriers in the plenum. From there, the system knows how the units interact, and can intelligently judge how to respond to changes within the data center. It is also able to rebalance when one of the units fails or is being serviced.

To prevent large fluctuations, the temperatures are measured over an extended period of time and the temperature is adjusted depending on the cooling needs of the space. The CRAC units respond based on the history of how each unit has affected the specific area. The overarching intelligence of the dynamic tracking control system gauges whether an increase in temperature is sustained or a series of momentary heat spikes, and adjusts itself accordingly. This prevents units from cycling out of control from variables such as human error, short peak demands, and sudden changes in load.
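To make the idea concrete, here is a minimal sketch of that kind of smoothed, sensor-driven control: rack inlet temperatures are averaged over a rolling window before the CRAC output is nudged, so a momentary spike barely moves the response while a sustained rise does. The class, window length and gain are illustrative assumptions, not details of any vendor's system.

from collections import deque
from statistics import mean

class DynamicTrackingController:
    """Toy controller: adjusts one CRAC unit's cooling output (0..1) from rack inlet
    temperatures, using a moving average so short heat spikes are ignored."""

    def __init__(self, setpoint_c: float = 24.0, window: int = 12, gain: float = 0.05):
        self.setpoint_c = setpoint_c
        self.history = deque(maxlen=window)   # recent per-interval average inlet temps
        self.gain = gain
        self.output = 0.5

    def step(self, rack_inlet_temps_c) -> float:
        """Feed one sampling interval's sensor readings; returns the new cooling output."""
        self.history.append(mean(rack_inlet_temps_c))
        if len(self.history) < self.history.maxlen:
            return self.output                # not enough history yet: hold steady
        sustained_error = mean(self.history) - self.setpoint_c
        self.output = min(1.0, max(0.0, self.output + self.gain * sustained_error))
        return self.output

# A sustained rise across three racks gradually raises cooling output; a single
# one-interval spike would barely move the windowed average.
ctrl = DynamicTrackingController()
for _ in range(30):
    ctrl.step([25.1, 24.8, 25.4])
print(f"{ctrl.output:.2f}")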

Once installed, a dynamic tracking system can show how the CRAC units have operated in the past and how they are currently performing. Most of the time, the units operate at less than peak conditions, which is an opportunity to increase energy efficiency and create significant savings. Also, if the units can measure and meet the load more closely, the cost savings carry directly over to the mechanical cooling plant as well.

Dynamic tracking systems can help transform the air distribution and energy use within a data center, and should be considered as a viable solution to handle variable and complex heat loads. The ability of dynamic tracking to reduce energy use and preserve data center flexibility is a promising factor for driving optimization.



FACILITY CORNER: CABLING

Prepare Now for the Next-Generation Data Center
BY JAXON LANG, VICE PRESIDENT, GLOBAL CONNECTIVITY SOLUTIONS AMERICAS, ADC

Fueled by applications such as IPTV, Internet gaming, file sharing and mobile broadband, the flood of data surging across the world's networks is rapidly morphing into a massive tidal wave--one that threatens to overwhelm any data center not equipped in advance to handle the onslaught.

The 2009 edition of the annual Cisco Visual Networking Index predicts that the overall volume of Internet Protocol (IP) traffic flowing across global networks will quintuple between 2008 and 2013, with a compound annual growth rate (CAGR) of 40 percent. During that same period, business IP traffic moving on the public Internet will grow by 31 percent, according to the Cisco study, while enterprise IP traffic remaining within the corporate WAN will grow by 36 percent.

Faced with this looming challenge, data center managers know they must prepare now to deploy the solutions necessary to accomplish three tasks: transmit this deluge of information, store it, and help lower total cost of ownership (TCO). Specifically, within the next five to seven years, they will need:

- more bandwidth
- faster connections
- more and faster servers
- more and faster storage

Today's data center operations account for up to half of total costs over the life cycle of a typical enterprise, and retrofits make up another 25 percent. Managers want solutions that boost efficiencies immediately while also making future upgrades easier and more affordable.

Among the technologies that promise to provide these solutions are 40 and 100 Gbps Ethernet (GbE); Fibre Channel over Ethernet (FCoE); and server virtualization. Because they directly affect the infrastructure, these technologies will require new approaches to cabling and connectors; higher fiber densities; higher bandwidth performance; and more reliable, flexible and scalable operations. Although managers want to deploy technologies that will satisfy their future requirements, they also want to determine to what extent they can leverage their existing infrastructures to meet those needs. As they do so, many are discovering there are strategies available today that can help them achieve both goals.

40GBE AND 100GBE ARE COMING

Although most data centers today run 10GbE between core devices, and some run 40GbE via aggregated 10GbE links, they inevitably will need even faster connections to support high-speed applications, new server technologies and greater aggregation. In response, the Institute of Electrical and Electronics Engineers (IEEE) is developing a standard for 40 and 100GbE data rates (IEEE 802.3ba).

Scheduled for ratification next year, the standard addresses multimode and singlemode optical-fiber cabling, as well as copper cabling over very short distances (10 meters, as of publication date). It is helpful to examine the proposed standard and then look at various strategies for evolving the data center accordingly. Currently, IEEE 802.3ba specifies the following:

Multimode Fiber
Running 40 GbE and 100 GbE will require:
1) multi-fiber push-on (MPO) connectors,
2) laser-optimized 50/125 micrometer (µm) optical fiber, and
3) an increase in the quantity of fiber--40 GbE requires six times the number of fibers needed to run 10 GbE, and 100 GbE requires 12 times that amount.

MPO Connectors
A single MPO connector, factory-preterminated to multi-fiber cables purchased in predetermined lengths, terminates up to 12 or 24 fibers. 40-GbE transmission up to 100 meters will require parallel optics, with eight multimode fibers transmitting and receiving at 10 Gbps, using an MPO-style connector. Running 100 GbE will require 20 fibers, each transmitting and receiving at 10 Gbps, within a single 24-fiber MPO-style connector.

To achieve 10-GbE data rates for distances up to 300 meters, some managers have used MPO connectors to install laser-optimized multimode fiber cables, either ISO 11801 Optical Mode 3 (OM3, or 50/125 µm) or OM4 (50/125 µm) fiber cables. Thus they already have taken an important step to prepare for 40 and 100GbE transmission rates. Working with their vendors, they can retrofit their 12-fiber MPO connectors to support 40 GbE. It may even be possible to achieve 100GbE rates by creating a special patch cord that combines two of those 12-fiber MPO connectors. Although the proposed standard specifies 100 meters for 40 and 100GbE (a departure from 300 meters for 10GbE), the vast majority of data center links currently cover 55 meters or less.

Those who are not using MPO-style connectors today may have options other than forklift upgrades for achieving 40 and 100GbE data rates. Initially, most data center managers will only run 40 and 100GbE on a select few circuits--perhaps 10 percent or 20 percent. So, depending on when they will need more bandwidth, they can begin to deploy MPO-terminated, laser-optimized, multimode fiber cables and evolve gradually.

High-performance Cabling
Compliance with the proposed standard will require a minimum of OM3 laser-optimized 50 µm multimode fiber with reduced insertion loss (2.0 dB link loss) and minimal delay skew. As noted earlier, managers who cap their investments in OM1 (62.5/125 µm) and OM2 (standard 50/125 µm) cabling now and install high-performance cabling and components going forward can position the data center for eventual 40GbE and 100GbE requirements.

Much More Fiber
Running a 10GbE application requires two fibers today, but running a 40GbE application will require eight fibers, and a 100GbE application will require 20 fibers. Therefore, it is important to devise strategies today for managing the much higher fiber densities of tomorrow. Managers must determine not only how much physical space will be required but also how to manage and route large amounts of fiber in and above racks.
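A small sketch of the density planning this implies, using the per-link fiber counts quoted above; the link mix below is an illustrative assumption:

# Multimode fibers per link at each data rate, as quoted in the text.
FIBERS_PER_LINK = {"10GbE": 2, "40GbE": 8, "100GbE": 20}

def total_fibers(link_mix):
    """Total fiber strands needed for a planned mix of {data rate: link count}."""
    return sum(FIBERS_PER_LINK[rate] * count for rate, count in link_mix.items())

# Illustrative plan: 200 x 10GbE links today; later 20 of them move to 40GbE and 4 to 100GbE.
print(total_fibers({"10GbE": 200}))                             # 400 strands
print(total_fibers({"10GbE": 176, "40GbE": 20, "100GbE": 4}))   # 592 strands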

Singlemode Fiber
Running 40GbE over singlemode fiber will require two fibers transmitting at 10 Gbps over four channels using coarse wavelength division multiplexing (CWDM) technology. Running 100GbE with singlemode fiber will require two fibers transmitting at 25 Gbps over four channels using LAN wavelength division multiplexing (WDM).

Although using WDM to run 40GbE and 100GbE over singlemode fiber is ideal for long distances (up to 10 km) and extended reach (up to 40 km), it probably will not be the most cost-effective option for the data center's shorter (100-meter) distances. As the industry finalizes the standard and vendors introduce equipment, managers will have a window of time in which to evaluate the evolving cost differences among singlemode, multimode and copper cabling solutions for both 40GbE and 100GbE.

Typically, the elapsed time between the release of a standard and the point at which the price of associated electronics comes down to a cost-effective level is about five years. For example, the cost of the first 10GbE ports, which emerged right after the standard was adopted in 2002, was roughly $32,000; today, that same port costs about $2,000. If 40GbE and 100GbE ports follow that pattern, managers who already have adopted an MPO connectorization strategy will have until about 2015 to plan for and actually implement the upgrades necessary to access the faster technologies.

Managers who have not opted for MPO connectors but have invested in OM3 multimode fiber that satisfies the length requirements nevertheless may be able to devise a migration path. They could work with vendors to create a cord that combines 12 LC-type connectors into an MPO. However, they would have to test the site for length, insertion loss and delay skew to ensure compliance with the 802.3ba standard.

Copper
The proposed standard specifies the transmission of 40GbE and 100GbE over short distances of copper cabling, with 10 Gbps speeds over each lane--four lanes for 40GbE and 10 lanes for 100GbE. Not intended for backbone and horizontal cabling, the use of copper probably will be limited to very short distances for equipment-to-equipment connections within or between racks.

FIBRE CHANNEL OVER ETHERNET (FCOE) BOOSTS STORAGE

Because of Fibre Channel's reliability and low latency, most managers use it today for high-speed communications among their SAN servers and storage systems. Yet because they rely on Ethernet for client-to-server or server-to-server transmissions, they have been forced to invest in parallel networks and interfaces, which obviously increase costs and create management headaches.

In response, the industry has developed a new standard (ANSI FC-BB-5) which combines Fibre Channel and Ethernet data transmission onto a common network interface, basically by encapsulating Fibre Channel frames within Ethernet data packets. FCoE allows data centers to use the same cable for both types of transmission and delivers significant benefits, including better server utilization; fewer required ports; lower power consumption; easier cable management; and reduced costs.

To most cost-effectively deploy FCoE, managers may opt to use top-of-rack switches, rather than traditional centralized switching, to provide access to existing Ethernet LANs and Fibre Channel SANs. Although the top-of-rack approach reduces the amount of cabling, it requires more flexible, manageable operations, simply because managers will have to reconfigure each rack. In addition, 40GbE and 100GbE require a higher-speed cabling medium.

As they try to devise workable, affordable strategies for deploying FCoE, managers must take into account several factors. First, they have some time to move to FCoE. Current FCoE deployment rates are less than 5 percent of storage ports sold. The emerging technologies of 40 GbE and 100 GbE certainly make FCoE more enticing.

FCoE can be a two-step approach. Initially, the current investment in Fibre Channel-based equipment (disk arrays, servers and switches) can continue to be utilized. As FCoE equipment becomes more cost-effective and readily available, a wholesale change can be made at that time.

FCoE becomes possible due to the advent of Data Center Bridging (DCB), which enhances Ethernet to work in data center environments. By deploying the electronics that support FCoE, which overlays Fibre Channel on top of Ethernet, managers can eliminate the need for--and costs of--parallel infrastructures; reduce the overall amount and costs of required cabling; and reduce cooling and power-consumption levels. If they also begin to invest now in the OM3/OM4-compliant cabling for 40GbE and 100GbE, managers will position their data centers for a smooth upgrade to FCoE-based equipment.

SERVER VIRTUALIZATION PRESENTS ITS OWN ISSUES

By running multiple virtual operating systems on one physical server, managers are tackling several challenges: accommodating the space constraints created by more equipment; reducing capital expenditures by buying fewer servers; improving server utilization; and reducing power and cooling consumption. Currently, virtualization consolidates applications on one physical server at a ratio of 4:1, but that could increase to 20:1. So many applications running on one server obviously require much greater availability and significantly more bandwidth.

    Server virtualization means that down-time limits access to multiple applications. oprovide the necessary redundancy, managersare deploying a second set o cables. Te addi-tional bandwidth needed to support increaseddata transmission to and rom the servers willrequire additional services, which, in turn, willdemand still more bandwidth. While virtu-alization theoretically reduces the number oservers and cabling volumes, the redundancyneeded to support virtualization, in act,means the data center needs more cabling.

THE DRIVE TO REDUCE TCO

Although technologies such as FCoE and server virtualization are aimed at reducing TCO, the overall increase in data requirements and equipment is putting a tremendous strain on power, cooling and space requirements. As a result, every enterprise today tries to balance the need to deploy new technologies with the need to reduce TCO. To do so, data center operators are looking for solutions that can handle changing configuration requirements and reduce energy consumption--which inevitably will rise as more equipment comes online.

By devising migration strategies that protect existing investments and simultaneously prepare for the deployment of new, high-speed technologies, managers can enhance the capabilities, scalability and reliability of the data center. In the process, they can reduce TCO through more efficient operations and reduced power consumption.


For complete details contact: BINSWANGER | 1200 THREE LINCOLN CENTRE, 5430 LBJ FREEWAY, DALLAS, TX 75240 | 972-663-9494 | FAX: 972-663-9461 | E-MAIL: [email protected]

    WORLDWIDE COVERAGE WWW.BINSWANGER.COM/ARLINGTON

    50 acres available for expansion

    Ex-semiconductor site; low risk, low power costs

    Significant power to site

    Ceiling heights and floor loadings well-suited to data use

    5,860 tons of chiller capacity

    4,930 KW emergency generator capacity

    2.1 million gallon per day water capacity

Plant systems include UPS, bulk gas, DI water system, compressed air plant, PCW plant and waste treatment plant

Electric power: 40.8 megawatts of power, or 20.4 megawatts per feed

    Approximately 91,000 sq. ft. of high- quality office space

    Ideally located in the heart of the Dallas/Fort Worth Metroplex, minutes to I-20 and 25 minutes to DFW Airport

    Spectacular, 441,362 sq. ft.

    high-tech complex on 21 acres in

ARLINGTON, TEXAS

DALLAS/FORT WORTH METROPLEX

Fort Worth, 14 miles | Dallas, 22 miles

    71 acres

    Building A 375,000 sq. ft.

Building B 51,400 sq. ft.

    Building E 9,130 sq. ft.


Green Power Protection Spinning into a Data Center near You
BY FRANK DELATTRE, PRESIDENT, VYCON


Keeping critical operations, especially computer networks and other vital process applications, up and running during power disturbances has been most commonly handled by uninterruptible power systems (UPSs) and stand-by generators. Whether depending on centralized or distributed power protection, batteries used with UPS systems have been the typical standard, due primarily to their low cost. However, when one is looking to increase reliability and deploy green initiatives, toxic lead-acid batteries are not the best solution.

Frequent battery maintenance, testing, cooling requirements, weight, toxic and hazardous chemicals and disposal issues are key concerns. Making matters worse, one dead cell in a battery string can render the entire battery bank useless, not good when you're depending on your power backup system to perform when you need it most. And every time the batteries are used (cycled), even for a split second, it becomes more likely that they will fail the next time they are needed.

CLEAN BACKUP POWER

Flywheel energy storage systems are gaining strong traction in data centers, hospitals, industrial and other mission-critical operations where energy efficiency, costs, space and environmental impact are concerns. This green energy storage technology is solving sophisticated power problems that challenge computing operations every day. According to the Meta Group, the cost of downtime can average a million dollars per hour for a typical data center, so managers can't afford to take any risks. Flywheels used with three-phase double-conversion UPS systems provide reliable mission-critical protection against costly transients, harmonics, voltage sags, spikes and blackouts.

A flywheel system can replace lead-acid batteries used with UPSs and works like a dynamic battery that stores energy kinetically by spinning a mass around an axis. Electrical input spins the flywheel rotor up to speed, and a standby charge keeps it spinning 24/7 until called upon to release the stored energy (Fig. 1). The amount of energy available and its duration is proportional to the flywheel's mass and the square of its revolution speed: doubling the mass doubles the energy capacity, but doubling the rotational speed quadruples it:

E = k · M · ω²

where k depends on the shape of the rotating mass, M is the mass of the flywheel, and ω is the angular velocity.

    Fig. 1 Flywheel Cutaway
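A quick numerical check of that scaling, using the E = k·M·ω² form given above; the k, mass and speed values are arbitrary illustrations:

def flywheel_energy(k: float, mass_kg: float, omega_rad_s: float) -> float:
    """Stored kinetic energy (J): E = k * M * omega^2, with k folding in rotor geometry."""
    return k * mass_kg * omega_rad_s ** 2

base = flywheel_energy(0.1, 600, 3000)
print(flywheel_energy(0.1, 1200, 3000) / base)  # 2.0: doubling mass doubles energy
print(flywheel_energy(0.1, 600, 6000) / base)   # 4.0: doubling speed quadruples energy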


Today, data center and facility managers have many considerations to evaluate when it comes to increasing energy efficiencies and reducing one's carbon footprint. The challenge becomes how to implement green technologies without disrupting high nines of availability and achieve a low total cost of ownership (TCO). This challenge becomes even more crucial when looking at the power protection infrastructure.


During a power event, the flywheel provides backup power seamlessly and instantaneously. What's nice is that it's not an either/or situation, as the flywheel can be used with or without batteries. When used with batteries, the flywheel is the first line of defense against damaging power glitches: the flywheel absorbs all the short-duration discharges, thereby reducing the number and frequency of the discharges that shorten the life of the battery. Since UPS batteries are the weakest link in the power continuity scheme, flywheels paralleled with batteries give data center and facility managers peace of mind that their batteries are safeguarded against premature aging and unexpected failures. When the flywheel is used just with the UPS and no batteries, the system will provide instant power to the connected load exactly as it would do with a battery string. However, if the power event lasts long enough to be considered a hard outage (rather than just a transient outage), the flywheel will gracefully hand off to the facility's engine-generator. It's important to know that according to the Electric Power Research Institute (EPRI), 80 percent of all utility power anomalies/disturbances last less than two seconds and 98 percent last less than ten seconds. In the real world, the flywheel energy storage system has plenty of time for the Automatic Transfer Switch (ATS) to determine if the outage is more than a transient and to start the generator and safely manage the hand-off.

SHINING LIGHT ON REAL-WORLD EXPERIENCE

SunGard, one of the world's leading software and IT services companies, serving more than 25,000 customers in more than 70 countries, first tried out flywheels in its data centers three years ago to see how they would perform over a period of time.

"The driver for utilizing flywheels is to reduce the life-cycle cost and maintenance requirements when installing large banks of batteries. In addition, the space savings from using flywheels and fewer batteries mean lower construction costs and allow optimum space utilization," commented Karl Smith, Head of Critical Environments for SunGard Availability Services. Today, SunGard's legacy data centers still have batteries, but as it becomes necessary to replace the batteries, they plan to reduce the number of strings of batteries and complement them with a string of flywheels. For future data center builds, SunGard is planning to have a combination of short-run-time batteries in parallel with a bank of flywheels.

BEATING THE CLOCK

Many users are under a false sense of security by having 10 or 15 minutes of battery run time. They assume that if the generator does not start they will have a chance to correct the issue. "It is true that batteries provide much longer ride-through time, but the most important ride-through time is in the first 30 seconds. We don't need much more than this to have our stand-by generators come on line. In most cases, our generators are on-line and loads are switched over in 30 to 40 seconds. The flywheels are our first line of defense, but should we need a few extra minutes to get a redundant generator on-line, then the battery can be utilized," said Smith. Having the flywheels discharge first means the batteries are not discharged in normal operation, and thus their life can be extended.

In various industry studies, such as the IEEE Gold Book, genset start reliability for critical and non-critical applications was measured at 99.5%. For applications where the genset is tested regularly and maintained properly, reliability substantially increases. When the genset fails to start, 80% of the time it is because of failure of the battery being used to start the generator. Just monitoring or adding a redundant starting system can remove 80% of the non-start issues.

Fig. 3: Lifecycle costs of batteries vs. flywheels. Battery costs are based on a 4-year replacement cycle.

Fig. 2: Power protection scheme with UPSs, batteries and flywheel

    SunGard Data Center


IT PAYS TO BE GREEN

The latest flywheel designs sold by world leaders in 3-phase UPS systems take advantage of higher speeds and full magnetic levitation, packing more green energy storage into a much smaller footprint and removing any kind of bearing maintenance requirements. As shown in Figure 3, over a 20-year design lifespan, cost savings from a hazmat-free flywheel versus a 5-minute valve-regulated lead-acid (VRLA) battery bank are in the range of $100,000 to $200,000 per flywheel deployed.

These figures (Figure 3) are based on a typical installation of a 250 kVA UPS using 10-year design life VRLA batteries housed in a cabinet. The yearly maintenance for the batteries is based on a recommended quarterly check on the battery health to have some predictability on their availability. Moreover, these figures don't include floor space or cooling cost savings that can be achieved by using flywheel energy storage vs. batteries.

BATTERIES' UNPREDICTABLE FAILURES

While UPS systems have long used banks of lead-acid batteries to provide the energy storage needed to ride through a power event, they are, as stated earlier, notoriously unreliable. In fact, according to the Electric Power Research Institute (EPRI), batteries are the primary field failure problem with UPS systems. Predicting when one battery in a string of dozens will fail is next to impossible, even with regular testing and frequent individual battery replacements. The truth is that engineering personnel don't test them as often as they should, and may not have testing/monitoring systems in place to do so properly. Since flywheel systems are electro-mechanical devices, they can constantly self-monitor and report, to assure the user that they are ready for use or advise of the need for service. This is nearly impossible to accomplish in a chemically based system. Every time a battery is used, it becomes less responsive to the next event. Batteries generate heat, and heat reduces battery life. If operated 10°F above their optimum setting of 75°F, the lifespan of lead-acid batteries is cut in half. If operated at colder temperatures, chemical reactions are slowed and performance is affected. Batteries can also release explosive gases that must be ventilated away.

Battery reliability is always in question. Are they fully charged? Has a cell gone bad in the battery string? When was the last time they were tested? Some facility managers resist testing their batteries because the battery test itself depletes battery life. By contrast, flywheel systems provide reliable energy storage instantaneously to assure a predictable transition to the stand-by genset.

Hazmat permits, acid leak containment, floor loading issues, slow recharge times, lead disposal compliance and transportation requirements are causing facility managers to look closely at alternative forms of energy storage.

Protecting critical systems against costly power outages in a manner that is energy efficient, environmentally friendly and provides a low total cost of ownership is a priority for most data center and facility managers. Double-conversion UPSs paired with flywheels (Figure 4) are the next step in greening the power infrastructure.

BENEFITS OF FLYWHEEL TECHNOLOGY

From 40kVA to over a megawatt, flywheel systems are increasingly being used to assure the highest level of power quality and reliability for mission-critical applications. The flexibility of these systems allows a variety of configurations that can be custom-tailored to achieve the exact level of power protection required by the end user, based on budget, available space and environmental considerations. In any of these configurations, the user will ultimately benefit from the many unique advantages of flywheel-based systems.

Flywheels today comply with the highest international standards for performance and safety, including those from UL and CE. Some units, like those from VYCON, incorporate a host of advanced features that users expect to make the systems easy to use, maintain and monitor, such as self-diagnostics, log files, adjustable voltage settings, RS-232/485 interface, alarm status contacts, soft-start precharge from the DC bus and push-button shutdown. Available options include DC disconnect, remote monitoring, Modbus and SNMP communications and real-time monitoring software.

Data center managers throughout the U.S. and around the world are evaluating technologies that will increase overall reliability while reducing costs. While the highest level of nines is the first requirement, being environmentally friendly is certainly an added bonus. By enhancing battery strings or eliminating them altogether with the use of flywheels, managers take one more step toward greening their facilities and lowering CO2.

Fig 4. VYCON's VDC Flywheel Energy Storage System paired with Eaton's three-phase double-conversion UPS.

BENEFITS OF FLYWHEEL TECHNOLOGY
• No cooling required
• High power density - small footprint
• Parallel capability for future expansion and redundancy
• Fast recharge (under 150 seconds)
• 99% efficiency for reduced operating cost
• No special facilities required
• Front access to the flywheel eliminates space issues and opens up installation site flexibility in support of future operational expansions and re-arrangements
• Low maintenance
• 20-year useful life
• Simple installation
• Quiet operation
• Wide temperature tolerance (-4F to 104F)


Powering Tomorrow's Data Center: 400V AC versus 600V AC Power Systems
BY JIM DAVIS, BUSINESS UNIT MANAGER, EATON POWER QUALITY AND CONTROL OPERATIONS

A growing demand for network bandwidth and faster, fault-free data processing has driven an exponential increase in data center energy consumption, a trend with no end in sight.

Industry reports show that data center energy costs as a percentage of total revenue are at an all-time high, and data center electricity consumption accounts for almost 0.5 percent of the world's greenhouse gas emissions. As a result, data center managers are under pressure to maximize data center performance while reducing cost and minimizing environmental impact, making data center energy efficiency critical.

According to a 2007 Frost & Sullivan survey of 400 information technology (IT) and facilities managers responsible for large data centers, 78 percent of respondents indicated that they were likely to adopt more energy-efficient power equipment in the next five years, a solution that's often less costly and more quickly and easily implemented than data virtualization or cooling systems.

While major advancements in electrical design and uninterruptible power system (UPS) technology have provided incremental efficiency improvements, the key to improving system-wide power efficiency within the data center is power distribution. However, today's 480V AC power distribution systems, standard in most U.S. data centers and IT facilities, are not optimized for efficiency.

Of the several alternative power distribution systems currently available, 400V AC and 600V AC systems are generally accepted as the most viable. While both have been proven reliable in the field, conform to current National Electrical Code (NEC) guidelines, and can be easily deployed into existing 480V AC infrastructure, there are important differences in efficiency and cost that must be carefully weighed.

This article offers a quantitative comparison of 400V AC and 600V AC power distribution configurations at varying load levels, using readily available equipment and taking into account the technology advancements and installation and operating costs that drive total cost of ownership (TCO).

THE TRADITIONAL U.S. DATA CENTER POWER SYSTEM

In most U.S. data centers today, after power is received from the electrical grid and distributed within the facility, the UPS ensures a reliable and consistent level of power and provides seamless backup power protection. Isolation transformers step down the incoming voltage to the utilization voltage, and power distribution units (PDUs) feed the power to multiple branch circuits. The isolation transformer and PDU are normally combined in a single PDU component, many of which are required throughout the facility.

Finally, the server or equipment internal power supply converts the utilization voltage to the specific voltage needed. Most IT equipment can operate at multiple voltages. Losses through the UPS, the isolation transformer/PDU and the server equipment produce an overall end-to-end efficiency of approximately 76 percent.

Data center efficiency is often evaluated using the efficiency ratings of the server and IT equipment alone. Despite recent advances in energy management and server technology, maximum efficiency can be achieved only by taking a holistic view of the power distribution system. Each component impacts the end-to-end cost and efficiency of the system. The entire system must be optimized in order for the data center to fully realize the efficiency gains offered by new server technologies.
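The roughly 76 percent figure is simply the product of the efficiencies of each stage in the chain. The per-stage values in the sketch below are illustrative assumptions chosen to land near that number; the article does not break the losses down by component.

```python
# Illustrative component efficiencies (assumed values, not from the article)
# that multiply out to roughly the 76% end-to-end figure for a traditional
# 480V AC distribution chain.
stages_480v = {
    "UPS (double conversion)":   0.92,
    "Isolation transformer/PDU": 0.975,
    "Server power supply":       0.85,
}

end_to_end = 1.0
for name, eff in stages_480v.items():
    end_to_end *= eff
    print(f"{name:28s} {eff:.1%}")

print(f"End-to-end efficiency: {end_to_end:.1%}")
```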


Figure 1: End-to-end efficiency in the 400V AC power distribution system



THE 400V AC POWER SYSTEM

The 400V AC power distribution model offers a number of advantages in terms of efficiency, reliability and cost, as compared to the 480V AC and 600V AC models. In a 400V system, the neutral is distributed throughout the building, eliminating the need for PDU isolation transformers and delivering 230V phase-neutral power directly to the load. This enables the system to perform more efficiently and reliably, and offers significantly lower overall cost by omitting multiple isolation transformers and branch circuit conductors.

Figure 1 shows that losses through the auto-transformer, the UPS and the server equipment produce an overall end-to-end efficiency of approximately 80 percent.

THE 600V AC POWER SYSTEM

The 600V AC power system, while offering certain advantages over both the 480V AC and 400V AC systems, carries inherent inefficiencies that make it an impractical solution for most U.S. data centers. The 600V AC system offers a small equipment cost savings over the 480V AC and 400V AC systems, requiring less copper in the wiring feeds and lower currents, which reduce energy cost.

In unique circumstances where larger data centers deploy multi-module parallel redundant UPS systems, 600V AC power equipment can support more modules on a single 4000A switchboard than a 400V AC system, allowing data center managers to add a small amount of extra capacity at a nominal cost and with no increase in footprint.

With 600V AC power, the distribution system requires multiple isolation transformer-based PDUs to step down the incoming voltage to the 208/120V AC utilization voltage, adding significant cost and reducing overall efficiency. Some UPS vendors create a 600V AC UPS using isolation transformers in conjunction with a 480V AC UPS, reducing efficiency even further.

As shown in Figure 2, losses through the UPS, the isolation transformer/PDU, and the server equipment produce an overall end-to-end efficiency of approximately 76 percent, comparable to the efficiency of today's traditional 480V AC power distribution system.

COMPARING TOTAL COST OF OWNERSHIP

TCO for the power distribution system is determined by adding capital expenditures (CAPEX), such as equipment purchase, installation and commissioning costs, and operational expenditures (OPEX), which include the cost of electricity to run both the UPS and the cooling equipment that removes heat resulting from the normal operation of the UPS.

The end-to-end efficiency of the 400V AC power distribution system is 80 percent versus 76 percent efficiency in the 600V AC system, with both systems running in conventional double conversion mode. The 400V AC system's higher efficiency drives significant OPEX savings over the 600V AC system, substantially lowering the data center's TCO both in the first year of service and over the 15-year typical service life of the power equipment.
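A back-of-the-envelope sketch of how that efficiency gap turns into OPEX is shown below. Only the 80 percent and 76 percent end-to-end figures come from the article; the IT load, electricity rate, hours and cooling overhead are assumptions for illustration.

```python
# Assumed inputs for illustration: 500 kW of IT load, $0.10/kWh, and cooling
# energy proportional to distribution losses. Only the 80% vs 76% end-to-end
# efficiencies come from the article.
IT_LOAD_KW = 500
RATE_PER_KWH = 0.10
HOURS_PER_YEAR = 8760
COOLING_KW_PER_KW_LOSS = 0.4   # assumed: extra cooling power per kW of loss

def annual_cost(end_to_end_eff: float) -> float:
    """Yearly electricity cost to deliver the IT load through a given chain."""
    input_kw = IT_LOAD_KW / end_to_end_eff
    loss_kw = input_kw - IT_LOAD_KW
    cooling_kw = loss_kw * COOLING_KW_PER_KW_LOSS
    return (input_kw + cooling_kw) * HOURS_PER_YEAR * RATE_PER_KWH

cost_400v = annual_cost(0.80)   # 400V AC system
cost_600v = annual_cost(0.76)   # 600V AC system
print(f"400V AC system: ${cost_400v:,.0f}/yr")
print(f"600V AC system: ${cost_600v:,.0f}/yr")
print(f"Annual OPEX difference: ${cost_600v - cost_400v:,.0f}")
```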

To further reduce OPEX, many UPS manufacturers offer high-efficiency systems that use various hardware- and software-based technologies to deliver efficiency ratings between 96 and 99 percent without sacrificing reliability.

Figure 2: End-to-end efficiency in the 600V AC power distribution system


The Energy Saver System is a new offering that enables select new and existing UPSs to deliver industry-leading 99 percent efficiency, even at low load levels, while still providing total protection for critical loads. With this technology, the UPS operates at extremely high efficiency unless utility power conditions force the UPS to work harder to maintain clean power to the load. The intelligent power core continuously monitors incoming power conditions and balances the need for efficiency with the need for premium protection, to match the conditions of the moment.

When high-efficiency UPS systems are deployed, losses through the auto-transformer, the UPS and the server equipment produce an overall end-to-end efficiency of approximately 84 percent.

400V AC POWERS AHEAD

The 400V AC power distribution system's lower equipment cost and higher end-to-end efficiency deliver significant CAPEX, OPEX and TCO savings as compared to the 600V AC system. The 400V AC system running in conventional double conversion mode offers an average 10 percent first-year TCO savings and an average 5 percent TCO savings over its 15-year service life, as compared to the 600V AC system. When running the 400V AC UPS in high-efficiency mode, the first-year TCO savings increase to 16 percent, and the 15-year TCO savings increase to 17 percent, minimizing data center cost in terms of both CAPEX and OPEX.

In CAPEX investment alone, the 400V AC configuration offers an average 15 percent savings over the 600V AC configuration for all system sizes analyzed. The 400V AC system's lower CAPEX gives data center managers a more cost-effective solution for expanding data center capacity. The systems analyzed produced an average annual OPEX savings of 4 percent with the 400V AC system running in double conversion mode, and 17 percent when running in high-efficiency mode. OPEX savings rates are linear across all system sizes, indicating that savings will continue to increase in direct proportion to the size of the system.

Therefore, the 400V AC power distribution system offers the highest degree of electrical efficiency for modern data centers, significantly reducing capital and operational expenditures and total cost of ownership as compared to 600V AC power systems. Recent developments in UPS technology, including the introduction of transformerless UPSs and new energy management features, further enhance the 400V AC power distribution system for maximum efficiency.

This conclusion is supported by IT industry experts, who theorize that 400V AC power distribution will become standard as U.S. data centers transition away from 480V AC to a more efficient and cost-effective solution over the next one to four years.

About The Author: Jim Davis is a business unit manager for Eaton's Power Quality and Control Operations Division. He can be reached at [email protected]. For more information about the 400V UPS power scheme, visit www.eaton.com/400volt.

    Chart 1: 15-year TCO (400V AC Energy Saver System vs. 600V AC double conversion mode)


Bearing this in mind, taking some of the following steps to achieve better total site energy performance will help you run a more energy-efficient operation.

STEP 1:
Measure the transformer (or other main-source) energy usage and the IT energy usage, and calculate the PUEenergy (a minimal calculation sketch follows Step 5).

STEP 2:
Start harvesting the low-hanging fruit, based on the Uptime Institute guidelines that have been established for many years and are available on their website.

STEP 3:
Measure the transformer and the IT energy usage again and calculate your new PUEenergy. You may observe that while your total energy usage has decreased, your PUEenergy ratio has increased.

STEP 4:
Start switching off unneeded infrastructure, while maintaining your redundancy levels.

STEP 5:
Measure the transformer and the IT energy usage and calculate your PUEenergy. You may now observe that your PUEenergy has decreased and, once again, that your total energy usage has decreased.
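As a minimal sketch of the PUEenergy calculation used in Steps 1, 3 and 5: kWh is read over the same period at the transformer (total facility) and at the IT load, as the sidebar note on kWh suggests. The meter readings below are placeholders.

```python
# Minimal PUEenergy sketch: kWh measured at the transformer (total facility)
# and at the IT load over the same period. Readings below are placeholders.
def pue_energy(total_facility_kwh: float, it_kwh: float) -> float:
    """PUEenergy = total facility energy / IT energy over the same interval."""
    return total_facility_kwh / it_kwh

# Step 1 baseline vs. Step 5, after harvesting low-hanging fruit and switching
# off unneeded (but redundancy-safe) infrastructure:
baseline = pue_energy(total_facility_kwh=1_800_000, it_kwh=1_000_000)
after    = pue_energy(total_facility_kwh=1_400_000, it_kwh=850_000)

print(f"Baseline PUEenergy: {baseline:.2f}")
print(f"After measures:     {after:.2f}")
```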

It comes as no surprise that good design leads to lower capital expenditure (CAPEX) and better efficiency, but what is good design? A model that has proved successful both in terms of efficiency and green credentials is Modular Design. Modular Design was developed by Lex Coors, Vice President of Data Center Technology and Engineering Group, Interxion, and is unique since it allows for future data center expansion without interruption of services to customers.

Recent research by McKinsey and the Uptime Institute identified five key steps to achieving operational efficiency gains:

• Eliminate decommissioned servers, which will equal an overall gain of 10-25%
• Virtualize, which leads to gains of 25-30%
• Upgrade older equipment, leading to a 10-20% gain
• Reduce demand for new servers, which can also increase efficiency by 10-20%
• Introduce greener and more power-efficient servers and enable power-saving features, which also equates to a 10-20% gain

By following the above steps, an organization can look to achieve an overall efficiency gain of 65%, significantly improving its PUE ratio.
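The research cited does not spell out how the individual gains combine into the 65% figure. One plausible reading, sketched below with assumed mid-range values, is to apply each step's saving multiplicatively to the remaining consumption so the total can never exceed 100%; this lands close to the quoted number.

```python
# Sketch: applying each McKinsey/Uptime step's mid-range saving multiplicatively
# to the remaining energy use, rather than adding them, to avoid exceeding 100%.
# How the 65% overall figure was actually derived is not stated; this is one
# plausible interpretation using assumed mid-points of the quoted ranges.
savings = {
    "Eliminate decommissioned servers": 0.175,   # mid-point of 10-25%
    "Virtualize":                        0.275,  # mid-point of 25-30%
    "Upgrade older equipment":           0.15,   # mid-point of 10-20%
    "Reduce demand for new servers":     0.15,
    "Greener servers / power saving":    0.15,
}

remaining = 1.0
for step, s in savings.items():
    remaining *= (1 - s)
    print(f"{step:35s} remaining energy: {remaining:.1%}")

print(f"Overall reduction: {1 - remaining:.0%}")
```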

The third and final piece of the efficiency puzzle is customer focus. An efficient data center should have hands-on expert support in energy efficiency implementation efforts, as well as best-practice customer installation checklists.

Staff need to be able to advise on how to reduce temperatures and energy usage through things like innovative hot- and cold-aisle designs. They need to have the tools in place to measure and analyze efficiency, implement the latest efficiency ratings, develop and implement first-phase actions, and integrate figures and ratings with customers' CSR. Without such expertise in place, organizations will find it hard to reach their desired efficiency gains.

Green and efficient data centers are real and achievable, but emissions and the cost of energy are rising fast (although people now and then forget that these costs sometimes decrease temporarily), so we need to do more now. Organizations must work together, especially when it comes to measurement. Vendors should be providing standard meters on all equipment to measure energy usage versus productivity; if you don't know whether you're wasting energy, how can you change it?

But it's not just vendors who are responsible. Data center providers should provide leadership on industry standards and ratings that work, on data center design and operational efficiency steps, and on support for all customer IT efficiency improvements. What is apparent is that the whole industry, from the power suppliers to the rack makers, needs to work together to improve efficiencies and ensure that we are all at the forefront of efficient, green data center design.

PUEenergy measures efficiency over time using kWh.


Online Backup or Cloud Recovery?
BY IAN MASTERS, UK SALES & MARKETING DIRECTOR, DOUBLE-TAKE SOFTWARE

Backing up files and data online has been around for quite a while, but it has never really taken off in a big way for business customers. There is also a new solution coming onto the market which uses the cloud for backup and recovery of company data. While these two approaches to disaster recovery appear to be similar, there are some significant differences as well. So which one would be right for you?

    ITCORNER

Cloud recovery can be a nebulous term, so I would define it based on the solution having the following features:

1. The ability to recover workloads in the cloud
2. Effectively unlimited scalability with little or no up-front provisioning
3. Pay-per-use billing model
4. An infrastructure that is more secure and more reliable than the one you would build yourself
5. Complete protection - i.e. non-expert users should be able to recover everything they need, by default

If a solution does not meet these five criteria, then it should be called an online backup product. This may be right for your business, but such products typically require more IT knowledge and are based on specific resources.

There is an old saying in the data protection business that the whole point of backing up is preparing to restore. Having a backup copy of your data is important, but it takes more than a pile of tapes (or an online account) to restore. You might need a replacement server, new storage, and maybe even a new data centre, depending on what went wrong. Traditionally, you would either keep spare servers in a disaster recovery data centre, or suffer a period of downtime while you order and configure new equipment. With a cloud recovery solution, you don't want just your data in the cloud; you want the ability to actually start up applications and use them, no matter what went wrong in your own environment.

The next area where cloud recovery can provide a better level of protection is provisioning. Even using online backup systems, organizations would have to use replacement servers in the event of an outage. The whole point of recovering to the cloud is that the provider already has plenty of servers and additional capacity on tap. If you need more space to cope with a recovery incident, then you can add this to your account. Under this model, your costs are much lower than building the DR solution yourself, because you get the benefit of duplicating your environment without the upfront capital cost.

Removing the up-front price and long-term commitment shifts the risk away from the customer and onto the vendor. The vendor just has to keep the quality up to keep customers loyal, which requires great service and efficient handling of customer accounts. The cloud recovery provider takes on all the management effort and constant improvement of infrastructure that is required. A business without in-house staff familiar with business continuity planning may ultimately be much better off paying a monthly fee to someone who specializes in this area.

One area where cloud providers may be held to account is security and reliability, but I think providers are often held to the wrong standard. In the end, you have to compare the results that a cloud services provider can achieve, the service levels they work to, and the cost against doing it yourself. The point is that security and reliability are hard, but they are easier at scale. Companies like Amazon and Rackspace do infrastructure for a living, and do it at huge scale. Amazon's outages get reported in the news, but how does this compare to what an individual business can achieve?

The last area where cloud recovery can deliver better results is usability and protecting everything that a business needs. While some businesses know exactly what files should be protected, most either don't have this degree of control, or have got users into the habit of following standard formats or saving documents into specific places. The issues that people normally get bitten by are databases, configuration changes and weird applications that only a couple of people within the organization use. Complete protection means that all of these things can be protected without requiring an expert in either your own systems or the cloud recovery solution.

Cloud means so many different things to so many people that it sometimes seems not to mean anything at all. If you are going to depend on it to protect your data, it had better mean something specific. These five points may not cover every possible protection goal, but they set a good minimum standard.



3 IDENTIFY ALL YOUR PRIVILEGED ACCOUNTS

The best way to start managing privileged accounts is to create a checklist of operating systems, databases, appliances, routers, servers, directories, and applications throughout the enterprise. Each target system typically has between one and five privileged accounts. Add them up and determine which area poses the greatest risk. With this data in hand, organizations can easily create a plan to secure, manage, automatically change, and log all privileged passwords.
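A small sketch of that "add them up" exercise is shown below; the inventory counts are placeholders, and the one-to-five accounts-per-system range is the only input taken from the text.

```python
# Placeholder inventory illustrating the "add them up" exercise: each target
# system is assumed to carry 1-5 privileged accounts, per the article.
inventory = {                      # system category -> number of systems (placeholders)
    "Operating systems": 120,
    "Databases": 35,
    "Network appliances/routers": 60,
    "Directories": 4,
    "Applications": 80,
}

ACCOUNTS_PER_SYSTEM = (1, 5)       # low and high estimate per target system

print(f"{'Category':30s} {'Systems':>8s} {'Priv. accounts':>16s}")
for category, count in sorted(inventory.items(), key=lambda kv: -kv[1]):
    rng = f"{count * ACCOUNTS_PER_SYSTEM[0]}-{count * ACCOUNTS_PER_SYSTEM[1]}"
    print(f"{category:30s} {count:8d} {rng:>16s}")

total = sum(inventory.values())
total_rng = f"{total * ACCOUNTS_PER_SYSTEM[0]}-{total * ACCOUNTS_PER_SYSTEM[1]}"
print(f"{'TOTAL':30s} {total:8d} {total_rng:>16s}")
```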

4 SECURE EMBEDDED APPLICATION ACCOUNTS

Up to 80 percent of system breaches are caused by internal users, including privileged administrators and power users, who accidentally or deliberately damage IT systems or release confidential data assets, according to a recent Cyber-Ark survey.

Many times, the accounts leveraged by these users are the application identities embedded within scripts, configuration files, or an application. These identities are used to log into a target database or system and are often overlooked within a traditional security review. Even if located, the account identities are difficult to monitor and log because they appear to a monitoring system as if the application (not the person using the account) is logging in.

These privileged application identities are being increasingly scrutinized by internal and external auditors, especially during PCI- and SOX-driven audits, and are becoming one of the key reasons that many organizations fail compliance audits. Therefore, organizations must have effective control of all privileged identities, including application identities, to ensure compliance with audit and regulatory requirements.

5 AVOID BAD HABITS

To better protect against breaches, organizations must establish best practices for securely exchanging privileged information. For instance, employees must avoid bad habits such as sending sensitive or highly confidential information via e-mail or writing down privileged passwords on sticky notes. IT managers must also ensure they educate employees about the need to create and set secure passwords for their computers instead of using sequential password combinations or their first names.

The lesson here is that the risk of internal data misuse and accidental leakage can be significantly mitigated by implementing effective policies and technologies. In doing so, organizations can better manage, control, and monitor the power they provide to their employees and systems, and avoid the negative economic and reputational impacts caused by an insider data breach, regardless of whether it was done maliciously or by human error.


    ITOPS

For many shops, this information is unavailable: IT does not receive an energy bill, and does not use, or have, tools to identify its share of energy consumption. In the past, electricity costs, especially in smaller IT shops, were of minor concern; in many cases, the energy bill was simply left in the hands of the facilities director or company accountant to pay and file away.

However, in the same study, Info-Tech finds that 28% of IT departments are now piloting an energy measurement solution of some kind, and an additional one-quarter of shops are planning a measurement project within twelve months. Many converging factors drive interest in measuring and managing energy use, and the major ones are outlined here:

• Increasing energy costs: The US Energy Information Administration (EIA) reports that between 2000 and 2007, the average price of electricity for businesses increased from 7.4 cents per kilowatt-hour (kWh) to 9.7 cents per kWh, an increase of 30%.

• Burgeoning data center energy consumption: According to the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), the energy density of typical mid-range server setups has increased about four times between 2000 and 2009 (from about 1,000 watts per square foot to almost 4,000). Greater server consumption means more waste in the form of heat, so energy consumption of cooling and support systems also spikes simultaneously. (A rough cost illustration of these two trends follows this list.)

• Green considerations: Energy consumption has an associated carbon footprint. Interest in reducing energy use has increased in IT and senior management ranks.
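To see how the first two trends compound, the sketch below pairs the 2007 price with the 2009 density figure and assumes the equipment draws its full rated density around the clock, with cooling excluded. It is an illustration of scale, not data from the cited sources.

```python
# Illustrative compounding of the two trends cited above: electricity price
# (EIA: 7.4 -> 9.7 cents/kWh, 2000-2007) and server energy density (ASHRAE:
# ~1,000 -> ~4,000 W/sq ft, 2000-2009). Continuous full-density draw is
# assumed; cooling and support loads are excluded.
HOURS_PER_YEAR = 8760

def annual_cost_per_sqft(watts_per_sqft: float, cents_per_kwh: float) -> float:
    """Yearly electricity cost per square foot of server space."""
    kwh = watts_per_sqft / 1000 * HOURS_PER_YEAR
    return kwh * cents_per_kwh / 100

cost_then = annual_cost_per_sqft(1000, 7.4)   # circa 2000
cost_now = annual_cost_per_sqft(4000, 9.7)    # circa 2007-2009 figures combined
print(f"Circa 2000: ${cost_then:,.0f} per sq ft per year")
print(f"Circa 2009: ${cost_now:,.0f} per sq ft per year ({cost_now / cost_then:.1f}x)")
```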

Ultimately, interest in energy data is driven by the age-old accounting precept: what gets measured gets done. Realizing that energy use will become a compounding issue, a growing number of IT shops seek to quantify energy as an operational cost, just like line items such as staffing and maintenance. Once the cost is accounted for, IT has a number to improve on. In this note, learn about three options for obtaining energy numbers in the data center. A companion Info-Tech Advisor research note, "Energy Measurement Methods for End-User Infrastructure," describes how to obtain energy data at the user infrastructure level (workstations, printers, and the like).

CONSIDERATIONS FOR CALCULATION

Ultimately, energy data needs to be collected from two cost buckets: data-serving equipment (servers, storage, networking, UPS) and support equipment (air cond