
© 2017 Panduit Corp. | Cisco. All rights reserved. This document is Cisco Public Information.

ACI Reference Design

RD-03, Rev. 1

Mapping Logical Architectures to Physical Infrastructure – Cisco ACI

January 2017


Contents

1. Introduction
2. Overall Data Center Design
   2.1 Workload Profile
   2.2 Floor Plan
   2.3 Identification
   2.4 Pathways
   2.5 Logical Architecture
3. Server Cabinet Pod Architecture
   3.1 Server Cabinet Pods – Twins
   3.2 Pod Server Cabinet with Management Network Switch
4. Main Distribution Area (MDA)
   4.1 MDA Row
   4.2 Double Spine Switch Cabinet
   4.3 Border Leaf Switch Cabinet
5. Facilities Planning
   5.1 Cooling Design
      5.1.1 Equipment Arrangement and Specifications
      5.1.2 Computational Fluid Dynamics Analysis
      5.1.3 Best Practices Checklist
   5.2 Power Distribution
      5.2.1 Sizing the Power Infrastructure
      5.2.2 Power Usage Effectiveness (PUE)
6. Conclusion


1. Introduction

Cisco® Application Centric Infrastructure (Cisco ACI™) is a comprehensive software-defined networking (SDN) architecture. This policy-based automation solution supports a business-relevant application policy language, greater scalability through a distributed enforcement system, and greater network visibility. These benefits are achieved through the integration of physical and virtual environments under one policy model for networks, servers, storage, services, and security.

The Cisco ACI network architecture differs from the traditional three-tier model (consisting of core, aggregation, and access layers). Instead, it uses a two-tier model, called spine-leaf or a flat network, which reduces overall signal latency by eliminating a switching layer. Panduit® and Cisco have collaborated to provide validated spine-leaf topologies, offering customers a clear path to adopting Cisco ACI and helping ensure a high-performance, scalable, and reliable data center design.

This reference design illustrates an initial, proof-of-concept Cisco ACI data center layout that is intended to grow to a full-sized deployment. Throughout the creation of this data center design, a keen focus has been placed on the future ramifications of every placement of equipment, pathways, cabling, and cooling units. A future reference design will document the growth of the data center to full size.

The layout in this reference design is based on industry best practices and products. It allows enterprise architects to develop targeted architectures that result in reliable, scalable, high-performing, and secure implementations. In addition, this document offers clear guidance for designing data centers that fulfill organizational requirements while avoiding many of the operational difficulties currently facing IT managers.

This guide is organized as follows. Section 2 provides an overview of the entire reference design by examining key design principles, workload profiles, pathways, and logical architecture. Sections 3 and 4 explain the modular building blocks that populate the data center and present details about these elements at the rack-elevation level; the two types of rows studied in these sections are server pods and networking rows. Section 5 presents the power, cooling, and grounding design for the data center.

2. Overall Data Center Design

Panduit’s Infrastructure for a Connected World is an innovative approach to connect, manage, and automate all critical enterprise systems – including communications, computing, control, security, and power. These solutions meet the customer’s comprehensive needs across all enterprise domains for a smarter, connected business foundation. The data center presented in this reference design is a model for the modern data center, as it integrates the logical network and compute into the physical infrastructure in a scalable, modular way.

Panduit’s Infrastructure for a Connected World solutions help enterprises to:

• Improve network availability, agility, and security
• Reduce operational costs – including energy costs – through improved efficiency
• Centralize systems management
• Simplify resource provisioning
• Optimize virtualized resources
• Create a path to future technology requirements
• Integrate environmental monitoring and control, capacity planning, and security with real-time data visibility and control


This approach also enhances ROI by helping enterprises to:

• Maintain the highest levels of application uptime, increase asset utilization, and meet or exceed customer service-level agreements (SLAs)
• Acquire and retain customers, respond to business requirements, and integrate new technologies into current and future environments
• Provide safe and secure work environments and business continuity
• Implement energy-efficient assets and support corporate sustainability objectives
• Reduce the footprint for new facilities and get the most from space allocations for existing facilities

2.1 Workload Profile

This reference design presents an optimized Tier III modular data center. The design has been created with the maximum configuration in mind, even though this reference design represents only a small preliminary deployment. This document covers only the first three rows of a much larger data center, so the placement of these rows is dictated by where they will sit in the final configuration. Planning a data center for the maximum configuration provides several benefits:

• With maximum configuration data, it is easier to calculate the most demanding set of conditions for weight, power, and cooling for proper consideration early in a facility’s construction cycle
• There is decreased risk that major facility modifications will be needed in the future to accommodate unforeseen capacity growth and IT equipment upgrades
• Rows of cabinets and other IT and mechanical equipment can be more easily selected and distributed with this data
• Infrastructure management tools, such as Data Center Infrastructure Management (DCIM) software, provide information regarding available resources and capacity planning

2.2 Floor Plan

Figure 1 displays the data center floor plan discussed in this reference design. The following bullet points highlight critical data about the room:

• Footprint: 58 ft × 34 ft (17.7 m × 10.3 m), totaling 1972 square feet (183.2 square meters)
• Raised floor installation: 36 in (914.4 mm)
• Floor tiles: 24 in × 24 in (600 mm × 600 mm)
• Point load of raised floor: 1500 lb (680 kg)
• Height of the room from floor tiles to deck: 12 ft (3.7 m)


Mechanical equipment such as computer room air handlers (CRAHs) resides against the walls along the room’s perimeter. IT equipment, including servers and network switches, is arranged in rows of cabinets that run perpendicular to the mechanical equipment. To provide appropriate buffer zones around all these objects for airflow and operational activities, some floor space is intentionally left open. Only three CRAH units are needed to cool the initial deployment. A fourth unit will eventually be needed once additional rows are added; it could be installed but not powered on in the initial deployment, or not installed until a later date.

Figure 1: Data Center Layout

2.3 Identification

To easily identify and manage all the elements of this data center, an effective labeling strategy is employed. Proper labeling provides two very important benefits: determining the locations of components and defining system connections. Together, these enable the quick, clear communication required to accurately install, maintain, and repair critical infrastructure components, resulting in efficient and consistent data center maintenance.

Component locations within this data center are determined using an X-Y coordinate system based on the floor tile grid. Using alphabetic characters on the horizontal axis of the room and numerals on the vertical axis creates a series of alphanumeric designations, one for each floor tile in the data center space. These floor tile designations are the basis for determining the location of all mechanical and IT equipment. Figure 2 displays this system.


Figure 2: Floor Tile Grid Labeling System for Determining the Location of Data Center Equipment

The following list describes how component locations and system connections are identified in this data center:

a. The location of cabinets of IT equipment, mechanical devices, and other elements consuming surface area on the raised floor is based on which floor tile the right front corner of the entity rests upon.
b. Equipment and panels within cabinets are identified by combining (a) with a two-digit number representing the rack unit (RU) where the top-left mounting screw lands in the cabinet.
c. Ports on patch panels are labeled sequentially from left to right and top to bottom.
d. Patch cables and horizontal/backbone cabling are labeled with a combination of (a), (b), and (c) to define the connection between the near-end and far-end patch panel ports. If one end is a device, a standard name is used for that device in the labeling scheme.
e. Ranges of ports are labeled on the patch panels to define the near-end and far-end port connections.
f. Power cable labels define the source of power for any power outlet unit (POU) in the data center. This information includes the distribution panel and the circuit feeding the POU.
g. Grounding and bonding, cooling, safety, fire, and security systems are also identified using a standardized naming convention.
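As a minimal illustration of rules (a) through (d), the following Python sketch builds location and link labels from the grid coordinates. The exact label syntax shown (for example, AJ05-RU22) is an assumption for illustration, not a format prescribed by this design.

    # Hypothetical sketch of the grid-based labeling rules (a)-(d) above.
    # The label syntax (e.g., "AJ05-RU22") is illustrative only.

    def tile_label(column: str, row: int) -> str:
        """Floor tile designation: alphabetic X axis plus numeric Y axis (rule a)."""
        return f"{column}{row:02d}"

    def equipment_label(column: str, row: int, rack_unit: int) -> str:
        """Cabinet equipment: tile designation plus the two-digit RU where
        the top-left mounting screw lands (rule b)."""
        return f"{tile_label(column, row)}-RU{rack_unit:02d}"

    def link_label(near: str, near_port: int, far: str, far_port: int) -> str:
        """Patch cable label tying near-end and far-end panel ports (rules c and d)."""
        return f"{near}:P{near_port:02d}/{far}:P{far_port:02d}"

    # A panel at RU 22 in the cabinet on tile AJ05, port 3, connected to a
    # panel at RU 40 in the cabinet on tile AB02, port 17:
    print(link_label(equipment_label("AJ", 5, 22), 3,
                     equipment_label("AB", 2, 40), 17))
    # -> AJ05-RU22:P03/AB02-RU40:P17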

2.4 Pathways

The structured cabling system in this data center delivers all the benefits of a connected infrastructure system. It provides robust and modular pathways through which bulk cabling can be routed effectively between cabinets and rows of equipment. This section examines those overhead pathways.

Figure 3 illustrates the lower-tier Wyr-Grid® overhead cable tray routing system used to carry copper distribution cabling throughout the data center. Figure 4 illustrates the upper-tier FiberRunner® cable routing system used to carry fiber distribution cabling. Two tiers are used to keep proper separation between copper and fiber optic cabling, as recommended by TIA-942. Rack elevations throughout this guide also depict these pathways for further clarification.

The following bullet points summarize key configuration best practices and benefits related to these pathways.

Overhead Pathway Lower Tier – Wyr-Grid Overhead Cable Tray Routing System

– Mounts 6 in (152 mm) above the top of the cabinets.
– Extends over all rows in the data center.
– Routes Panduit TX6A™-28 10Gig™ Category 6A UTP copper cabling between cabinets and rows. TX6A-28 cable consumes less space, at under 0.185 in (4.67 mm) in diameter, and all connections in the data center are under the 96-m distance restriction.


– Employs uniform pathway dimensions throughout the data center: 24 in × 6 in (610 mm × 152 mm). Pathway sizing ensures a best-practice fill ratio of 50%, as recommended by TIA-569-B.
– Aligns with the cable entry holes in cabinet tops and does not interfere with FiberRunner cable routing system functionality.
– Multiple spillout options provide versatility for routing cables into cabinets.
– Integral bonding features help ensure electrical continuity between all pathway sections.
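To illustrate the 50% fill ratio, the following Python sketch estimates how many TX6A-28 cables the 24 in × 6 in pathway cross-section could hold. This is only a simple cross-sectional estimate for illustration; actual fill planning should follow the manufacturer's fill tables.

    import math

    # Rough capacity estimate for the 24 in x 6 in Wyr-Grid pathway, assuming
    # TX6A-28 cable at 0.185 in outside diameter and the 50% fill ratio
    # recommended by TIA-569-B. Order-of-magnitude check only.

    tray_area = 24.0 * 6.0                   # pathway cross-section, in^2
    usable_area = 0.5 * tray_area            # 50% fill ratio
    cable_area = math.pi * (0.185 / 2) ** 2  # one cable's cross-section, in^2

    print(f"Approximate capacity at 50% fill: {usable_area / cable_area:.0f} cables")
    # -> on the order of 2,700 cables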

Figure 3: Wyr-Grid Overhead Cable Tray Routing System for Data Center (Lower Tier). Left: Top View; Right: Front View

Overhead Pathway Upper Tier – Panduit FiberRunner Cable Routing System

– Mounts 12 in (305 mm) above the Wyr-Grid Overhead Cable Tray Routing System and does not interfere with it.
– Extends over all rows in the data center.
– Routes Panduit Opti-Core® OM4 Multimode LC Push-Pull Duplex Fiber Cables between leaf cabinets and spine cabinets. All connections in the data center are under the 150-m distance restriction for the bidirectional (BiDi) transceivers.
– Employs two different channel dimensions to accommodate higher-density cable counts at the perimeter and main distribution area (MDA) rows. Pathway sizing ensures a best-practice fill ratio of 50%, as recommended by TIA-569-B.
   Channel above server pods: 12 in × 4 in (305 mm × 101 mm).
   Channel above MDA and along perimeter: 24 in × 4 in (610 mm × 101 mm).
– QuikLock™ couplers and brackets eliminate or reduce the need for tools to assemble the system.
– Multiple spillout options provide versatility for routing cables into cabinets.
– Fittings provide a minimum 2-in (50.8-mm) bend radius to protect against signal loss due to excessive cable bends.

Figure 4: FiberRunner Cable Routing System for Data Center (Upper Tier)


2.5 Logical Architecture

Figure 5 introduces the logical Cisco ACI architecture of this reference design and shows how the elements of the architecture map to the physical floor plan. The Cisco ACI architecture follows a spine-leaf topology. Leaf switches reside in each server cabinet and are mounted in the middle of the cabinet to optimize the cable lengths used within the cabinet. Spine switches are located in the MDA, with two spines in each spine cabinet.
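The following Python sketch illustrates the resulting full mesh for the initial deployment, assuming the two 16-cabinet server pods and the two double spine cabinets described later in this design; switch names and counts beyond those stated are illustrative.

    # Minimal sketch of the spine-leaf full mesh: every leaf connects to every
    # spine, so losing any spine (or an entire spine cabinet) never isolates a
    # leaf. Counts assume two 16-cabinet server pods and two double spine
    # cabinets; names are illustrative.

    spines = [f"spine-cab{c}-{pos}" for c in (1, 2) for pos in ("top", "bottom")]
    leaves = [f"leaf-{n:02d}" for n in range(1, 33)]  # one leaf per server cabinet

    links = [(leaf, spine) for leaf in leaves for spine in spines]
    print(f"{len(leaves)} leaves x {len(spines)} spines = {len(links)} fabric links")
    # Losing one spine cabinet removes half the links, yet every leaf still
    # reaches the two spines in the surviving cabinet.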

Figure 5: Mapping the Physical Layer with the Spine-Leaf Logical Architecture

3. Server Cabinet Pod Architecture

Pod architectures provide a modular and scalable solution for the buildout of a data center. A pod is a collection of any of the entities that consume floor space in the data center. A pod architecture provides consistency across rows, which allows for easier training and maintenance within the facility. To maximize the use of floor space most efficiently and cost-effectively, equipment is organized into modular rows aligned with the floor tile grid system. Figure 6 provides a view of half of a server cabinet pod.

Figure 6: Rear View Rack Elevation of the Left Half of a Server Cabinet Pod - Twins


3.1 Server Cabinet Pods – Twins

Each pod contains sixteen 28-in (700-mm), 42RU Panduit Net-Access™ S-Type cabinets, each housing 24 Cisco UCS® C220 M4 Rack Servers. These 1RU servers meet the redundancy requirements of this reference design by using two hot-swap power supplies, one integrated dual-port Gigabit Ethernet controller for LAN connectivity, and a Cisco array controller configured with RAID 1 and 5. Each of these cabinets also houses one Cisco Nexus® 93108TC-EX Switch. The 93108TC-EX has 48 1/10-Gbps ports that can operate at 100-Mbps, 1-Gbps, and 10-Gbps speeds, and six 40/100-Gbps Quad Small Form Factor Pluggable Plus (QSFP+) uplink ports. All ports are line rate, delivering 2.16 Tbps of throughput in a 1RU form factor. This Cisco ACI-ready switch allows users to take full advantage of Cisco ACI's automated, policy-based systems management approach.

One cabinet in each pod has a management network switch, which handles the out-of-band (OOB) network and the SmartZone™ Gateway network. These cabinets are featured in Section 3.2 of this paper.

Twins Concept

The server cabinet pods in this design use a “twins” concept, which allows for redundant switch connectivity while using only one switch per cabinet. Since each switch in the design has 48 ports for server connections, in a traditional “one cabinet, one switch” design, 24 of the 48 switch ports would go unused. In the twins design, all 48 ports are used by cross-meshing the connections between two adjacent cabinets. This provides redundancy and resiliency for the reasonable cost of patch cords between cabinets. In addition to the twins concept, the server cabinet pods employ a middle-of-the-rack switch placement, which allows for simplified patch cord ordering, shorter patch cords, and no adverse thermal effects, as well as easier cable management between the A and B cabinets.
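The following Python sketch illustrates the twins port accounting described above, assuming 24 dual-homed servers per cabinet; the switch names and assignment order are illustrative.

    # Sketch of the twins port accounting: 24 servers per cabinet, each
    # dual-homed with one link to its own cabinet's leaf switch and one to
    # the twin cabinet's, so both 48-port switches are fully used.

    SERVERS_PER_CABINET = 24
    ports_used = {"switch-A": 0, "switch-B": 0}

    for own, twin in (("switch-A", "switch-B"), ("switch-B", "switch-A")):
        for _server in range(SERVERS_PER_CABINET):
            ports_used[own] += 1   # link to the server's own cabinet switch
            ports_used[twin] += 1  # cross-meshed link to the twin's switch

    print(ports_used)  # {'switch-A': 48, 'switch-B': 48} -- no stranded ports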

In addition to these servers, the cabinets are populated with structured cabling elements such as patch panels and horizontal cable managers that ensure efficient management of the physical layer. Figure 7 provides a rack elevation of the rear of the server cabinet pod. Table 1 presents the environmental profile of each cabinet.

Figure 7: Rack Elevation of the Twins Server Cabinet Pod (Rear View)


Table 1: Environmental Profile – Server Cabinet Pod

   Dimensions (H × W × D): 79.8 in (2026 mm) × 27.6 in (700 mm) × 48 in (1200 mm)
   Rack Spaces: 42
   Expected Power Consumption: 8 kW per cabinet
   Airflow: Servers and switch have front-to-back airflow

Inside each cabinet, 48 copper ports and 4 fiber optic strands are terminated in the current configuration. By employing the cable routing and management best practices displayed in this document, the design of this cabinet provides easy access to server components during maintenance windows and helps ensure proper cooling and power distribution.

The following bullet points summarize key configuration best practices and benefits related to effective cable routing and management in the server cabinet pod.

Cabling Systems

• Telecommunications cabling routes to the left to avoid the short power cables that route to the right (this can be seen in Figure 11). These cables are kept separated to comply with TIA-942.
• Panduit TX6A-28 Category 6A UTP copper cabling is used for patch cords and horizontal cabling. TX6A-28 cables consume less space, at under 0.185 in (4.67 mm) in diameter.
• Panduit Opti-Core OM4 Multimode Duplex Push-Pull LC to Push-Pull LC Patch Cords are routed inside cabinets, and Panduit Opti-Core OM4 Multimode LC Push-Pull Duplex Fiber Interconnects are routed through the FiberRunner cable routing system.
• Power cables from two mounted POUs in the cabinet route through a floor tile cutout and a Cool Boot® Raised Floor Air Sealing Grommet into the raised floor to mounted junction boxes.
• QuickNet™ Patch Panels use preterminated assemblies for both copper and fiber for faster deployment of distribution cabling between cabinets and rows.

Cabinets and Cable Management

• The 28-in (700-mm) Net-Access™ S-Type Cabinet provides an optimally sized vertical cable pathway for unobstructed access.
• Vertical cable management finger sections align with rack spaces to help ensure proper bend radius control and greater routing flexibility.
• Panduit NetManager™ Horizontal Cable Managers create efficient, high-density pathways for routing and maintaining structured cabling across flat patch panels. In addition, curved surfaces on these cable managers maintain cable bend radius.
• The horizontal cable manager size is selected to achieve a fill rate of less than 50%, accommodating proper cable routing techniques.
• The Panduit Cool Boot Raised Floor Air Sealing Grommet prevents air bypass through the floor tile cutouts.


Figure 8: Intra-Cabinet Cable Routing and Management in Twins Server Cabinet (Rear View)

Figure 8 displays the intra-cabinet cable routing and management in a twins server cabinet. The TX6A-28 UTP copper cable is routed through the Open-Access™ horizontal D-ring cable managers to the rack-mount cable management fingers, which align to each RU in the cabinet. In the middle of the cabinet, a 2RU D-ring cable manager handles the higher density of cables located around the switches. The side walls on the inside of the twins pod are left open to allow the cables to pass through. The fiber cables, which provide connectivity to the spine cabinets, are routed on the right side of the cabinet to a patch panel at the top of the cabinet.

SmartZone Gateways

Panduit SmartZone Gateways are installed in every cabinet in this design. SmartZone Gateways convert power consumption and environmental data captured by SmartZone power monitoring devices into useful information that is relayed to SmartZone Software, providing current and historical views of power, temperature, and humidity readings. The SmartZone Data Center Infrastructure Management (DCIM) solution used in this design includes the Panduit EPA126, displayed in Figure 9. The EPA126 provides room- or multirack-level power and environmental monitoring and physical access control using a single IP address. It also allows IT and facilities managers to use this information to develop strategies for reducing energy expenses, using energy resources more efficiently, and lowering overall operating costs.


Figure 9: Panduit SmartZone EPA126 Gateway

Cabinet-Level Monitoring

SmartZone cabinet-level power monitoring hardware focuses on power monitoring of all IT equipment at the cabinet. Intelligent power distribution units (PDUs) may also accept sensors and security access control inputs, providing granular reporting, auditing, and trend analysis. This information greatly assists in measuring and allocating appropriate power usage and infrastructure efficiency (for example, capacity, power, environmental, and connectivity status) throughout the data center, allowing more informed decision making about cooling and power capacity planning. A broad range of metrics is included, from power usage, humidity, temperature, and airflow to leak detection, security, and rack-level assets.

Cabinet-level monitoring can supply detailed information to IT and facility managers about power, environmental conditions, and physical security. Intelligent PDUs collect true root-mean-square (RMS) volts, amps, kW, kVA, kWh, power factor (PF), and frequency (Hz) to deliver trend and spot data for individual power strips in a cabinet. Capacity reports offer the opportunity to identify power, temperature, and space capacity on a cabinet, aisle containment, row, or room basis.
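As a hypothetical illustration of the per-strip metrics listed above, the following Python sketch defines a simple reading record; the field names and derivation are assumptions for illustration, not a Panduit SmartZone API.

    from dataclasses import dataclass

    # Hypothetical shape of a per-strip reading covering the metrics listed
    # above (true-RMS volts, amps, kW, kVA, kWh, power factor, frequency).
    # Not a real SmartZone data structure.

    @dataclass
    class PduReading:
        volts_rms: float
        amps_rms: float
        kw: float            # real power
        kva: float           # apparent power
        kwh: float           # accumulated energy
        power_factor: float
        frequency_hz: float

        @classmethod
        def from_measurement(cls, volts, amps, power_factor, kwh, hz=60.0):
            kva = volts * amps / 1000.0  # apparent power from volts and amps
            return cls(volts, amps, kva * power_factor, kva, kwh, power_factor, hz)

    r = PduReading.from_measurement(volts=208.0, amps=16.0, power_factor=0.95, kwh=12_000.0)
    print(f"{r.kw:.2f} kW real / {r.kva:.2f} kVA apparent")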

Figure 10: Power Cable Routing and Management in Twin Pod Server Cabinet (Rear View)


Figure 10 displays how the power cords and PDUs are mounted and managed. The PDUs are mounted to brackets on the right side of the rear of the cabinet, giving the power cords easy access from the power supplies on the same side. Black power cords serve the A-side redundant power supplies; white power cords serve the B-side. Both the Cisco Nexus 93108TC-EX and the Panduit SmartZone Gateway have power supplies located toward the front of the cabinet, so their cords must be routed to the back of the cabinet to plug into the PDUs. A Panduit Cool Boot Raised Floor Air Sealing Grommet ensures that cold air stays beneath the raised floor while allowing the PDU power cords to be routed through the floor to the power whips.

Figure 11 displays how the rear side of the cabinet looks when everything is installed. Fiber connections are run through the FiberRunner cable routing system to the double spine cabinets. The light blue cables are fiber connections routed to the top spines in the double spine cabinets; the brown cables are fiber connections routed to the bottom spines. This ensures that each leaf switch has a connection routed diversely to the spines for resiliency, and multiple connections for redundancy. If any double spine cabinet fails for any reason, the network is still available through the connections to spines in other cabinets. The black cables are copper connections for the SmartZone Gateway, and the green cables are copper connections for the OOB network.

Figure 11: Twin Pod Server Cabinet (Rear View) with All Power, Copper, and Fiber Cabling Installed


3.2 Pod Server Cabinet with Management Network Switch

In each pod, one server cabinet houses a management network switch. In this reference design, this is the last cabinet on the right side of the pod when facing the front of the cabinets. The management network switch provides access for the SmartZone Gateway connections required in each pod server cabinet. The design of the cabinet is the same as the other pod server cabinets, except that it houses a Cisco Nexus 3048 switch and two additional patch panels. This switch also has ports available for additional OOB access to switches, servers, or other equipment in the data center space that may require it (cameras, KVM units, etc.). A drawing of the cabinet is shown in Figure 12.

Also housed in the two pod server cabinets with a management network switch are two servers used as Application Policy Infrastructure Controllers (APICs). Any server connected to a leaf could operate as an APIC, but we suggest deploying the APICs in the same placement within the pod and in the same selected cabinet throughout the data center. This ensures consistent deployments, which assists with maintenance and operation.

Figure 12: Twin Pod Server Cabinet (Rear View) with Management Network Switch


4. Main Distribution Area (MDA)

The main distribution area (MDA) of the data center is also referred to as the networking rows. In this reference design, the MDA comprises one row of 31.5-in (800-mm), 42RU Panduit Net-Access N-type cabinets that house a wide variety of IT equipment. Due to the small size of the initial deployment, only two spine cabinets, with two spines in each, are required to operate the initial ACI network. An additional two cabinets in the MDA house the leaf switches that provide connectivity to the legacy network, also known as border leaf switches.

4.1 MDA Row

Figure 13: Rack Elevation of MDA Row (Front View)

The MDA row is critical to the functioning of the data center, as it houses all core networking capabilities. Many of the best practices for structured cable routing and management discussed in Section 3 also apply to these cabinets and are not reexamined here. The MDA row is illustrated in Figure 13. Detailed descriptions of each of the cabinets that make up the MDA row follow.

4.2 Double Spine Switch Cabinet

In this reference design, the two 31.5-in (800-mm), 42RU Panduit Net-Access N-type switch cabinets located in the middle of the MDA house the Cisco Nexus 9508 spine switches. These cabinets are referred to as double spine switch cabinets. They are a repeatable, modular design that will scale as the data center grows to full capacity. Figure 14 displays the layout of the switches, line cards, patch panels, and SmartZone Gateway. Table 2 provides the environmental details for the double spine switch cabinets.


Figure 14: Rack Elevation of a Double Spine Switch Cabinet (Front View)

Table 2: Environmental Profile – Double Spine Switch Cabinet

   Dimensions (H × W × D): 78.8 in (2000 mm) × 31.5 in (800 mm) × 48.0 in (1200 mm)
   Rack Spaces: 42
   Expected Power Consumption: 8 kW
   Airflow: Switches have front-to-back airflow

A total of 36 fiber optic cables are terminated here, as well as four copper connections. Figure 15 illustrates how cabling is routed within the Net-Access N-type switch cabinet to support the quality attributes of the design. The figure also displays how connections are routed outside the cabinet. Fiber connections are run through the FiberRunner cable routing system to the server cabinet leaf switches. The light blue cables in the figure are fiber connections routed to the leaf switches using the right directional channel; the brown cables are fiber connections routed to all the leaf switches using the left directional channel. This ensures that each leaf switch has a connection routed diversely to the spines for resiliency, and multiple connections for redundancy. If any double spine cabinet fails for any reason, the network is still available through the connections to spines in other cabinets. The black cables are copper connections for the SmartZone Gateway, and the green cables are copper connections for the OOB network.


High-Density 5RU Port Replication Fiber Adapter Panel

This design uses purpose-built port replication patch panels designed to align directly with the Cisco Nexus 9500 36-port QSFP+ line cards. These high-port-density patch panels save three rack units compared to traditional patch panel designs, while providing a port replication layout that is easy to use and manage. Each panel can accept LC Duplex or PanMPO™ fiber adapters in each port for interchangeable use of BiDi (LC Duplex) and CSR4/SR4 (MPO) transceivers within the panel.

Figure 15: Cable Routing and Management in the Double Spine Switch Cabinet (Front View)

4.3 Border Leaf Switch Cabinet

Two 31.5-in (800-mm), 42RU Panduit Net-Access N-type switch cabinets located toward the ends of the MDA row provide redundant access to the outside world. Each cabinet houses a Cisco Nexus 9504 leaf switch that provides connectivity to the legacy network. Each cabinet also houses a Cisco Nexus 3064-X switch, which provides aggregation-layer connectivity to the management network. Figure 16 displays the layout of the border leaf switch cabinet. Table 3 provides the environmental profile of a single border leaf switch cabinet.


Figure 16: Rack Elevation of Single Border Leaf Switch Cabinet (Front View)

Table 3: Environmental Profile – Single Border Leaf Switch Cabinet

   Dimensions (H × W × D): 78.8 in (2000 mm) × 31.5 in (800 mm) × 48.0 in (1200 mm)
   Rack Spaces: 42
   Expected Power Consumption: 5 kW
   Airflow: Switches have front-to-back airflow

A total of 16 fiber optic cables are terminated here, as well as two copper connections. Figure 17 illustrates how cabling is routed within the Net-Access N-type switch cabinet to support the quality attributes of the design. The figure also displays how connections are routed outside the cabinet. Fiber connections are run through the FiberRunner cable routing system to the double spine switch cabinets. The light blue cables in the figure are fiber connections routed to the spine switches to the right; the brown cables are fiber connections routed to the spine switches to the left. This ensures that each leaf switch has a connection routed diversely to the spine switches for resiliency, and multiple connections for redundancy. If any double spine cabinet fails for any reason, the network is still available through the connections to spines in other cabinets. The purple and red cables are the connections to the legacy network (not located in this data center) connecting to the border leaf, routed in different directions to ensure resiliency and redundancy. The green cables are copper connections for the management network to the access switches. The orange cables are fiber connections routed to the management network core switches (not located in this data center).


Figure 17: Cable Routing and Management in a Single Border Leaf Switch Cabinet (Front View)

5. Facilities Planning

Previous sections of this reference design presented profile tables for each unique cabinet of IT equipment. These tables list all pertinent facilities data for the populated cabinets of this ACI data center, including physical dimensions and power and cooling requirements. The following sections provide guidance for facilities and IT managers when planning and evaluating the power and cooling infrastructure for deploying an ACI network in a new data center.

5.1 Cooling Design

Optimal operation of an ACI network and its IT components depends on a safe and efficient cooling system. The key challenges to consider when designing a data center cooling system are adaptability, availability, lifecycle costs, and maintenance. Some of these challenges can be addressed by following good design practices and by anticipating future loads to determine whether the data center cooling system can meet the increased demand. The initial cooling design should therefore be flexible and scalable to allow future expansion for various IT deployments. Emerging technologies allow the implementation of refined environmental control methods that target significant capital and operational savings. Efficient cooling often equates to fewer computer room air handlers (CRAHs) or computer room air conditioners (CRACs), allowing organizations to achieve savings by lowering power consumption and reducing the cooling equipment footprint.


This reference design delivers an ACI data center cooling system configuration that follows industry best practices to supply the proper amount of cooling for the server cabinets while ensuring reliability and efficiency.

5.1.1 Equipment Arrangement and Specifications

The critical goal of a data center cooling system is not only to meet the cooling requirements of the electronic equipment but also to separate the servers' exhaust air from their intake air, preventing the equipment from overheating. This separation significantly increases the efficiency and capacity of the cooling system.

The ACI data center layout follows a typical hot aisle/cold aisle configuration that aligns with industry best practices to maintain uptime in the most efficient way feasible. The cooling capacity required to offset the heat generated by the pod's IT equipment is usually estimated as the sum of the heat output of each electronic component. The design specifications call for a Tier III data center; therefore, the cooling system redundancy must meet an N+1 configuration.

The total IT equipment heat load and airflow for the room are estimated at 274 kW and 32,880 cubic feet per minute (CFM), respectively, based on the heat dissipation of the IT equipment at typical operating loads. To prevent recirculation of hot exhaust air, an additional 33% airflow must be supplied by the CRAH units to the raised floor; this value was determined through a computational fluid dynamics (CFD) simulation of the ACI data center layout. A final analysis was then performed with the IT equipment set to 100% utilization (~400 kW) to verify that the cooling design would meet the 81°F inlet temperature requirement set by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE). The total cooling airflow required for these conditions was 48,000 CFM. Based on the manufacturer's specifications, the two selected CRAH units can provide sufficient cooling airflow and cooling capacity for the typical heat load of the ACI data center's IT equipment. However, to ensure N+1 redundancy, a third CRAH unit of similar capacity is installed along with the other two. For the typical IT load, running all three CRAH units consumes 44% less fan energy than running two CRAH units at a higher fan speed.
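The following Python sketch is a worked check of these numbers: it applies the 33% recirculation margin to the total IT airflow and confirms that two CRAH units (the N in N+1) can carry the load on their own.

    # Worked check of the cooling numbers in this section, using only figures
    # stated above (32,880 CFM IT airflow, 33% margin, 24,000 CFM per CRAH).

    it_airflow_cfm = 32_880    # total IT equipment airflow at typical load
    margin = 1.33              # 33% extra supply airflow per the CFD study
    crah_airflow_cfm = 24_000  # nominal airflow per CRAH unit
    n_units = 2                # N units; the third CRAH is the +1

    required = it_airflow_cfm * margin
    available = n_units * crah_airflow_cfm

    print(f"Required supply airflow: {required:,.0f} CFM")   # ~43,730 CFM
    print(f"N-unit capacity:         {available:,.0f} CFM")  # 48,000 CFM
    assert available >= required  # two units suffice even with one unit down
    # At 100% utilization the study found 48,000 CFM required, which is
    # exactly the combined nominal airflow of two units.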

Figure 18 shows the fourth CRAH unit, which can be installed as a standby unit in the current design or added for future expansion; it is not required to meet the initial cooling load of the data center. The fourth unit will need to be installed when the fourth row is built out and additional equipment is deployed in the MDA.

The CRAH units are located along the left and right perimeter of the room, aligned with the hot aisles of the cabinet rows, as recommended by industry best practices. These units draw the cabinets' hot exhaust air from the room and supply cool conditioned air to the raised-floor plenum. The cool air reaches the cabinet inlets through 56%-open perforated tiles located in front of the cabinets, in the cold aisle.

The following bullet points reiterate the pertinent room details from the floor plan and summarize the data center specifications pertaining to power and cooling.

Data Center Cooling Room Specifications

• Footprint: 58 ft × 34 ft (17.7 m × 10.3 m), totaling 1972 square feet (183.2 square meters)
• Raised floor installation: 36 in (914.4 mm)
• Floor tiles: 24 in × 24 in (600 mm × 600 mm)
• Height of the room from floor tiles to deck: 12 ft (3.7 m)


Power and Cooling Specifications

• Server pod number of cabinets: 16
• MDA row number of cabinets: 10 (only four cabinets with active equipment)
• Average heat load per cabinet: 8 kW
• Total IT equipment heat load for data center: 274 kW
• Average airflow per cabinet: 960 CFM
• Total IT equipment airflow for data center: 32,880 CFM
• Total number of CRAH units: 3 (a fourth unit is optional and not required for the initial installation)
• Cool air supply temperature (aligns with ASHRAE 2015 Thermal Guidelines for Data Processing Environments): 68°F
• Cool air supply humidity (aligns with ASHRAE 2015 Thermal Guidelines for Data Processing Environments): 52°F dew point
• Nominal cooling capacity per CRAH unit: 200 kW
• Nominal airflow per CRAH unit: 24,000 CFM
• Number of 56%-open perforated floor tiles: 45

Figure 18: Data Center Cooling System Layout


Panduit SmartZone Data Center Infrastructure Management (DCIM) Solutions

A top-of-mind concern of C-level executives is the continuous increase in data center energy consumption and the ever-increasing associated operating costs. Monitoring the white space to retrieve critical information about power utilization and environmental conditions helps organizations understand and mitigate this issue. Panduit's SmartZone solutions are a suite of DCIM tools that provide extensive visibility into a data center's operation and supply accurate, actionable data for all levels of decision-making personnel.

Throughout the monitoring network, sensors installed at critical IT equipment locations periodically report environmental data to support energy-efficiency monitoring and trending for optimization. In the same manner, power consumption is logged and used for energy usage analysis and operational characterization of IT equipment to improve power usage effectiveness (PUE).
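As a brief worked example of the PUE metric, the following Python sketch applies its definition (total facility energy divided by IT equipment energy); the facility overhead factor used here is an assumed placeholder, not a measurement from this design.

    # Worked example of the PUE definition. The 274 kW IT load is from this
    # design; the 1.5x facility overhead factor is an assumed placeholder.

    it_energy_kwh = 274.0 * 24 * 365           # IT load over one year
    facility_energy_kwh = it_energy_kwh * 1.5  # assumed cooling/UPS/lighting overhead

    pue = facility_energy_kwh / it_energy_kwh
    print(f"PUE = {pue:.2f}")  # 1.50 under these assumed numbers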

Panduit’s SmartZone DCIM solutions help mitigate risk and manage change within the physical infrastructure by providing real-time data on the status of power, cooling, security, and environmental conditions, from the enterprise down to individual devices in data center cabinets.

At the core of Panduit SmartZone DCIM solutions is the 6 Zone approach (Figure 19), which defines “zones” from the entry point of utilities into the building, through the power distribution systems, down to the individual devices in the data center cabinets.

Figure 19: 6 Zone Approach

The zones shown in Figure 19 are identified primarily by their physical location and infrastructure equipment operation. For example, zones 5 and 6 are in the white space and monitor the environmental conditions and power consumption at the rack and server level. Access management is also a data point included in the Panduit SmartZone Solutions suite.


5.1.2 Computational Fluid Dynamics Analysis

CFD models are an integral part of designing and validating a data center cooling system. To build and validate the cooling system of this reference design, several scenarios were simulated using CFD. The first scenario used 60% of the nameplate power and the corresponding airflow for the IT equipment; this 60% figure represents the heat dissipation of the equipment at full utilization under typical operating conditions. Next, three additional runs were completed on the model, failing each CRAH unit in turn to test the N+1 resiliency. The model passed these tests with no IT equipment inlet temperatures exceeding the ASHRAE upper temperature limit (80.6°F). Only the results for the 60% model with all CRAHs functioning are presented here, as the runs with disabled CRAH units produced similar results.

The 100% power and airflow figure mentioned in the previous section represents the theoretical maximum heat dissipation of the equipment. The cooling design passed this check with IT equipment inlet temperatures of 81°F or less. The results of the 100% analysis are not presented here, as this extreme condition is typically used to test the robustness of the design and occurs only infrequently in actual data center operations.

The CFD models were created using the following assumptions:

• 3D Navier-Stokes steady-state equations were solved using a commercially available CFD code, 6SigmaRoom by Future Facilities.
• The κ-ε turbulence model was used to solve the turbulent flow region.
• The data center was assumed to be located inside a building. No solar gains or heat gains from other areas of the building were considered, and the walls of the room were modeled as adiabatic.
• The CRAH units were modeled with a fixed airflow based on the nominal airflow needed to maintain all IT equipment below 81°F. The cooling capacity of the CRAH was modeled with a fixed supply temperature rather than a capacity curve.
• Secondary heat loads, such as lighting and IT personnel, were ignored in the analysis. The cooling load due to the fan motors inside the CRAH units was added automatically by the software.
• Over 5 million grid cells were generated to create the computational domain; the number of grid cells was optimized for the runs.
• Facility-side cooling equipment (chillers, cooling towers, pumps, etc.) was not considered in this analysis.
• A fixed supply-air temperature was selected by the analyst to provide adequate cooling for the IT equipment.
• Uninterruptible power supplies (UPSs) were assumed to be located outside the white space and were not included in the CFD models.

Figure 20 shows the temperature plots for the 60% heat dissipation analysis. The maximum inlet temperature for all cabinets was less than the ASHRAE recommended inlet temperature for IT equipment (80.6°F). The air temperature distribution in the room is shown with result planes at heights of 1 ft (30.5 cm), 3 ft (91.4 cm), and 6 ft (183 cm) above the raised floor. The air temperature in the cold aisles is uniform and close to the supply air temperature for most cabinets. Only the end-of-row cabinets have inlet temperatures significantly greater than 68°F, because these cabinets recirculate some of the warm room air instead of drawing it directly from the nearby perforated tiles. Figure 20 also shows the CRAH units colored by cooling capacity utilization. As the results indicate, the CRAH units have sufficient capacity to easily handle the cooling load at the reduced fan speed.


Figure 20: CRAH Cooling Capacity and Air Temperature Distribution at 1, 3, and 6 Feet Above the Finished Floor (AFF)

5.1.3 Best Practices Checklist

The checklist in Table 4 summarizes the industry best practices followed in this reference design to produce an optimized cooling system for the ACI data center.

Table 4: Cooling Infrastructure Design Best Practices

Thermal Profile
– Validate and analyze using CFD
– Temperature and humidity specifications should align with the ASHRAE 2015 Thermal Guidelines for Data Processing Environments
– Cabinets are arranged into hot and cold aisles
– Cable cutouts are properly sealed
– Empty spaces within cabinets are sealed using blanking panels

Air Distribution
– Perimeter down-flow CRAH units are aligned with the hot aisles
– Perforated floor tiles are located only in the cold aisles

Ventilation Design
– The CRAH units supply sufficient cooling airflow to the room to minimize hot spots due to hot air recirculation patterns
– If a return plenum is used, ensure that the placement of ceiling return grilles aligns with the hot aisles
– As a starting baseline, each rack with working IT equipment should have a single perforated floor tile in front of it
– The cooling needs of cabinets are satisfied by the adjacent perforated tiles, resulting in a scalable and predictable cooling architecture

Raised-Access Floor Plenum Height
– Provide at least 36 in of free-flow height for a raised-access floor plenum

Room Design
– The 12-ft room height gives the cabinets' hot exhaust air enough open space to rise and return to the CRAH units without being recirculated to the cabinet inlets
– Seal any holes in the walls or structure surrounding the computer room
– Provide a vapor barrier within the room construction for humidity control

Under-Floor Blockages
– No cable trays or other obstructions are placed in the cold aisles below the perforated tiles
– Under-floor cable routing is coordinated with other under-floor systems
– The under-floor area at the CRAH supply discharge is free from obstructions

CRAH Placement and Configuration
– CRAHs are aligned with the hot aisle rather than the cold aisle
– A minimum clearance of 6 ft is provided between the cabinet rows and the nearest CRAH unit

Passive Air-Blocking Devices
– All cabinets have in-cabinet cable managers providing pathways that allow cables to run vertically in the cabinet side areas
– Blanking panels should block empty rack units to prevent hot air recirculation
– When IT equipment inlets do not draw air directly from the cold aisle, inlet ducts should be installed to redirect the cold air to the equipment's inlets

5.2 Power Distribution

With the power and environmental profiles of the IT equipment known, the power distribution for the ACI reference design can now be addressed. The power distribution components and topology for this reference design are assumed to be for a Tier III data center facility as defined by the Uptime Institute and TIA-942. A typical Tier III data center allows concurrent maintainability through multiple distribution paths; generally, only one distribution path serves the computer equipment at any time. A Tier III data center implies that UPS and generator redundancy is provided at an N+1 level (N equals the capacity required; +1 means an additional separate unit is added for redundancy). Tier III data centers also require a redundant feed from the utility company.


Figure 21 shows the power distribution infrastructure arranged in a Tier III facility. A brief description of each component follows.

Figure 21: Infrastructure Components Arranged in a Tier III Facility

The AC power supporting the data center originates with one or more utility companies. Power is sent

over long-distance power lines at high voltages. Before it enters the data center, the power must be

reduced to a lower voltage, which is typically 480 VAC. This reduction is performed by the local

transformer outside the facility.

A generator at the data center facility provides standby power in case the utility source fails. Typical generators use diesel, natural gas, LP fuel, or biodiesel as their energy source.

A service entrance panel is located indoors, receives the utility power from the local transformer, and may house equipment to provide transient voltage surge suppression, utility metering, and bypass switching.

Both the utility source and the generator feed an automatic transfer switch (ATS). The ATS detects utility source failure, signals the generator to start up, switches from utility to generator, and switches back to the utility once the utility becomes stable.

In the case of a utility power failure, the generator does not take over immediately; it requires seconds or minutes to start up. Critical computing systems cannot wait this long, which is why the uninterruptible power supply (UPS) is essential. If the power supply to the UPS fails, there is a seamless transition to battery power. Battery banks are specified according to how many minutes of power they can supply (from five minutes to several hours of backup time). After that period, the generator must take over if power is to be maintained.
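To make this ride-through requirement concrete, the short Python sketch below checks whether an assumed UPS battery hold time covers an assumed generator start-up delay with a safety margin. Both timing values are illustrative assumptions, not figures from this reference design.

```python
# Sketch: verify that UPS battery hold time covers the generator start-up gap.
# The timing values below are illustrative assumptions only.

GENERATOR_START_SECONDS = 90   # assumed worst-case start + transfer time
UPS_HOLD_MINUTES = 5           # assumed battery runtime at full load

def ride_through_ok(ups_hold_min: float, gen_start_s: float,
                    safety_factor: float = 2.0) -> bool:
    """Return True if the UPS can bridge a utility failure until the
    generator is stable, with a safety margin on the start-up time."""
    return ups_hold_min * 60 >= gen_start_s * safety_factor

if __name__ == "__main__":
    ok = ride_through_ok(UPS_HOLD_MINUTES, GENERATOR_START_SECONDS)
    print(f"UPS bridges generator start-up: {ok}")  # True (300 s >= 180 s)
```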


A power distribution unit (PDU) takes the UPS power and distributes it across the “branch circuits” feeding the data center IT equipment. The PDU can perform a variety of functions, such as switching between redundant UPSs in case one fails. An alternative to using a PDU is to deploy electrical panel boards in the data center. A panel board is an enclosure that houses current protection devices and distributes power to racks or cabinets. Used alongside panel boards, step-down transformers can lower the voltage to a level that the IT equipment power supplies can accept (250 VAC or less).

A rack power distribution unit distributes power inside the rack to the IT equipment. Rack PDUs come in a variety of form factors (horizontal or vertical, with various load ratings and outlet counts). It is common for all IT equipment to be “dual cord” (that is, it requires two power inputs for redundancy).

5.2.1 Sizing the Power Infrastructure

It is assumed that the UPSs for this reference design are located outside the data center room to optimize space utilization within the room. The UPS is selected based on the acceptable hold time and the total power capacity required. As a design best practice, oversizing the UPS by 15% to 30% is acceptable. Although most UPSs are centralized, modular UPS systems are available that provide flexible growth. Another design consideration is redundancy: our Tier III design dictates N+1 UPS redundancy. For example, if the design requires three same-sized UPSs, a fourth one of the same size should be added for redundancy. When sizing the UPS, it is important to consider the total IT equipment load, add a safety oversize margin, and include redundancy, as illustrated in the sketch below.
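This sizing rule reduces to a short calculation. The sketch below assumes a hypothetical 450 kW total IT load and hypothetical 200 kW UPS modules purely for illustration; it applies the oversize margin first and then adds the redundant module.

```python
import math

# Sketch: size an N+1 UPS plant from the total IT load, per the guidance
# above. The load and module size are illustrative assumptions.

def size_ups_modules(it_load_kw: float, module_kw: float,
                     oversize: float = 0.20) -> int:
    """Return the number of UPS modules for N+1 redundancy:
    N modules to carry the oversized load, plus one redundant module."""
    design_load_kw = it_load_kw * (1 + oversize)  # 15% to 30% oversize margin
    n = math.ceil(design_load_kw / module_kw)     # N = capacity required
    return n + 1                                  # +1 redundant module

# Example: assumed 450 kW IT load on assumed 200 kW modules.
# Design load = 540 kW -> N = 3 modules -> N+1 = 4 modules total.
print(size_ups_modules(450, 200))  # -> 4
```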

The PDUs are also placed outside of the data center room for this reference design. Two PDUs provide redundant power feeds to each row of cabinets. Figure 22 provides a quick reference of the power draw for each cabinet at 60% utilization of the IT equipment's nameplate power supply rating. The PDUs share the load during typical operation, each carrying up to 50%. However, each PDU is specified to carry 100% of the load in the event of a power path failure.

Figure 22: Power Consumption per Cabinet at 60% of IT Equipment Utilization
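The load-sharing arithmetic behind Figure 22 is simple; the sketch below walks through it for a hypothetical per-cabinet nameplate rating (the actual per-cabinet figures are the ones shown in the figure).

```python
# Sketch of the load-sharing arithmetic behind Figure 22. The nameplate
# figure below is an illustrative assumption, not a value from this design.

NAMEPLATE_KW = 12.0   # assumed total nameplate rating per cabinet
UTILIZATION = 0.60    # 60% of nameplate, per the design assumption

cabinet_load_kw = NAMEPLATE_KW * UTILIZATION  # expected draw per cabinet
per_pdu_normal = cabinet_load_kw / 2          # A and B feeds share ~50/50
per_pdu_failover = cabinet_load_kw            # one feed must carry 100%

print(f"Cabinet load: {cabinet_load_kw:.1f} kW")                # 7.2 kW
print(f"Per PDU, normal operation: {per_pdu_normal:.1f} kW")    # 3.6 kW
print(f"Per PDU, failover rating:  {per_pdu_failover:.1f} kW")  # 7.2 kW
```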


Rack PDUs distribute the power to IT equipment within each cabinet. Three-phase rack PDUs can be used in this design to reduce the number and size of the conductors. During typical operation, each rack PDU carries no more than 50% of the IT load. Therefore, two rack PDUs are used in each cabinet to accomplish power distribution redundancy (A and B power feeds). In case of a power path failure, a single feed can deliver 100% of the cabinet load.

Table 5 indicates the Panduit rack PDU selection for this reference design. The Panduit rack PDU GM-A0255BL has capabilities for remote power monitoring and optional in-rack environmental sensing. A three-phase, 30A rack PDU running at 208 VAC is recommended based on the cabinet load. The total power available is 10.8 kW; however, an 80% NEC derating has been included to yield 8.6 kW per cabinet.

Table 5: Rack PDU Selection

Power per Rack | PDU Model  | Description           | kW per PDU | Outlets
Up to 8.6 kW   | GM-A0255BL | 3-phase, 30A, 208 VAC | 8.6        | (24) C-13 and (6) C-19
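The Table 5 figures can be reproduced with the standard three-phase power formula, kW = √3 × V × I / 1000 (assuming a power factor of 1, so kW and kVA coincide), followed by the NEC 80% continuous-load derating. A minimal sketch:

```python
import math

# Sketch reproducing the Table 5 figures: three-phase power for a 30 A,
# 208 VAC rack PDU, with the NEC 80% continuous-load derating.
# Assumes unity power factor, so kW and kVA are treated as equal.

VOLTAGE = 208      # line-to-line VAC
CURRENT = 30       # breaker rating, amps
NEC_DERATE = 0.80  # continuous loads limited to 80% of breaker rating

total_kw = math.sqrt(3) * VOLTAGE * CURRENT / 1000  # ~10.8 kW available
usable_kw = total_kw * NEC_DERATE                   # ~8.6 kW per cabinet

print(f"Total:  {total_kw:.1f} kW")   # 10.8 kW
print(f"Usable: {usable_kw:.1f} kW")  # 8.6 kW
```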

5.2.2 Power Usage Effectiveness (PUE)

PUE is the most commonly used metric for gauging the energy efficiency of a data center. Although PUE is criticized in the industry as an imperfect metric, it is still the most widely used. PUE was first proposed by The Green Grid, an industry group focused on data center energy efficiency. Other industry organizations, such as ASHRAE, the Environmental Protection Agency (EPA), and the European Code of Conduct, have since worked to harmonize and recognize PUE as a data center efficiency metric. The main flaw of PUE is its inability to benchmark the work done by the data center; in other words, zombie servers can sit mostly idle, and PUE cannot account for the inactivity. An efficient data center will have both a low PUE and a high amount of work produced by the IT equipment (in other words, a high level of IT equipment utilization).

PUE is defined as the ratio of the total power consumed by a data center to the power consumed by the IT equipment that populates the facility:

PUE = Total Facility Power / IT Equipment Power

where Total Facility Power is everything the facility draws (the IT load plus cooling, power distribution losses, lighting, and other supporting loads) and IT Equipment Power is the power drawn by the compute, storage, and network equipment alone. A PUE value approaching 1 represents a very efficient data center.
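A minimal sketch of the calculation, using assumed power readings:

```python
# Sketch of the PUE calculation defined above. The power readings are
# illustrative assumptions, not measurements from this design.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE = total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Example: a facility drawing 600 kW overall with a 400 kW IT load.
print(f"PUE = {pue(600, 400):.2f}")  # -> 1.50; values near 1.0 are better
```

In this assumed example, a PUE of 1.50 means that for every watt delivered to the IT equipment, another half watt is consumed by cooling, power distribution, and other facility overhead.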



6. Conclusion

This reference design outlines an optimized data center that uses Cisco's ACI software-defined network architecture, based on industry standards and best practices. It also provides an informative guide for engineers looking to accelerate the deployment and integration of components from Cisco and Panduit.

All aspects of the physical infrastructure have been thoroughly reviewed and mapped to the corresponding logical architectures. The design allows enterprise architects to develop targeted architectures that result in reliable, scalable, high-performing, and secure implementations. Additionally, this document offers clear guidance for designing data centers that fulfill organizational requirements while avoiding many of the current operational difficulties facing IT managers.

All trademarks, service marks, trade names, product names, and logos appearing in this document are the property of their respective owners.

Cisco, Cisco UCS, Cisco ACI, and Cisco Nexus are trademarks of Cisco Systems, Inc.

© 2017 Panduit Corp. All Rights Reserved. © 2017 Cisco and/or its affiliates. All rights reserved. Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL: www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1110R)

C11-738372-00 01/17