Achieving Enterprise SAN Performance with the Brocade DCX Backbone



    WHITE PAPER

A best-in-class architecture enables optimum performance, flexibility, and reliability for enterprise data center networks.


The Brocade DCX Backbone is the industry's highest-performing enterprise-class platform for Storage Area Networks (SANs). With its intelligent sixth-generation ASICs and new hardware and software capabilities, the Brocade DCX provides a reliable foundation for fully connected, multiprotocol SAN fabrics, FICON solutions, and MetaSANs capable of supporting thousands of servers and storage devices. The Brocade DCX Backbone also provides industry-leading power and cooling efficiency, helping to reduce both Total Cost of Ownership (TCO) and overall OpEx.

This paper details the architectural advantages of the Brocade DCX Backbone and describes how IT organizations can leverage the performance capabilities, modular flexibility, and five-nines (99.999 percent) reliability of this SAN platform to achieve specific business requirements.

    OVERVIEW

In January 2008, Brocade introduced the Brocade DCX Backbone (see Figure 1), the first platform in the industry to provide 8 Gigabits per second (Gbps) Fibre Channel (FC) capabilities. With the release of Fabric OS (FOS) 6.0 at the same time, the Brocade DCX Backbone added 8 Gbps Fibre Channel and FICON performance for data-intensive storage applications.

In January 2009, the Brocade DCX-4S (see Figure 2) was added to the backbone family, and the Brocade DCX has become a key component in thousands of data centers around the world. New Fibre Channel over Ethernet (FCoE) and SAN extension blades were introduced in September 2009. In June 2010, Brocade launched the industry's first and only 8 Gbps 64-port blade.

Although this paper focuses on the Brocade DCX, some information is provided for the Brocade DCX-4S Backbone, notably in the section on Inter-Chassis Link (ICL) configuration. For more details on these two backbone platforms, see the Brocade DCX Backbone Family Data Sheet on www.brocade.com.


Compared to competitive offerings, the Brocade DCX Backbone is the industry's fastest and most advanced SAN backbone, providing numerous advantages:

• Scales non-disruptively from 16 to as many as 512 concurrently active 4 or 8 Gbps full-duplex ports in a single domain (open systems)
• Scales non-disruptively from 16 to as many as 256 concurrently active 4 or 8 Gbps full-duplex ports in a single domain (FICON)
• Enables simultaneous uncongested switching on all ports as long as simple best practices are followed
• Can provide 4.6 Terabits per second (Tbps) (Brocade DCX) or 2.3 Tbps (Brocade DCX-4S) utilizing 8 Gbps blades, Inter-Chassis Links (ICLs), and Local Switching

In addition to providing the highest levels of performance, the Brocade DCX Backbone features a modular, high-availability architecture that supports mission-critical environments. Moreover, the platform's industry-leading power and cooling efficiency helps reduce ownership costs while maximizing rack density.

The Brocade DCX Backbone uses just 6 watts AC per port and 0.7 watts per Gbps at its maximum 8 Gbps 512-port configuration. The Brocade DCX-4S uses just 6.7 watts AC per port and 0.8 watts per Gbps at its maximum 8 Gbps 256-port configuration. Both are twice as efficient as their predecessors and up to six times more efficient than competitive products. This efficiency not only reduces data center power bills; it also reduces cooling requirements and minimizes or eliminates the need for data center infrastructure upgrades, such as new Power Distribution Units (PDUs), power circuits, and larger Heating, Ventilation, and Air Conditioning (HVAC) units. In addition, the highly integrated architecture uses fewer active electronic components in the chassis, which improves key reliability metrics such as Mean Time Between Failure (MTBF).
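The efficiency figures above follow directly from the port counts and per-port wattage. Below is a minimal Python check of that arithmetic; the only inputs are the numbers quoted in this paper, and the small differences from the quoted watts-per-Gbps values are rounding.

```python
# Sanity check of the quoted efficiency figures; inputs are the paper's
# numbers (watts per port, port count, 8 Gbps line rate per port).

def efficiency(watts_per_port: float, ports: int, gbps_per_port: float = 8.0):
    """Return (total watts, watts per Gbps of data rate)."""
    total_watts = watts_per_port * ports
    return total_watts, total_watts / (ports * gbps_per_port)

for name, watts, ports in [("DCX", 6.0, 512), ("DCX-4S", 6.7, 256)]:
    total, per_gbps = efficiency(watts, ports)
    print(f"{name}: {total:.0f} W total, {per_gbps:.2f} W per Gbps")
# DCX:    3072 W total, 0.75 W per Gbps (quoted as ~0.7)
# DCX-4S: 1715 W total, 0.84 W per Gbps (quoted as ~0.8)
```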

The Brocade DCX Backbone leverages a highly flexible multiprotocol architecture, supporting Fibre Channel, Fibre Connectivity (FICON), Fibre Channel over Ethernet, Fibre Channel over IP (FCIP), IP over Fibre Channel (IPFC), and Data Center Bridging (DCB). IT organizations can also easily mix FC port blades with advanced functionality blades for FCoE server I/O convergence, SAN encryption, and SAN extension to build an infrastructure that optimizes functionality, price, and performance. And ease of setup enables data center administrators to quickly maximize the platform's performance and availability.

This paper describes the internal architecture of Brocade DCX Backbones and how best to leverage their industry-leading performance and blade flexibility to meet business requirements.

How Is Fibre Channel Bandwidth Measured?

Fibre Channel is a lossless, low-latency, full-duplex network protocol, meaning that data can be transmitted and received simultaneously. The name of a specific Fibre Channel standard, for example 8 Gbps FC, refers to how fast an application payload can move in one direction. This is called the data rate. Vendors sometimes state data rates followed by the words "full duplex", for example "8 Gbps full duplex", although this is not necessary when referring to Fibre Channel speeds. The aggregate data rate is the sum of the application payloads moving in each direction (full duplex) and is equal to twice the data rate.
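As a quick illustration of the sidebar's terminology, here is a one-line sketch using the 8 Gbps example from the text:

```python
# "Data rate" counts one direction of a full-duplex FC link;
# "aggregate data rate" counts both directions.

data_rate_gbps = 8                         # application payload, one direction
aggregate_rate_gbps = 2 * data_rate_gbps   # both directions (full duplex)

print(f"data rate:           {data_rate_gbps} Gbps")
print(f"aggregate data rate: {aggregate_rate_gbps} Gbps")
```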

[Figure 1. Brocade DCX (left) and Brocade DCX-4S (right).]


    BROCADE DCX ASIC FEATURES

The Brocade DCX Backbone Control Router (CR8) blades feature Brocade Condor 2 ASICs, each capable of switching at 640 Gbps. Each Brocade Condor 2 ASIC has 40 x 8 Gbps ports, which can be combined into trunk groups of multiple sizes. The Brocade DCX architecture runs the same Fibre Channel protocol on its back-end ports as on the front-end ports, so back-end ports avoid the latency of protocol conversion overhead.

When a frame enters the ASIC, the destination address is read from the header, which enables routing decisions to be made before the entire frame has been received. This is known within the industry as Cut-Through Routing, which means that a frame can begin transmission out of the correct destination port on the ASIC even before the frame has finished entering the ingress port. Local latency on the same ASIC is 0.7 µs and blade-to-blade latency is 2.1 µs. As a result, the Brocade DCX has the lowest switching latency and highest throughput of any Fibre Channel backbone platform in the industry.
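To see why cut-through matters, consider the store-and-forward alternative: the switch must absorb an entire frame before it can begin retransmitting it, adding one frame-serialization time per hop. The sketch below is back-of-the-envelope reasoning under assumed values (a roughly full-size FC frame and a nominal 8 Gbps data rate), not a Brocade formula.

```python
# Store-and-forward adds (frame bits / line rate) of latency per hop;
# cut-through starts transmitting once the header has been parsed.
# Frame size and rate here are illustrative assumptions.

FRAME_BYTES = 2148   # approx. full-size FC frame including headers
RATE_BPS = 8e9       # nominal 8 Gbps data rate

serialization_s = FRAME_BYTES * 8 / RATE_BPS
print(f"store-and-forward penalty: {serialization_s * 1e6:.2f} us per hop")
# ~2.15 us per hop on top of switching latency; cut-through avoids this
# penalty, which is consistent with the 0.7 us local latency quoted above.
```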

Each Condor 2 (8 Gbps) ASIC on a port blade can act as an independent switching engine to provide Local Switching between port groups, in addition to switching across the backplane. Local Switching traffic does not cross the backplane, nor does it consume any slot bandwidth. This enables every port on high-density blades to communicate at full 8 Gbps, with a port-to-port latency of just 700 ns, at least 25 times faster than the next-fastest enterprise SAN platform on the market. On the 16-, 32-, and 64-port blades, Local Switching is performed within 16-port groups. On 48-port blades, Local Switching is performed within 24-port groups. Only Brocade offers an enterprise architecture that can make these types of switching decisions at the port level, enabling Local Switching and the ability to deliver up to 4.6 Tbps (Brocade DCX) and 2.3 Tbps (Brocade DCX-4S) of aggregate bandwidth per backbone.

To support long-distance configurations, 8 Gbps blades have Condor 2 ASICs that provide 2,048 buffer-to-buffer credits per 16-port group on 16-, 32-, and 64-port blades, and per 24-port group on 48-port blades.
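Why so many credits? A buffer-to-buffer credit is held for every frame in flight, and a credit is not returned until the frame crosses the link and the receiver's R_RDY makes it back. The sketch below estimates credit demand from that round-trip reasoning; the propagation constant and frame size are textbook assumptions, not Brocade's sizing formula.

```python
# Estimate BB credits needed to keep a long-distance 8 Gbps ISL busy:
# credits ~= round-trip time / frame serialization time.

LIGHT_US_PER_KM = 5.0   # assumed propagation delay in fiber, us per km

def credits_needed(distance_km: float, rate_gbps: float = 8.0,
                   frame_bytes: int = 2148) -> float:
    frame_us = frame_bytes * 8 / (rate_gbps * 1e3)   # serialization time, us
    round_trip_us = 2 * distance_km * LIGHT_US_PER_KM
    return round_trip_us / frame_us                  # frames "in flight"

for km in (10, 50, 100):
    print(f"{km:>3} km at 8 Gbps: ~{credits_needed(km):.0f} credits")
# ~47 at 10 km, ~233 at 50 km, ~466 at 100 km -- comfortably inside the
# 2,048 credits available per port group.
```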

Condor 2 ASICs also enable Brocade Inter-Switch Link (ISL) Trunking with up to 64 Gbps full-duplex, frame-level trunks (up to 8 x 8 Gbps links in a trunk) and Dynamic Path Selection (DPS) for exchange-level routing between individual ISLs or ISL Trunking groups. Exchange-based DPS optimizes fabric-wide performance by automatically routing data to the most efficient available path in the fabric. DPS augments ISL Trunking to provide more effective load balancing in certain configurations, such as routing data between multiple trunk groups. Up to 8 trunks can be balanced to achieve a total throughput of 512 Gbps.

Furthermore, Brocade has significantly improved frame-level Trunking through masterless trunking: if an ISL trunk link ever fails, the trunk seamlessly re-forms with the remaining links, enabling higher overall data availability.
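Conceptually, exchange-level routing assigns every frame of an exchange to one path, chosen from the equal-cost candidates. Brocade documentation commonly describes DPS as selecting the route from the source ID, destination ID, and originator exchange ID; the hash in this sketch is illustrative, not the ASIC's actual function.

```python
# Conceptual model of exchange-based Dynamic Path Selection: all frames
# of one exchange follow one path, so ordering holds within the exchange
# while different exchanges spread across the available routes.

def select_path(s_id: int, d_id: int, ox_id: int, paths: list) -> str:
    """Pick one ISL or trunk group for an entire exchange."""
    return paths[hash((s_id, d_id, ox_id)) % len(paths)]

paths = ["trunk_0", "trunk_1", "trunk_2"]
print(select_path(0x010200, 0x020300, 0x1234, paths))
# Repeated calls with the same (S_ID, D_ID, OX_ID) always return the
# same path; new exchanges may land on different trunks.
```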

Preventing frame loss during an event such as the addition or removal of an ISL while the fabric is active is a critical customer requirement. Lossless Dynamic Load Sharing (DLS) and DPS enable optimal utilization of ISLs by performing traffic rebalancing operations during fabric events such as E_Port up/down, F_Port down, and so on. Typically, when a port goes down or comes back up, frames may be dropped or arrive out of order, or a traffic imbalance may occur. Brocade's Lossless DLS/DPS architecture rebalances traffic at the frame and exchange level, delivering in-order traffic without dropping frames, thus preventing application timeouts and SCSI retries.

Unlike competitive offerings, frames that are switched within port groups are always capable of full port speed.


    BROCADE DCX PLATFORM ARCHITECTURE

In the Brocade DCX, each port blade has Condor 2 ASICs that expose some ports for user connectivity and some ports to the core switching ASICs via the backplane. The backbone uses a multi-stage ASIC layout analogous to a fat-tree core/edge topology. The fat-tree layout is symmetrical; that is, all ports have equal access to all other ports. The platform can switch frames locally if the destination port is on the same ASIC as the source. This is an important feature for high-density environments, because it allows blades that are oversubscribed when switching between blade ASICs to achieve full, uncongested performance when switching on the same ASIC. No other backbone offers Local Switching; with competing offerings, traffic must traverse the crossbar ASIC and backplane even when traveling to a neighboring port, which significantly degrades performance.

The flexible Brocade DCX architecture uses a wide variety of blades for increasing port density, multiprotocol capabilities, and fabric-based applications. Data center administrators can easily mix the blades in the Brocade DCX to address specific business requirements and optimize cost/performance ratios. The following blades are currently available (as of mid-2010).

Blade Name                       Description                                                      Introduced with
CP8 - Control Processor          Provides service activities and manageability of the backbone   FOS 6.0
CR8 - Core Switching             1,024 Gbps per CR8; ICL ports on every CR8 blade,                FOS 6.0
                                 turned on via an optional license
FC8-16                           16 ports, 8 Gbps FC blade                                        FOS 6.0
FC8-32                           32 ports, 8 Gbps FC blade                                        FOS 6.1
FC8-48                           48 ports, 8 Gbps FC blade                                        FOS 6.1
FC8-64                           64 ports, 8 Gbps FC blade                                        FOS 6.4
FCOE10-24                        24 ports, 10 Gbps FCoE/DCB blade                                 FOS 6.3
FS8-18 Encryption Blade          16 ports, 8 Gbps, line-speed encryption of data-at-rest          FOS 6.1.1_enc
FX8-24 Extension Blade           24 ports: 12 x 8 Gbps FC, 10 x 1 Gigabit Ethernet (GbE),         FOS 6.3
                                 and two optional 10 GbE ports for long-distance FCIP extension
FC10-6                           6 ports, 10 Gbps FC blade                                        FOS 5.3
FA4-18 Fabric Application Blade  18 ports, 4 Gbps FC application blade                            FOS 5.3


    CORE BLADES AND INTER-CHASSIS LINK (ICL) PORTS

Control Processors

The Brocade DCX has two Control Processor (CP8) blades. The control processor functions are redundant active-passive (hot-standby). The blade with the active control processor is known as the active control processor blade, but either blade can be active or standby. Additionally, each control processor has a USB port and two network ports. The USB port is for use only with a Brocade-branded USB storage device. The dual IP ports allow management connectivity to fail over between ports on the same CP, without losing the IP connection, rather than failing over to the standby CP blade.

Control Routing Blades

The Brocade DCX includes two Control Routing (CR8) blades. These blades provide core routing of frames, either from blade to blade or from DCX to DCX/DCX-4S through the ICL ports. The CR8 blades are active-active in each Brocade DCX chassis. Each CR8 blade has four Condor 2 ASICs, and each ASIC has dual connections to each ASIC group on each line card. There are 2 x ICL connections on each CR8 blade, which can be connected to another DCX or DCX-4S chassis as shown in Figures 3, 4, and 5.

[Figure: CP8 blade design. Control processor block with CPU, USB management port, serial management port, and dual Ethernet management ports; redundant CP power; control path to the blades. An optical power slider allows graceful CP failover with no dropped frames.]

[Figure: CR8 blade design. Switching block with four Condor 2 ASICs providing 1 Tbps to the blades over the backplane and two 128 Gbps ICL connections (ICL0 and ICL1); redundant CR power. An optical power slider allows graceful CR failover with no dropped frames.]


Multi-Chassis Configuration

The dual- and triple-chassis configurations for the Brocade DCX and DCX-4S provide ultra-high-speed Inter-Chassis Link (ICL) ports to connect two or three backbones, providing extensive scalability (up to 1536 x 8 Gbps universal FC ports) and flexibility at the network core. Special ICL copper-based cables are used, which connect directly and require no SFPs. Connections are made between 8 ICL ports (4 per chassis), located on the CR8 blades. The supported cable configurations for connecting two Brocade DCX Backbones are shown in Figure 3. The supported configurations for a three-chassis configuration are shown in Figure 4. This is a good option for customers who want to build a powerful core without sacrificing user ports for ISL connectivity between chassis.

NOTE: In a single rack, you can connect three Brocade DCX-4S chassis in addition to the options shown in Figure 4. A three-chassis topology is supported for chassis in two racks as long as the third chassis is in the middle of the second rack.

The Brocade DCX supports 16 or 8 links per ICL cable, which means 16 or 8 x 8 Gbps E_Ports per ICL cable. The Brocade DCX-4S supports 8 links per ICL.
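The per-cable bandwidth follows from those link counts, as the quick check below shows; the only inputs are the link counts above and the 8 Gbps link speed.

```python
# ICL cable bandwidth = links per cable x 8 Gbps per link.

def icl_cable_gbps(links_per_cable: int, gbps_per_link: int = 8) -> int:
    return links_per_cable * gbps_per_link

print("Brocade DCX ICL cable:   ", icl_cable_gbps(16), "Gbps")  # 128 Gbps
print("Brocade DCX-4S ICL cable:", icl_cable_gbps(8), "Gbps")   # 64 Gbps
```

The 128 Gbps result matches the ICL connection bandwidth shown in the CR8 blade diagram.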

[Figure 3. Examples of Brocade DCX/DCX-4S dual-chassis configurations, with ICL cables connecting the chassis.]


[Figure 4. Examples of Brocade DCX/DCX-4S three-chassis configurations.]

[Figure 5. Photograph of a Brocade DCX three-chassis configuration across two racks.]


    8 GBPS FIBRE CHANNEL BLADES

Brocade offers 16-, 32-, 48-, and 64-port 8 Gbps blades to connect to servers, storage, or switches. All of the port blades can leverage Local Switching to ensure full 8 Gbps performance on all ports. Each CR8 blade contains four ASICs that switch data over the backplane between port blade ASICs. A total of 256 Gbps of aggregate bandwidth per blade is available for switching through the backplane. Mixing switching over the backplane with Local Switching delivers performance of up to 512 Gbps per blade using 64-port blades.

For distance over dark fiber using Brocade-branded Small Form Factor Pluggables (SFPs), the Condor 2 ASIC has approximately twice the buffer credits of the Condor ASIC, enabling 1, 2, 4, or 8 Gbps ISLs and more long-wave connections over greater distances.

When connecting a large number of devices that need sustained 8 Gbps transmission line rates, IT organizations can leverage Local Switching to avoid congestion. Local Switching on FC port blades also reduces port-to-port latency: frames cross the backplane in 2.1 µs, while locally switched frames cross the blade in only 700 ns. Even so, the latency of crossing the backplane is still more than 50 times faster than disk access times and much faster than any competing product.

All 8 Gbps ports on the FC8-16 blade operate at full line rate through the backplane or with Local Switching. Figure 6 shows a photo and functional diagram of the 8 Gbps 16-port blade.

Switching Speed Defined

When describing SAN switching speed, vendors typically use the following measurements:

Milliseconds (ms): one thousandth of a second
Microseconds (µs): one millionth of a second
Nanoseconds (ns): one billionth of a second

[Figure 6. FC8-16 blade design: a single ASIC with 16 x 8 Gbps ports and 256 Gbps to core switching; no oversubscription at 8 Gbps; power and control path.]


The FC8-64 blade has a 2:1 oversubscription ratio at 8 Gbps switching through the backplane and no oversubscription with Local Switching. At 4 Gbps speeds, all 64 ports can switch over the backplane with no oversubscription. The FC8-64 blade exposes 16 user-facing ports per ASIC, and up to eight 8-port trunk groups can be created with the 64-port blade.

    Figure 9 shows a photograph and functional diagram of the FC8-64 blade.
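The oversubscription ratios quoted for the FC8-64 are straightforward to verify: divide the user-facing bandwidth the ports can demand by the blade's 256 Gbps of backplane bandwidth. A minimal check:

```python
# Oversubscription = user-facing port bandwidth / backplane bandwidth.
# The 256 Gbps per blade to the backplane is taken from the text.

BACKPLANE_GBPS = 256

def oversubscription(ports: int, port_gbps: int) -> float:
    return (ports * port_gbps) / BACKPLANE_GBPS

print(oversubscription(64, 8))  # 2.0 -> the 2:1 ratio at 8 Gbps
print(oversubscription(64, 4))  # 1.0 -> no oversubscription at 4 Gbps
```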

SPECIALTY BLADES

DCB/FCoE Blade

The Brocade FCOE10-24 blade is designed as an end-of-row chassis solution for server I/O consolidation (see Figure 10). It is a resilient, hot-pluggable blade that features 24 x 10 Gbps Data Center Bridging (DCB) ports with a Layer 2 cut-through, non-blocking architecture, which provides wire-speed performance for traditional Ethernet, DCB, and FCoE traffic.

The FCOE10-24 features a high-performance FCoE hardware engine (Encap/Decap) and can use 8 Gbps Fibre Channel ports on 16-, 32-, and 48-port blades to integrate seamlessly into existing Fibre Channel SANs and management infrastructures. The blade supports industry-standard Link Aggregation Control Protocol (LACP) and Brocade enhanced, frame-based port Trunking that delivers 40 Gbps of aggregate bandwidth.

[Figure 9. FC8-64 blade design: four Fibre Channel switching ASICs with 16 x 8 Gbps port groups, 256 Gbps to the backplane (a relative 2:1 oversubscription at 8 Gbps), and 512 Gbps available for Local Switching; power and control path.]


    Figure 10 shows a photograph and functional diagram of the FCOE10-24.

SAN Encryption Blade

The Brocade FS8-18 Encryption Blade provides 16 x 8 Gbps Fibre Channel ports, 2 x RJ-45 GbE ports, and a Smart Card reader. The Brocade FS8-18 is a high-speed, highly reliable, FIPS 140-2 Level 3 validated blade that provides fabric-based encryption and compression services to secure data assets either selectively or on a comprehensive basis. The blade scales non-disruptively from 48 up to 96 Gbps of disk encryption processing power. It also provides encryption and compression services at speeds up to 48 Gbps for data on tape storage media. Moreover, the Brocade FS8-18 is tightly integrated with four industry-leading, enterprise-class key management systems that can scale to support key lifecycle services across distributed environments.

    Figure 11 shows a photograph and functional diagram of the FS8-18.

[Figure 10. Brocade FCOE10-24 blade design: 24 x 10 GbE ports, DCB switching, FCoE bridging, and Fibre Channel switching ASICs, with 256 Gbps to the backplane; power and control path.]

[Figure 11. Brocade FS8-18 Encryption Blade design: two groups of 8 x 8 Gbps Fibre Channel ports, 2 x RJ-45 GbE redundant cluster ports, Smart Card reader, and FIPS 140-2 Level 3 cryptographic cover; power and control path.]


SAN Extension Blade

Brocade FX8-24 Extension Blades accelerate and optimize replication, backup, and migration over any distance with the fastest, most reliable, and most cost-effective network infrastructure. Twelve 8 Gbps Fibre Channel ports, ten 1 GbE ports, and up to two optional 10 GbE ports provide unmatched Fibre Channel and FCIP bandwidth, port density, and throughput for maximum application performance over IP WAN links. Whether deployed in large data centers or multisite environments, the Brocade FX8-24 enables replication and backup applications to move more data faster and farther than ever before to address the most demanding disaster recovery, compliance, and data mobility requirements.

    Figure 12 shows a photograph and functional diagram of the FX8-24.

6-port 10 Gbps Fibre Channel Blade

The Brocade FC10-6 blade consists of 6 x 10 Gbps Fibre Channel ports that use 10 Gigabit Small Form Factor Pluggable (XFP) optical transceivers. The primary use for the FC10-6 blade is long-distance extension over dark fiber. The ports on the FC10-6 blade operate only in E_Port mode to create ISLs. The FC10-6 blade has enough buffering to drive 10 Gbps connectivity up to 120 km per port, exceeding the capabilities of the 10 Gbps XFPs available in short-wave and in 10, 40, and 80 km long-wave versions. While the potential oversubscription of a fully populated blade is small (1.125:1), Local Switching is supported in groups consisting of ports 0 to 2 and ports 3 to 5, enabling maximum port speeds ranging from 8.9 to 10 Gbps full duplex.

Storage Application Blade

The Brocade FA4-18 Application Blade has 16 x 4 Gbps Fibre Channel ports and 2 x auto-sensing 10/100/1000 Megabits per second (Mbps) Ethernet ports for LAN-based management. It is tightly integrated with several enterprise storage applications that leverage the Brocade Storage Application Services (SAS) API, an implementation of the T11 FAIS standard, to provide wire-speed data movement and offload server resources. These fabric-based applications provide online data migration, storage virtualization, continuous data replication and protection, and other partner applications.

[Figure 12. FX8-24 blade design: a 12 x 8 Gbps Fibre Channel port switching group, 10 x GbE ports, 2 x optional 10 GbE ports, and FCIP with compression and encryption, with 64 Gbps to the backplane; power and control path.]


    THE BENEFITS OF CORE/EDGE NETWORK DESIGN

The core/edge network topology has emerged as the design of choice for large-scale, highly available, high-performance SANs constructed with multiple switches of any size. The Brocade DCX Backbone uses an internal architecture analogous to a core/edge fat-tree topology, which is widely recognized as the highest-performance arrangement of switches. Note that the Brocade DCX Backbone is not literally a fat-tree network of discrete switches, but thinking of it in this way provides a useful visualization.

While IT organizations could build a network of 40-port switches with performance characteristics similar to the Brocade DCX Backbone, it would require more than a dozen 40-port switches connected in a fat-tree fashion. This network would require complex cabling, management of 12+ discrete switching elements, support for higher power and cooling, and more SFPs to support ISLs. In contrast, the Brocade DCX delivers the same high level of performance without the associated disadvantages of a large multi-switch network, bringing fat-tree performance to IT organizations that previously could not justify the investment or overhead costs.
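A rough sizing exercise shows the scale of the alternative. The assumptions below (each edge switch splits its 40 ports evenly between user ports and ISLs, with no oversubscription) are ours for illustration, not Brocade's published math:

```python
# Estimate the discrete switches needed for 512 uncongested user ports
# built from 40-port switches in a two-tier fat tree.

import math

SWITCH_PORTS = 40
USER_PORTS_NEEDED = 512

edge_user = SWITCH_PORTS // 2                     # 20 user ports per edge
edges = math.ceil(USER_PORTS_NEEDED / edge_user)  # 26 edge switches
isl_ports = edges * (SWITCH_PORTS - edge_user)    # 520 ISL ports
cores = math.ceil(isl_ports / SWITCH_PORTS)       # 13 core switches

print(edges + cores)  # 39 switches under these assumptions
```

Under these assumptions the count lands well past "more than a dozen," before even counting the SFPs and cabling for the ISL ports.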

It is important to understand, however, that the internal ASIC connections in a Brocade DCX Backbone are not E_Ports connecting a network of switches; they are considered C_Ports. Since they are not E_Ports, typical E_Port overhead is not present on a C_Port. The Fabric OS and ASIC architecture enables the entire backbone to be a single domain and a single hop in a Fibre Channel network. And unlike the situation in which a switch is removed from a fabric, a fabric reconfiguration is not sent across the network when a port blade is removed, further simplifying operations.

In comparison to a multi-switch, fat-tree network, the Brocade DCX Backbone:

• Is easier to deploy and manage
• Simplifies the cable plant by eliminating ISLs and additional SFP media
• Is far more scalable than a large network of independent domains
• Is lower in both initial CapEx and ongoing OpEx
• Has fewer active components and more component redundancy for higher reliability
• Provides multiprotocol support and routing within a single chassis

The Brocade DCX Backbone architecture enables the entire backbone to be a single domain and a single hop in a Fibre Channel network.


    PERFORMANCE IMPACT OF CONTROL ROUTING FAILURE MODES

Any type of failure on the Brocade DCX, whether of a control processor or a core ASIC, is extremely rare. However, in the unusual event of a failure, the Brocade DCX is designed for fast and easy control processor replacement. This section describes potential (albeit unlikely) failure scenarios and how the Brocade DCX is designed to minimize the impact on performance and provide the highest level of system availability.

Control Processor Failure in a CP8 Blade

If the processor section of the active control processor blade fails, it affects only the management plane; data traffic between end devices continues to flow uninterrupted. With the addition of the second IP port, the risk of having to fail over to a standby CP is potentially minimized. A control processor failure has no effect on the data plane: the standby control processor automatically takes over, and the backbone continues to operate without dropping any data frames.

Data flows would not necessarily become congested in the Brocade DCX Backbone even with one core (CR8) blade failure. A worst-case scenario would require the backbone to be running at or near 50 percent of bandwidth capacity on a sustained basis. With typical I/O patterns and some Local Switching, however, aggregate bandwidth demand is often below 50 percent of maximum capacity. In such environments there would be no impact, even if a failure persisted for an extended period of time. For environments with higher bandwidth usage, performance degradation would last only until the failed core blade is replaced, a simple 5-minute procedure.

Core Routing Failure in a CR8 Blade

The potential impact of a core element failure on overall system performance is straightforward. If half of the core elements were to go offline due to a hardware failure, half of the aggregate switching capacity over the backplane would be offline until the condition was corrected. A Brocade DCX Backbone with just one CR8 can still provide 1,024 Gbps of aggregate bandwidth, or 128 Gbps to every backbone slot.
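Those degraded-mode numbers are simple division, as the check below shows; the count of 8 port-blade slots per chassis is taken from the platform description.

```python
# Backplane capacity with two CR8 blades vs. one, spread over the
# 8 port-blade slots of a Brocade DCX.

CR8_GBPS = 1024   # per the blade table: 1,024 Gbps per CR8
PORT_SLOTS = 8    # port-blade slots in a Brocade DCX chassis

for cr8_count in (2, 1):
    total = cr8_count * CR8_GBPS
    print(f"{cr8_count} x CR8: {total} Gbps total, "
          f"{total // PORT_SLOTS} Gbps per slot")
# 2 x CR8: 2048 Gbps total, 256 Gbps per slot
# 1 x CR8: 1024 Gbps total, 128 Gbps per slot -- matching the text
```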

SUMMARY

With an aggregate chassis bandwidth far greater than competitive offerings, Brocade DCX Backbones are architected to deliver congestion-free performance, broad scalability, and high reliability for real-world enterprise SANs. As demonstrated by Brocade testing, the Brocade DCX:

• Delivers 8 and 4 Gbps Fibre Channel and FICON line-rate connectivity on all ports simultaneously
• Provides Local Switching to maximize bandwidth for high-demand applications
• Offers port blade flexibility to meet specific connectivity, performance, and budget needs
• Provides investment protection by supporting data security, inter-fabric routing, SAN extension, and emerging protocols such as FCoE in the same chassis
• Performs fabric-based data migration, protection, and storage virtualization
• Delivers five-nines availability

For further details on the capabilities of the Brocade DCX Backbone in the Brocade data center fabric, visit: http://www.brocade.com/products-solutions/products/dcx-backbone/index.page. There you will find the Brocade DCX Backbone Family Data Sheet and relevant Technical Briefs and White Papers.


    www.brocade.com

© 2010 Brocade Communications Systems, Inc. All Rights Reserved. 06/10 GA-WP-1224-01

Brocade, the B-wing symbol, BigIron, DCX, Fabric OS, FastIron, IronView, NetIron, SAN Health, ServerIron, and TurboIron are registered trademarks, and Brocade Assurance, DCFM, Extraordinary Networks, and Brocade NET Health are trademarks of Brocade Communications Systems, Inc., in the United States and/or in other countries. Other brands, products, or service names mentioned are or may be trademarks or service marks of their respective owners.

Notice: This document is for informational purposes only and does not set forth any warranty, expressed or implied, concerning any equipment, equipment feature, or service offered or to be offered by Brocade. Brocade reserves the right to make changes to this document at any time, without notice, and assumes no responsibility for its use. This informational document describes features that may not be currently available. Contact a Brocade sales office for information on feature and product availability. Export of technical data contained in this document may require an export license from the United States government.

    Corporate Headquarters

    San Jose, CA USA

    T: +1-408-333-8000

    [email protected]

    European Headquarters

    Geneva, Switzerland

    T: +41-22-799-56-40

    [email protected]

Asia Pacific Headquarters

    Singapore

    T: +65-6538-4700

    [email protected]
