PureSystems Networking



IBM PureFlex Systems Networking – Configuration & Integration

[email protected] & [email protected] – IBM System Networking


Agenda

IBM PureFlex Systems Networking
 – Naming Conventions & Switch Overview
 – FCoE & PureFlex Converged Switch
 – Node Connections to I/O Modules
 – Default Network Setup incl. VLANs
 – Browser Based Interface of Switches
 – Features on Demand
 – SNMPv3 Access Configuration
 – Basic Network Integration
 – Rack Level Network Integration

IBM Distributed Virtual Switch 5000V and IEEE 802.1Qbg

FSM NetworkControl – CEE/EVB Support


    IBM PureFlex Systems Networking


    IBM Flex System I/O Module Naming Scheme

    PureFlex System and IBM Flex System Products & Technology Redbook

    EN2092 - 1Gb Switch

    EN4091 - 10Gb Pass Thru

    EN4093 - 10Gb Switch

    CN4093 - Converged 10Gb Switch


Networking Infrastructure
 – "Pay as you grow" scalability
 – Optimized for performance
 – Efficient network automation
 – Enhanced virtualization intelligence
 – Lower TCO
 – Seamless interoperability

IBM Flex System Fabric EN4093 10Gb Scalable Switch

 – Leadership: proven operating system
 – Exceptional performance: < 1 µs latency, up to 1.28 Tbps
 – Scalable pay-as-you-grow design
 – VM aware & VM Mobility with VMready / 802.1Qbg
 – Virtual Fabric – carve up virtual NICs and pipes
 – Seamless interoperability with other vendors' switches
 – Works as FCoE transit switch with 7.3 firmware; 7.5 firmware in Nov 2012 adds more FCoE configurations, EN4093 stacking and 4K VLANs
 – Warranty is 1 year or will match the chassis warranty (includes software upgrades)
 – Recommended Top-of-Rack switch: G8264 for multiple chassis with 10Gb connections, G8316 for multiple chassis with 40Gb connections

Port diagram: Base = 10x 10GbE SFP+; Upgrade #1 = 2x 40GbE; Upgrade #2 = 4x 10GbE; plus 1x 1GbE management port.


IBM PureFlex System / Cisco UCS Network Topology Comparison

IBM
 – All network traffic within a chassis is switched locally
 – Only traffic from chassis to chassis passes through the top-of-rack switch

Cisco
 – Not optimized for chassis-local traffic
 – The 2104XP fabric extender forwards all internal and external traffic to the top-of-rack switch
 – Higher port count required for chassis ↔ top-of-rack connectivity

Diagram: IBM PureFlex System – up to 4 chassis of x240 nodes with EN4093 switches uplinking to a G8264 top-of-rack switch; Cisco UCS – up to 2 chassis of B-Series blades with 2104XP fabric extenders uplinking to 6100-Series Fabric Interconnects.


    FCoE & PureFlex Converged Switch



IBM Flex System Fabric CN4093 10Gb Scalable Converged Switch (4Q12)

 – Leadership: proven operating system
 – Exceptional performance: < 1 µs latency, up to 1.28 Tbps
 – Scalable pay-as-you-grow design
 – Built-in FCF to split FCoE packets in the chassis
 – 12 Omni Ports programmable to run either Ethernet or Fibre Channel
 – Virtual Fabric – carve up virtual NICs and pipes
 – Seamless interoperability with other vendors' switches
 – Warranty is 1 year or will match the chassis warranty (includes software upgrades)
 – Recommended Top-of-Rack switch: G8264 for multiple chassis with 10Gb connections, G8316 for multiple chassis with 40Gb connections

Port diagram: 12x 10GbE Omni Ports; 2x 40GbE; 2x 10Gb SFP+.


FCoE support plan (x86 nodes) – November release

OS levels for all configurations: Win2008, ESX 4/5, RHEL 5/6, SLES 10/11.
Storage targets for all configurations: FCoE: Storage node, V7K; FC: SVC, DS3K/5K, DS8K, Tape, XIV (the EN4091 pass-thru configuration lists the FC targets only).

Adapter (mode) | Integrated Switch | FCoE ToR Switch | SAN Switch
 – LOM & CN4054 4-port adapter (BE3), UFP mode | EN4093 10Gb Switch | Brocade VDX | Brocade SAN
 – LOM & CN4054 4-port adapter (BE3), vNIC I & II | CN4093 10Gb Switch (NPIV mode) | N/A | Cisco SAN / Brocade SAN
 – LOM & CN4054 4-port adapter (BE3), pNIC & vNIC II | EN4091 10Gb Pass Thru Module | Brocade VDX switch | Brocade SAN switch
 – LOM & CN4054 4-port adapter (BE3), pNIC, vNIC I & vNIC II | EN4093 10Gb Switch | Nexus 5548/5596 | Cisco SAN
 – LOM & CN4054 4-port adapter (BE3), UFP | EN4093 10Gb Switch | Nexus 5548/5596 | Cisco SAN
 – LOM & CN4054 4-port adapter (BE3), pNIC, vNIC I & vNIC II | EN4093 10Gb Switch | G8264CS (IBM Converged ToR) in NPIV mode | Cisco & Brocade SAN
 – LOM & CN4054 4-port adapter (BE3), UFP mode | EN4093 10Gb Switch | G8264CS (IBM Converged ToR) in NPIV mode | Cisco & Brocade SAN
 – LOM & CN4054 4-port adapter (BE3), pNIC, vNIC I & vNIC II | EN4093 10Gb Switch | Brocade VDX | Brocade SAN
 – LOM & CN4054 4-port adapter (BE3), pNIC | CN4093 10Gb Switch (NPIV mode) | N/A | Cisco SAN / Brocade SAN


    Node Connections to I/O Modules


    Compute Node Connections to I/O Modules

Diagram: connections for 2-port adaptors and 4-port adaptors.


Robust Connectivity: Switch, ASIC and Adapter Level Redundancy

Diagram: each CN4054 adapter contains two ASICs (ASIC 1 and ASIC 2); each ASIC connects its ports as a redundant pair to the EN4093 switches (base and Upgrade 1), providing ASIC-level and adapter-level redundancy in addition to switch-level redundancy.


IBM Flex System EN4093 10Gb Scalable Switch – Connection to Nodes

Node Adaptor Slot | Adaptor NIC               | I/O Module Bay | Port
1                 | 1 (LOM & 4 port adaptors) | 1              | INTAx
1                 | 2 (LOM & 4 port adaptors) | 2              | INTAx
1                 | 3 (4 port adaptors)       | 1              | INTBx
1                 | 4 (4 port adaptors)       | 2              | INTBx
1                 | 5 (when available)        | 1              | INTCx
1                 | 6 (when available)        | 2              | INTCx
2                 | 1 (LOM & 4 port adaptors) | 3              | INTAx
2                 | 2 (LOM & 4 port adaptors) | 4              | INTAx
2                 | 3 (4 port adaptors)       | 3              | INTBx
2                 | 4 (4 port adaptors)       | 4              | INTBx
2                 | 5 (when available)        | 3              | INTCx
2                 | 6 (when available)        | 4              | INTCx

x = Node Bay Number. LOM is currently only 1 Gb.


IBM Flex System EN2092 1Gb Scalable Switch – Connection to Nodes

Node Adaptor Slot | Adaptor NIC               | I/O Module Bay | Port
1                 | 1 (LOM & 4 port adaptors) | 1              | INTAx
1                 | 2 (LOM & 4 port adaptors) | 2              | INTAx
1                 | 3 (4 port adaptors)       | 1              | INTBx
1                 | 4 (4 port adaptors)       | 2              | INTBx
2                 | 1 (LOM & 4 port adaptors) | 3              | INTAx
2                 | 2 (LOM & 4 port adaptors) | 4              | INTAx
2                 | 3 (4 port adaptors)       | 3              | INTBx
2                 | 4 (4 port adaptors)       | 4              | INTBx

x = Node Bay Number


    Logical View of Chassis Management Module


Default Network Setup of PureFlex System

3 Default VLANs
 – VLAN ID 4091: VM/LPAR Management
 – VLAN ID 4092: Data Network
 – VLAN ID 4093: Management Network

vlan 4091
 enable
 name "OS Mgmt"
 member INTA1-INTA14,EXT5
!
vlan 4092
 enable
 name "Data"
 member INTA1-INTA14,EXT1-EXT4
!
vlan 4093
 enable
 name "Device Mgmt"
 member EXT6-EXT10
!
spanning-tree stp 123 vlan 4091
spanning-tree stp 124 vlan 4092
spanning-tree stp 125 vlan 4093
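As a hedged illustration only (not part of the shipped defaults), an additional VLAN for production traffic could be added with the same ISCLI syntax; the VLAN ID 100, the name "Prod" and the STP instance 126 below are made-up example values:

vlan 100
 enable
 name "Prod"
 member INTA1-INTA14,EXT1-EXT4
!
spanning-tree stp 126 vlan 100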


    Default Network Setup of PureFlex System - x86 Compute Node


    Default Network Setup of PureFlex System - Power Compute Node


Browser Based Interface (BBI) of Ethernet Switches

 • Tabs at top to select the operation
   ‒ Dashboard to inspect
   ‒ Configure to make changes
 • Most operations are two-step
   ‒ Submit (put changes into the scratchpad)
   ‒ Apply (make the changes take effect)
 • Anything you can do in the CLI you can do in the GUI
 • Can be useful to figure out a feature in the GUI, and then look at the CLI to see how it is applied


    BBI Feature on Demand Display


    BBI Feature on Demand - Key Installation


    CMM Software Key Display


SNMPv3 Access Configuration of Ethernet Switches


EN2092 / EN4093 SNMPv3 Access Configuration (1)

How to get rid of the "partial access" status of Ethernet switches in Resource Explorer?

Check if missing SNMP access is the reason.

If yes, check whether an SNMPv3 user is configured on the Ethernet switch
 – If not (e.g. on Flex System), configure an SNMPv3 user or deploy a template

Configure access (credentials) and check the SNMPv3 status again


    EN2092 / EN4093 SNMPv3 Access Configuration (2)

    Check if missing SNMP access is the reason for the partial access message


EN2092 / EN4093 SNMPv3 Access Configuration (3)

Configure an SNMPv3 user on the Ethernet switch (e.g. Flex System):

snmp-server user 4 name "DirectorServerSNMPv3User"
snmp-server user 4 authentication-protocol sha authentication-password "ee307####"
snmp-server user 4 privacy-protocol des privacy-password "ee067###"
!
snmp-server group 4 user-name DirectorServerSNMPv3User
snmp-server group 4 group-name "ibmd_grp_4"
!
snmp-server access 4 name "ibmd_grp_4"
snmp-server access 4 level authPriv
snmp-server access 4 read-view "iso"
snmp-server access 4 notify-view "iso"
!
snmp-server target-address 1 name "ibmd_taddr_1" address 192.168.93.100
snmp-server target-address 1 parameters-name "ibmd_tparam_1"
!
snmp-server target-parameters 1 name "ibmd_tparam_1"
snmp-server target-parameters 1 user-name "DirectorServerSNMPv3User"
snmp-server target-parameters 1 level authPriv

Type: snmpv3, User: adminmd5, PW: adminmd5, Proto: MD5, Privacy Proto: DES, Privacy PW: adminmd5 – on a PureFlex System there should already be a user adminmd5.


    EN2092 / EN4093 SNMPv3 Access Configuration (4)

You can also create & apply an Ethernet Network Template with NetworkControl


    EN2092 / EN4093 SNMPv3 Access Configuration (5)

You can also create & apply an Ethernet Network Template with NetworkControl




EN2092 / EN4093 SNMPv3 Access Configuration (6)

    Change the SNMPv3 Trap Destination in the Ethernet Network Template


EN2092 / EN4093 SNMPv3 Access Configuration (7)

    Deploy the new SNMPv3 Template


    EN2092 / EN4093 SNMPv3 Access Configuration

Configure SNMPv3 access by creating an SNMPv3 credential


    EN2092 / EN4093 SNMPv3 Access Configuration

Configure SNMPv3 access by creating an SNMPv3 credential

    SNMP Access Status is now OK


    Basic Network Integration


    Proven Interoperability: IBM and Cisco

    IBM Networking OS uses standards-compliant IEEE & IETF protocols

    Common IBM Networking OS on PureSystems, BladeCenter, and RackSwitch switches

    Extensive IBM interoperability testing with Cisco, Juniper and others

    14M+ Ethernet ports shipped worldwide connecting to servers, storage and other networks

     – IBM estimates 1-2M ports are connected & working with Cisco switches & cores today

    Cisco-like command line interface - familiar to Cisco-trained admins

    Certified Cisco Catalyst and Nexus Interoperability for IBM Networking products

Find out more: contact your local System Networking expert

    Tolly Group: Nexus Interoperability report

    Tolly Group: Catalyst Interoperability report



Network Interconnection Best Practices (1)

Questions to ask regarding the existing networking infrastructure:
 – Spanning-tree protocol deployed: PVSTP, PVRSTP, MSTP
 – VLAN trunking (802.1Q): native (default) VLAN, usage of VTP (VLAN Trunking Protocol)
 – Link aggregation protocol: static, LACP, PAgP (Port Aggregation Protocol)
 – Existing management infrastructure
    – Out-of-Band (OoB), In-Band, Management VLAN
    – Protocols used: Telnet, SSH, SNMP, syslog, ICMP

Interconnection best practices:
 – Crossed links with an upper virtualized switch, or straight-forward links with an upper non-virtualized switch
    – Spanning tree can be disabled on the Flex switches
    – BPDU guard then needs to be configured on the upper switch ports connected to the Flex links
 – Link aggregation using LACP
 – PVRSTP for a small to medium range of VLANs (500)
 – OoB management using the FSM and a dedicated network infrastructure for management
 – Enforce verification of the end-host interconnection link configuration
 – All unused ports, or ports not yet in production in the datacenter, should be
    – placed by default in a "trash" VLAN that is not flooded on any used access or dot1q trunk port (see the sketch below)
    – shut down by default and "no shutdown" applied only by networking staff or entitled people
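A minimal sketch of the unused-port recommendation above, assuming Cisco IOS-style syntax on an upstream access switch; the interface range and VLAN ID 999 are made-up example values:

! Park unused ports in an isolated "trash" VLAN and keep them administratively down
vlan 999
 name TRASH-UNUSED
!
interface range GigabitEthernet1/0/10 - 48
 switchport mode access
 switchport access vlan 999
 shutdown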


Network Interconnection Best Practices (2)

Diagram: two interconnection topologies, both marked as good approaches.


No spanning tree is needed on the Flex switch with a straight-forward topology

Two spanning-tree features, configured on the upper switch ports connected to the Flex links, are associated with this solution (see the sketch after this list):
 – Spanning-tree bpduguard needs to be configured
 – It is highly recommended to enable Edge Trunk (portfast): spanning-tree port type edge trunk

Benefits:
 – Eliminates L2-loop blocked ports on the upper switches, virtualized or not; 100% of the available links are used
 – Eliminates any spanning-tree limitations
 – Better convergence speed: LACP hashing converges faster than spanning tree
 – Simplifies the forwarding path when the upper switches are not virtualized

Diagram: upper access switches connected straight-forward to the Flex switches.
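A minimal sketch of the corresponding upstream port configuration, assuming Cisco NX-OS-style syntax (the slide quotes the NX-OS form of the edge-trunk command); the interface name is a made-up example:

! Upstream switch port facing the Flex EN4093 uplinks
interface Ethernet1/10
 switchport mode trunk
 spanning-tree port type edge trunk
 spanning-tree bpduguard enable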


LACP to the Servers

Nexus 5K with vPC (see the sketch below)
 – One vPC channel to the two EN4093 switches

EN4093 with vLAG
 – One vLAG port channel to each server
 – One vLAG port channel to the two Nexus 5K

Server NICs in LACP mode
 – Linux NIC bonding mode 4 (LACP)
 – Shared MAC address

All ports active in all directions
 – MAC table synchronization on Nexus
 – MAC table synchronization on EN4093
 – Rapid failover on any link-down event

Spanning Tree mode:
 – MSTP for higher scalability, or
 – Rapid PVST+ for easier configuration

Diagram: PureFlex compute node with a 10Gb LACP bond to the EN4093 pair (vLAG); the EN4093 pair uplinks via vLAG to two Nexus 5K switches running vPC.
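A minimal sketch of the Nexus side of this design, assuming NX-OS syntax; the vPC domain ID, port-channel numbers, keepalive address and interface names are made-up examples, and the EN4093 side needs a matching vLAG/LACP configuration:

feature vpc
feature lacp
!
vpc domain 1
 peer-keepalive destination 10.0.0.2
!
! vPC peer-link between the two Nexus 5K switches
interface port-channel1
 switchport mode trunk
 vpc peer-link
!
! One vPC port channel toward the two EN4093 switches
interface port-channel10
 switchport mode trunk
 vpc 10
!
interface Ethernet1/10
 switchport mode trunk
 channel-group 10 mode active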


Servers with NICs that are not teamed

Nexus 5K with vPC
 – Two vPC port channels to the chassis
 – One vPC channel to each EN4093 switch

EN4093 with LACP uplinks
 – One LACP port channel from each EN4093 to the pair of Nexus 5K
 – L2 Failover to assist link-level failover on servers (no beaconing)

Servers with unbonded NICs
 – Each NIC for a unique purpose, or
 – Utilizing ESX Virtual Port ID load balancing

Diagram: Flex System chassis with two EN4093 switches, each with an LACP uplink port channel into the Nexus 5K vPC pair.


Not recommended: Connecting to FEX

FEX default behavior:
 – BPDU Guard
 – BPDU Filter

Spanning Tree
 – Must be disabled on the EN4093 uplinks

Diagram: Flex System chassis EN4093 switches connected to Nexus 5K via Fabric Extenders (FEX).


Connecting to Catalyst 6500 with L2 Failover

EN4093 with L2 Failover (see the sketch below)
 – Integrated switch monitors the uplinks
 – If enough uplinks fail, the switch brings down the links to the servers, causing the NICs to fail over to the backup

Server NICs in Active/Backup mode
 – Linux NIC bonding mode 1
 – One port is active, one port is backup
 – If the active link fails, the backup port takes over and sends gratuitous ARPs to speed convergence
 – Half the servers use one switch as the active path, half use the other switch

L2 Failover
 – Spanning Tree optional
 – Rapid Layer 2 failover on any link-down event
 – No interoperability issues

Diagram: Flex System chassis with two EN4093 switches, each uplinked to a Catalyst 6500.
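A minimal sketch of the L2 Failover (uplink failure detection) idea on the EN4093, assuming IBM Networking OS ISCLI keywords; the exact commands vary by firmware level and the port lists are made-up examples, so treat this as an assumption rather than a verified configuration:

! If the monitored uplinks go down, take down the internal server-facing ports
failover trigger 1 mmon monitor member EXT1-EXT4
failover trigger 1 mmon control member INTA1-INTA14
failover trigger 1 enable
failover enable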


Layer 2 Failover in action

1) Uplinks lose connection
2) Switch brings down the server link
3) Server fails over to the backup link
4) Server sends gratuitous ARPs to speed failover
5) Server MAC address is learned via the backup path

Diagram: Flex System chassis with two EN4093 switches, each uplinked to a Catalyst 6500.


Connecting to Catalyst 6500 with HotLinks

Servers with unteamed NICs
 – Each port is used for a unique purpose

EN4093 with HotLinks (see the sketch below)
 – Integrated switch monitors the uplinks
 – One port (or LAG) in active mode
 – One port (or LAG) in standby mode
 – If the active uplinks fail, the switch fails over to the standby port(s)
 – Optional: the switch can send gratuitous ARP to speed convergence

HotLinks
 – Spanning Tree optional
 – Rapid failover on any link-down event
 – No interoperability issues

Diagram: Flex System chassis with two EN4093 switches, each uplinked to a Catalyst 6500.
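A minimal sketch of the Hot Links idea on the EN4093, again assuming IBM Networking OS ISCLI keywords; the port numbers are made-up examples and the commands should be checked against the firmware's command reference:

! One active uplink and one standby uplink per trigger
hotlinks trigger 1 master port EXT1
hotlinks trigger 1 backup port EXT2
hotlinks trigger 1 enable
hotlinks enable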


HotLinks in action

1) Active uplink(s) fail
2) EN4093 unblocks the backup port
3) EN4093 sends gratuitous ARP
4) Server MAC addresses are learned via the backup path

Diagram: Flex System chassis with two EN4093 switches, each uplinked to a Catalyst 6500.


Connecting to Catalyst 6500 with Spanning Tree

Servers with unteamed NICs
 – Each port is used for a unique purpose

EN4093 with Spanning Tree
 – Redundant ports are blocked

Spanning Tree modes
 – Per-VLAN RSTP for easy configuration
 – MSTP for better scalability
 – Root Guard on the Cisco side will keep misconfigured access switches from causing disruptions
 – Active/active connections are possible by balancing spanning-tree instances across the uplinks

Diagram: Flex System chassis uplinked to two Catalyst 6500 switches.


    Rack Level Network Integration



LACP on the Servers and vPC on the Nexus 5K/7K

Nexus 5K/7K with vPC
 – One vPC channel to the two RackSwitch G8264 switches

RackSwitch G8264 with vLAG (see the sketch below)
 – One vLAG port channel to the pair of Nexus 5K/7K switches
 – One vLAG port channel to each Flex System chassis
 – Up to 30 Flex System chassis per pair of G8264 switches

EN4093 with vLAG
 – One vLAG port channel to each server
 – One vLAG port channel to the two RackSwitch G8264 switches

Server NICs in LACP mode
 – Linux NIC bonding mode 4
 – Shared MAC address

All ports active in all directions
 – MAC table synchronization between the Nexus switches
 – MAC table synchronization between the RackSwitch G8264 switches
 – MAC table synchronization between the EN4093 switches
 – Rapid failover on any link-down event

Spanning Tree mode
 – MSTP for higher scalability, or
 – Rapid PVST+ for easier configuration

Diagram: Flex System chassis → EN4093 vLAG → G8264 vLAG pair → Nexus 5K/7K vPC pair.
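A minimal sketch of the vLAG pairing on each G8264, assuming IBM Networking OS 7.x ISCLI keywords; the tier ID and LACP admin keys are made-up examples and should be verified against the firmware's command reference:

! ISL between the two G8264 peers, then vLAG on the LACP admin key used toward each chassis
vlag tier-id 10
vlag isl adminkey 2000
vlag adminkey 1000 enable
vlag enable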


Cisco FabricPath: PureSystem Aggregation with Cisco Nexus 5548

Diagram: the RackSwitch G8264 in each PureFlex rack uplinks with 8 x 10G into a Cisco Nexus 5548 FabricPath domain; uplinks can be Layer 2 (LAG) or Layer 3 (ECMP route distribution). Oversubscription depends on the number of uplinks to the core and the number of FabricPath switch members.


Cisco FabricPath: PureSystem Aggregation with Cisco Nexus 7000

Diagram: the RackSwitch G8264 in each PureFlex rack uplinks with 8 x 10G into a Cisco Nexus 7000 pair; uplinks can be Layer 2 (LAG) or Layer 3 (ECMP route distribution). Oversubscription depends on the number of uplinks to the core.


Servers with independent NICs

Nexus 5K/7K with vPC
 – One vPC channel to the two RackSwitch G8264 switches

RackSwitch G8264 with vLAG
 – One vLAG port channel to each EN4093
 – Up to 15 Flex System chassis per pair of G8264 switches

EN4093 with LACP uplinks
 – One LACP port channel from each EN4093 to the pair of RackSwitch G8264s
 – L2 Failover to assist link-level failover on servers

Servers with unbonded NICs
 – Each NIC for a unique purpose, or
 – Utilizing ESX Virtual Port ID load balancing

All ports active in all directions
 – MAC table synchronization on Nexus
 – MAC table synchronization on EN4093
 – Rapid failover on any link-down event

Spanning Tree mode:
 – MSTP for higher scalability, or
 – Rapid PVST+ for easier configuration

Diagram: Flex System chassis → EN4093 LACP uplinks → G8264 vLAG pair → Nexus 5K/7K vPC pair.


LACP to the Servers and Loop-Free U to Catalyst 6500

Catalyst 6500
 – Utilize Loop-Free U, where the switch interconnect is L3
 – Spanning tree won't block because there are no loops
 – One LACP channel from each 6500 to the two RackSwitch G8264 switches

RackSwitch G8264 with vLAG
 – One vLAG port channel to each Flex System chassis
 – One vLAG port channel to each Catalyst 6500
 – Up to 30 Flex System chassis per pair of G8264 switches

EN4093 with vLAG
 – One vLAG port channel to each server
 – One vLAG port channel to the two RackSwitch G8264 switches

Server NICs in LACP mode
 – Linux NIC bonding mode 4
 – Shared MAC address

All ports active in all directions
 – MAC table synchronization between the RackSwitch G8264 switches
 – MAC table synchronization between the EN4093 switches
 – Rapid failover on any link-down event

Spanning Tree mode:
 – MSTP for higher scalability, or
 – Rapid PVST+ for easier configuration

Diagram: Catalyst 6500 pair (L3 interconnect) → LACP → G8264 vLAG pair → Flex System chassis.



PureSystem Aggregation with IBM System Networking G8316

Diagram: the RackSwitch G8264 in each PureFlex rack uplinks with 2 x 40G to a pair of RackSwitch G8316 aggregation switches (connected by an ISL); uplinks can be Layer 2 (LAG) or Layer 3 (ECMP route distribution).



PureSystem Aggregation with Juniper EX4500 Virtual Chassis

Diagram: the RackSwitch G8264 in each PureFlex rack uplinks with 8 x 10G to a Juniper EX4500 Virtual Chassis; uplinks can be Layer 2 (LAG) or Layer 3 (ECMP route distribution). Oversubscription depends on the number of stack members.



PureSystem Aggregation with Juniper EX8208

Diagram: the RackSwitch G8264 in each PureFlex rack uplinks with 8 x 10G to a Juniper EX8208; uplinks can be Layer 2 (LAG) or Layer 3 (ECMP route distribution). Oversubscription depends on the number of uplinks to the core.


IBM Distributed Virtual Switch 5000V and IEEE 802.1Qbg

    What is the IBM Distributed Virtual Switch 5000V?


Distributed Virtual Switch for VMware
 – For vSphere 5.0 and beyond
 – IBM Networking OS based management plane
 – Advanced Layer-2 features in the control and data plane
 – Roughly equal to a stack of independent switches controlled by a remote management plane

    5000V Solution Components


Controller
 – ISCLI-driven management plane
 – Delivered as a virtual appliance
 – Open Virtual Appliance (OVA) format
 – One per distributed switch

Host Module
 – Implements the data/control plane
 – Resides in the ESXi hypervisors (vSphere 5.0)
 – Data Path Kernel Module (DPM) and Agent (user world)
 – Delivered as a VMware Installation Bundle (VIB) packaged in Offline Bundle (zip) format


    5000V Architecture - Component Overview


Diagram (component overview): 5000V Controller, vCenter Server, and ESXi5 host with Agent and DPM kernel module (Qbg switch); interfaces shown include the vSphere API, an IBM API over HTTP, and VDP/LLDP toward the physical switch.

IBM System Networking DVS 5000V for VMware vSphere 5.0


Key Features → Customer Benefits

Managed Layer 2 Distributed Virtual Switch for VMware
 – Configuration and management of the Distributed Virtual Switch as any other IBM physical switch
 – Distributed Virtual Switch visible to the network administrators
 – Ability to manage and troubleshoot virtual machine traffic
 – Familiar Cisco-like CLI to manage the Distributed Virtual Switch

Advanced Networking Features
 – VLANs & Private VLAN for VM traffic separation
 – ACLs for VM traffic control
 – Local (SPAN) and remote (ERSPAN) port mirroring for advanced VM traffic visibility and troubleshooting
 – sFlow
 – VM traffic statistics, port statistics
 – 802.1Qbg including VEPA, VDP and VSI Manager for IEEE standards-based VM traffic management in the network
 – QoS, LACP & advanced teaming

Advanced Management Features
 – Telnet and SSH
 – Per-user access and Role Based Access Control (RBAC)
 – SNMP (read and write), syslog
 – TACACS+, RADIUS

IEEE 802.1Qbg: VEB versus VEPA


VEPA advantages
 – VM-to-VM traffic is visible to the physical switch, so the physical switch's capabilities (ACLs, security features, etc.) can be leveraged for traffic control; the VEB does not need to implement complex features
 – Leverages physical switch management capabilities like statistics, sFlow, RMON, etc.
 – Minimizes changes to current NICs, vswitches, and external switches (a software upgrade)

Reflective Relay enables hairpin forwarding on a per-port basis and relies on the upstream switch for L2 switching.

    High Level VDP Use Case Example - VM Creation


Diagram: VDP flow between the VM Manager, VSI Manager / VSI Type Database, the physical end station (hosting the VMs on a VEB or VEPA) and the switch (a.k.a. bridge).
 0) Push the VSI Manager ID and address
 1) Create a set of VSI types (Network Admin, VSI Type Database)
 2) Query the available VSI types and obtain a VSI instance (System Admin)
 3) Push the VM & VSI info to the server's virtualization infrastructure (VEB or VEPA)
 4) VSI Discovery and Configuration Protocol (VDP) runs between the end station and the switch
 5) Retrieve the VSI information from the VSI Manager
 6) The VM is brought online after VDP completes

    802.1Qbg Solution for VMware vSphere 5


Components of the 802.1Qbg solution for VMware:
 – IBM DVS 5000V with 802.1Qbg on ESXi5
 – IBM physical switches with 802.1Qbg support: IBM BladeCenter Virtual Fabric 10G switch and G8264 RackSwitch

Diagram: VMware vCenter and the IBM VDS 5000V Controller coordinate VM management (creation/migration/deletion) and VDP parameter communication; VM groups and the VSI mapping table are exported to the switches; the 802.1Qbg protocols run between the ESXi5 hosts (IBM DVS 5000V with Reflective Relay) and the IBM physical switches, each holding a VSI table.

    Live Migration in 5000V VEPA Scenario


Diagram: live migration in a 5000V VEPA scenario – two ESXi5 hosts running the IBM DVS 5000V (VFSM on IBM NOS 7.2.2.0), VMs (FILE, DB, WEB, Client) on VLANs 1100 and 1200, Qbg/VEPA uplinks with Reflective Relay, an ACL "Deny Port 80 Response" and bandwidth limiting applied to the port group.


FSM NetworkControl – CEE/EVB Support


UPCOMING: IBM Systems Director 6.3.2 (Nov 2012 release) – support for CEE and EVB on IBM switches
 – CEE Configuration Templates
 – EVB Configuration Templates
 – VSI Database Management

Available on the FSM for PureFlex

    CEE Configuration Template Support for IBM Switches


    EVB Configuration Template Support for IBM Switches


EVB VSI Database Configuration on ISD/FSM
