TRANSCRIPT · 3/9/2012
The Coming Decade of Data Center Networking Discontinuities
Renato Recio
IBM Fellow & System Networking CTO
Agenda
Issues with today’s Networks
Discontinuous & Disruptive Trends
Coming decade of Data Center Networking Discontinuities
– Optimized: Flat, converged, scalable fabrics
– Automated: Virtual & overlay networks
– Integrated: Software Defined Networks
Summary
Early Ethernet Campus Evolution
[Diagram: campus network divided into Core, Aggregation and Access layers]
In the beginning, Ethernet was used to interconnect stations (e.g. dumb terminals), initially through repeater & hub topologies and eventually through switched topologies.
Ethernet campuses evolved into a structured network, typically divided into Core, Service (e.g. firewall), Aggregation and Access layers, where:
– Traffic is mostly North-South (directed outside the campus rather than peer-to-peer).
– To avoid spanning tree problems, campus networks are typically divided at the access layer.
[Diagram: campus connected to the WAN; roughly 95% of traffic is North-South vs ~5% peer-to-peer; Layer 3 at the core, Layer 2 below]
Ethernet in the Data Center
The same structured network topologies were used in the data center, but…
– Traffic is mostly East-West (e.g. Application tier to Database tier).
– Large layer-2 domains are needed for clustering and Virtual Machine mobility.
Partly due to Ethernet limitations (e.g. lack of flow control), the data center used additional networks, such as:
– Fibre Channel Storage Area Networks (SAN)
– InfiniBand cluster networks.
[Diagram: data center network with Core, Aggregation and Access layers plus a separate SAN, connected to the WAN; >75% of traffic is East-West, <25% North-South]
Ethernet in the Data Center
Net: today's data center network looks similar to campus networks. But does it still fit the current requirements of a modern data center?
[Diagram: traditional data center network (WAN, Core, Aggregation and Access layers, SAN) surrounded by emerging technology labels: FCoE, TRILL, OpenFlow, Overlay, ECMP, STP, SDN]
Traditional Data Center Network Issues
Discrete & Decoupled
– Discrete components and piece parts
– Multiple managers and management domains
– Box-level point services
Manual & Painful
– Dynamic workload management complexity
– Multi-tenancy complications
– SLAs & security are error-prone
Limited Scale
– Too many network types, with too many nodes & tiers
– Inefficient switching
– Expensive network resources
Clients are looking for smarter Data Center Infrastructure that solves these issues.
Customers Want Larger Integrated Systems
[Chart: priority investment trends for the next 18 months – Virtual Machine Mobility, Integrated Compute Platform, Integrated Management and Converged Fabrics – each rated as a High (8-10), Somewhat (4-7) or Not (0-3) an investment area; alongside a shift from box buying to integrated system (blade/rack) buying]
Clients are seeking solutions to the complexities associated with inefficient networking, server sprawl and manual virtualization management.
An integrated system pre-packages server, storage, network, virtualization and management, providing an automated, converged & virtualized solution with fast time to value & simple maintenance.
Smarter Data Center Infrastructure
Integrated
– Expandable integrated system
– Simple, consolidated management
– Software-driven network stack
Automated
– Workload-aware networking
– Dynamic provisioning
– Wire-once fabric. Period.
Optimized
– Converged network
– Single, flat fabric
– Grow-as-you-need architecture
What technology trends are we pursuing to tackle these issues?
Discontinuous Technologies
Discontinuity – a: the property of being not mathematically continuous; b: an instance of being not mathematically continuous; especially, a value of an independent variable at which a function is not continuous.
Discontinuous Technologies Examples
Distributed Overlay Virtual Ethernet (DOVE) Networks
TRILL, with disjoint multi‐pathing
Converged Enhanced Ethernet
Fibre Channel over Ethernet
Software Defined Networking
OpenFlow
(more later)
Sustaining vs Disruptive Technologies
Sustaining – doesn't affect existing markets.
[Chart: capability vs. time, with market segments from low end through mid-range and high end to most demanding]
Sustaining vs Disruptive Technologies
Disruptive – innovation that creates a new market or displaces existing technologies in a market.
[Chart: the same capability vs. time segments, now with a disruptive technology curve entering the market]
Personal Computing Technology Examples: Sustaining vs Disruptive
[Timeline chart, 1970-2015 (capability vs. time): calculators (1971 HP 9100, 1974 HP-65); PCs (1977-80 Commodore, Apple II, TRS-80; 1981 ISA PC with DOS & CGA; 1985 IBM AT with disk & 286; 1987-88 386 & graphics; 1990-91 GUI; 1994-96 multimedia & Internet; later high-resolution LCDs, home Ethernet, cable/DSL); organizers/PDAs (calendar, notepad, GPS, …); smart phones (GPS, camera, organizers, …); tablets (notebook, PC, TV)]
Basic Technology Trends: Microprocessor
Observations
– 2-socket performance (per VM instance) is growing at ~60%.
– IO performance has lagged, but the gap will temporarily close a bit.
[Chart: processor performance (SAP & TPC-C relative, 2-socket) vs. processor IO performance (GB/s), 2002-2012, log scale]
Basic Technology Trends: Server IO Links
Observations
– PCIe Generation 3 is tracking the traditional microprocessor IO bandwidth trend.
– One PCIe Gen3 link can support 2x 40GE (4x 40GE is possible, but not at link rate).
[Chart: server IO link performance (GB/s), 2002-2015, log scale; series for microprocessor IO, PCI-X, PCIe, PCIe Gen3 and PCIe Gen4]
Basic Technology Trends: Server IO & Fabrics
Observations
– Ethernet, with Data Center enhancements (more next), is a disruptive technology for the SAN & cluster markets now, and for the PCIe market in the future.
[Chart: uni-directional bandwidth (GBytes/s), 2000-2020, log scale; curves for Ethernet, InfiniBand 4x, Fibre Channel, PCIe (x4/x8/x16) and microprocessor IO, with the SAN, IB and PCIe markets marked]
Emerging Disruptive IP/Ethernet Technologies
Converged stacks: CEE, FCoE, RoCE
Layer‐2 based multi‐pathing: TRILL, Proprietary fabrics & OpenFlow
Software Defined Networking stack
Disrupted markets:
Fibre Channel SAN
InfiniBand Clusters
Enterprise Networks
Traditional Data Center Networks (Limited Scale)
1. Multiple fabrics (SAN, LAN, …) because Ethernet lacks convergence services:
– Lossy, no bandwidth guarantee, no multi-pathing, not VM aware
2. Inefficient topology, with too many tiers & nodes
– Per-hop forwarding
– Wasted paths (due to STP)
– East-West traffic forced to go North-South (e.g. due to multicast containment or the proprietary VNTag approach)
3. Server-direct-to-core makes cabling a mess
CEE: the Path to Converged Fabrics (Optimized)
Before: multiple fabrics (LAN, cluster, SAN, management), complex management, inefficient RAS.
After: a single converged fabric based on CEE with DCBX, simplified management, improved RAS.
Data Center Convergence Enabling Technologies (Optimized)
– Priority-based Flow Control (PFC): per-priority pause between transmit and receive buffers, across the 8 priorities.
– Enhanced Transmission Selection (ETS): best-available & per-priority bandwidth-guaranteed services.
– Fibre Channel over Ethernet (FCoE): FC Forwarders, FC Data Forwarders, FC Snooping Bridges.
– RDMA over Ethernet (RoCE): Remote Direct Memory Access.
[Diagram: ETS example at times t1/t2/t3 on a 10G link – offered HPC, storage and LAN traffic (roughly 2-3G/s, 3G/s and 3-5G/s respectively) versus the resulting 10G link utilization; plus a PFC illustration of per-priority pause across 8 transmit/receive buffer pairs]
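To make the ETS behavior above concrete, here is a minimal Python sketch of per-priority bandwidth sharing on a 10G link: each traffic class gets its guaranteed share, and bandwidth a class does not use is redistributed to classes that still have demand. The class names, guarantees and offered loads are illustrative, not taken from the slide.

```python
# Minimal sketch of ETS-style bandwidth sharing on a 10G link.
# Class names, guarantees and offered loads are illustrative.

def ets_allocate(link_gbps, guarantees, offered):
    """Give each class its guaranteed share (capped at what it offers),
    then redistribute leftover bandwidth to classes that still have demand."""
    alloc = {c: min(offered[c], guarantees[c] * link_gbps) for c in offered}
    leftover = link_gbps - sum(alloc.values())
    while leftover > 1e-9:
        hungry = [c for c in offered if offered[c] - alloc[c] > 1e-9]
        if not hungry:
            break
        share = leftover / len(hungry)
        for c in hungry:
            extra = min(share, offered[c] - alloc[c])
            alloc[c] += extra
            leftover -= extra
    return alloc

# Example: 30% HPC, 30% storage, 40% LAN guarantees on a 10G link.
print(ets_allocate(10, {"hpc": 0.3, "storage": 0.3, "lan": 0.4},
                   {"hpc": 2, "storage": 3, "lan": 6}))
# -> HPC gets 2G/s, storage 3G/s, and LAN takes 5G/s (its 4G guarantee plus 1G unused).
```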
Traditional Arcane DCN vs. the Mesh Ahead (Optimized)
Traditional arcane DCN:
• Multi-tiered tree topologies
• High oversubscription
• Expensive, high-bandwidth uplinks
• Small layer-2 fabric
• Robustness of higher-tier products has been a concern
Mesh ahead:
• Mesh, Clos, Jellyfish topologies
• Oversubscription only to WAN/core
• High cross-sectional bandwidth (cheap TOR bandwidth)
• Layer-2 scaling options (more next)
• Robust, HA topologies
Integrated System Fabric Trends (Optimized)
Technology trend: standalone switch → stacking → fabric (switch cluster), each stage adding capabilities.
Standalone:
• Optimal local switching
• Low latency
• Converged
• Full L2/3
+ Stacking (single stacked switch):
• Active-active multi-link redundancies
• Single logical switch
+ Fabric (switch cluster):
• Arbitrary topologies (fat-tree, Clos, Jellyfish…)
• Cross-sectional bandwidth scaling
• Multiple redundant paths
• Scales to many ports
• Unifies physical & virtual
Stacking Example: 40GE TOR Switch (Optimized)
– Stacked 40GE TOR switch (e.g. G8316, 16 QSFP+ 40GE ports) as the spine; pay as you grow
– Layer-2 network state is at rack level
– Adjustable from non-blocking to 3:1 oversubscription
– Maximum deployment up to 768 10GbE server ports
[Diagram: 4x G8316 spine switches interconnecting 16x G8264 TOR switches across racks, providing 768x 10GbE server ports]
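As a back-of-the-envelope check of the figures above, the sketch below assumes each G8264 TOR exposes 48x 10GbE server ports plus 4x 40GbE uplinks (datasheet values, not stated on the slide); 16 such TORs give the 768 server ports, and a fully populated TOR is 3:1 oversubscribed toward the four G8316 spines.

```python
# Back-of-the-envelope oversubscription check for the stacked-TOR example.
# Assumes each G8264 TOR has 48x 10GbE server ports and 4x 40GbE uplinks.

tor_count = 16            # 16x G8264 TORs
server_ports_per_tor = 48 # 10GbE each
uplinks_per_tor = 4       # 40GbE each, one to each of the 4x G8316 spines

server_bw = server_ports_per_tor * 10   # Gb/s of downlink capacity per TOR
uplink_bw = uplinks_per_tor * 40        # Gb/s of spine-facing capacity per TOR

print(tor_count * server_ports_per_tor)   # 768 10GbE server ports total
print(f"{server_bw / uplink_bw:.0f}:1")   # 3:1 oversubscription when fully populated
# Populating only 16 of the 48 server ports per TOR (160 Gb/s) makes it non-blocking.
```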
Fabric Technology Options (Optimized)
TRILL:
• Large layer-2
• Distributed control plane
• Large scalability
• HA with fast convergence
• Emerging technology (some proprietary)
• A single disjoint multi-path fabric may need a new RFC
Layer-3 (ECMP):
• Established technology
• Standards based
• Distributed control plane
• Large scalability
• HA with fast convergence
• Small layer-2 without a DOVE network
• Many devices to manage
OpenFlow (OpenFlow controllers driving OpenFlow switches):
• Large layer-2
• Large scalability
• HA with fast convergence
• Enables network functions delivered as services, e.g. disjoint multi-pathing (more later)
• Emerging technology
• Client acceptance barrier
Shared vs Disjoint Multi-pathing (Optimized)
Data center storage and cluster fabrics require full path redundancy.
Completely redundant Ethernet fabrics meet this requirement, but come with an administration burden (e.g. 2x SAN configuration & maintenance).
Dual-TRILL fabrics are emerging today.
– A single TRILL fabric with disjoint multi-pathing would eliminate the administration burden.
[Diagram: shared multi-pathing across two separate Ethernet fabrics (Fabric 1 and Fabric 2, each with its own cores) vs. disjoint multi-pathing within a single TRILL fabric where all switches are RBridges]
Traditional Network
– Management plane: switch management via Telnet, SSH, SNMP, SYSLOG, NTP, HTTP, FTP/TFTP
– Control plane: network topology, L2 switching decisions, routing, ACLs, link management
– Data plane: switching, forwarding, link transceivers
[Diagram: all three planes implemented inside the switch – software on CPU/flash/memory for the management and control planes, and a switching ASIC with CPU and transceiver interfaces for the data plane]
Each network element has its own control and management plane.
Network devices are closed systems.
OpenFlow Network
– Management plane: switch management via Telnet, SSH, SNMP, SYSLOG, NTP, HTTP, FTP/TFTP
– Control plane: network topology, L2 switching decisions, routing, ACLs, link management
– Data plane: switching, forwarding, link transceivers
The control plane is extracted out from the network, and APIs are provided to program the network.
Switches are controlled over the OpenFlow protocol through a secure channel.
[Diagram: switch hardware (CPU, flash, memory, switching ASIC with CPU and transceiver interfaces) with the control plane moved to an external controller]
Network devices are open and controlled from a server.
OpenFlow Based Fabric Overview (Optimized)
– Each switch has Layer-2 forwarding turned off.
– Each switch connects to an OpenFlow Controller (OFC).
– The OFC discovers switches and switch adjacencies.
– The OFC computes disjoint physical paths and configures switch forwarding tables (a sketch of the path computation follows below).
– All ARPs go to the OFC.
OpenFlow can also be used to create a disjoint multi-pathing fabric (shared multi-pathed vs. disjoint multi-pathed).
[Diagram: OpenFlow-enabled switches managed by an OpenFlow & FCF controller]
Smart Data Center Networks (Optimized)
1. Converged fabric (LAN, SAN, Cluster, ...)
– Lossless
– Bandwidth allocation
– Disjoint multi-pathing
– VM workload aware
2. Efficient topology, with few tiers & nodes
– Efficient forwarding
– Disjoint multi-path enabled
– Optimized East-West flows
3. Servers connect to switches within the rack (eliminates the cabling mess)
Virtualization increased network management complexity (Manual & Painful)
Before virtualization:
– Static workloads ran on a bare-metal OS.
– Each workload had network state associated with it.
– The physical network was static & simple (configured once).
Physical network with vSwitches:
– Server virtualization = dynamic workloads.
– A VM's network state resides in the vSwitch/DCN.
– The physical network is dynamic and more complex (VMs come up dynamically & move).
[Diagram: Web, App and Database servers, each hosting VMs on a hypervisor vSwitch]
Hypervisor vSwitch Automation (Automated)
1. Per-VM switching in HW
2. Hypervisor vendor agnostic
3. Platform manager integration
4. Standards based (IEEE 802.1Qbg)
5. Network state migrates ahead of the VM
[Diagram: VM migration between servers, with network state motion preceding the VM, coordinated by the platform manager]
– East-West traffic just goes through the first hop
– Automated network state migration
Network Virtualization Trends (Automated)
The number of VMs per socket is rapidly growing (approximately 10x every 10 years).
– This increases the amount of VM-VM traffic in enterprise data centers (e.g. co-resident Web, Application & Database tiers).
– VM growth increases the network complexity associated with creating/migrating layer-2 (VLANs, ACLs…) and layer-3 (e.g. firewall, IPS) attributes.
[Chart: virtual machines per 2-socket server, 2006-2016, log scale, by workload type (application, groupware, database, web, terminal server, email, infrastructure)]
Hypervisor Network Virtualization Technology Trend (Automated)
Layer-2 vSwitch features, plus:
1. Layer-3 Distributed Overlay Virtual Ethernet (DOVE)
2. Simple "configure once" network (the physical network doesn't have to be configured per VM)
3. De-couples virtual from physical
4. Multi-tenant aware
5. Enables cross-subnet virtual appliance services (e.g. Firewall, IPS)
[Diagram: DOVE spanning servers in two data centers (DC 1 and DC 2), with service VMs attached to the overlay]
Overview of DOVE Technology Elements (Automated)
DOVE Controller
– Performs management & a portion of the control plane functions across DOVE switches
DOVE Switches (DOVES)
– Provide a layer-2-over-UDP overlay (e.g. based on OTV/VXLAN)
– Perform data and some control plane functions
– Run in the hypervisor vSwitch or in gateways
– Provide interfaces for virtual appliances to plug into (analogous to appliance line cards on a modular switch)
[Diagram: DOVE controller and DOVE switches forming an overlay network on top of the physical network]
DOVE Technology: VXLAN-based Encapsulation Example (Automated)
Original packet: Inner MAC | Inner IP | Payload
Encapsulated packet: Outer MAC | Outer IP | UDP | EP header | Inner MAC | Inner IP | Payload
Encapsulation Protocol (EP) header (e.g. VXLAN based), with a VXLAN extension adding the IETF-required version field:
Version | I | R | R | R | Reserved
Domain ID | Reserved
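A small sketch of what building the encapsulation header above could look like in code. The exact bit layout used here (4-bit version, single "I" valid flag, 24-bit domain ID) is an assumption made for illustration, since the slide only names the fields; the standard VXLAN header is likewise 8 bytes with a 24-bit network identifier.

```python
# Illustrative packing of a VXLAN-style encapsulation header (field widths assumed).
import struct

def pack_ep_header(version: int, domain_id: int, i_flag: bool = True) -> bytes:
    """Build an 8-byte overlay header: version, I flag, reserved bits, domain ID."""
    first_word = (version & 0xF) << 28 | (1 if i_flag else 0) << 27  # rest reserved
    second_word = (domain_id & 0xFFFFFF) << 8                        # low byte reserved
    return struct.pack("!II", first_word, second_word)

# The encapsulated frame is then: outer MAC | outer IP | UDP | EP header | inner frame.
inner_frame = b"\x00\x23\x45\x67\x00\x01"          # inner MAC/IP/payload (placeholder)
overlay_payload = pack_ep_header(version=1, domain_id=0x1234) + inner_frame
print(overlay_payload[:8].hex())                    # -> 1800000000123400
```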
Improving Networking Efficiency for Consolidated Servers (Automated)
Hypervisor vSwitches enable the addition of virtual appliances (vAppliances), which provide secure communication across subnets (e.g. App to Database tier).
– However, all traffic must be sent to an external layer-3 switch, which is inefficient considering VM-per-socket growth rates and integrated servers.
Solving this issue requires cross-subnet communication in the hypervisor's vSwitch; a small sketch of that forwarding decision follows below.
[Diagram: two servers in a site hosting HTTP, App and Database VMs (each with its own IP and MAC, e.g. 10.0.5.4 / 00:23:45:67:00:14) on a layer-2 distributed vSwitch with vAppliances; today, cross-subnet traffic hairpins through an external layer-3 appliance (e.g. IPS)]
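The forwarding decision being argued for can be sketched in a few lines: if source and destination share a subnet the vSwitch bridges at layer 2, otherwise it routes locally through a vAppliance instead of hairpinning to an external layer-3 switch. This is a toy illustration under that assumption, not IBM's implementation.

```python
# Toy forwarding decision for a cross-subnet-capable hypervisor vSwitch.
import ipaddress

def forward(src_ip: str, dst_ip: str, subnets: list[str]) -> str:
    src = ipaddress.ip_address(src_ip)
    dst = ipaddress.ip_address(dst_ip)
    same_subnet = any(
        src in ipaddress.ip_network(net) and dst in ipaddress.ip_network(net)
        for net in subnets
    )
    if same_subnet:
        return "layer-2 switch within the vSwitch"
    return "route locally via vAppliance (e.g. firewall/IPS), no external L3 hop"

# App tier 10.0.5.0/24 talking to the database tier 10.0.3.0/24,
# using addresses like those on the slide:
print(forward("10.0.5.4", "10.0.3.41", ["10.0.5.0/24", "10.0.3.0/24"]))
```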
Multi-Tenant with Overlapping Address Spaces (Automated)
Multi-tenant cloud environments require multiple IP address spaces within the same server, within a data center, and across data centers.
– Layer-3 Distributed Overlay Virtual Ethernet (DOVE) switches enable multi-tenancy all the way into the server/hypervisor, with overlapping IP address spaces for the virtual machines.
[Diagram: a DOVE network spanning servers in two sites; a "Coke" overlay network and a "Pepsi" overlay network each contain HTTP, App and Database VMs that reuse the same IP addresses (e.g. 10.0.3.1, 10.0.5.7) without conflict. vSwitches and vAppliances are not shown.]
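One way to see why overlapping address spaces work in an overlay is that forwarding state is keyed per tenant, not per IP alone. The toy lookup below (invented names and values, not the DOVE schema) shows two tenants reusing the same inner address without conflict.

```python
# Toy per-tenant location table: the overlay keys its forwarding state on
# (tenant/domain ID, inner IP), so two tenants can both use 10.0.3.1.
location = {
    ("coke", "10.0.3.1"): ("server-1", "00:23:45:67:00:01"),   # Coke's HTTP VM
    ("pepsi", "10.0.3.1"): ("server-2", "00:23:45:67:00:01"),  # Pepsi's HTTP VM
}

def locate(tenant: str, inner_ip: str):
    """Resolve a tenant's VM address to the physical host terminating the tunnel."""
    return location[(tenant, inner_ip)]

print(locate("coke", "10.0.3.1"))   # -> ('server-1', '00:23:45:67:00:01')
print(locate("pepsi", "10.0.3.1"))  # -> ('server-2', '00:23:45:67:00:01')
```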
Systems Network Element Manager (SNEM) (Integrated)
A comprehensive tool to automate virtualized data center network workflows:
– Planner (or Engineer): investment = business direction + application requirements + utilization trends
– Engineer: physical, firmware, configuration, advanced PD
– Operator: monitoring, initial PD, automation
SNEM Examples (Integrated)
– Engineer: perform efficient firmware or configuration updates to multiple switches
– Operate: automate VM network resident port profiles and converged fabric Quality of Service
– Plan: performance trend & root-cause analysis, fault management, …
Software Defined Networking Technologies (Integrated)
Network functions delivered as services
– Multi-tenant VM security
– Virtualized load balancing
Network APIs provide an abstract interface into the underlying controller
– Distribute, configure & control state between services & controllers
– Provide multiple abstract views
Network Operating System drives a set of devices (a sketch of such a driver abstraction follows below)
– Physical devices (e.g. TOR)
– Virtual devices (e.g. vSwitch)
[Diagram: SDN stack – services (multi-tenant, SAN, path services, SNEM) on top of Network APIs and a Network Controller within the Integrated System Network OS; DOVE, OpenFlow and native switch (L-2/3) drivers sit between the control-plane software and the hardware & embedded software, including virtual switches labelled 5KV]
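A hypothetical sketch of the driver abstraction these bullets describe: services call one controller API, which fans out to pluggable back-ends such as a DOVE overlay driver or an OpenFlow driver. The class and method names are invented for illustration and are not IBM's SDN APIs.

```python
# Hypothetical network-OS driver abstraction: one API, pluggable back-ends.
from abc import ABC, abstractmethod

class NetworkDriver(ABC):
    @abstractmethod
    def provision_segment(self, tenant: str, segment_id: int) -> None: ...

class DoveDriver(NetworkDriver):
    def provision_segment(self, tenant, segment_id):
        print(f"DOVE: create overlay domain {segment_id} for tenant {tenant}")

class OpenFlowDriver(NetworkDriver):
    def provision_segment(self, tenant, segment_id):
        print(f"OpenFlow: push flow rules isolating segment {segment_id}")

class NetworkController:
    """The 'Network API' layer: services call this, it fans out to drivers."""
    def __init__(self, drivers: list[NetworkDriver]):
        self.drivers = drivers

    def create_tenant_network(self, tenant: str, segment_id: int) -> None:
        for driver in self.drivers:
            driver.provision_segment(tenant, segment_id)

controller = NetworkController([DoveDriver(), OpenFlowDriver()])
controller.create_tenant_network("coke", 42)
```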
Software Defined Networking Value (Integrated)
Network services value:
– An eco-system for network apps vs. today's closed switch model (think smart-phone app model)
– Examples: SAN (FC) services, application performance monitoring…
DOVE network value:
– Virtualized network resource provisioning
– De-couples the virtual network from the physical network
– Simple "configure once" network (the physical network doesn't have to be configured per VM)
– Cloud scale (e.g. multi-tenant)
OpenFlow value:
– De-couples the switch's control plane from its data plane
– Data-center-wide physical network control
DOVE Technology + Multi-pathing (Integrated)
The DOVE network simplifies the virtual machine network:
– Enables multi-tenancy all the way to the VM
– Enables a single MAC address per physical server (2 for HA)
– Significantly reduces the size of physical network TCAM & ACL tables
– Increases layer-2 scale within a data center and across data centers, by decoupling the VM's layer-2 from the physical network
– Qbg automates layer-2 provisioning; DOVE automates layer 3-7 provisioning
Standards-based multi-pathed physical network, with DOVE controlling overlay network forwarding through DOVE gateways – a Software Defined Networking stack (more next).
High-performance Integrated Systems (Integrated)
An integrated system provides fast time to value, is simple to maintain, and scales without limits.
– Optimized: converged, multi-path fabric
– Automated: network virtualization
– Integrated: Software Defined Networking
[Diagram: racks of compute and storage with vSwitches joined by a single scalable interconnect, providing scale-out elasticity]
Thank You
Renato J. Recio, IBM Fellow & Systems Networking CTO
11400 Burnett Road, Austin, TX 78758 · 512 973 2217
recio us ibm com
TRILL Based Fabric Overview (Optimized)
TRILL (Transparent Interconnection of Lots of Links) provides multi-pathing through the network
– Uses IS-IS to provide the shortest layer-2 path
– Works with arbitrary topologies
– Easy implementation, minimal configuration
– Scalable deployment, from a few to a large number of switches
Switches that support TRILL use the IETF Routing Bridge (RBridge) protocol
– RBridges discover each other and distribute link state
– RBridges use a TRILL header to encapsulate packets
[Diagram: a TRILL fabric of RBridges with a management console]
Engineer: Perform Efficient Firmware or Configuration Updates to Multiple Switches (Integrated)
Firmware & configuration: a single operation on multiple devices.
Group management:
1. Simultaneous management operations on multiple switches
2. Back-up, restore, compare & maintain configuration history
3. Firmware upgrades
4. Control operations
5. CLI script execution
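The "single operation on multiple devices" idea can be sketched as fanning one maintenance action out to a switch group concurrently; run_cli below is a hypothetical placeholder for however the element manager actually reaches each switch (SSH, SNMP, REST, …), not an SNEM API.

```python
# Hedged sketch of "one operation, many switches": fan the same action
# out to a switch group in parallel.
from concurrent.futures import ThreadPoolExecutor

switch_group = ["tor-01", "tor-02", "tor-03", "tor-04"]

def run_cli(switch: str, command: str) -> str:
    # Placeholder: a real implementation would open an SSH session here.
    return f"{switch}: executed '{command}'"

def group_operation(switches, command):
    """Run the same CLI command on every switch in the group concurrently."""
    with ThreadPoolExecutor(max_workers=len(switches)) as pool:
        return list(pool.map(lambda sw: run_cli(sw, command), switches))

for result in group_operation(switch_group, "copy running-config backup-config"):
    print(result)
```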
Operator: Converged Network Setup (Integrated)
Converged Enhanced Ethernet management:
• Priority Flow Control (PFC)
• Bandwidth allocation by Priority Group [a.k.a. Enhanced Transmission Selection (ETS)]
Operator – monitoring, initial PD, automation
Port Profile (VSI Database) Setup (Integrated)
Full VM migration with connectivity persistence requires port profile (VSI database, in 802.1Qbg terms) automation. Qbg setup includes (see the sketch below):
– VLANs for each VSI type
– Access Control Lists
– Send & receive bandwidth
Operator – monitoring, initial PD, automation
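As referenced above, here is an illustrative data shape for per-VSI-type port profiles carrying VLANs, ACLs and bandwidth that follow a VM as it migrates; the field names and values are invented for the sketch and are not the 802.1Qbg or SNEM schema.

```python
# Illustrative VSI database: one profile per VSI type, applied to whichever
# switch port a VM of that type attaches to (including after migration).
vsi_database = {
    "web-tier": {
        "vlans": [100],
        "acls": ["permit tcp any any eq 80", "deny ip any any"],
        "tx_bandwidth_mbps": 2000,
        "rx_bandwidth_mbps": 2000,
    },
    "db-tier": {
        "vlans": [200],
        "acls": ["permit tcp 10.0.5.0/24 any eq 5432", "deny ip any any"],
        "tx_bandwidth_mbps": 4000,
        "rx_bandwidth_mbps": 4000,
    },
}

def profile_for(vsi_type: str) -> dict:
    """What the switch applies to a port when a VM of this VSI type attaches."""
    return vsi_database[vsi_type]

print(profile_for("web-tier")["vlans"])   # -> [100]
```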
Planner (or Engineer): Performance Management (Integrated)
– Real-time performance monitoring of switch statistics
– Fault management & root-cause analysis
– Performance data trend charts
Planner (or Engineer) – investment = business direction + application requirements + utilization trends
Example of Qbg Channels and CEE
Server CNA (Qbg channels for traffic): storage vHBA1-3, LAN vNIC1-4 and HPC vNIC1-4, all sharing one CEE link to the switch (CEE for traffic).
[Diagram: offered traffic over time (t1, t2, t3) – HPC traffic around 2-3G/s, storage traffic around 3G/s and LAN traffic around 3-5G/s – carried as separate Qbg channels over the shared CEE link]