PLNOG 13: Piotr Szolkowski: 100G Ethernet – Case Study


DESCRIPTION

Piotr Szolkowski – TBD
Topic of Presentation: 100G Ethernet – Case Study
Language: Polish
Abstract: TBD

TRANSCRIPT

Piotr Szołkowski – Extreme Networks
Paweł Nastachowski – TelecityGroup (ex. PLIX)
PLNOG 2014, Kraków

Connectivity & Consolidation • 14.5 RU - 1/3rd of a rack • 384 x 1G wire-speed • 768 x 10G wire-speed • 192 x 40G wire-speed • 32 x 100G wire-speed

The Flagship – BlackDiamond X8

Performance & Scale • Orthogonal architecture • 10 or 20 Tbps capacity/switch • 1.28 Tbps bandwidth/slot • 30.5 Bpps throughput capacity • 1 Million L2/L3 entries scale*

Latency
• No chopping & reassembly
• 2.3/3.5 µs across fabric
• <1.5/2.5 µs within I/O

Availability & Assurance • Designed for Tier 1-4 DC • 1+1 Control Plane • N+1 Data Plane • N+N Power Plane • EAPS, MLAG, VRRP, ISSU**

Virtualization & Segmentation • 128K/1M* Virtual Machines • VM lifecycle management • Tenant isolation • VEPA, VPP, XNV™ • VR, MPLS, TRILL, PSTAG*

Storage & Convergence • 10/40G storage •  iSCSI, NFS, CIFS • DCBx (PFC, FIPS, ETS) • FCoE transit capability • VBLOCK certified

Efficiency & Conservation • Front-to-Back cooling • Variable fan speed •  Intelligent power control • Efficient next-gen optics • Only 5.6W per 10GbE port


* Roadmap ** Patch level support

New: SW Defined Networking
• OpenFlow 1.3*
• OpenStack (Neutron) plugin
• API for scripting/applications
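The slide only names OpenFlow 1.3; as an illustration of what sits on the other end of that interface, here is a minimal OpenFlow 1.3 controller app in Ryu. Ryu and the table-miss example are my choice of illustration, not something the deck specifies:

```python
# Minimal OpenFlow 1.3 controller app (Ryu) that installs a table-miss
# entry - the kind of controller an OpenFlow 1.3-capable switch such as
# the BDX8 could be pointed at.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class TableMiss(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def features_handler(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # Send every unmatched packet to the controller.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                          ofp.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                             actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=match, instructions=inst))
```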


BlackDiamond X8 IO For Next-Gen Networks

[Figure: I/O module lineup, non-XL and XL, across 10G / 40G / 100G; modules marked GA or PoR]
• 48-Port 10GbE SFP+ Module
• 48-Port 100/1000/10000MbE RJ45 Module
• 12-Port 40GbE QSFP+ Module
• 24-Port 40GbE QSFP+ Module
• 12-Port 40GbE-XL QSFP+ Module
• 4-Port 100GbE-XL CFP2 Module
• 4-Port 100GbE CFP2 Module (PoR)

[Figure: fabric modules and I/O modules mate directly (orthogonal direct connect) — mid-plane only for the management path, no mid-plane in the data path!]


• Direct Connect data path for ultimate performance
• Future-proof chassis architecture: no mid-data-plane design

Proud to Present – BlackDiamond X8 100GbE

Extreme’s first 100GbE product ever…

Bandwidth to Performance Proportionality
• 4-port wire-speed 100GbE (CFP2) or 40-port 10GbE (SFP+)
• Choice of 100G-SR10 (100 m) & LR4 (10 km) optics or 10x10 breakout
• Supported with either 10 Tbps or 20 Tbps fabric module types
• N+1 data and power plane support with fully populated chassis
• Only 368 watts un-terminated, 440 watts terminated worst case
• Availability: Now
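A rough per-lane power comparison, using only the worst-case wattage quoted above and the 5.6 W per 10GbE port figure from the efficiency slide (treating the module as 40 lanes of 10G is my simplification):

```python
# Back-of-the-envelope per-lane power for the 4-port 100GbE CFP2 module,
# using the worst-case figure from the slide.
MODULE_WATTS = 440            # terminated, worst case
LANES_10G = 4 * 10            # 4 x 100GbE = 40 x 10G-equivalent lanes

print("%.1f W per 10G-equivalent lane" % (MODULE_WATTS / float(LANES_10G)))
# -> 11.0 W, versus the 5.6 W per native 10GbE port quoted earlier
```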

100GbE Optics Comparison

[Figure: 100GbE form-factor timeline — CFP (Brocade MLXe, Cisco Nexus 7000, Juniper MX, HP12900, ALU 7750), CFP2 (Extreme BDX8; CFP2 not available yet from the others), CFP4 / QSFP28 expected in 2015]

100GbE Transceiver (Optics) Comparison

[Figure: density vs. distance for CFP, CFP2 / (CPAK), CFP4 / QSFP28, and CXP / (MXP), with reaches of 300 m, 10 km, and 40 km]

• Industry-leading CFP2 (IEEE 802.3ba) standards-based efficient optics
• Roughly half the form factor and power compared to existing CFP
• Lower-cost short-range (<100 m) connectivity with SR10
• 10 x 10G breakout option using MPO ribbon cable
• Long-range (<10 km) connectivity with LR4 over single-mode fiber
• Availability: Now

Proud to Present – BlackDiamond X8 100GbE

Extreme’s first 100GbE product ever… (SR10 / LR4)

The Rise of 100GbE


Source: Dell’Oro

• Initial adoption of 100GbE will remain in Service Provider and Internet Exchange Point (IXP) networks – and significantly on modular products like the BlackDiamond X8.

• In Data Centers, 100GbE adoption will depend on the relative price per port, optics, and 40GbE adoption on the server side.

• 40GbE server access will start gaining traction after 2014. Servers will quickly drive a significant portion of 40GbE ports.

• Customers will gravitate to 40GbE not because they need all the bandwidth, but because two 10GbE ports may not be sufficient.

• Vendors like Extreme Networks may prefer to push down the cost barrier of 100GbE rather than 40GbE for switch interconnects, for better consolidation (such as in HPC interconnects).


100GbE Focus for Extreme Networks

[Figure: 100GbE focus by segment — Service Provider (Metro, Border, Core, Edge, Agg.), Core Data Center (HPC, HFT, Border, Core), Campus; immediate focus on Service Provider and IXP, with cost increasing toward the Campus]

100GbE Play Positioning


[Figure: positioning along Density and Scale axes — the combination of large memory and MPLS/BGP drives high scale (Data Center Border / Cloud / Internet); the combination of high density and lower cost drives bandwidth consolidation (Data Center Downlink / HPC / IXP)]

Service Provider: Internet Exchange Point

[Figure: IXP topology — BDX8 switches at the IXP Edge in Site 1 (SP 1, SP 2, SP 3, Cloud 1) and Site 2 (SP 4, SP 5, SP 6, Cloud 2), 10G member-facing links, 100G links to the IXP Core and ISP Edge across a DWDM optical network]

Service Provider: Aggregation/Core Offload

• 100G on the customer side (PE) and in the core (P)
• High-density 10G aggregation through switches, fewer 100G links into the core (a sizing sketch follows below)

[Figure: BDX8 switches in the edge/aggregation layer collect 10G customer links toward the PE routers and feed 100G uplinks to the P routers in the core]
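To make the consolidation argument concrete, a sizing sketch with illustrative numbers (the slide gives no specific counts): how many 100G core uplinks a given 10G aggregation layer needs at a chosen oversubscription ratio:

```python
# Hypothetical sizing sketch for 10G aggregation into 100G core uplinks
# (port counts and ratio are illustrative, not from the presentation).
import math

customer_ports_10g = 480      # high-density 10G aggregation layer
oversubscription = 4.0        # accepted aggregation ratio

downlink_gbps = customer_ports_10g * 10
uplinks_100g = math.ceil(downlink_gbps / oversubscription / 100)

print("%d x 100G uplinks into the core" % uplinks_100g)   # -> 12
```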


BlackDiamond X8 – Awesome Growth Capacity

[Figure: per-slot density and capacity by fabric generation (10 Tbps, 20 Tbps, next-gen) — 10GbE densities of 48/96/128/240 ports, 40GbE of 12/24/32 ports, 100GbE of 4/12/24 ports, and further possibilities]

100G at TelecityGroup (ex. PLIX)

Proof of Concept – 100G

• Connecting the BlackDiamond X8 switch to the PLIX network backbone
  – 10 x 10G Ethernet
• Setting up an 8.4 km optical path
• Connecting the 100G interface to the customer device (a Cisco ASR)
• Hands-on interoperability tests of the 100G interfaces
• Migrating the configuration from the existing PLIX devices
• BlackDiamond X8 configuration:
  – 3 x 10T Switch Fabric Modules
  – 1 x 4-port 100G Ethernet module + 2 x CFP2 100GBASE-LR4 optics (10 km)
  – 2 x 48-port 10G SFP+


PLIX Network Topology – PoC


PoC Summary

• The tests ran for 4 weeks
• One customer-facing 100G port was successfully put into production for the duration of the FIFA World Cup
• One of the 100G CFP2 optics showed CRC errors – swapping the optics eliminated the problem. The supplied optics came from early pre-production batches – beta versions
• Hands-on tests in real infrastructure revealed no problems
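Since the faulty optic showed up only as CRC errors, polling the port error counters is an easy way to catch this during such a PoC. A minimal sketch using the ExtremeXOS on-switch Python API (exsh); the port number and the availability of exsh on the deployed EXOS release are assumptions:

```python
# Minimal sketch: snapshot RX error counters on the 100G port twice and
# flag any change, via the ExtremeXOS on-switch Python API (exsh).
# Port 1:1 and exsh availability are assumptions, not from the slides.
import time
import exsh

PORT = "1:1"

def rx_errors():
    # 'show ports <port> rxerrors' prints per-port RX error counters
    # (including CRC); capture=True returns the CLI output as a string.
    return exsh.clicmd("show ports %s rxerrors no-refresh" % PORT,
                       capture=True)

before = rx_errors()
time.sleep(60)            # let test traffic run
after = rx_errors()

if before != after:
    print("RX error counters changed - inspect the CFP2 optics:")
    print(after)
```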


The Future of 100G at PLIX

• Real customer demand for 100G connections has emerged at PLIX

• After the production tests, a decision was made to purchase two BlackDiamond X8 switches in the following configuration:
  – 4 x 20T Switching Fabric
  – 3 x 48-port 10G SFP+
  – 2 x 24-port 40G QSFP+
  – 1 x 4-port 100G CFP2

• The devices will be connected to the main 4 x 40G ring (EAPS), and to each other initially with 6 x 40G (a configuration sketch follows below)
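A sketch of what the inter-switch LAG and the EAPS ring membership could look like in ExtremeXOS CLI, generated from Python for review. The port numbers, ring name, and control-VLAN tag are hypothetical; protected VLANs are omitted for brevity:

```python
# Sketch of the planned inter-switch LAG and EAPS ring setup as
# ExtremeXOS CLI, emitted from Python for review before deployment.
ISC_PORTS = ["2:%d" % i for i in range(1, 7)]   # 6 x 40G inter-switch links

config = [
    # Bundle the six 40G inter-switch links into one LAG.
    "enable sharing %s grouping %s algorithm address-based L3"
    % (ISC_PORTS[0], ",".join(ISC_PORTS)),
    # Join the main 4 x 40G EAPS ring as a transit node.
    "create eaps plix-ring",
    "configure eaps plix-ring mode transit",
    "configure eaps plix-ring primary port 1:1",
    "configure eaps plix-ring secondary port 1:2",
    "create vlan eaps-ctrl",
    "configure vlan eaps-ctrl tag 4000",
    "configure eaps plix-ring add control vlan eaps-ctrl",
    "enable eaps",
    "enable eaps plix-ring",
]

print("\n".join(config))
```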


New Planned PLIX Network Topology


• Turn of November and December 2014 – the first customer brought online with a 100G link


Thank You

