Future Internet: New Network Architectures and Technologies
An Introduction to OpenFlow and Software-Defined Networking
Christian Esteve Rothenberg ([email protected])
Agenda
• Future Internet Research
• Software-Defined Networking
  – An introduction to OpenFlow/SDN
  – The Role of Abstractions in Networking
  – Sample research projects
  – RouteFlow
Credits
• Nick McKeown's SDN slides
  – http://www.openflow.org/downloads/NickTalk_OFELIA_Feb2011.ppt
• GEC 10 OpenFlow Tutorial
  – http://www.openflow.org/downloads/OpenFlowTutorial_GEC10.ppt
• Scott Shenker's talk “The Future of Networking, and the Past of Protocols”
  – http://www.slideshare.net/martin_casado/sdn-abstractions
• The OpenFlow Consortium
  – http://www.openflow.org
What is OpenFlow?
Short story: OpenFlow is an API
• Control how packets are forwarded (and manipulated)
• Implementable on COTS hardware
• Make deployed networks programmable
  – not just configurable
  – vendor-independent
• Makes innovation easier
• Goal (experimenter's perspective):
  – Validate your experiments on deployed hardware with real traffic at full line speed
• Goal (industry perspective):
  – Reduced equipment costs through commoditization and competition in the controller / application space
  – Customization and in-house (or 3rd-party) development of new networking features (e.g. protocols)
Why OpenFlow?
The Ossified Network
• Millions of lines of source code
• 5,400 RFCs: a barrier to entry
• Billions of gates: bloated, power hungry
• Many complex functions baked into the infrastructure: OSPF, BGP, multicast, differentiated services, traffic engineering, NAT, firewalls, MPLS, redundant layers, …
• An industry with a “mainframe mentality”, reluctant to change
[Figure: a closed box of specialized packet forwarding hardware running an operating system and features such as routing, management, mobility management, access control, VPNs, …]
Research Stagnation
• Lots of deployed innovation in other areas
  – OS: filesystems, schedulers, virtualization
  – Distributed systems: DHTs, CDNs, MapReduce
  – Compilers: JITs, vectorization
• Networks are largely the same as years ago
  – Ethernet, IP, WiFi
• Rate of change of the network seems slower in comparison
  – Need better tools and abstractions to demonstrate and deploy
Closed Systems (Vendor Hardware)
• Stuck with interfaces (CLI, SNMP, etc.)
• Hard to meaningfully collaborate
• Vendors starting to open up, but not usefully
• Need a fully open system – a Linux equivalent
Open Systems

                      Performance fidelity  Scale   Real user traffic?  Complexity  Open
  Simulation          medium                medium  no                  medium      yes
  Emulation           medium                low     no                  medium      yes
  Software switches   poor                  low     yes                 medium      yes
  NetFPGA             high                  low     yes                 high        yes
  Network processors  high                  medium  yes                 high        yes
  Vendor switches     high                  high    yes                 low         no

Gap in the tool space: none have all the desired attributes!
OpenFlow: a pragmatic compromise
• + Speed, scale, fidelity of vendor hardware
• + Flexibility and control of software and simulation
• Vendors don't need to expose implementation
• Leverages hardware inside most switches today (ACL tables)
How does OpenFlow work?
[Figure: an Ethernet switch consists of a data path (hardware) and a control path (software). OpenFlow moves the control path out of the switch: the data path talks to an external OpenFlow Controller via the OpenFlow protocol (over SSL/TCP).]
OpenFlow Example
[Figure: a controller PC speaks OpenFlow to a switch. In the switch's software layer, an OpenFlow client manages a flow table in the hardware layer. A host 1.2.3.4 on port 1 reaches 5.6.7.8 via the flow entry:]

  MAC src  MAC dst  IP src  IP dst   TCP sport  TCP dport  Action
  *        *        *       5.6.7.8  *          *          port 1
OpenFlow Basics: Flow Table Entries
Each entry is a Rule, an Action, and Stats.

• Rule: match on a 12-tuple of header fields, plus a mask selecting which fields to match
  – Switch port, MAC src, MAC dst, Eth type, VLAN ID, VLAN pcp, IP src, IP dst, IP proto, IP ToS, L4 sport, L4 dport
• Action:
  1. Forward packet to zero or more ports
  2. Encapsulate and forward to controller
  3. Send to normal processing pipeline
  4. Modify fields
  5. Any extensions you add!
• Stats: packet + byte counters
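To make the rule/action/stats structure above concrete, here is a minimal sketch of a flow-table lookup. It is not a real switch implementation: the field names, class names, and action tuples are illustrative, and only a subset of the 12-tuple is shown. It assumes entries are ordered highest-priority first and that a table miss sends the packet to the controller, as in OpenFlow 1.0.

```python
# Minimal sketch of an OpenFlow 1.0-style flow table: '*' wildcards a field,
# each entry keeps packet/byte counters, first matching rule's action wins.
FIELDS = ["in_port", "mac_src", "mac_dst", "eth_type", "vlan_id",
          "ip_src", "ip_dst", "ip_proto", "l4_sport", "l4_dport"]

class FlowEntry:
    def __init__(self, rule, action):
        self.rule = rule        # dict field -> value; missing field = '*'
        self.action = action    # e.g. ("output", 6), ("drop",), ("controller",)
        self.packets = 0        # per-entry stats, as on the slide
        self.bytes = 0

    def matches(self, pkt):
        # A field matches if the rule wildcards it or the values are equal.
        return all(self.rule.get(f, "*") in ("*", pkt.get(f)) for f in FIELDS)

def lookup(table, pkt, length):
    for entry in table:                  # highest-priority entry first
        if entry.matches(pkt):
            entry.packets += 1           # update stats on a hit
            entry.bytes += length
            return entry.action
    return ("controller",)               # table miss: encapsulate to controller

# Two of the example rules from the slides: firewall + plain switching.
table = [FlowEntry({"l4_dport": 22}, ("drop",)),
         FlowEntry({"mac_dst": "00:1f:.."}, ("output", 6))]
print(lookup(table, {"l4_dport": 22}, 60))         # ('drop',)
print(lookup(table, {"mac_dst": "00:1f:.."}, 60))  # ('output', 6)
```

The same structure expresses all the later examples (switching, routing, firewall): only the set of wildcarded fields changes.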
Examples
(All fields not shown are wildcarded with “*”.)

• Switching: match MAC dst = 00:1f:.. → port6
• Flow switching (exact match): switch port = port3, MAC src = 00:20.., MAC dst = 00:1f.., Eth type = 0800, VLAN ID = vlan1, IP src = 1.2.3.4, IP dst = 5.6.7.8, IP proto = 4, TCP sport = 17264, TCP dport = 80 → port6
• Firewall: match TCP dport = 22 → drop
• Routing: match IP dst = 5.6.7.8 → port6
• VLAN switching: match VLAN ID = vlan1, MAC dst = 00:1f.. → port6, port7, port9
Centralized vs Distributed Control
Both models are possible with OpenFlow.
• Centralized control: one controller manages all OpenFlow switches.
• Distributed control: several controllers each manage a subset of the OpenFlow switches.
Flow Routing vs. Aggregation
Both models are possible with OpenFlow.

Flow-based
• Every flow is individually set up by the controller
• Exact-match flow entries
• Flow table contains one entry per flow
• Good for fine-grained control, e.g. campus networks

Aggregated
• One flow entry covers large groups of flows
• Wildcard flow entries
• Flow table contains one entry per category of flows
• Good for large numbers of flows, e.g. backbone
Reactive vs. Proactive (pre-populated)
Both models are possible with OpenFlow.

Reactive
• First packet of a flow triggers the controller to insert flow entries
• Efficient use of the flow table
• Every flow incurs a small additional flow-setup time
• If the control connection is lost, the switch has limited utility

Proactive
• Controller pre-populates the flow table in the switch
• Zero additional flow-setup time
• Loss of the control connection does not disrupt traffic
• Essentially requires aggregated (wildcard) rules
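The reactive model above can be sketched in a few lines. This is a toy, not a real controller API: the class and method names are invented, and the forwarding decision is a stand-in. It only illustrates that the first packet of a flow takes the slow path through the controller, which installs an entry so later packets match in the switch.

```python
# Sketch of reactive flow setup: table miss -> controller -> install entry.
class ReactiveSwitch:
    def __init__(self):
        self.flow_table = {}   # exact match: (src, dst) -> out_port
        self.setups = 0        # how many packets needed controller help

    def packet_in_controller(self, src, dst):
        """Slow path: the controller decides and installs a flow entry."""
        self.setups += 1
        out_port = hash(dst) % 4            # stand-in forwarding decision
        self.flow_table[(src, dst)] = out_port
        return out_port

    def forward(self, src, dst):
        if (src, dst) in self.flow_table:   # fast path: hardware match
            return self.flow_table[(src, dst)]
        return self.packet_in_controller(src, dst)

sw = ReactiveSwitch()
ports = [sw.forward("h1", "h2") for _ in range(5)]
print(sw.setups)   # 1 -- only the first packet incurred flow-setup time
```

In the proactive model, the controller would fill `flow_table` with wildcard rules before any traffic arrives, so `setups` stays at zero.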
Usage examples
• Alice's code:
  – Simple learning switch
  – Per-flow switching
  – Network access control / firewall
  – Static “VLANs”
  – Her own new routing protocol: unicast, multicast, multipath
  – Home network manager
  – Packet processor (in controller)
  – IPvAlice
  – VM migration
  – Server load balancing
  – Mobility manager
  – Power management
  – Network monitoring and visualization
  – Network debugging
  – Network slicing
… and much more you can create!
Intercontinental VM Migration
Moved a VM from Stanford to Japan without changing its IP. The VM hosted a video game server with active network connections.
Quiz Time
• How do I provide control connectivity? Is it really clean slate?
• Why aren’t users complaining about time to setup flows over OpenFlow? (Hint: What is the predominant traffic today?)
• Considering switch CPU is the major limit, how can one take down an OpenFlow network?
• How to perform topology discovery over OpenFlow-enabled switches?
• What happens when you have a non-OpenFlow switch in between?
• What if there are two islands connected to same controller?
• How scalable is OpenFlow? How does one scale deployments?
What can you not do with OpenFlow v1.0?
• Non-flow-based (per-packet) networking
  – e.g. per-packet next-hop selection (in wireless mesh)
  – yes, this is a fundamental limitation
  – BUT OpenFlow can provide the plumbing to connect these systems
• Use all tables on switch chips
  – yes, a major limitation (cross-product issue)
  – BUT an upcoming OF version will expose these
What can you not do with OpenFlow v1.0? (cont.)
• New forwarding primitives
  – BUT provides a nice way to integrate them through extensions
• New packet formats/field definitions
  – BUT a generalized OpenFlow (2.0) is on the horizon
• Optical circuits
  – BUT efforts are underway to apply the OpenFlow model to circuits
• Low-setup-time individual flows
  – BUT can push down flows proactively to avoid delays
Where it's going
• OF v1.1: extensions for WAN
  – multiple tables: leverage additional tables
  – tags and tunnels
  – multipath forwarding
• OF v2+
  – generalized matching and actions: an “instruction set” for networking
OpenFlow Implementations (Switch and Controller)

OpenFlow building blocks
• Controllers: NOX, SNAC, Beacon, Helios, Maestro
• Slicing software: FlowVisor, FlowVisor Console
• Applications: LAVI, ENVI (GUI), Expedient, n-Casting
• Monitoring/debugging tools: oflops, oftrace, openseer
• Stanford-provided OpenFlow switches: NetFPGA, software reference switch, Broadcom reference switch, OpenWRT, PCEngine WiFi AP, Open vSwitch
• Commercial switches: HP, NEC, Pronto, Juniper… and many more

Current OpenFlow hardware
• Ciena CoreDirector
• NEC IP8800 / UNIVERGE PF5240
• Juniper MX-series
• HP ProCurve 5400
• Pronto 3240/3290
• WiMax (NEC)
• PC Engines
• Netgear 7324
More coming soon…

Outdated! See ONF: https://www.opennetworking.org/ and ONS: http://opennetsummit.org/
Commercial Switch Vendors

Model: HP ProCurve 5400zl or 6600
Virtualization: 1 OF instance per VLAN
Notes:
  – LACP, VLAN and STP processing before OpenFlow
  – Wildcard rules or non-IP packets processed in software
  – Header rewriting in software
  – CPU protects management during loops

Model: NEC IP8800 series and UNIVERGE PF5240
Virtualization: 1 OF instance per VLAN
Notes:
  – OpenFlow takes precedence
  – Most actions processed in hardware
  – MAC header rewriting in hardware
  – More than 100K flows (PF5240)

Model: Pronto 3240 or 3290 with Pica8 or Indigo firmware
Virtualization: 1 OF instance per switch
Notes:
  – No legacy protocols (like VLAN and STP)
  – Most actions processed in hardware
  – MAC header rewriting in hardware
Controller Vendors

• Nicira's NOX: open-source (GPL); C++ and Python; researcher-friendly
• Nicira's ONIX: closed-source; datacenter networks
• SNAC: open-source (GPL); code based on NOX 0.4; enterprise networks; C++, Python and JavaScript; currently used by campuses
• Stanford's Beacon: open-source; researcher-friendly; Java-based
• BigSwitch controller: closed-source; based on Beacon; enterprise networks
• Maestro (from Rice Univ.): open-source; Java-based
• NEC's Helios: open-source; written in C and Ruby
• NEC UNIVERGE PFC: closed-source; based on Helios
Growing Community
• Vendors and start-ups; providers and business units (note: level of interest varies)

Industry commitment
• Big players formed the Open Networking Foundation (ONF) to promote a new approach to networking called Software-Defined Networking (SDN).
  http://www.opennetworkingfoundation.org/
Software-Defined Networking (SDN)
Current Internet: closed to innovations in the infrastructure
[Figure: many closed boxes, each running apps on a vendor operating system over specialized packet forwarding hardware.]
The “Software-Defined Networking” approach to open it
[Figure: the same boxes, but apps now run on a shared Network Operating System that controls all the specialized packet forwarding hardware.]
The “Software-Defined Network”
1. Open interface to hardware
2. At least one good network operating system (extensible, possibly open-source)
3. Well-defined open API for applications
[Figure: apps on top of a Network Operating System, which controls simple packet forwarding hardware through an open interface.]
Virtualizing OpenFlow
[Figure: by analogy with computer virtualization, where Windows, Linux and MacOS run above a virtualization layer on x86, multiple controllers (e.g. NOX as a network OS) and their apps run above a virtualization or “slicing” layer on OpenFlow.]
Trend
• Computer industry: an open interface to hardware (x86), many operating systems (or many versions), apps on top.
• Network industry (with SDN): an open interface to simple packet forwarding hardware, a virtualization or “slicing” layer, and multiple network operating systems hosting apps in isolated “slices”.
Switch-Based Virtualization
Exists for NEC and HP switches, but is not flexible enough.
[Figure: normal L2/L3 processing carries the production VLANs; each research VLAN gets its own flow table and its own controller.]
FlowVisor-Based Virtualization
[Figure: multiple controllers (Craig's, Aaron's, Heidi's) speak the OpenFlow protocol to FlowVisor, which enforces policy control and speaks OpenFlow to the switches. Topology discovery is per slice.]

[Figure: slices can also be defined by traffic type, e.g. a broadcast/multicast slice (dl_dst=FFFFFFFFFFFF) and an HTTP load-balancer slice (tp_src=80 or tp_dst=80). Separation is not only by VLANs, but by any L1-L4 pattern.]
FlowSpace: Maps Packets to Slices

Use case: Aaron's IP
• A new layer-3 protocol
• Replaces IP
• Defined by a new EtherType

  Switch port  MAC src  MAC dst  Eth type  VLAN ID  IP src  IP dst  IP proto  TCP sport  TCP dport
  *            *        *        AaIP      *        *       *       *         *          *
  *            *        *        !AaIP     *        *       *       *         *          *
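A flowspace mapping like the one above can be sketched as an ordered list of patterns, first match wins. This is a toy, not FlowVisor's actual implementation; the EtherType value chosen for “Aaron's IP” (0x88B5, an IEEE local-experimental value) and all names are illustrative assumptions.

```python
# Sketch of FlowVisor-style slicing: map a packet header to a slice by
# matching L1-L4 patterns; first matching pattern wins.
AaIP = 0x88B5   # hypothetical EtherType for "Aaron's IP" (local experimental)

slices = [
    ("aaron",      lambda h: h.get("eth_type") == AaIP),
    ("http",       lambda h: h.get("tp_src") == 80 or h.get("tp_dst") == 80),
    ("production", lambda h: True),            # everything else
]

def slice_of(header):
    for name, pattern in slices:
        if pattern(header):
            return name

print(slice_of({"eth_type": AaIP}))                  # aaron
print(slice_of({"eth_type": 0x0800, "tp_dst": 80}))  # http
print(slice_of({"eth_type": 0x0800}))                # production
```

The `!AaIP` row of the table is implicit here: anything that fails the first pattern falls through to the remaining slices.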
Slicing traffic
All network traffic is divided into production traffic and research traffic, with the research traffic split into Experiment #1, Experiment #2, …, Experiment N.
Ways to use slicing
• Slice by feature
• Slice by user
Examples: home-grown protocols, downloading a new feature, versioning.
SDN Interlude: Returning to fundamentals
Software-Defined Networking (SDN)
• Not just an idle academic daydream
  – Tapped into a strong market need
• One of those rare cases where we “know the future”
  – Still in development, but consensus on inevitability
• Much more to the story than the “OpenFlow” trade-rag hype
  – A revolutionary paradigm shift in the network control plane

Scott Shenker's talk, “The Future of Networking, and the Past of Protocols”: http://www.youtube.com/watch?v=WVs7Pc99S7w
The Role of Abstractions in Networking
Weaving Together Three Themes
• Networking is currently built on weak foundations
  – Lack of fundamental abstractions
• The network control plane needs three abstractions
  – Leads to SDN v1 and v2
• Key abstractions solve other architecture problems
  – Two simple abstractions make architectures evolvable
Weak Intellectual Foundations
• OS courses teach fundamental principles
  – Mutual exclusion and other synchronization primitives
  – Files, file systems, threads, and other building blocks
• Networking courses teach a big bag of protocols
  – No formal principles, just vague design guidelines
Source: Scott Shenker
Weak Practical Foundations
• Computation and storage have been virtualized
  – Creating a more flexible and manageable infrastructure
• Networks are still notoriously hard to manage
  – Network administrators are a large share of sysadmin staff
Source: Scott Shenker
Weak Evolutionary Foundations
• Ongoing innovation in systems software
  – New languages, operating systems, etc.
• Networks are stuck in the past
  – Routing algorithms change very slowly
  – Network management is extremely primitive
Source: Scott Shenker
Why Are Networking Foundations Weak?
• Networks used to be simple
  – Basic Ethernet/IP: straightforward, easy to manage
• New control requirements have led to complexity
  – ACLs, VLANs, TE, middleboxes, DPI, …
• The infrastructure still works…
  – Only because of our great ability to master complexity
• The ability to master complexity is both blessing and curse
Source: Scott Shenker
How Programming Made the Transition
• Machine languages: no abstractions
  – Had to deal with low-level details
• Higher-level languages: OS and other abstractions
  – File system, virtual memory, abstract data types, …
• Modern languages: even more abstractions
  – Object orientation, garbage collection, …
Abstractions simplify programming: easier to write, maintain, and reason about programs.
Source: Scott Shenker
Why Are Abstractions/Interfaces Useful?
• Interfaces are instantiations of abstractions
• An interface shields a program's implementation details
  – Allows freedom of implementation on both sides
  – Which leads to modular program structure
• Barbara Liskov, “Power of Abstractions” talk: “Modularity based on abstraction is the way things get done”
  http://www.infoq.com/presentations/liskov-power-of-abstraction
• So, what role do abstractions play in networking?
Source: Scott Shenker
Layers are Main Network Abstractions
• Layers provide nice data-plane service abstractions
  – IP's best-effort delivery
  – TCP's reliable byte-stream
• Aside: good abstractions, terrible interfaces
  – Don't sufficiently hide implementation details
• Main point: no control-plane abstractions
  – No sophisticated management/control building blocks
Source: Scott Shenker
No Abstractions = Increased Complexity
• Each control requirement leads to a new mechanism
  – TRILL, LISP, etc.
• We are really good at designing mechanisms
  – So we never tried to make life easier for ourselves
  – And so networks continue to grow more complex
• But this is an unwise course:
  – Mastering complexity cannot be our only focus
  – Because it helps in the short term, but harms in the long term
  – We must shift our attention from mastering complexity to extracting simplicity…
Source: Scott Shenker
How Do We Build Control Plane?
• We define a new protocol from scratch
  – E.g., routing
• Or we reconfigure an existing mechanism
  – E.g., traffic engineering
• Or we leave it for manual operator configuration
  – E.g., access control, middleboxes
Source: Scott Shenker
What Are The Design Constraints?
• Operate within the confines of a given datapath
  – Must live with the capabilities of IP
• Operate without communication guarantees
  – A distributed system with arbitrary delays and drops
• Compute the configuration of each physical device
  – Switch, router, middlebox
  – FIB, ACLs, etc.
This is insanity!
Source: Scott Shenker
Programming Analogy
• What if programmers had to:
  – Specify where each bit was stored
  – Explicitly deal with all internal communication errors
  – Within a programming language with limited expressibility
• Programmers would redefine the problem by:
  – Defining higher-level abstractions for memory
  – Building on reliable communication primitives
  – Using a more general language
• Abstractions divide the problem into tractable pieces
  – Why aren't we doing this for network control?
Source: Scott Shenker
Central Question
• What abstractions can simplify the control plane?
  – i.e., how do we separate the problems?
• Not about designing new mechanisms!
  – We have all the mechanisms we need
• Extracting simplicity vs. mastering complexity
  – Separating problems vs. solving problems
  – Defining abstractions vs. designing mechanisms
Source: Scott Shenker
Abstractions Must Separate 3 Problems
• Constrained forwarding model
• Distributed state
• Detailed configuration
Source: Scott Shenker
Forwarding Abstraction
• The control plane needs a flexible forwarding model
  – With behavior specified by the control program
• This abstracts away the forwarding hardware
  – Crucial for evolving beyond vendor-specific solutions
• Flexibility and vendor-neutrality are both valuable
  – But one is economic, the other architectural
• Possibilities:
  – General x86 program, MPLS, OpenFlow, …
  – Different flexibility/performance/deployment tradeoffs
Source: Scott Shenker
State Distribution Abstraction
• The control program should not have to deal with the vagaries of distributed state
  – Complicated, source of many errors
  – The abstraction should hide state dissemination/collection
• Proposed abstraction: global network view
  – Don't worry about how to get that view (next slide)
• The control program operates on the network view
  – Input: global network view (graph)
  – Output: configuration of each network device
Source: Scott Shenker
Network Operating System (NOS)
• NOS: a distributed system that creates a network view
  – Runs on servers (controllers) in the network
• Communicates with forwarding elements in the network
  – Gets state information from forwarding elements
  – Communicates control directives to forwarding elements, using the forwarding abstraction
• The control program operates on the view of the network
  – The control program is not a distributed system
• NOS plus forwarding abstraction = SDN (v1)
Source: Scott Shenker
[Figure: in current networks, control runs as distributed protocols on every box. In Software-Defined Networking (v1), a control program computes on the global network view provided by the Network Operating System, which controls the devices via the forwarding interface.]
Source: Scott Shenker
Major Change in Paradigm
• No longer designing distributed control protocols
  – Now just defining a centralized control function
• Control program: Configuration = Function(view)
• Why is this an advance?
  – Much easier to write, verify, maintain, reason about, …
• The NOS handles all state dissemination/collection
  – The abstraction breaks this off as a tractable piece
  – Serves as a fundamental building block for control
Source: Scott Shenker
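“Configuration = Function(view)” can be made concrete with a toy sketch: the control program is a pure function from a global network graph to per-switch forwarding state. The topology, names, and the use of BFS shortest paths are all illustrative assumptions, not part of Shenker's talk.

```python
# Sketch of Configuration = Function(view): from a global graph to
# next-hop entries toward a destination, via BFS from the destination.
from collections import deque

view = {"s1": ["s2", "s3"], "s2": ["s1", "s3"], "s3": ["s1", "s2"]}  # graph

def configuration(view, dst):
    """Return {switch: next_hop} entries toward dst."""
    config, frontier = {dst: None}, deque([dst])
    while frontier:
        node = frontier.popleft()
        for nbr in view[node]:
            if nbr not in config:
                config[nbr] = node        # next hop of nbr toward dst
                frontier.append(nbr)
    return config

print(configuration(view, "s3"))  # {'s3': None, 's1': 's3', 's2': 's3'}
```

The point is not the routing algorithm: it is that the function sees a consistent graph and never worries about how that view was collected; that is the NOS's job.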
What Role Does OpenFlow Play?
• NOS conveys configuration of global network view to actual physical devices
• But nothing in this picture limits what the meaning of “configuration” is
• OpenFlow is one possible definition of how to model the configuration of a physical device
• Is it the right one? Absolutely not.
  – Crucial for vendor-independence
  – But not the right abstraction (yet)
Source: Scott Shenker
Are We Done Yet?
• This approach requires control program (or operator) to configure each individual network device
• This is much more complicated than it should be!
• The NOS eases implementation of functionality
  – But does not help specification of functionality!
• We need a specification abstraction
Source: Scott Shenker
Specification Abstraction
• Give the control program an abstract view of the network
  – Where the abstract view is a function of the global view
• Then the control program is an abstract mapping
  – Abstract configuration = Function(abstract view)
• Model just enough detail to specify goals
  – Don't provide information needed to implement goals
Source: Scott Shenker
One Simple Example: Access Control
[Figure: the access-control policy is expressed on an abstract network view, a simplified function of the full network view.]
Source: Scott Shenker
More Detailed Model
[Figure: packet in → L2 table → L3 table → ACL table → packet out]
A service model can generally be described by a table pipeline.
Source: Scott Shenker
Implementing the Specification Abstraction
• Given: an abstract table pipeline (L2 → L3 → ACL)
• Need: pipeline operations distributed over a network of physical switches
• The Network Hypervisor (“Nypervisor”) compiles the abstract pipeline into physical configuration
Source: Scott Shenker
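The abstract table pipeline can be sketched as a chain of stage functions. This is purely illustrative, not the Nypervisor: the stage implementations are invented, and a real compiler would map the stages onto tables in physical switches rather than run them in software.

```python
# Toy sketch of a service model as a table pipeline (L2 -> L3 -> ACL):
# each stage transforms the packet, and any stage may drop it.
def l2(pkt):   # hypothetical L2 stage: pick an output port by MAC
    return dict(pkt, out_port=1 if pkt["mac_dst"] == "aa" else 2)

def l3(pkt):   # hypothetical L3 stage: decrement TTL
    return dict(pkt, ttl=pkt["ttl"] - 1)

def acl(pkt):  # hypothetical ACL stage: drop SSH traffic
    return None if pkt.get("tp_dst") == 22 else pkt

PIPELINE = [l2, l3, acl]

def process(pkt):
    for stage in PIPELINE:
        pkt = stage(pkt)
        if pkt is None:        # a stage dropped the packet
            return None
    return pkt

print(process({"mac_dst": "aa", "ttl": 64}))               # forwarded, ttl 63
print(process({"mac_dst": "aa", "ttl": 64, "tp_dst": 22})) # None (dropped)
```

The specification abstraction says the operator reasons only about this pipeline; where each stage physically runs is the Nypervisor's concern.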
Two Examples
• Scale-out router:
  – Abstract view is a single router
  – Physical network is a collection of interconnected switches
  – The Nypervisor allows routers to “scale out, not up”
• Multi-tenant networks:
  – Each tenant has control over their “private” network
  – The Nypervisor compiles all of these individual control requests into a single physical configuration
  – “Network virtualization”
Source: Scott Shenker
Moving from SDN v1 to SDN v2
[Figure: the control program operates on an abstract network view provided by the Nypervisor, which maps it onto the global network view maintained by the Network Operating System.]
Source: Scott Shenker
Clean Separation of Concerns
• The network control program specifies the behavior it wants on the abstract network view
  – Control program: maps behavior to the abstract view
  – The abstract model should be chosen so that this is easy
• The Nypervisor maps the controls expressed on the abstract view into a configuration of the global view
  – Nypervisor: abstract model to global view
  – Models such as a single crossbar, a single “slice”, …
• The NOS distributes the configuration to the physical switches
  – NOS: global view to physical switches
Source: Scott Shenker
Three Basic Network Interfaces
• Forwarding interface: abstract forwarding model
  – Shields higher layers from the forwarding hardware
• Distribution interface: global network view
  – Shields higher layers from state dissemination/collection
• Specification interface: abstract network view
  – Shields the control program from details of the physical network
Source: Scott Shenker
Control Plane Research Agenda
• Create three modular, tractable pieces
  – Nypervisor
  – Network Operating System
  – Design and implement the forwarding model
• Build control programs over abstract models
  – Identify appropriate models for various applications
  – Virtualization, access control, TE, routing, …
Source: Scott Shenker
Implementations vs Abstractions
• SDN is an instantiation of fundamental abstractions
  – Don't get distracted by the mechanisms
• The abstractions were needed to separate concerns
  – Network challenges are now modular and tractable
• The abstractions are fundamental; SDN implementations are ephemeral
  – OpenFlow/NOX/etc. are particular implementations
Source: Scott Shenker
Future of Networking, Past of Protocols
• The future of networking lies in cleaner abstractions
  – Not in defining complicated distributed protocols
  – SDN is only the beginning of the needed abstractions
• It took OS researchers years to find their abstractions
  – First they made it work, then they made it simple
• Networks work, but they are definitely not simple
  – It is now networking's turn to make this transition
• Our task: make networking a mature discipline
  – By extracting simplicity from the sea of complexity…
Source: Scott Shenker
"SDN has won the war of words, the real battle over customer adoption is just beginning...."
- Scott Shenker
OpenFlow Deployments
Current Trials and Deployments
68 trials/deployments in 13 countries

• Brazil: University of Campinas; Federal University of Rio de Janeiro; Federal University of Amazonas; Foundation Center of R&D in Telecommunications
• Canada: University of Toronto
• Germany: T-Labs Berlin; Leibniz Universität Hannover
• France: ENS Lyon/INRIA
• India: VNIT; Mahindra Satyam
• Italy: Politecnico di Torino
• United Kingdom: University College London; Lancaster University; University of Essex
• Taiwan: National Center for High-Performance Computing; Chunghwa Telecom Co.
• Japan: NEC; JGN Plus; NICT; University of Tokyo; Tokyo Institute of Technology; Kyushu Institute of Technology; NTT Network Innovation Laboratories; KDDI R&D Laboratories; unnamed university
• South Korea: KOREN; Seoul National University; Gwangju Institute of Science & Tech.; Pohang University of Science & Tech.; Korea Institute of Science & Tech.; ETRI; Chungnam National University; Kyung Hee University
• Spain: University of Granada
• Switzerland: CERN
3 New EU Projects:OFELIA, SPARC, CHANGE
EU Project Participants
Kansas State
GENI OpenFlow deployment (2010)
10 institutions and 2 national research backbones
National LambdaRail
GENI Network Evolution
FIBRE: FI testbeds between Brazil and Europe
• A project between 9 partners from Brazil (6 from GIGA), 5 from Europe (4 from OFELIA and OneLab) and 1 from Australia (from OneLab), proposing the design, implementation and validation of a shared Future Internet sliceable/programmable research facility, supporting joint experimentation by European and Brazilian researchers.
• The objectives include:
  – the development and operation of a new experimental facility in Brazil
  – the development and operation of a FI facility in Europe based on enhancements and the federation of the existing OFELIA and OneLab infrastructures
  – the federation of the Brazilian and European experimental facilities, to support the provisioning of slices using resources from both testbeds
Project at a glance
• What? Main goal: create a common space between the EU and Brazil for Future Internet (FI) experimental research into network infrastructure and distributed applications, by building and operating a federated EU-Brazil Future Internet experimental facility.
• Who? 15 partners: Nextworks, UEssex, i2CAT, UTH, UPMC, NICTA, UNIFACS, UFPA, UFG, UFSCar, CPqD, USP, RNP, UFF, UFRJ
• How? Requested ~1.1M€ from the EC and R$ 2.6 from CNPq in funding to perform 6 activities:
  – WP1: Project management
  – WP2, WP3: Building and operating the Brazilian (WP2) and European (WP3) facilities
  – WP4: Federation of the FIBRE-EU and FIBRE-BR facilities
  – WP5: Joint pilot experiments to showcase the potential of the federated FIBRE facility
  – WP6: Dissemination and collaboration
The FIBRE consortium in Brazil
• The map shows the 9 participating Brazilian sites (islands) and the expected topology of their interconnecting private L2 network
• Over the GIGA, Kyatera and RNP experimental networks
FIBRE site in Brazil
[Figure: each site combines FIBRE common resources (OpenFlow-enabled switch, NetFPGA servers, compute servers) with site-specific resources such as optical testbeds, wireless testbeds (Wi-Fi APs, WiMAX, Orbit nodes) and other internal testbeds (e.g. Emulab), interconnected to the other FIBRE partners over the RNP Ipê, GIGA and Kyatera networks.]
Technology pilot examples
• High-definition content delivery
• Seamless mobility testbed
OpenFlow Deployment at Stanford
• Switches (23)
• APs (50)
• WiMax (1)
Research Examples
Example 1: Load-balancing as a network primitive
Nikhil Handigol, Mario Flajslik, Srini Seetharaman

Load balancing is just smart routing.
Nikhil's experiment: <500 lines of code, running as a feature on the Network OS (NOX).
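The idea that load balancing reduces to smart routing can be sketched in a few lines. This is not Nikhil's actual code: the server pool, helper names, and action tuples are invented, and a real controller would also track flow removals. It only shows a controller picking the least-loaded replica and installing a rewrite-and-forward rule per new flow.

```python
# Sketch: load balancing as routing. For each new client flow, pick the
# least-loaded replica and install a rewrite+forward rule for that flow.
servers = {"10.0.0.1": 0, "10.0.0.2": 0}     # replica -> active flows

def new_flow(client_ip):
    target = min(servers, key=servers.get)   # least-loaded replica
    servers[target] += 1
    # The installed OpenFlow-style rule: match the client's HTTP flow,
    # rewrite the destination IP to the replica, output on its port.
    return {"match":   {"ip_src": client_ip, "tp_dst": 80},
            "actions": [("set_ip_dst", target),
                        ("output", "port_to_" + target)]}

rules = [new_flow("192.168.0.%d" % i) for i in range(4)]
print(servers)   # {'10.0.0.1': 2, '10.0.0.2': 2}
```

Because the balancer is just flow entries, the same mechanism that routes packets also spreads the load; no middlebox is needed.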
Example 2: Energy Management in Data Centers
Brandon Heller

ElasticTree
Goal: reduce energy usage in data center networks.
Approach:
1. Reroute traffic
2. Shut off links and switches to reduce power
[Figure: a “DC Manager” asks the Network OS to “pick paths”; unused links and switches are then powered off.]
[Brandon Heller, NSDI 2010]
Example 3: Unified control plane for packet and circuit networks
Saurav Das

Converging packet and circuit networks
Goal: a common control plane for “Layer 3” and “Layer 1” networks.
Approach: add OpenFlow to all switches (IP routers, TDM switches, WDM switches); use a common network OS.
[Saurav Das and Yiannis Yiakoumis, Supercomputing 2009 demo]
[Saurav Das, OFC 2010]
Example 4: Using all the wireless capacity around us
KK Yap, Masayoshi Kobayashi, Yiannis Yiakoumis, TY Huang

KK's experiment: <250 lines of code, running as a feature on the Network OS (NOX) and spanning WiFi and WiMax.
Evolving the IP routing landscape with Software-Defined Networking technology
Providing IP Routing & Forwarding Services in OpenFlow networks
http://go.cpqd.com.br/routeflow
Partners: Unicamp, UNIRIO, UFSCar, UFES, UFPA, …
Indiana University, Stanford, Deutsche Telekom, NTT MCL, …
Motivation
The current “mainframe” model of networking equipment:
• Costly systems based on proprietary hardware and closed software
• Lack of programmability limits customization and in-house innovation
• Inefficient and costly network solutions
• Ossified Internet
[Figure: a router as a closed box: control logic (RIP, BGP, OSPF, IS-IS) over a proprietary IPC/API, OS and driver down to the hardware, managed via Telnet, SSH, Web, SNMP.]
RouteFlow: Main Goal
Open commodity routing solutions:
  open-source routing protocol stacks (e.g. Quagga)
+ commercial networking hardware with an open API (i.e. OpenFlow)
= line-rate performance, cost-efficiency, and flexibility!
Innovation in the control plane and network services.
[Figure: the control logic (RIP, BGP, OSPF) runs on a separate controller and drives the switch hardware through a standard API (i.e. OpenFlow), with management via a controller API.]
RouteFlow: Plain Solution
[Figure: each physical switch (RouteFlow A, RouteFlow B) is mirrored by a VM (RouteFlow A VM, RouteFlow B VM) connected to the Internet.]

RouteFlow: Control Traffic (IGP)
[Figure: OSPF HELLO messages between RouteFlow A and B are relayed up to their VMs, where the routing stacks process them.]

RouteFlow: Control Traffic (BGP)
[Figure: BGP messages are handled the same way; RouteFlow A's VM can act as an eBGP-speaking Routing Control Platform.]
RouteFlow: Data Traffic
[Figure: a packet with destination IP 192.168.1.1 and source MAC 00:00:00:00:00:01 arrives at RouteFlow A. The installed flow entry (192.168.1.0/24 → port 2) forwards it toward RouteFlow B after:
1. Changing the source MAC to A's MAC address, 00:00:00:00:00:02
2. Decrementing the TTL
3. Outputting on port 2]
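The per-hop rewrite above can be sketched as a single OpenFlow-style flow-mod. The action names and dictionary layout are illustrative, not RouteFlow's wire format; the match and rewrite values are the ones from the figure.

```python
# Sketch of the per-hop rule RouteFlow installs for the data path:
# match the destination prefix, rewrite source MAC, decrement TTL, output.
import ipaddress

flow_mod = {
    "match":   {"ip_dst": "192.168.1.0/24"},
    "actions": [("set_mac_src", "00:00:00:00:00:02"),
                ("dec_ttl",),
                ("output", 2)],
}

def apply_flow(flow_mod, pkt):
    net = ipaddress.ip_network(flow_mod["match"]["ip_dst"])
    if ipaddress.ip_address(pkt["ip_dst"]) not in net:
        return None                        # no match: packet goes to slow path
    for action in flow_mod["actions"]:
        if action[0] == "set_mac_src":
            pkt["mac_src"] = action[1]
        elif action[0] == "dec_ttl":
            pkt["ttl"] -= 1
        elif action[0] == "output":
            pkt["out_port"] = action[1]
    return pkt

print(apply_flow(flow_mod,
                 {"ip_dst": "192.168.1.1",
                  "mac_src": "00:00:00:00:00:01", "ttl": 64}))
```

Packets that miss the entry (e.g. ARP, ping, routing protocol messages) take the slow path to the routing stack, matching the evaluation results later in the deck.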
RouteFlow: Architecture
Key features:
• Separation of data and control planes
• Loosely coupled architecture: three RF components, 1. Controller, 2. Server, 3. Slave(s)
• Unmodified routing protocol stacks
  – Routing protocol messages can be sent 'down' or kept in the virtual environment
• Portable to multiple controllers
  – The RF-Controller acts as a “proxy” app
• Multiple virtualization technologies
• Multi-vendor data plane hardware
Advancing the Use Cases and Modes of Operation
• From logical routers to flexible virtual networks
Prototype evaluation
Lab setup:
• NOX OpenFlow controller + RF-Server
• Quagga routing engine
• 5 x NetFPGA “routers”
Results:
• Interoperability with traditional networking gear
• Route convergence time is dominated by the protocol time-out configuration (e.g., 4 x HELLO in OSPF), not by slow-path operations
• Compared to commercial routers (Cisco, Extreme, Broadcom-based), larger latency only for those packets that need to go to the slow path, i.e. that lack a FIB entry or need processing by the OS networking/routing stack (e.g., ARP, ping, routing protocol messages)
Field Trial at the University of Indiana: network setup
• 1 physical OpenFlow switch (Pronto 3290)
• 4 virtual routers out of the physical OpenFlow switch
• 10 Gig and 1 Gig connections
• 2 BGP connections to external networks (Juniper routers in Chicago and Indianapolis)
• Remote controller
• New user interface

Field Trial at the University of Indiana: user interface
Demonstrations
Demonstration at Supercomputing 11
Routing configuration at your fingertips
RouteFlow community
303 days since project launch:
• 10k visits from across the world (from 1,000 different cities): 43% new visits; ~25% from Brazil, ~25% from the USA, ~25% from Europe, ~20% from Asia
• 100s of downloads
• 10s of known users from academia and industry
Key Partners and Contributors
Conclusions
• A worldwide pioneer virtual routing solution for OpenFlow networks!
• RouteFlow proposes a commodity routing architecture that combines the line-rate performance of commercial hardware with the flexibility of open-source routing stacks (remotely) running on PCs
• Allows for a flexible resource association between IP routing protocols and a programmable physical substrate:
  – Multiple use cases around virtualized IP routing services
  – IP routing protocol optimization
  – A migration path from traditional IP deployments to SDN
• Hundreds of users from around the world
• Yet to prove itself in large-scale real scenarios
  – To run on the NDDI/OS3E OpenFlow testbed
questions?
Thank you!