Data Plane and VNF Acceleration Mini-Summit
TRANSCRIPT
© 2017 Open-NFP 1
Data Plane and VNF Acceleration Mini-Summit
OPNFV Summit, Beijing
June 12, 2017
Mini-Summit Agenda
9:00 – 9:15 AM: Welcome and introduction
9:15 – 10:00 AM: NFVi acceleration models and offload architectures for OVS and VPP data planes. The role of SmartNICs for NFVi acceleration will be discussed.
10:00 – 10:45 AM: VNF acceleration models and offload architectures for NFV. The role of SmartNICs for VNF acceleration will be discussed.
10:45 – 11:00 AM: Break
11:00 – 11:30 AM: An open API model for enabling NFVi and VNF acceleration, including the use of sandbox functions using P4 and/or C programming languages. Proposal to collaborate on the definition of an open API within the context of an OPNFV project.
11:30 AM – 12:15 PM: Proposal for developing and testing VNF acceleration utilizing resources and support from the Open-NFP community. A brief introduction to available resources including SmartNIC data plane development tools, and a proposal for a VNF acceleration-focused Pharos community lab.
12:15 – 12:30 PM: Summary, call to action, adjourn
About Open-NFP www.open-nfp.org
Support and grow reusable code in accelerating data plane network functions processing using SmartNICs and COTS servers
Reduce/eliminate the cost and technology barriers in this space
▪ Technologies: P4, SDN, OpenFlow, Open vSwitch (OVS) offload, eBPF
▪ Tools: hardware, development tools, software, cloud access
▪ Community: website (www.open-nfp.org) with learning and training materials, an active Google group (https://groups.google.com/d/forum/open-nfp), open project descriptions, and a code repository
▪ Learning/education/research support: regular seminar series/webinars, conferences, tutorials, support for research proposals to the NSF and state agencies, and presentation of applied reusable research
Contact Us
Web: www.open-nfp.org
Google Group: https://groups.google.com/d/forum/open-nfp
Email: [email protected]
Facebook: www.facebook.com/opennfp
Twitter: https://twitter.com/Open_NFP
YouTube: Open-NFP
SlideShare: http://www.slideshare.net/Open-NFP
Become a member today!
Session 1: NFVi Acceleration Models and Offload Architectures
Agenda
Problem statement
NFVi performance bottlenecks
Acceleration and offload of NFVi data planes
Towards an open data plane acceleration framework
Data plane extensions via programming
NFV Challenges
Server CPU resources are not getting cheaper, but network bandwidth continues to grow.

[Chart: global network traffic in exabytes per month, 2015–2021 (voice, smartphone data, and data from mobile PCs, tablets, and mobile routers), contrasted with transistors per dollar]
Networking Consuming Server CPU Cycles
SmartNIC solutions return expensive CPU cores to revenue-generating VMs.

[Chart: percent of CPU cycles in a server, 1990s–2020s: cycles used up for networking grow with the increasing adoption of SDN and NFV, shrinking the cycles available for revenue-generating applications]

• Cost per cycle is increasing with the slowdown or demise of Moore's law
• Output per server dwindles as VMs and apps are starved for CPU cycles and I/O bandwidth
Data Plane in the NFV Infrastructure
[Diagram: the ETSI NFV reference architecture: VNFs (each with an EMS) running over the NFVi (virtualization and virtual switching on compute, storage, and network hardware), alongside OSS/BSS and NFV Management & Orchestration (Orchestrator, VNF Manager, Virtualized Infrastructure Manager)]

This is the focus area for NFVi data plane acceleration.
Agenda
Problem statement
NFVi performance bottlenecks
Acceleration and offload of NFVi data planes
Towards an open data plane acceleration framework
Data plane extensions via programming
A word about Bottlenecks…
• In Fluid Dynamics, the throughput of water flowing through a pipe is limited by the narrowest diameter opening
• In Networking, the throughput of packets flowing through a network is limited by the slowest processing step
What about the bottlenecks in the NFV Cloud?
Network Switch / NIC
• Designed in hardware to operate at 64-byte wire rate
• Rarely a bottleneck unless there is a problem

Virtual Switch
• Software implementation limits packet rates
• Aggregation point for all VNFs; the typical bottleneck

VNFs
• Software implementation limits packet rates
• Can be starved by the vSwitch
Example of the Server Bottleneck…
[Diagram: a 24-core server compute node with a basic 40GbE NIC, 20 VMs, and OVS (vSW) on four host cores]

• Assuming 20 VMs consuming 1 Mpps each
• Assuming 4 cores for OVS at 1 Mpps per core
• 60 Mpps available from the network (40GbE)
• 20 Mpps wanted by the VMs, but only 4 Mpps of vSwitch throughput
• The bottleneck will cause the VMs to be under-utilized by 80%
You can use more cores for OVS but…
[Diagram: a 24-core server compute node with a basic 40GbE NIC, 12 VMs, and OVS (vSW) on twelve host cores]

• Assuming 12 VMs consuming 1 Mpps each
• Assuming 12 cores for OVS at 1 Mpps per core
• 60 Mpps available from the network (40GbE)
• 12 Mpps wanted by the VMs; 12 Mpps of vSwitch throughput
• No bottleneck, but 50% of the server is wasted doing networking
Eliminating the Bottleneck with a SmartNIC
[Diagram: a 24-core server compute node with a SmartNIC running the OVS offload functions at 40GbE, and 23 VMs]

• Assuming 23 VMs consuming 1 Mpps each
• Assuming 1 core for the OVS control plane
• 60 Mpps available from the network (40GbE)
• 23 Mpps wanted by the VMs; 23 Mpps of vSwitch throughput on the SmartNIC
• No bottleneck; 95% of the server is available for VNFs or revenue-generating VMs
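The arithmetic behind these three sizing examples fits in a few lines. The sketch below is illustrative only (the function name and the "SmartNIC removes the CPU-bound vSwitch limit" simplification are our assumptions, not part of any toolkit): delivered throughput is simply the minimum of the NIC rate, the vSwitch capacity, and the VMs' aggregate demand.

```python
def effective_mpps(vm_count, vm_mpps, ovs_cores, ovs_mpps_per_core,
                   nic_mpps, smartnic=False):
    """Packet rate actually delivered to the VMs on one compute node.

    Illustrative model: each OVS core forwards ovs_mpps_per_core, and a
    SmartNIC-offloaded datapath is assumed not to be CPU-bound at all.
    """
    wanted = vm_count * vm_mpps
    vswitch_limit = float("inf") if smartnic else ovs_cores * ovs_mpps_per_core
    return min(nic_mpps, vswitch_limit, wanted)

# 20 VMs, 4 OVS cores: the 4 Mpps vSwitch is the bottleneck
print(effective_mpps(20, 1, 4, 1, 60))                  # 4
# 12 VMs, 12 OVS cores: no bottleneck, but half the cores do networking
print(effective_mpps(12, 1, 12, 1, 60))                 # 12
# SmartNIC datapath, 1 control-plane core, 23 VMs
print(effective_mpps(23, 1, 1, 1, 60, smartnic=True))   # 23
```

The same min() over three limits reproduces each slide's headline number, which is the point of the comparison: only the SmartNIC case lets VM demand, not the vSwitch, set the ceiling.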
Agenda
Problem statement
NFVi performance bottlenecks
Acceleration and offload of NFVi data planes
Towards an open data plane acceleration framework
Data plane extensions via programming
Data Plane Offload Models
[Diagram: two offload models.
• Partial functionality offload: functions a, b run on the NIC; functions c, d and the control agent remain on the server, serving the VM.
• Full flow offload: functions a, b, c, d plus a flow cache run on the NIC; cache hits are handled entirely on the NIC, while misses go to the full function set a, b, c, d on the server.]
Embedded Offload Models
[Diagram: two embedded offload models.
• Full data plane offload: functions A, B, C, D, E, F run on the NIC; the control agent remains on the server.
• Full data plane offload and full control plane offload: functions A, B, C, D, E, F and the control agent all run on the NIC, serving the VMs directly.]
Implementation of OVS Offload to a SmartNIC
[Diagram: the Open vSwitch subsystem with the OVS agent (OpenFlow) and OVS CLI (Nova, Neutron) in x86 userspace; the OVS kernel datapath (match/action, execute action) in the x86 kernel; virtual machines attached via netdev/DPDK over SR-IOV/virtio VFs; and an offloaded OVS datapath (match/action, execute action, e.g. entunnel, deliver to VM, send to port) on the SmartNIC across PCIe, with misses passed up to the kernel datapath]

1. Configuration via control protocol or CLI
2. OVS userspace agent populates the kernel cache
3. Offload datapath: copy match tables, sync stats
Example of Significant Offload Gains
Throughput with a single server CPU core:

• 50X efficiency gain vs. kernel OVS
• 20X efficiency gain vs. usermode OVS

[Chart: million packets per second (0–30) for kernel OVS, usermode OVS, and SmartNIC OVS, for L2 forwarding and VXLAN tunneling]
Agenda
Problem statement
NFVi performance bottlenecks
Acceleration and offload of NFVi data planes
Towards an open data plane acceleration framework
Data plane extensions via programming
Overview
• Multiple SmartNIC vendors are working with the Linux community (e.g. Red Hat) to define a unified/common API for data plane offload
• The approach is based on the TC Flower implementation to support offload
• Currently working towards a preview-level implementation for OVS
• The longer-term goal is to incrementally reach feature parity with OVS
• Other data planes can be offloaded with the same mechanism; for example, additional data planes supported by the OPNFV Danube software stack
OVS Offloaded via TC Flower
[Diagram: ovs-vswitchd in user space; TC Flower and the kernel datapath in the kernel; the driver below them; and the SmartNIC in hardware]
What is TC Flower
• A packet classifier for the Linux kernel traffic control (TC) subsystem
• Allows matching on a key with a wide number of packet and metadata fields
• TC actions may be used to provide match-action behavior similar to OVS
How to Participate
• Feedback on the feature set: OVS has many features, OVS-TC is starting with few, and plans for adding new features to OVS-TC are not fixed
• Discussion and code review on the mailing lists
  • Kernel: [email protected]
  • Open vSwitch: [email protected]
User-Space Offload Hooks
• Offload hooks are present at the netdev (datapath vport) layer, called by the DPIF (datapath flow) layer
• Translates DPIF flows to TC filters (flows) that use the Flower classifier
• Communicates with the kernel TC implementation using netlink
• TC filters may be flagged as software-only or hardware-only; the default is software, plus hardware if available
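As a rough illustration of the filters these hooks generate, the sketch below assembles a tc(8) command line. The helper function is hypothetical (ours, not part of OVS or iproute2), but the flower match keys (dst_ip, ip_proto, dst_port) and the skip_sw / skip_hw placement flags are real tc flower syntax.

```python
def flower_filter(dev, match, action, placement=None):
    """Build a tc(8) command string for a Flower filter (illustrative).

    placement: None (software, plus hardware if available), "skip_sw"
    (hardware-only), or "skip_hw" (software-only).
    """
    cmd = ["tc", "filter", "add", "dev", dev, "ingress",
           "protocol", "ip", "flower"]
    for field, value in match.items():   # flower match keys, in order
        cmd += [field, str(value)]
    if placement:
        cmd.append(placement)
    cmd += ["action", action]
    return " ".join(cmd)

rule = flower_filter("eth0",
                     {"dst_ip": "10.0.0.1", "ip_proto": "tcp", "dst_port": 80},
                     "drop", placement="skip_sw")
print(rule)
# tc filter add dev eth0 ingress protocol ip flower dst_ip 10.0.0.1 ip_proto tcp dst_port 80 skip_sw action drop
```

A hardware-only (skip_sw) filter like this is what a full offload of a DPIF flow would look like; leaving placement unset mirrors the default described above.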
Agenda
Problem statement
NFVi performance bottlenecks
Acceleration and offload of NFVi data planes
Towards an open data plane acceleration framework
Data plane extensions via programming
Extending OVS using P4/C Plugins
[Diagram: the Open vSwitch subsystem (OVS agent and CLI in x86 userspace, OVS kernel datapath, VMs over SR-IOV/virtio VFs, offloaded datapath on the SmartNIC), extended with a P4/C datapath extension or plugin (DP Ext.)]

1. Configuration via control protocol or CLI
2. OVS userspace agent populates the kernel cache
3. Offload datapath: copy match tables, sync stats
4. Datapath extension software runs in a "sandbox" (in P4, C, and/or eBPF)
P4 Based OVS Datapath
[Diagram: the Open vSwitch subsystem with the OVS agent (OpenFlow) and CLI (Nova, Neutron) on the host; VMs attached via netdev/DPDK over SR-IOV/virtio VFs; and a P4-generated P4/OVS datapath (P4/OVS matching, execute P4/OVS action) on the SmartNIC, with fallback to the host datapath]
P4 Based “Other” Datapath
[Diagram: host code with a control agent (Nova, Neutron); VMs attached via netdev/DPDK over SR-IOV/virtio VFs; and a P4 datapath (P4 matching, execute action) on the SmartNIC, with fallback to the host]

• Fallback: protocol(s) to be defined (could become a callable API)
• Other open issues:
  - Downloading programs via OpenStack vs. via other systems
  - Scheduling VMs to run on nodes with acceleration hardware (Nova)
Offload Concept for VPP Acceleration
[Diagram: the VPP subsystem with a VPP agent (e.g. Honeycomb) and CLI (Nova, Neutron) in x86 userspace; VMs attached via netdev/DPDK; the VPP datapath with XVIO to virtio endpoints on the host; and an offloaded VPP datapath on the SmartNIC with SR-IOV VFs, ingress offload (e.g. L2/L3/detunnel) and egress offload (e.g. entunnel / L3 forward / meter + QoS)]

1. Configuration via controller (YANG)
2. Policy download: preemptive
3. Policy download: preemptive / on demand (control message system); packet + metadata passed between datapaths
4. Indirect connection (inline partial offload)
5. Direct connection to host/guest endpoints
Session 2: VNF Acceleration Requirements and Models
Agenda
• Representative use cases and VNFs
  • Virtualized mobile network: virtual Evolved Packet Core (vEPC)
  • Virtualized security: firewall / IPS / UTM
  • Monitoring for auto-scaling
  • (More: ETSI NFV use cases, OPNFV scenarios, OVS DPDK use cases…)
• Common requirements
  • Packet operations: matching + actions, tunnels
  • Stateful operations
  • Service chain related operations
  • Other processing
• Acceleration models
  • Location: inline / lookaside / fastpath
  • Management: exposed to cloud management vs. handled by the VNF
Virtualized Mobile Networks Use Case
Presented by Dejan Leskaroski - Affirmed Networks
AFFIRMED NETWORKS © 2016 Affirmed Networks, Inc. All rights reserved. 35
Affirmed Networks VNF Portfolio
[Diagram: Affirmed's Mobile Content Cloud (MCC) with Affirmed VNFs (SGW, PGW, GGSN, TWAG, ePDG with IPsec, Packet Analyzer, Event Stream Analyzer, Active Flow Analytics, vProbe, automation, and a captive portal in the GiLAN) and 3rd-party PNFs (eNB, AP/AC, MME, SGSN, HSS, PCRF, OCS, OFCS, AAA), connected over 3GPP interfaces (S1-U, S1-MME, S11, S6a, S6b, S2a, S2b, Gx, Gy, Ga/Gz/Rf, Gp, Gi/SGi, GRE), carrying control plane and data plane traffic from the UE to the Internet]
High-Level Virtualization Architecture
[Diagram: a server with physical ports (ETH1, ETH2, … ETHn) connected to a top-of-rack (TOR) switch; a hypervisor hosting Affirmed VM #1 and VM #2, each a Linux guest OS with user and kernel space, vNICs, and vCPUs; and a vSwitch connecting the physical and virtual sides]

• Data Plane Development Kit (DPDK) used for high-performance network I/O, via SR-IOV or a DPDK-accelerated vSwitch
• Hypervisor: KVM, VMware
• vSwitch: OVS, DPDK-OVS, SDN vRouter, VMware vSD
• SmartNIC offload/acceleration
U-Plane Flows with Smart NIC
[Diagram: U-plane flows between MCC-C (control) and MCC-U (user plane) on x86 servers, compared for a standard NFVi and a standard NFVi + SmartNIC]
NFVi and VNF Flow Offload using Smart NICs

• Current support – NFVi offload
  • OVS and vRouter flow offload/acceleration (flow-tail offload)
  • The software vSwitch agent still sees all flows (first packet(s))
  • SDN controller decisions enforced in the SmartNIC
  • Standard capability and APIs architected into OVS for this purpose
• Future support – VNF offload (simultaneous with OVS flow offload)
  • vEPC flow handling requirements
  • Required synchronization between VNF and SmartNIC (API integration)
  • Encapsulate and decapsulate various tunnels (e.g., GTP-U, GRE, L2TP/PPP)
  • MAC header rewrite (L3 nexthop)
  • Hierarchical QoS (e.g., QoS flow, AMBR)
  • Online charging and granular charging (e.g., rating groups)
  • Subscriber firewall (e.g., rate limiting, flow counting)
  • Flow-based analytics
Firewall / IPS / UTM Use Case
• Firewall
  • Rule-based policies: access control list (stateless)
  • Stateful operation: permit forward and reverse traffic for a TCP connection / UDP session; handle related flows (FTP control + data, SIP signaling + media, etc.)
• Intrusion prevention (e.g. Snort)
  • Scans network traffic for intrusions, e.g. malware / exploits
  • Implies reassembly of various protocols (defragmentation, TCP streams, HTTP/SMTP…)
  • Rules (signatures) have a 5-tuple / n-tuple packet header match component and a string / regexp match component => the n-tuple part could be offloaded (match offload)
  • Flow inspection depth => offload the "rest of the microflow" once no further intrusions are possible
• Hybrids: UTM, next-generation firewall
  • Identifies applications and users (by content, not just headers)
  • Incorporates anti-virus, web filtering, anti-spam
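The "offload the rest of the microflow" idea can be sketched in a few lines. This is an illustration only: the function names, the fixed inspection depth, and the decision to offload after a packet count (rather than after protocol-aware analysis) are all simplifying assumptions.

```python
# Inspect the first packets of each microflow in software; once no further
# intrusion is possible (modeled here as a fixed depth), install an
# exact-match entry so the remaining packets bypass the IPS entirely.
INSPECT_DEPTH = 10          # illustrative threshold

offloaded = set()           # exact-match entries pushed to the fast path
depth = {}                  # packets inspected so far, per 5-tuple

def handle_packet(flow, intrusion_found):
    if flow in offloaded:
        return "fast_path"            # handled without software inspection
    if intrusion_found:
        return "drop"
    depth[flow] = depth.get(flow, 0) + 1
    if depth[flow] >= INSPECT_DEPTH:
        offloaded.add(flow)           # offload the rest of the microflow
    return "inspect"

f = ("10.0.0.1", "10.0.0.2", 6, 1234, 80)
results = [handle_packet(f, False) for _ in range(12)]
print(results[:10].count("inspect"), results[10:])   # 10 ['fast_path', 'fast_path']
```

In a real IPS the offload decision would come from the signature engine (no remaining rule can still match this flow), but the control flow is the same: inspect, then install a match-offload entry.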
Monitoring for App Auto-Scaling Use Case
• (Grey area: NFVi vs. VNF)
• Monitor dataplane traffic
  • By routing traffic through the monitoring VNF
  • By interfacing with the NFVi datapath
• Detect cases requiring action
  • Bandwidth above / below threshold
  • Excessive latency (perhaps determined using INT)
• Take action on VNFs + traffic
  • Add / delete / reconfigure application VNF instances
  • Reconfigure tables (e.g. load balancing) accordingly
• Related: monitoring for DoS attack mitigation
  • Reconfigure tables to drop or rate-limit traffic
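The detect-then-act loop reduces to a small policy function. A minimal sketch, with illustrative thresholds and action names of our own choosing:

```python
# Compare a measured rate against thresholds and decide how to adjust the
# set of application VNF instances; in practice the "add"/"remove" actions
# would also reconfigure load-balancing tables.
def scaling_action(measured_mbps, high=800, low=200):
    if measured_mbps > high:
        return "add_instance"
    if measured_mbps < low:
        return "remove_instance"
    return "no_change"

print(scaling_action(950))   # add_instance
print(scaling_action(100))   # remove_instance
print(scaling_action(500))   # no_change
```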
Agenda
• Representative use cases and VNFs
  • Virtualized mobile network: virtual Evolved Packet Core (vEPC)
  • Virtualized security: firewall / IPS / UTM
  • Monitoring for auto-scaling
  • (More: ETSI NFV use cases, OPNFV scenarios, OVS DPDK use cases…)
• Common requirements
  • Packet operations: matching + actions, tunnels
  • Stateful operations
  • Service chain related operations
  • Other processing
• Acceleration models
  • Location: inline / lookaside / fastpath
  • Management: exposed to cloud management vs. handled by the VNF
VNF Requirements: Packet Operations
• Matching
  • Wildcard match of an n-tuple to apply policy (security ACL, QoS, etc.) based on categories
  • Exact match of an n-tuple to apply policies to the (rest of a) microflow
  • Fields consulted: pre-defined protocols (L2/L3/L4…) vs. program-defined protocols (P4)
• Actions
  • Forwarding: send to a physical or virtual interface (allow), drop, load balance
  • QoS: rate limit, schedule, shape
  • Header modification: L3 forwarding, add / remove VLAN tags or MPLS labels, NAT
  • Fields affected: pre-defined vs. program-defined
• Encapsulation
  • Tunnel termination / origination
  • Tunnel conversion: combined termination and re-origination
  • Affixing headers or options for telemetry
• Related: NFVi's use of these operations, e.g. to build the overlay network or service chains
VNF Requirements: Stateful Operations
• Security / QoS
  • Learn microflows (connections / sessions); time out actively or passively
  • Track protocols, e.g. TCP (flags / sequence numbers), HTTP (header / payload, chunks)
  • Apply policy: allow / drop, rate limit / schedule / shape
• Monitoring
  • Learn at the required granularity: microflows (connections) vs. larger flows (IP / MAC, etc.)
  • Keep statistics (throughput / latency) or other aggregated information
  • Mirror or log: only headers / all traffic
• State synchronization
  • Peer: create a distributed state table to facilitate load sharing / high availability
  • External: enable related state (in the accelerated VNF or other VNFs) to be stored
VNF Requirements: Service Chain / Other
• Service chain
  • Unsubscribe: remove my VNF (myself) or other VNFs from the chain
  • Scaling and failover: adjust the chain as more / fewer / different instances become available
• Other (non-network-related)
  • Examples: crypto, compression, transcoding, storage related
  • Refer to the ETSI NFV specifications for details
• Other (VNF-specific)
  • A VNF could need to offload other classification / processing
  • Potentially stateful, with potentially complex algorithms; not possible to predict in general
  • Therefore accommodate them with a flexible mechanism: a plugin in the datapath, running in the software datapath or in the accelerated datapath
  • Concerns: portability (language / runtime environment), deployment, scaling
Agenda
• Representative use cases and VNFs
  • Virtualized mobile network: virtual Evolved Packet Core (vEPC)
  • Virtualized security: firewall / IPS / UTM
  • Monitoring for auto-scaling
  • (More: ETSI NFV use cases, OPNFV scenarios, OVS DPDK use cases…)
• Common requirements
  • Packet operations: matching + actions, tunnels
  • Stateful operations
  • Service chain related operations
  • Other processing
• Acceleration models
  • Location: inline / lookaside / fastpath
  • Management: exposed to cloud management vs. handled by the VNF
ETSI NFV IFA 002 Pass Through Model
[Screenshot from ETSI GS NFV-IFA 002 V2.1.1 (2016-03): Figure 4.1.1-1, Pass-through model, and Figure 4.1.1-2, Abstracted model, with the accompanying text:

"An implementation independent VNF is a VNF that makes no assumption whatsoever on the underlying NFVI. Its VNFD does not contain any accelerator specific information elements. Should a new hardware become available on the market, the operator will update its NFVI to allow the VNF to make use of the new hardware. An implementation independent VNF is thus based on implementation independent VNF software that makes use of a functional abstraction of an accelerator supported by an adaptation layer in the NFVI. This model is close to the abstracted model defined in ETSI GS NFV-INF 003 [i.1], clause 7.2.2.

NOTE 1: An implementation independent executable VNFC allows for VNF deployment in both hypervisor based and non-hypervisor based environments. The latter configuration is outside the scope of the present document.

Live migration of such hardware independent accelerated VNF may be possible if any associated acceleration state information required can also be migrated.

NOTE 2: Live migration from a compute node with accelerator to a compute node without accelerator or with a different accelerator is allowed (in particular to cope with emergency response situations)."]

Takeaways for the pass-through model:
• The VNF must be updated before new hardware (yellow) can be used
• Drivers are baked into the VNFs
ETSI NFV IFA 002 Abstracted Model
[Same screenshot from ETSI GS NFV-IFA 002 V2.1.1 (2016-03), Figure 4.1.1-2: Abstracted model]

Takeaways for the abstracted model:
• The operator can update the NFVi to add support for new hardware; no VNF changes are needed
• Hardware drivers and software implementations live in the NFV infrastructure, e.g. the host OS or hypervisor
VNF Acceleration Models
[Diagram: three acceleration models, each showing VNF #1 over the NFVi datapath with accelerator Accel. #1]

• Fastpath / slowpath (fallback): e.g. 99% of traffic handled in the accelerator and 1% in the VNF; applied for certain traffic categories, or for part (e.g. the rest) of a flow
• Inline: on ingress and/or egress
• Lookaside: packet related, stream related, other
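A back-of-envelope calculation shows why the fastpath split pays off even with a residual slow path. The 20x per-packet cost ratio between software and accelerator below is an assumed figure for illustration, not a measured one.

```python
# Average per-packet cost for a fastpath/slowpath split (arbitrary units):
# a weighted mix of the accelerated cost and the software-fallback cost.
def avg_cost(fast_fraction, fast_cost, slow_cost):
    return fast_fraction * fast_cost + (1 - fast_fraction) * slow_cost

# 99% of packets on the accelerator (cost 1), 1% in software (cost 20):
mixed = avg_cost(0.99, 1.0, 20.0)
print(round(mixed, 2))          # 1.19
print(round(20.0 / mixed, 1))   # 16.8  (speedup vs. all-software)
```

The 1% slow path drags the ideal 20x gain down to roughly 16.8x, which is why bounding the miss/fallback rate matters as much as accelerating the hit path.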
Control Considerations
• Key axiom for SDN: logically centralized control
• One (redundant) SDN controller is typically the master for a given datapath
• Need provisions for selective delegation to grant other entities direct access
• Need a "meta-policy" to govern which VNFs can control / access which resources

[Diagram: the SDN controller with exclusive access to TBL1 and TBL2, and VNFa with exclusive access, via the API, to TBLa in the NFVi datapath]
Representation of Tables and Plugins
• VNF acceleration table representation
  • Reserve a table per VNF (or VNFC, or instance thereof)
  • Reserve a logical switch per VNF (or VNFC, or instance thereof)
  • Not recommended: co-locate in a shared table
• VNF acceleration plugin representation
  • Visible to the controller vs. not exposed (e.g. bump in the wire)
  • Represent as a logical interface on the switch
  • Represent as a custom action

[Diagram: VNFa controlling, via the API, its table TBLa and a plugin attached to TBL1 in the NFVi datapath; packets, control, events, and data flow between them]
VNF Acceleration Summary
• Use cases => distilled requirements
  • Table driven: match/action tables, state tables
  • Plugin
• Acceleration platforms
  • SmartNIC in the server
  • External physical switch
  • Software datapath (fallback when hardware is not available)
• Acceleration models
  • Inline / lookaside / fastpath
  • Coordination with SDN control: delegation / meta-policy
Session 3: Proposed VNF Acceleration API
Agenda
• Survey of existing initiatives and APIs
• Proposed offload models
• Proposed mapping to hardware / software platforms
• Next steps: invitation to collaborate
Existing APIs and Initiatives
• Initiatives and projects
  • ETSI NFV: requirements, management aspects
  • OPNFV: DPACC project
  • ONF / ON.Lab: OpenFlow + SDN evolution, models, PIF intermediate representation
  • P4: language, architecture, API
  • Open-NFP: network functions processing, e.g. for SmartNICs
  • NFVi dataplanes: OVS, Contrail vRouter, fd.io VPP, ODP/OFP, DPDK, Linux/iovisor eBPF
• APIs
  • DPDK rte_flow: match/action
  • Linux kernel TC Flower: match/action (used by OVS)
  • OpenFlow: match/action, enhanced by the BEBA project / OpenState proposal for statefulness
  • P4 run-time APIs (generic vs. generated)
OPNFV - DPACC Project
Presented by Tapio Tallgren - Nokia
[Frameworks referenced: OpenDataplane.org, DPDK, VPP, VPP on ODP]

DPACC Virtio Inline Accelerator (2017-06-11)

[Diagram: a VNF application in the guest uses a g-API over a virtio inline user frontend driver (g-accel-driver) and virtio-net user frontend drivers (g-net-driver); on the host, a software accelerator sits behind a SAL in user space and a hardware accelerator behind virtio-inline (sio in the kernel, hio); a virtio inline backend (vhost-user) and vhost-net backends connect to the guest via vRings; physical ports carry incoming, accelerated, and non-accelerated traffic, with commands / re-injected packets and status / exception packets exchanged between guest and host]
Agenda
• Survey of existing initiatives and APIs
• Proposed offload models
• Proposed mapping to hardware / software platforms
• Next steps: invitation to collaborate
VNF Offload - Table Model
• A VNF can delegate processing to the NFVi datapath
  • Match-action: e.g. match dest IP 10.*, dest port 80; action drop
  • Match only: e.g. match VLAN 1, port 22; mark 123, send to VNFa
  • Stateful: e.g. match 5-tuple (a,b,c,d,e); action forward to port 1, count packets, timeout 30s
• Table entries are populated by VNFs or the SDN controller

[Diagram: VNFa and VNFb each using the API to install match table entries (TBLa) and state table entries (TBLb) alongside TBL1, TBL2, and a connection table in the NFVi datapath; packets flow through the tables]
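The table model above can be sketched as a tiny match-action store. The class and method names here are illustrative only, not part of the proposed API; the point is the shape of the interface: install entries with an optional timeout and counters, look packets up, and punt misses back to the software VNF.

```python
import time

class AccelTable:
    """Sketch of a delegated match-action table (illustrative names)."""
    def __init__(self):
        self.entries = []

    def add(self, match, action, timeout_s=None):
        # Stateful entries carry a packet counter and an idle timeout.
        self.entries.append({"match": match, "action": action,
                             "timeout_s": timeout_s, "packets": 0,
                             "last_hit": time.monotonic()})

    def lookup(self, pkt):
        for e in self.entries:
            if all(pkt.get(k) == v for k, v in e["match"].items()):
                e["packets"] += 1
                e["last_hit"] = time.monotonic()
                return e["action"]
        return "punt_to_vnf"   # miss: fall back to the software VNF

t = AccelTable()
t.add({"dst_port": 80}, "drop")                           # match-action entry
t.add({"vlan": 1, "port": 22}, "mark 123, send to VNFa")  # match-only entry
print(t.lookup({"dst_port": 80}))    # drop
print(t.lookup({"dst_port": 53}))    # punt_to_vnf
```

Whether entries come from the VNF itself or from the SDN controller only changes who calls add(); the datapath-side behavior is the same.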
Table Model - Mapping to Hardware
[Diagram: three placements of the same datapath and tables (TBL1, TBL2, TBLa, TBLb), accessed by VNFa and VNFb (in VMa and VMb) through the API:
• Datapath in a SmartNIC in the server (e.g. an accelerated vSwitch)
• Datapath in a vSwitch in the server with a traditional NIC
• Datapath in a physical ToR switch]
VNF Offload - Table + Plugin Model
• Embedded network function (eNF) = a plugin in the NFVi datapath, a "satellite" to a VNFC (or part thereof)
• Each eNF offloads and accelerates the corresponding VNF
• The API enables the VNF to control the eNF and receive events (e.g. security / telemetry)
• Traffic can flow between eNFs and VNFs
• Match tables can direct traffic to eNFs
• Table entries are populated by VNFs or the SDN controller
• Table entries can have associated state

[Diagram: VNFa and VNFb controlling, via APIs, their plugins eNFa and eNFb and tables TBLa and TBL1 in the NFVi datapath; packets, control, events, and data flow between them]
Plugin Model - Mapping to Datapath + Hardware
[Diagram: three placements of the datapath with plugins eNFa and eNFb and tables TBL1 and TBLa, accessed by VNFa and VNFb through APIs:
• Software vSwitch in the server (e.g. OVS, vRouter, VPP): kernel eNFs in eBPF / P4, usermode eNFs in C / P4
• SmartNIC in the server (e.g. OVS, vRouter, VPP): eNFs in P4 / C / eBPF
• Physical ToR switch: eNFs in P4]
Agenda
• Survey of existing initiatives and APIs
• Proposed offload models
• Proposed mapping to hardware / software platforms
• Next steps: invitation to collaborate
VNF Acceleration API Evolution
• Requirements
  • The API must be vendor, OS, platform, and hardware independent… but we need to start by implementing specific instances, then expand
  • Leverage existing APIs, e.g. the P4 runtime API, the DPDK flow API, the Linux TC Flower API
  • Consider existing specifications / initiatives, e.g. ETSI NFV, OPNFV DPACC
• Proposing an activity to further evolve VNF acceleration APIs and mechanisms
  • Primarily hosted at OPNFV; the name of the project is TBD
  • Coordinate with other relevant groups, e.g. ETSI NFV, P4, fd.io VPP, OVS…
  • Collaborators: VNF and NFVi vendors (hardware/software), end users (operators)
Potential First Prototype (to discuss)

• Selected platforms
  • Linux guest and host OS, KVM hypervisor, x86_64 server, C language binding
  • P4-capable SmartNIC, e.g. Netronome Agilio
  • P4-capable software datapath, e.g. fd.io VPP with a P4-to-VPP translator, and/or P4 for Linux kernel eBPF
  • P4-capable physical switch, e.g. Barefoot Tofino powered devices
• Selected use cases (applications / VNFs)
  • Firewall
  • vEPC
• Features
  • An API exposing match-action and stateful table operations in the chosen datapath
  • A P4 plugin implementing additional capabilities for the use case
  • Note: other hardware and software platforms, use cases, and languages for plugins to follow
Session 4: Developing and Testing VNF Acceleration with the Open-NFP Community
Agenda
Problem Statement
Open-NFP Community Application Testing
Towards an Open Data Plane Acceleration Test Framework
NFV Current PoC / Testing Process
Each PoC / test cycle requires different tools, different criteria, and a different test environment.
NFVi Test Requirements with SmartNIC
• Need test environment that allows for SmartNIC-based testing of NFVi • Test Cases change based on functionality enabled by SmartNIC
[Diagram: OpenStack Nova and Neutron (OVS ML2) with an OpenDaylight controller (ODL) managing a compute node; ovs-vswitchd and the ovs-db server (OVSDB, OpenFlow) on the host; OVS datapaths (match tables, actions, tunnels, deliver to host, update statistics) both in software and offloaded to the SmartNIC; VMs attached via SR-IOV and virtio]
VNF Offload – Table Model Requirements
• Need the ability to offload the VNF API to a SmartNIC
• Need a test environment that allows for SmartNIC-based testing of the VNF API
VNF Offload – Plugin Model Requirements
• Need the ability to offload the VNF API to a SmartNIC
• Need tools to develop and deploy eNFs in P4 / C / eBPF
• Need a test environment that allows for SmartNIC-based testing of VNFs
Summary Of Test Requirements
We need a community-based approach to address all components and requirements.

• The NFVi should have the ability to offload the NFVi data path to a SmartNIC
• VNFs should have the ability to offload the VNF API to a SmartNIC
• Developers need tools to develop and deploy eNFs on SmartNICs
• End users need standard test cases to exercise SmartNIC functionality
• The test framework should allow testing of NFVi, VNF, eNF, and the NFV system with a SmartNIC
Agenda
Problem Statement
Open-NFP Community Application Testing
Towards an Open Data Plane Acceleration Test Framework
Open-NFP Infrastructure

• Growing community for data plane developers: www.open-nfp.org
  • ~40 contributing organizations
  • SDK, working code examples, tutorials, webinars, etc.
  • Wide range of P4 / C projects on server-based networking
  • Ability to engage with over 200 community members through the open-nfp Google Group
  • Annual DXDD event for the developer community
• Hardware
  • Netronome Agilio platform: 10 GbE – 100 GbE SmartNICs
  • Remote access with the cloud lab infrastructure
• Software
  • OVS acceleration software, P4 / C SDK
  • Lots of example code at https://github.com/open-nfpsw

Infrastructure with hardware, software, and community support for data plane apps
P4 / C SDK Tool Chain
[Diagram: the SDK-6 integrated development environment (IDE): an editor with language highlighting and breakpoint support; a P4 frontend compiler producing Simple_Router.IR from Simple_Router.p4, followed by a P4 backend compiler; a C compiler for Packet_filter.c with C scripting; plus linker, assembler (NFAS), loader, simulator, and debugger]
P4 / C Application Work Flow
[Diagram: P4 / C application work flow, with Sandbox C code built by the native code compiler]
NFVi Test Case Examples
[Diagram: two NFVi test configurations: an external test/traffic generator (e.g. IXIA, Spirent) driving the SmartNIC over SR-IOV, and a VM-to-VM virtio configuration with test/traffic generators in the VMs]
VNF Test Case Examples
Agenda
Problem Statement
Open-NFP Community Application Testing
Towards Open Data Plane Acceleration Testing
Towards Open Data Plane Acceleration Testing
• Cloud-based test framework access
  • Enable the OPNFV community to access the Open-NFP remote-access cloud lab infrastructure
  • Working with tier-1 cloud providers to set up an additional community VNF acceleration test lab
• Define an acceleration POD
  • Working with tier-1 server vendors to define acceleration POD(s)
• Define VNFs, VNF models, and test tools
  • Open source: iPerf, DPDK pktgen, MoonGen, vsperf, trafgen, etc.
  • Commercial: Spirent, IXIA; both VM-to-VM and physical-to-VM test models
• Define a repeatable process
  • Leverage Pharos, Lab-as-a-Service, and OPNFV infrastructure to define a repeatable process

Join us to define, refine, and execute test cases.
Enable TCO Analysis of Acceleration Options

[Chart: increasing TCO benefits with fewer CPU cores per VM app: 1X at 8 cores/VM, 3X at 4 cores/VM, 6X at 1 core/VM; low VM workloads (IT apps), medium VM workloads (vCDN, vCPE), high VM workloads (vEPC, vRAN)]

TCO improves for:
• VM apps that consume fewer CPU cores and deliver higher PPS
• More services enabled in the data path: policies, per-flow statistics, mirroring for analytics, service chains, per-flow load balancing, etc.
• Increased connections per second

Bring your VNF to offload and/or test.
Summary and Call to Action
Summary
● NFVi data plane acceleration with SmartNICs is moving forward
  • Acceleration and offload are needed to make NFV efficient
  • Many acceleration models and data planes exist
  • Hardware and software independence is key to the success of offload solutions
  • Movement towards a common/unified API is already underway in the Linux community
● VNF acceleration with SmartNICs will bring the next set of gains
  • VNFs become the bottleneck once the NFVi layer is accelerated
  • VNF requirements and models for acceleration are being discussed
  • An open API for VNF acceleration is critical to success
  • Need to agree on the proper framework for the open API definition (OPNFV, ETSI, etc.)
● Important to have a common framework for VNF acceleration, VNF testing, and API compliance (OPNFV Danube, Pharos labs, etc.)
Call To Action
● Get involved in the Linux community and other activities around NFVi offload with common/unified approaches
● Join the discussion and contribute to establishing a framework to define an open API for VNF acceleration (OPNFV project, ETSI NFV, etc.)
● Provide input on VNF interoperability and performance testing and infrastructure requirements: help define test cases, and supply VNFs to be tested

Contact us to provide feedback and discuss how you can get involved:
• VNF acceleration API: [email protected]
• VNF testing: [email protected]

Enjoy the rest of the OPNFV Summit!
Thank You!