Boosting VNFs with VPP
VPP is a Linux Foundation project: FD.io
• High-performance, feature-rich, yet flexible packet processing software
• Initiated by Cisco, backed by Intel, Ericsson and others
Opportunities based on Vector Packet Processing (VPP)
• As a framework for vswitch/vrouter and/or as VNFs
• As a basis for Openstack and Containers virtual networks implementation
Service Function Chaining, Security Groups, Group Based Policy
This talk will describe practical experience from:
• Making multiple applications coexist on a single VPP
• Stretching the envelope from low end to high end implementations
• Specific use cases such as DPI for monitoring purposes
Vector Packet Processing - Overview
• User space application
• VPP is an open-source, full-fledged
switch/router and a framework to build VNFs
• VPP processes arrays of packets through a
packet processing graph
• Plug-in nodes can be added at runtime
• Already deployed in commercial networks*
wiki.fd.io - What is VPP?
[Figure: VPP packet processing graph — dpdk-input feeds ethernet-input, which dispatches to ip4-input, ip6-input, arp-input, llc-input and mpls-ethernet-input; ip6-input leads to ip6-lookup; plugin nodes A and B attach to the graph at runtime.]
• Shipping on several Cisco® products (ASR 9000, CGSE, CSR1000v)
http://www.cisco.com/c/dam/global/cs_cz/assets/ciscoconnect/2014/assets/tech_sdp5_sp_esp_jirichaloupka.pdf
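The graph model above can be sketched in a few lines of Python. This is an illustration of the concept only, not the real VPP C API; the node names mirror the slide's figure, and the packet fields are made up for the example.

```python
# Illustrative sketch of vector packet processing: each graph node
# takes a whole vector of packets and hands batches to its next
# nodes, amortizing per-packet overhead. This mimics the figure
# (ethernet-input fanning out to ip4-input / ip6-input / arp-input);
# it is NOT the real VPP C API.

def ethernet_input(vector):
    # Classify every frame in the vector by EtherType and batch the
    # results per next node, as a VPP graph node would.
    next_nodes = {"ip4-input": [], "ip6-input": [], "arp-input": []}
    for pkt in vector:
        if pkt["ethertype"] == 0x0800:
            next_nodes["ip4-input"].append(pkt)
        elif pkt["ethertype"] == 0x86DD:
            next_nodes["ip6-input"].append(pkt)
        elif pkt["ethertype"] == 0x0806:
            next_nodes["arp-input"].append(pkt)
    return next_nodes

# A "vector" as delivered by an input node such as dpdk-input:
vector = [{"ethertype": 0x0800}, {"ethertype": 0x86DD}, {"ethertype": 0x0800}]
branches = ethernet_input(vector)
```

Processing the whole vector in one call is what keeps instruction caches warm in the real dataplane: the same node code runs over many packets before moving on.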
Why work on VPP?
Carrier-grade performance
• Multi-core scaling: IPv4 forwarding benchmarks
1 core: 9 MPPS in+out; 2 cores: 13.4 MPPS in+out; 4 cores: 20.0 MPPS in+out
- UCS-C240 M3, 3.5 GHz, all memory channels, simple IPv4 forwarding
300 MPPS on 36 cores (IPv4 forwarding + security: 2k whitelist, 1k to 8M routes)
- UCS 4-CPU server with 4x Intel E7-8890v3 (18C, 2.5 GHz)
• Efficient L2 switching and L3 routing with a high number of entries in the corresponding tables
Ref: https://wiki.fd.io/view/VPP/What_is_VPP%3F#Performance_Expectations
and 592_05-25-16-fdio_is_the_future_of_software_dataplanes-v2.pdf (ciscoknowledge.com)
Easy to add new functionality
• Modular design allows adding / overriding functions
Strong Operation and Management focus
• VPP Binary Control API
• Statistics, Logs,…
A serious alternative to OVS!
Proof point: VPP is a strong contender to OVS-DPDK in OPNFV
Opportunity for VNF vendors: use VPP as a framework to accelerate TTM and boost VNF performance
VPP everywhere: Suitable both for high end platforms and low end CPE devices
VPP as a vSwitch
[Figure: two deployment targets — a high-end NFV server where VPP acts as the vSwitch between the NICs (clients/servers side) and several VNFs, some of them VPP-based; and a low-end CPE with a single VPP instance behind one NIC.]
• Polling mode for data acquisition on high-end NFV platforms
• Move to interrupt mode, allowing low power consumption on low-end devices
e.g. ported to ARMv7 and tested on a Raspberry Pi 2 with 256 MB of RAM
VPP everywhere: finally a strong contender for OPNFV and Service Chaining
VPP as a vSwitch / SFF
[Figure: service chain — traffic from the clients' NIC crosses VPP as vSwitch/SFF, is steered through traffic shaping, CGNAT and L7 firewalling VNFs, and exits via the servers' NIC.]
• VPP can be used as a vswitch for Openstack/OPNFV
• VPP for Service Chaining in OPNFV
• Service Function Forwarder (SFF)
• NSH Proxy
• VPP can also be used to build VNFs
• Packet-driven, transparent middle boxes
VNF requirements and Qosmos added value
Many VNF types
• HTTP Proxy, Traffic shaping,
Firewall, …
• Many are Stateful
Cloud Ready
• Management: Netconf, VNFMgr
• Troubleshooting: Tap as a Service, …
High Performance
• 10 to 40 Gb/s NICs
Qosmos added value to VPP:
• Application awareness: Real time traffic
visibility L2-L7
• Shared high performance flow table
• Integration of DPI as a shared resource serving multiple VPP value-add plugins
• Integration within management
environment
OpenStack, OpenDaylight, OPNFV
Hands on VPP experience
VPP port on ARMv7
• Raspberry Pi 2 – no DPDK
• NVIDIA Tegra TK1 – with DPDK
• Upstream work on gerrit.fd.io
Plugin development
• Flow table
• Sticky load balancer
• Configurable Port Span
• Work available on github
Deployment & integration in OpenStack
• Upstream work on gerrit.fd.io
Live demo: Monitoring and troubleshooting
[Figure: plugin graph — dpdk-input and af_packet-input feed a flow table node; flow-aware plugins (load balancer, span, NSH proxy) sit behind it, and packets leave through iface-output towards dpdk-output, af_packet-output or netmap-output.]
Flow table support: VPP as a VNF for stateful LBs
[Figure: Client 1 and Client 2 reach Server A and Server B through a VPP instance holding a flow table (FT).]
• Packet metadata carries the flow-table entry id associated with the packet
• Multiple VPP nodes can share the flow information
• Stickiness is stored in the flow table
• Dev & demo at the last IETF in Berlin
• Code available at http://www.github.com/christophefontaine
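A minimal Python sketch of the stickiness mechanism (illustrative only; the real plugin is VPP C code from the repository above): the first packet of a flow picks a server by hashing its 5-tuple, and the choice is cached in the flow table so every later packet of that flow lands on the same server.

```python
import hashlib

SERVERS = ["Server A", "Server B"]
flow_table = {}  # 5-tuple -> chosen server; this cache is the "stickiness"

def pick_server(five_tuple):
    # five_tuple = (src_ip, dst_ip, src_port, dst_port, proto)
    if five_tuple not in flow_table:
        # First packet of the flow: hash the 5-tuple to spread flows
        # across servers, then remember the decision in the flow table.
        digest = hashlib.sha256(repr(five_tuple).encode()).hexdigest()
        flow_table[five_tuple] = SERVERS[int(digest, 16) % len(SERVERS)]
    return flow_table[five_tuple]

flow = ("10.0.0.1", "192.0.2.10", 12345, 80, "tcp")
first = pick_server(flow)    # server chosen and recorded
second = pick_server(flow)   # same flow -> same server (sticky)
```

Storing the decision in the shared flow table is what lets other VPP nodes see the same per-flow state via the packet's flow-table entry id.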
Monitoring ecosystem
Physical infrastructures are monitored with
• Packet brokers
• Probes and associated external applications (Analytics, BI,..)
With VMs and Containers based applications
• New metrics and better integration required
• Monitor external as well as inter-vm traffic
• OpenStack, Docker hold configuration files
• Integrated event & alerting system with Ceilometer+Gnocchi+Aodh
Automated monitoring is now an achievable target;
providing an open framework to support it should be an objective
Monitoring: Smart SPAN Port
Smart SPAN Port: a configurable VPP node that
forwards packet copies to a DPI probe
Offload support thanks to the flow table
• The DPI engine updates the flow table with the classification
Netmap support
Enabling
• High-performance vprobe
• Application monitoring and troubleshooting
[Figure: client–server traffic crosses a VPP instance with a flow table (FT); filtered packet copies go to an ixEngine-based DPI probe, which sends offload decisions back to the flow table and exports results via syslog.]
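The offload idea can be sketched as follows (a Python illustration under assumed names, not Qosmos or VPP code): the SPAN node mirrors packets of a flow to the probe only while the flow table has no classification for it; once the DPI engine writes its verdict back, further copies are skipped.

```python
flow_table = {}   # flow id -> classification written back by the DPI engine
mirrored = []     # packets actually copied to the probe

def smart_span(flow_id, packet):
    # Mirror only unclassified flows; once a flow is classified it is
    # "offloaded" and no longer copied, saving probe bandwidth. The
    # original packet always continues through the VPP graph unchanged.
    if flow_id not in flow_table:
        mirrored.append((flow_id, packet))
    return packet

smart_span(1, "SYN")        # flow 1 unknown -> copied to the probe
flow_table[1] = "http"      # DPI engine classifies flow 1 (offload)
smart_span(1, "payload")    # flow 1 classified -> copy skipped
smart_span(2, "SYN")        # new flow 2 -> copied
```

The probe thus sees only the head of each flow, which is typically enough for DPI classification, while the bulk of the traffic stays on the fast path.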
Monitor your OpenStack VMs with one click
[Figure: a compute node running VMs alongside VPP + Qosmos plugins and a probe.]
0. SPAN init/config by a local agent (TAP port / Netmap / DPDK ring)
1. OpenStack starts the VMs
2. Smart port mirroring through the UI or CLI
3. The probe feeds an external DB
4. Visualization thanks to an existing engine
Integration may be in OpenStack Horizon or an existing dashboard
Qosmos, Qosmos ixEngine, Qosmos ixMachine and Qosmos DeepFlow are trademarks or registered trademarks in France and other countries.
Other company and product names mentioned herein are the trademarks or registered trademarks of their respective owners. Copyright Qosmos
Non-contractual information. Products and services and their specifications are subject to change without prior notice
© Qosmos