Low latency and high throughput dynamic network infrastructures for high performance datacentre interconnects
Small or medium‐scale focused research project (STREP)
Co‐funded by the European Commission within the Seventh Framework Programme
Project no. 318606
Strategic objective: Future Networks (ICT‐2011.1.1)
Start date of project: November 1st, 2012 (36 months duration)
Deliverable D4.2
The LIGHTNESS network control plane protocol extensions
Due date: 30/04/14
Submission date: 10/06/14
Deliverable leader: UPC
Author list: Fernando Agraz (UPC), Salvatore Spadaro (UPC), Jordi Perelló (UPC), Bingli Guo
(UNIVBRIS), Shuping Peng (UNIVBRIS), Reza Nejabati (UNIVBRIS), Georgios Zervas
(UNIVBRIS), Yan Yan (UNIVBRIS), Yi Shu (UNIVBRIS), Dimitra Simeonidou (UNIVBRIS),
Giacomo Bernini (NXW), Paolo Cruschelli (NXW), Nicola Ciulli (NXW), Wang Miao
(TUE), Nicola Calabretta (TUE).
Dissemination Level
PU: Public
PP: Restricted to other programme participants (including the Commission Services)
RE: Restricted to a group specified by the consortium (including the Commission Services)
CO: Confidential, only for members of the consortium (including the Commission Services)
Abstract The aim of this deliverable is to report the OpenFlow protocol extensions that have been designed to implement the LIGHTNESS southbound interface functionalities. On the basis of the characteristics of the heterogeneous data plane devices, the OpenFlow protocol extensions for: 1) collection of the devices’ attributes and features; 2) configuration and provisioning; 3) collection of statistics, and 4) monitoring purposes are reported and discussed in this document.
Table of Contents
0. Executive Summary
1. LIGHTNESS control plane architecture
2. LIGHTNESS control plane protocols and extensions
2.1. LIGHTNESS data plane devices
2.1.1. Architecture‐on‐Demand (AoD) node
2.1.2. Optical ToR and Network Interface Card (NIC)
2.1.3. Optical Packet Switching (OPS)
2.2. OpenFlow protocol extensions
2.2.1. Collection of attributes
2.2.2. Configuration
2.2.3. Collection of statistics
2.2.4. Network Status Monitoring
3. Conclusions
4. References
5. Acronyms
Figure Summary
Figure 1.1: LIGHTNESS Control Plane architecture
Figure 2.1: AoD Architecture View
Figure 2.2: AoD Functional Description
Figure 2.3: The ToR Interconnection
Figure 2.4: FPGA‐based NIC functional description
Figure 2.5: OPS node functional description
Figure 2.6: NIC configuration example
Table Summary
Table 2.1: Functionalities defined for the southbound interface
Table 2.2: OF protocol extensions to support optical ToR attribute collection
Table 2.3: OF protocol extensions to support NIC attribute collection
Table 2.4: OF protocol extensions to support AoD attribute collection
Table 2.5: OF protocol extensions to support OPS switch attribute collection
Table 2.6: OF protocol extensions to support optical ToR configuration (1)
Table 2.7: OF protocol extensions to support optical ToR configuration (2)
Table 2.8: OF protocol extensions to support NIC configuration
Table 2.9: OF protocol extensions to support AoD configuration
Table 2.10: OF protocol extensions to support OPS switch configuration
Table 2.11: OF protocol extensions to support ToR statistics collection
Table 2.12: Potential OF protocol extensions to support flow statistics collection from the ToR
Table 2.13: Potential OF protocol extensions to support flow statistics collection from the NIC
Table 2.14: OF protocol extensions to support extended OPS switch statistics
Table 2.15: OF protocol extensions to support extended optical ToR monitoring
Table 2.16: OF protocol extensions to support extended NIC monitoring
Table 2.17: OF protocol extensions to support extended AoD monitoring
Table 2.18: OF protocol extensions to support extended OPS switch monitoring
0. Executive Summary
The LIGHTNESS control plane architecture has been designed to provide dynamic and automated procedures
for the setup and monitoring of data centre network (DCN) connectivity services. Moreover, the LIGHTNESS
control plane has been conceived to provide an abstraction of the LIGHTNESS novel optical
interconnected data centre fabric (composed of ToR, NIC, AoD and OPS devices). Its main purpose is to
enable the dynamic programmability of the DCN, keeping it closely tied to the requirements of future data
centre applications and services while, at the same time, allowing the joint optimisation of the overall data
centre resources.
In light of this approach, LIGHTNESS has chosen a control plane architecture that relies on Software Defined
Networking (SDN). Indeed, the SDN control framework fits well with the requirements of
future data centres, as detailed in deliverable D4.1 [del‐d41].
On the other hand, to enable communication between the SDN controller and the optical devices at the
southbound interface, the OpenFlow (OF) protocol has been selected. This choice has been motivated by its flow‐
based switching approach and its capability to execute software/user‐defined flow‐based control functions
from an SDN controller decoupled from the data plane devices.
Increasing effort is being spent by standardization bodies such as the Open Networking Foundation (ONF) to
extend the original capabilities of the OF protocol to manage heterogeneous data plane devices beyond
electronic packet switching nodes. In LIGHTNESS, the OF protocol extensions needed to support the specific
capabilities and features of the optical data plane devices have been defined and are reported in this deliverable.
Finally, following the high‐level LIGHTNESS control plane architecture definition given in deliverable
D4.1 [del‐d41], this document complements D4.3, which focuses on the definition of the functionalities and
procedures of both the northbound and southbound interfaces [del‐d43]. In this way, an overall design of the
functionalities, interfaces, procedures and enabling protocols of the LIGHTNESS control plane is fully achieved.
1. LIGHTNESS control plane architecture
The main goal of LIGHTNESS is to design, implement and experimentally demonstrate an innovative data
centre infrastructure harnessing the power of optics to effectively cope with the ever‐increasing workload
generated by new and emerging data centre applications and services.
From the control plane perspective, the LIGHTNESS architecture overcomes the limitations of current data
centre management and control frameworks, which mainly offer semi‐automated or static procedures to
provide network connectivity to IT services and applications in the data centre [del‐d41]. To do so, the
LIGHTNESS control plane relies on a Software Defined Networking (SDN) based architecture to implement
dynamic and automated procedures for the setup, monitoring, recovery and optimization of data centre
network (DCN) connections. Indeed, the SDN control framework fits well with the requirements of the novel
optical interconnected data centre fabric proposed by LIGHTNESS, where both Optical Circuit Switching (OCS)
and Optical Packet Switching (OPS) technologies are deployed.
Figure 1.1: LIGHTNESS Control Plane architecture
More specifically, the LIGHTNESS control plane is composed of an SDN controller implementing a set of basic
network control and management functionalities (e.g. path/flow computation, topology manager and
network status monitoring manager) and both northbound and southbound interfaces, which are crucial to
meet the essential requirements for data centre services and applications [del‐d41], [del‐d43].
Looking at the control plane interfaces, on the one hand, the northbound interface enables the programmability
of the underlying network devices, triggered by the data centre applications. Additionally, it allows control
plane functions to be requested in the top‐down direction. The detailed functionalities, procedures and reference
standards for the LIGHTNESS northbound interface are discussed in depth in [del‐d43].
On the other hand, the southbound interface enables the communication between the control plane and the
underlying data plane. Such communication enables the configuration of the LIGHTNESS data plane devices
(optical ToR, AoD, NIC and OPS node) [del‐d31] from the control plane/SDN controller. Moreover, through the
southbound interface, the SDN‐based control plane collects the optical devices' attributes, statistics
and network status in the bottom‐up direction. In LIGHTNESS, this communication is performed by means of the
OpenFlow (OF) protocol [of‐proto], which has emerged as the de facto standard for implementing the SDN southbound
interface. However, the OF protocol has to be significantly and properly extended in order to fulfil the
functional requirements of the southbound interface and to control the heterogeneous devices in the data plane.
The key entity of the LIGHTNESS southbound interface is the OpenFlow Agent (OF‐Agent). In brief, an OF‐
Agent operates on top of each optical data plane device and takes responsibility to: 1) translate the OF
protocol messages coming from the control plane into the actual configuration of the device, and 2) notify
the control plane of any incident occurring in, and statistics collected from, the device. The general architecture and
functional design of the agent is depicted in [del‐d43]. Additionally, [del‐d32] provides further details on the
hardware (data plane) dependent part of the OF‐Agent.
In summary, while the functionalities of the southbound interface of the LIGHTNESS control plane and the OF
agents' design approach are elaborated in [del‐d43], this deliverable focuses on the definition of the required
OF protocol extensions.
2. LIGHTNESS control plane protocols and extensions
The LIGHTNESS control plane specification work in WP4 began with a broad analysis and investigation of
candidate control approaches suitable for a hybrid optical data centre environment such as LIGHTNESS. In
particular, the effort was concentrated on a deep analysis and comparison of two control paradigms.
On the one hand, the IETF GMPLS architecture and protocols [rfc‐3945] have been considered, because they
provide control plane procedures for automated provisioning of network connectivity services with functions
for Traffic Engineering (TE), network resource management, and service recovery, combined with the IETF PCE
architectures [rfc‐4655] and protocols [rfc‐5440] for path computation. GMPLS and PCE are designed to
operate over multiple switching technologies: they include standard support to optical technologies and
specific mechanisms for operating multi‐layer, multi‐region and multi‐technology networks. On the other
hand, the SDN specifications defined by the Open Networking Foundation (ONF) [onf] have been investigated as a
control framework for enhanced programmability of network functions and for open, standard, vendor‐ and
technology‐agnostic protocols and interfaces in the data centre [onf‐sdn]. In particular, the OpenFlow
protocol [of‐proto] has been evaluated for the realization of SDN in LIGHTNESS, mainly due to its flow‐based
switching approach and its capability to execute software/user‐defined flow‐based routing, control and
management functions from an SDN controller decoupled from the data plane devices.
As described in detail in deliverable D4.1 [del‐d41], the LIGHTNESS control plane has been
defined following the SDN/OpenFlow approach to fully exploit its programmability and flexibility, combined
with the benefits and novel capabilities of the LIGHTNESS multi‐technology optical data centre flat fabric.
However, IETF PCE concepts are also applied to provide enhanced path computation functions to the
LIGHTNESS control plane for both service computation and provisioning purposes, as detailed in deliverable
D4.3 [del‐d43], by leveraging the PCE and PCE Communication Protocol (PCEP) stateful and active concepts [ID‐
pce‐stateful‐08][ID‐pce‐initiated‐lsp‐00], mainly in the context of inter data centre scenarios.
Therefore, the LIGHTNESS SDN control architecture supports and implements OpenFlow and PCEP as
standardized control plane protocols. However, while OpenFlow is used at the LIGHTNESS SDN southbound
interface for the communications between the SDN controller and the optical data plane devices, and needs
dedicated protocol extensions in support of the specific capabilities and features of these devices, the PCEP
protocol is used in its standard (or under‐standardization) version. Indeed, PCEP is used in the LIGHTNESS
SDN controller for interaction with the standard PCEs orchestrating inter data centre connectivity provisioning
(see D4.3 [del‐d43]), in full compliance with the procedures described in [rfc‐6805] for hierarchical PCE
deployments; no specific protocol extensions are therefore required. The standard PCEP path request and
response messages are enough to support the LIGHTNESS inter data centre path computation scenario, while
the usage of Path‐Keys, as described in [rfc‐5520], makes it possible to hide intra data centre technology details and
specific capabilities at the end‐to‐end path level.
Therefore, the remainder of this section describes the LIGHTNESS data plane devices, including AoD, OPS,
optical ToR and NIC, and OpenFlow protocol extensions that have been defined in support of these devices.
2.1. LIGHTNESS data plane devices
For the sake of clarity, this section gives details about the physical devices that compose the optical
data plane of LIGHTNESS. Hence, the following sections are introduced with the aim of providing a better
understanding of the extensions that need to be applied to the standard OpenFlow protocol in order to make
it compatible with LIGHTNESS.
Following the flow concept proposed in the OpenFlow standardization, an optical flow can be identified by a
flow identifier comprising:
• Port (fibre, core, mode);
• Wavelength or the centre frequency (CF) of the optical carrier;
• Bandwidth associated to the wavelength or the CF;
• Signal type (e.g. optical transport format: sub‐wavelength switching header information, time slot,
bit‐rate, protocol, modulation format, etc.) associated to a specific optical transport and switching
technology;
• Constraints specific to the physical layer (e.g. sensitivity to impairments and power range).
This generic definition allows applying the concept of optical flow to either existing or emerging optical
transport technologies. Moreover, it is in line with the packet domain OpenFlow flow matching.
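As a rough illustration of this flow identifier, the fields above can be collected into a simple record and matched in the spirit of OpenFlow flow matching. The field names, types and matching rule below are illustrative assumptions, not the actual protocol encoding:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class OpticalFlowId:
    """Illustrative optical flow identifier (field names are not normative)."""
    port: int                     # fibre/core/mode identifier
    centre_frequency_thz: float   # wavelength or centre frequency (CF) of the carrier
    bandwidth_ghz: float          # bandwidth associated to the wavelength/CF
    signal_type: str              # e.g. "OCS" or "OPS" transport/switching technology
    constraints: Optional[str] = None  # physical-layer constraints, free-form here

def matches(entry: OpticalFlowId, incoming: OpticalFlowId) -> bool:
    """Exact match of an incoming optical flow against an installed entry,
    analogous to packet-domain OpenFlow flow matching."""
    return (entry.port == incoming.port
            and entry.centre_frequency_thz == incoming.centre_frequency_thz
            and entry.signal_type == incoming.signal_type)

flow = OpticalFlowId(port=3, centre_frequency_thz=193.1,
                     bandwidth_ghz=50.0, signal_type="OCS")
```

The record form makes it easy to see how the same identifier covers both OCS flows (matched on port and CF) and OPS flows (matched on port and signal type).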
2.1.1. Architecture‐on‐Demand (AoD) node
The AoD is composed of a set of different devices connected through a large‐port backplane. This architecture
allows the AoD to provide on‐demand transport capabilities according to the requirements of the
requested connection. Figure 2.1 illustrates a more precise view of the AoD node and its modules. Sets of
AWGs, WSSs and splitters are connected to the large‐port switch backplane. The connectivity between these
devices can be reconfigured, thus providing the so‐called architecture on demand. The AoD is then connected
to the OPS and ToR switches to provide end‐to‐end connectivity services.
The capabilities of the AoD interconnected subsystem/modules are abstracted in the control plane, hiding the
technology‐specific implementation details. In this way, the complexity of the control, configuration and
management tasks is drastically reduced for the SDN controller.
Figure 2.1: AoD Architecture View
In this regard, Figure 2.2 shows the abstracted view of the AoD. In particular, virtual entities with a set of
ports supporting different switching capabilities (such as wavelength, fibre, etc.) are exposed to the control
plane. From a functional perspective, the AoD features are collected by the OF agent and sent to the control
plane (i.e., the SDN controller).
The configuration of the architecture on demand is realized in a two‐step process. First, the controller sends
the cross‐connection requirements associated to the connectivity service, taking into account the abstracted
resources. In the second step, the AoD translates these requirements into the actual device‐specific
configuration; that is, the involved backplane ports, WSSs, AWGs, etc. are configured according to the
connectivity service needs.
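The two-step process can be sketched as follows; the module names and command tuples are purely illustrative assumptions, not the actual AoD interface:

```python
# Step 1 (controller side): an abstract cross-connection request over the
# abstracted resources. Step 2 (AoD side): translate it into device-specific
# backplane and module configuration. Command tuples are hypothetical.

def translate_cross_connect(in_port: int, out_port: int, wavelength_nm=None) -> list:
    """Map an abstract cross-connection onto the backplane ports and
    optical modules involved (illustrative sketch)."""
    commands = []
    # Connect the input fibre to the output fibre through the large-port backplane.
    commands.append(("backplane", "connect", in_port, out_port))
    # If only one wavelength must be switched, insert a WSS to select it.
    if wavelength_nm is not None:
        commands.append(("wss", "select", wavelength_nm))
    return commands
```

A fibre-level service thus needs only the backplane cross-connection, while a wavelength-level service additionally pulls a WSS module into the path.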
Figure 2.2: AoD Functional Description
2.1.2. Optical ToR and Network Interface Card (NIC)
Figure 2.3 depicts the physical connection of the optical devices within the data plane, centred on the ToR
switches. In particular, the connection to the NIC is shown on one side and the connection to the AoD
on the other. The ToR works as an optical switch that supports both fibre and wavelength switching.
Figure 2.3: The ToR Interconnection
The FPGA‐based NIC is built into the server and provides optical transmission capability for the packets received
through the PCIe interface. Conversely, the NIC also processes packets received from the optical
domain (through its optical ports) and forwards them either to the corresponding server applications or to
another optical port for intra‐rack communication. In brief, the NIC is able both to adapt packets to, and to
extract them from, optical circuit or optical packet switching connections.
Figure 2.4: FPGA‐based NIC functional description
In order to enable these capabilities, the SDN controller needs to be able to process the new features of the
NIC, as well as to implement the processes to configure the NIC according to the requirements of new
connectivity services. Figure 2.4 shows the functional description of the configuration of the NIC from the
control plane. In particular, the controller is able to install, update and delete entries in the look‐up‐table (LUT)
of the NIC. It is worth noting that, depending on the connectivity service type (i.e. OCS or OPS), different
information is introduced and used in the LUT. During packet forwarding, the incoming packets are
classified and stored in specific buffers according to the rules installed in the LUT. Afterwards, in the
transmitter, the LUT is consulted again to determine the wavelength (in the OCS case) or the time slot and label (in
the OPS case) to be used for the forwarding. The FPGA‐based NIC communicates with the controller by means of
a 10 Gbps Ethernet interface.
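A minimal sketch of the LUT behaviour described above, with illustrative entry layouts; the actual FPGA table format and field widths are not shown here:

```python
# Hypothetical NIC look-up-table: the controller installs, updates and
# deletes entries; the entry content differs for OCS (wavelength) and
# OPS (label, time slot) services, as in the text.

class NicLut:
    def __init__(self):
        self._entries = {}

    def install(self, flow_id, service, **params):
        """Install or update an entry (controller-driven)."""
        if service == "OCS":
            entry = {"service": "OCS", "wavelength": params["wavelength"]}
        elif service == "OPS":
            entry = {"service": "OPS", "label": params["label"],
                     "time_slot": params.get("time_slot")}
        else:
            raise ValueError("unknown service type")
        self._entries[flow_id] = entry

    def delete(self, flow_id):
        """Remove an entry; silently ignores unknown flows."""
        self._entries.pop(flow_id, None)

    def lookup(self, flow_id):
        """Consulted at classification and again at transmission time."""
        return self._entries.get(flow_id)
```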
2.1.3. Optical Packet Switching (OPS)
The OPS switch, depicted in Figure 2.5, consists of a set of modules, each connected to a cluster
of racks through the AoD. In this way, inter‐cluster connectivity is provided and, thus, low‐latency OPS
flows can be established between the servers. Two components, namely the label processor and the switch
controller, are the fundamental pieces of each OPS module. More specifically, the label processor is
responsible for extracting the label associated to a flow, whilst the switch controller takes
responsibility for forwarding the optical packets to the output port according to their label. A LUT installed on
an FPGA board is used to this end. Thus, during operation, the optical packets received on an input port pass
through the label processor, which extracts the label and passes it to the switch controller. The switch controller,
in turn, forwards the packet to the output port associated to its label by checking the entries stored
in the LUT. More details can be found in [del‐d31], [del‐d32].
Figure 2.5: OPS node functional description
In light of this, the main responsibility to be assumed by the SDN controller is to manage the
entries of the LUT of each module of the OPS node. The extensions needed by the OF protocol in support
of the OPS are therefore devoted to the management of such LUTs, which translates into the installation, modification
and release of OPS‐based data flows.
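The label-driven forwarding described above can be sketched as follows; the packet and LUT representations are illustrative assumptions, not the optical or FPGA formats:

```python
# Sketch of OPS forwarding: the label processor extracts the label, and the
# switch controller maps it to an output port via the LUT installed by the
# SDN controller. Packets are modelled as plain dicts for illustration.

def extract_label(packet: dict) -> int:
    """Label processor: pull the optical label out of the packet."""
    return packet["label"]

def forward(packet: dict, lut: dict) -> int:
    """Switch controller: return the output port for the packet's label,
    or raise if the controller has installed no entry for it."""
    label = extract_label(packet)
    try:
        return lut[label]
    except KeyError:
        raise LookupError(f"no LUT entry for label {label}")
```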
2.2. OpenFlow protocol extensions
OpenFlow is considered the first SDN standard and defines the open communications protocol that enables a
controller to interact with the forwarding layers in the SDN architecture. OpenFlow allows direct access to and
manipulation of the forwarding plane of network devices such as switches and routers, either physical or
virtual (hypervisor‐based, like Open vSwitch). The lack of an open standard interface to the forwarding plane
led standardization efforts to concentrate on the definition of OpenFlow, with the final aim of
overcoming the characterization of today's networking devices as monolithic, closed, mainframe‐like
systems. Indeed, the ONF is in charge of standardizing OpenFlow through technical working groups responsible
for the protocol, configuration, testing, and other activities, also working towards interoperability between
network devices and control software from different vendors. As a result, OpenFlow is being widely adopted
by telco vendors, who have typically implemented it as a basic firmware or software upgrade for their
equipment.
OpenFlow can be considered unique among control plane protocols, as it enables moving network
control out of the networking switches into logically centralized control software. It can be compared to the
instruction set of a CPU: the protocol specifies basic primitives that can be used by an external software
application (e.g., the controller itself or a network application running on top of it) to program the forwarding
plane of network devices, much as the instruction set of a CPU programs a computer system.
The OpenFlow protocol is implemented on both sides of the interface between network infrastructure devices
and the SDN control software, which in LIGHTNESS is defined as the southbound interface. OpenFlow uses the
concept of flows to match network traffic against pre‐defined rules that can be statically or dynamically
programmed by the SDN controller. This enables a network application running on top of the controller to
define and program how traffic should flow through network devices based on parameters such as usage
patterns, applications, and even cloud resources. Since OpenFlow allows the network to be programmed on a
per‐flow basis, an OpenFlow‐based SDN architecture provides extremely granular control, enabling the
network to respond to real‐time changes at the application level.
Therefore, the OF protocol is the key enabler for SDN and is currently the only standardized SDN protocol that
allows direct manipulation of the forwarding plane of network devices. While initially applied to Ethernet‐
based networks, the OpenFlow switching approach can be extended to a much broader set of use cases and
network technologies. Moreover, OpenFlow‐based SDNs can be deployed on existing
networks, which makes it straightforward for operators and providers to progressively introduce OpenFlow‐based
SDN technologies, also in multi‐vendor network environments.
In light of the above, the OpenFlow protocol has been chosen as the most appropriate southbound interface
implementation to be adopted in LIGHTNESS.
As previously mentioned, earlier versions of the OF protocol were focused on the electrical packet
switching domain. To cope with the requirements of optical transport technologies, an addendum considering
SONET/SDH, Optical Cross Connects (OXCs), and Ethernet/TDM convergence as circuit‐switched technologies
was published [of‐ocs]. Although several optical domain extensions have been added to later versions of
the protocol [of‐proto], they do not support the requirements of LIGHTNESS.
For this reason, it was decided to use OF version 1.0, extended with the circuit switching addendum, as the base
for the LIGHTNESS development.
More precisely, the above‐mentioned specification does not support the advanced optical network
technologies used by the LIGHTNESS data plane (such as flexible DWDM grid switching, which is the technology
used by the optical ToR and the AoD). Neither does it support the proposed heterogeneous optical data plane
(i.e. OCS/OPS). To address these shortcomings, this deliverable proposes an extended optical flow specification
for the OF protocol.
In LIGHTNESS, the key entities for the implementation of OpenFlow at the southbound interface are the
OpenFlow Agents (OF‐Agents), which act as bridging entities deployed on top of each data plane device to
hide their specific technology and management/control interface details. In brief, an OF‐Agent operates on
top of each optical data plane device and takes responsibility to:
1) Translate the OpenFlow protocol messages coming from the control plane into the actual
configuration of the device.
2) Maintain a uniform resource model for the given device and report its capabilities and resource
availability to the controller through the proper OpenFlow protocol messages.
3) Notify the control plane of any failure occurring in the device.
The generalized architecture and functional design of the LIGHTNESS OF‐Agent is provided in D4.3 [del‐d43],
while D3.2 [del‐d32] gives further details on the hardware‐dependent part of the agents.
The OpenFlow protocol itself is composed of a variety of messages and procedures between the OpenFlow‐
enabled switch and the SDN controller, including:
• Controller‐to‐switch messages: these are initiated by the SDN controller to manage or inspect the
state of the switch. They include "Features", "Configuration", "Modify‐State", "Read‐State",
"Packet‐Out" and "Barrier" messages.
• Asynchronous messages: these are sent from the switch to the SDN controller without an explicit
trigger from the controller itself. They include "Packet‐In", "Flow‐Removed", "Port‐Status" and
"Error" messages.
• Symmetric messages: these can be initiated by either the SDN controller or the switch, and are used
for handshake and reliability purposes. They include "Hello", "Echo" and "Experimenter" messages.
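To make the grouping concrete, the message types listed above can be classified as follows; this is a sketch over the message names as given in the text, not a parser for the OF wire protocol:

```python
# Rough classification of OpenFlow 1.0 message types into the three
# categories described above.

CONTROLLER_TO_SWITCH = {"Features", "Configuration", "Modify-State",
                        "Read-State", "Packet-Out", "Barrier"}
ASYNCHRONOUS = {"Packet-In", "Flow-Removed", "Port-Status", "Error"}
SYMMETRIC = {"Hello", "Echo", "Experimenter"}

def category(msg_type: str) -> str:
    """Return the category a given OF message type belongs to."""
    if msg_type in CONTROLLER_TO_SWITCH:
        return "controller-to-switch"
    if msg_type in ASYNCHRONOUS:
        return "asynchronous"
    if msg_type in SYMMETRIC:
        return "symmetric"
    raise ValueError(f"unknown message type: {msg_type}")
```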
The OpenFlow protocol specification, with its set of messages and mechanisms, fits well with the
LIGHTNESS southbound interface requirements and procedures, which have been detailed in [del‐d43] and are
summarized in Table 2.1 for the sake of completeness.
Functionality / Description
Collection of attributes: To build the resource database, the control plane collects the features of the different
devices of the optical data plane. This information is used for subsequent service configuration and
management of the elements in the data plane.
Configuration: In order to provide data centre connectivity services, the involved optical devices need to be
configured appropriately.
Collection of statistics: The collection of statistics from the data plane is crucial not only for providing and
guaranteeing quality of service (QoS), but also for informing the control plane about the utilization of the
network.
Monitoring: Monitoring aims to provide updated information on the status of the data plane, which is of
paramount importance for the control plane to learn about the topology and status of the elements of the
network, as well as to adequately allocate network resources when providing data centre connectivity
services.
Table 2.1: Functionalities defined for the southbound interface
The remainder of this chapter is devoted to the description of the extensions that have been designed to
implement the functionalities of the southbound interface by means of the OF protocol.
2.2.1. Collection of attributes
The characteristics of the optical devices are sent, upon request, to the controller by means of the standard
OF FEATURES REQUEST/REPLY message pair. Hence, once the OF session has been established, the controller
sends a FEATURES_REQUEST message to the network element which, in turn, replies with a FEATURES_REPLY
conveying its features and capabilities. More specifically, the information collected by the controller includes
the features, actions and peering of the devices.
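A hedged sketch of the agent side of this exchange, using plain dictionaries instead of the OF wire format; the field names mirror the attribute tables in this section but are illustrative:

```python
# Hypothetical OF-Agent handler: build a FEATURES_REPLY from the local
# device description once a FEATURES_REQUEST arrives over the OF session.

def handle_features_request(device: dict) -> dict:
    """Return a FEATURES_REPLY carrying the device's features, actions
    and peering information (illustrative message shape)."""
    return {
        "type": "FEATURES_REPLY",
        "datapath_id": device["datapath_id"],
        "capabilities": device["capabilities"],  # e.g. may include OFPC_FlexGrid
        "actions": device["actions"],
        "ports": device["ports"],                # per-port info incl. peering
    }
```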
As previously mentioned, the extended version of the OF protocol including the extensions for OCS [of‐ocs]
can be used to provide the controller with appropriate information about the optical data plane. However,
due to the particular characteristics of some devices, additional extensions need to be applied to this already
extended version of the protocol.
In light of this, the subsections below detail the extensions that have been designed to the OF protocol in the
framework of LIGHTNESS to provide the control plane with complete information about the underlying optical
data plane.
2.2.1.1. TOR
In LIGHTNESS, the optical ToR implementation is based on a WSS. A FEATURES_REPLY message needs to be
generated, containing the information about the features and capabilities of the device together with peering
information.
Table 2.2: OF protocol extensions to support optical ToR attribute collection
While some features of the ToR can be mapped onto the OCS extensions of the OF protocol (i.e. the port
number and peer information), the switching capability requires some extensions to indicate whether the
optical flexi‐grid switching capability is supported and, if so, its spacing.
Specifically, the ofp_capabilities attribute is extended with an OFPC_FlexGrid flag, which indicates whether
the flex‐grid switching capability is supported. In addition, some extensions to the port features are
needed (Table 2.2). The bandwidth field in ofp_phy_cport is used to indicate the grid spacing supported by the
port, as well as C or L band support. Thus, another bit in the bandwidth attribute is needed to express more
spacing options (bit 7 in this case, since it is not used in the current OF). Bits 1, 6 and 7 are used to identify
the channel spacing, i.e., reading bits (1,6,7) in order: 000 = 50 GHz, 100 = 100 GHz, 010 = 25 GHz,
110 = 12.5 GHz. Table 2.2 summarizes the extensions that need to be applied to the OpenFlow protocol to
support the attribute collection of an optical ToR switch.
ofp_switch_features:
- datapath_id: Datapath identifier. Different from the switch ID because there might be multiple datapaths, e.g. in hybrid mode where only certain ports are OF-enabled.
- n_cports: Number of circuit ports supported by the ToR.
- capabilities: Capabilities supported by the ToR, bitmap of supported ofp_capabilities. Extension: the OFPC_FlexGrid flag indicates whether flexi-grid switching is supported.
- actions: Actions supported by the device, bitmap of supported ofp_action_types. Extension: extended to indicate whether the switch supports attenuation (VOA) and filter selection options.

ofp_phy_cport:
- port_no: Number in the range 0-0xfa00.
- config: Port behaviours that can be configured. Extension: enum ofp_port_config; only the OFPPC_PORT_DOWN status is relevant for circuit ports.
- state: Current state of the physical port; not configurable from the controller. Extension: OFPPS_LINK_DOWN if no optical signal is present.
- supp_swtype: Switch types supported by this physical port. Extension: for flexi-grid, OFPST_WAVE is used.
- peer_port_no: Port number (port_no) at the switch connected to this port.
- peer_datapath_id: Datapath ID (dpid) of the connected switch.
- bandwidth: Supported bandwidth/spectrum, used to identify the supported/used frequency slots. The lower 6 bits are used as flags as defined in OF 1.0-V0.3 [of-cs0.3] and the upper 54 bits identify the spectrum in the C or L band. Extension: bit 0 represents a WSS wave port; C or L band support is set at bit 2; for flexi-grid, bits 1, 6 and 7 identify the channel spacing (000 = 50 GHz, 100 = 100 GHz, 010 = 25 GHz, 110 = 12.5 GHz), and the remaining 54 bits identify the min and max of the supported frequency slots.
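As an illustration of the bandwidth flag encoding described above, the following C sketch decodes the channel spacing from the flag bits. The macro names and exact bit values are assumptions for illustration only; they are not defined by [of-ocs].

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative flag bits of the extended ofp_phy_cport bandwidth field
 * (names and values are assumptions): bit 0 marks a WSS wave port, bit 2
 * selects the band, and bits 1, 6 and 7 encode the channel spacing. */
#define BW_WAVE_PORT (1ULL << 0)
#define BW_SPC_BIT1  (1ULL << 1)
#define BW_BAND_L    (1ULL << 2)  /* set: L band; clear: C band */
#define BW_SPC_BIT6  (1ULL << 6)
#define BW_SPC_BIT7  (1ULL << 7)

/* Decode the (bit1, bit6, bit7) triple into a spacing in units of 0.1 GHz:
 * 000 = 50 GHz, 100 = 100 GHz, 010 = 25 GHz, 110 = 12.5 GHz. */
int channel_spacing_dghz(uint64_t bw)
{
    int b1 = (bw & BW_SPC_BIT1) != 0;
    int b6 = (bw & BW_SPC_BIT6) != 0;
    if (!b1 && !b6) return 500;   /* 50 GHz   */
    if (b1 && !b6)  return 1000;  /* 100 GHz  */
    if (!b1 && b6)  return 250;   /* 25 GHz   */
    return 125;                   /* 12.5 GHz */
}
```

For instance, a wave port advertising 12.5 GHz flexi-grid spacing would set bits 0, 1 and 6 under these assumptions.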
2.2.1.2. NIC
The NIC is a special device with optical ports able to send and receive packets to and from other inter-rack
or intra-rack servers. At the same time, thanks to its FPGA logic, the NIC can process packets in the
electrical domain. Hence, electrical Ethernet packets are received through the PCIe interface and
electronically processed; that is, the packets are forwarded to the appropriate optical port according to their
Ethernet header. In light of this, from the control plane point of view, the NIC is abstracted as an
entity with both packet and circuit switching features.
Upon the reception of a FEATURES_REQUEST message, the agent of the NIC builds a reply that contains the
general features (capabilities and number of circuit ports), as well as detailed packet port and circuit port
information.
The EPS-like capabilities supported by the NIC thanks to the FPGA, such as packet switching or statistics
collection, require no extensions from the OF protocol perspective. For example, the existing
OFPC_FLOW_STATS and OFPC_PORT_STATS attributes are used to indicate flow and port statistics collection
capabilities, respectively. Moreover, for the optical switching, the OFPC_POS capability is used for OCS,
while the OFPC_CTG_CONCAT flag defines the OPS adaptation. Two actions, OFPAT_CKT_OUTPUT and
OFPAT_CKT_INPUT, already defined in [of-ocs] to adapt packet flows into circuits and to extract packet flows
from circuits, respectively, are used here to indicate the NIC speciality. Table 2.3
summarizes the mentioned extensions of the OF protocol, along with some further ones needed by the port
features structure.
ofp_switch_features:
- datapath_id: Datapath identifier. Different from the switch ID because there might be multiple datapaths, e.g. in hybrid mode where only certain ports are OF-enabled.
- n_cports: Number of circuit ports supported by the NIC.
- capabilities: Capabilities supported by the NIC node, bitmap of supported ofp_capabilities. Extension: OFPC_FLOW_STATS, OFPC_PORT_STATS, OFPC_POS, OFPC_CTG_CONCAT.
- actions: Supported action types. Extension: OFPAT_CKT_OUTPUT, OFPAT_CKT_INPUT.

struct ofp_phy_port:
- ports: Use the standard structure defined in OpenFlow 1.0.

ofp_phy_cport:
- port_no: Number in the range 0-0xfa00.
- config: Port behaviours that can be configured. Extension: enum ofp_port_config; only the OFPPC_PORT_DOWN status is relevant for circuit ports.
- state: Current state of the physical port; not configurable from the controller. Extension: OFPPS_LINK_DOWN if no optical signal is present.
- supp_swtype: Switch types supported by this physical port. Extension: OFPST_WAVE is used to indicate that the port supports OCS; OFPST_OPS is extended to indicate that the port supports OPS.
- peer_port_no: Port number (port_no) at the switch connected to this port.
- peer_datapath_id: Datapath ID (dpid) of the connected switch.
- bandwidth: Indicates the wavelengths supported or used. Extension: uses the definition in the OpenFlow 1.0 circuit extension v0.3 [of-ocs].

Table 2.3: OF protocol extensions to support NIC attribute collection
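As a rough sketch of how the NIC agent could assemble the capability and action bitmaps for its FEATURES_REPLY: OFPC_FLOW_STATS and OFPC_PORT_STATS carry their OpenFlow 1.0 values, while the bit positions chosen here for the circuit-extension and LIGHTNESS flags (OFPC_POS, OFPC_CTG_CONCAT, OFPAT_CKT_*) are assumptions for illustration.

```c
#include <assert.h>
#include <stdint.h>

/* OpenFlow 1.0 capability bits (real values). */
#define OFPC_FLOW_STATS (1u << 0)
#define OFPC_PORT_STATS (1u << 2)
/* Circuit-extension / LIGHTNESS flags: bit positions assumed for illustration. */
#define OFPC_POS        (1u << 8)   /* OCS support */
#define OFPC_CTG_CONCAT (1u << 9)   /* OPS adaptation */

/* Circuit adaptation actions from [of-ocs]; bit positions assumed. */
#define OFPAT_CKT_OUTPUT (1u << 16) /* adapt packet flows into a circuit */
#define OFPAT_CKT_INPUT  (1u << 17) /* extract packet flows from a circuit */

/* Bitmaps the NIC agent would place in its FEATURES_REPLY. */
uint32_t nic_capabilities(void)
{
    return OFPC_FLOW_STATS | OFPC_PORT_STATS | OFPC_POS | OFPC_CTG_CONCAT;
}

uint32_t nic_actions(void)
{
    return OFPAT_CKT_OUTPUT | OFPAT_CKT_INPUT;
}
```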
2.2.1.3. AoD
As said, the AoD is composed of several optical devices interconnected by a larger port count. In the control
plane, the AOD is abstracted as an entity with ports supporting different features (e.g. if a port is connected
with an AWG, MUX/DEMUX switching type needs to be supported by this port).
Upon the reception of a FEATURES_REQUEST, the OF agent of the AoD constructs the FEATURES_REPLY
message. This message conveys a new OFPC_AOD flag to indicate the AoD capability of the device. Besides
this, the extended OFPC_FlexGrid flag indicates the capability of the device to support flexi-grid switching.
Since new capabilities, such as port-based or frequency-based attenuation (VOA), may be available in the AoD,
some further extensions may be required with regard to the optical device capabilities and the supported
actions. Moreover, given that the AoD is a composition of devices, the capabilities of each component
define the actual capabilities of the AoD. Therefore, any change in the components of the AoD
may result in additional extensions to the protocol. For this reason, the extensibility of the OF
protocol is left open in this case.
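Because the AoD advertises the union of its components' capabilities, its OF agent could compose the FEATURES_REPLY capabilities bitmap as sketched below. Only the flag names OFPC_AOD and OFPC_FlexGrid come from the text; their bit values and the helper function are illustrative assumptions.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* LIGHTNESS capability flags (bit positions assumed for illustration). */
#define OFPC_AOD      (1u << 10) /* node is an abstracted AoD */
#define OFPC_FlexGrid (1u << 11) /* bandwidth-variable (flexi-grid) switching */

/* The AoD capabilities are the union of the capabilities of the optical
 * components attached to its backplane, plus the AoD flag itself. */
uint32_t aod_capabilities(const uint32_t component_caps[], size_t n)
{
    uint32_t caps = OFPC_AOD;
    for (size_t i = 0; i < n; i++)
        caps |= component_caps[i];
    return caps;
}
```

Under this sketch, adding or removing a component (e.g. a flexi-grid WSS) automatically changes the bitmap the AoD advertises, which is why the protocol extensibility is left open.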
Table 2.4 details the extensions that need to be applied to the OF protocol to support the attribute collection
of the AoD.
ofp_switch_features:
- datapath_id: Datapath identifier. Different from the switch ID because there might be multiple datapaths, e.g. in hybrid mode where only certain ports are OF-enabled.
- n_cports: Number of circuit ports supported by the AoD.
- capabilities: Capabilities supported by the AoD node, bitmap of supported ofp_capabilities. Extension: OFPC_AOD is added to indicate that the node is an abstracted AoD node; the OFPC_FlexGrid flag indicates the bandwidth-variable superchannel connection capability.
- actions: Bitmap of supported ofp_action_types. Extension: if the devices connected to the backplane support attenuation (VOA), filter selection options, etc., more flags should be added.

ofp_phy_cport:
- port_no: Number in the range 0-0xfa00.
- config: Port behaviours that can be configured. Extension: enum ofp_port_config; only the OFPPC_PORT_DOWN status is relevant for circuit ports.
- state: Current state of the physical port; not configurable from the controller. Extension: OFPPS_LINK_DOWN if no optical signal is present.
- supp_swtype: Switch types supported by this physical port. Extension: OFPST_MUX/DEMUX and OFPST_FlexGrid.
- peer_port_no: Port number (port_no) at the switch connected to this port.
- peer_datapath_id: Datapath ID (dpid) of the connected switch.
- bandwidth: Supported and used wavelengths. Extension: follows the flexi-grid description in Section 2.2.1.1.

Table 2.4: OF protocol extensions to support AoD attribute collection
2.2.1.4. OPS
Upon the reception of a FEATURES_REQUEST from the controller, the OF agent of the OPS switch builds the
reply message, which contains information about the general characteristics and capabilities of the switch and
about its ports. While the port information of the OPS can be mapped onto the OCS extensions
of the OF protocol (i.e. the cport), the optical packet switching capabilities require some extensions. First of all,
the optical packet switching capability has to be added to those already supported by the OF protocol. Thus, a
new bit (OFPC_OPS) needs to be dedicated in the ofp_capabilities bitmap to describe this behaviour in an
OF-controlled device.
In addition, as will be detailed in the next section, installing an OPS-based connectivity service requires two
new types of actions, which need to be added to the actions bitmap of the switch features. These two actions,
namely OFPAT_SET_OPS_LABEL and OFPAT_SET_OPS_LOAD, have to be added to the already existing action
types and will further enable the OPS data flow configuration. As said before in this section, the OPS port
information and capabilities can be supported by the standard OF protocol version with OCS support.
However, for description purposes, a new switching type (OFPST_OP) should be added to the port switching
type bitmap.
Table 2.5 summarizes the extensions that need to be applied to the OF protocol to support the
attribute collection of an OPS switch.
ofp_capabilities:
- OFPC_OPS: Describes the capabilities supported by the OpenFlow implementation. Extension: adds the new OPS capability to the OF switch.

ofp_switch_features:
- capabilities: Bitmap that describes the capabilities supported by the OF switch. Extension: the bitmap has to support the new OFPC_OPS capability.
- actions: Bitmap that describes the actions supported by the OF switch. Extension: the bitmap has to support the new actions to configure OPS data flows.

ofp_port_swtype:
- OFPST_OP: Describes the switching types supported by the OpenFlow implementation. Extension: adds the new optical packet switching support to the port (OFPST_OP).

ofp_phy_cport:
- supp_swtype: Bitmap that describes the switching types supported by the OF switch. Extension: the bitmap has to support the new OFPST_OP switching type.

ofp_action_type:
- OFPAT_SET_OPS_LABEL, OFPAT_SET_OPS_LOAD: Describe the actions supported by the OpenFlow implementation to be applied on an OF switch. Extension: adds two new actions to configure OPS-enabled connectivity services.

Table 2.5: OF protocol extensions to support OPS switch attribute collection
2.2.2. Configuration
The next sub‐sections describe the OF protocol messages that are required for the configuration of the
LIGHTNESS data plane devices.
2.2.2.1. TOR
As illustrated in detail in [del-d43], the configuration of the TOR basically consists of adding, modifying and
deleting cross connections. By doing this, the NIC can be properly connected to and disconnected from the AoD
ports and, as a consequence, OPS or OCS connectivity can be established and released. It is worth noting
that, from the TOR perspective, there is no difference between OPS and OCS connectivity.
From the OF protocol point of view, the configuration of such cross connections in the TOR is achieved using
the CFLOW_MOD message defined in [of-ocs]. However, as detailed in the previous subsection, specific
configurations need to be defined to support flexi-grid connections. In this regard, Table 2.6 depicts the
extensions to the ofp_connect structure conveyed in the CFLOW_MOD message to support flexi-grid and
wavelength connections. Although 64 bits are originally used to indicate the wavelength used by the connection
(since just the fixed-grid scenario is considered in [of-proto, of-ocs]), in LIGHTNESS these 64 bits are split into
two 32-bit parts to support the flexi grid. The first part is used for general purposes (e.g. to indicate the
spacing) and the second one indicates the central frequency and the bandwidth (16 bits for each of
them). For general fixed-grid connections, the standard approach is followed.
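The 32/32 split described above can be sketched as follows. The text does not fix whether the central frequency occupies the lower or the upper 16 bits of the second word, so the ordering chosen here, like the helper names, is an assumption for illustration.

```c
#include <assert.h>
#include <stdint.h>

/* Pack the extended 64-bit wavelength field of a flexi-grid cross connection:
 * the lower 32 bits carry general-purpose flags (e.g. the spacing), the upper
 * 32 bits carry the central frequency slot and the bandwidth in slots, 16 bits
 * each. Placing the central frequency in the lower half of the upper word is
 * an assumption. */
uint64_t pack_flexgrid_wave(uint32_t flags, uint16_t centre_slot, uint16_t bw_slots)
{
    uint64_t hi = ((uint64_t)bw_slots << 16) | centre_slot;
    return (hi << 32) | flags;
}

uint16_t flexgrid_centre_slot(uint64_t wave) { return (uint16_t)(wave >> 32); }
uint16_t flexgrid_bw_slots(uint64_t wave)    { return (uint16_t)(wave >> 48); }
uint32_t flexgrid_flags(uint64_t wave)       { return (uint32_t)wave; }
```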
ofp_cflow_mod:
- command: The operation to be performed; cross connection setup, teardown and modification are supported (one of the OFPFC_* commands).
- hard_timeout: If the duration of the flow is pre-determined, this field sets the exact duration; if zero, the connection is permanent.

ofp_connect:
- wildcards: Identifies which fields in ofp_connect should be ignored when cross connecting an input port to an output port (e.g. FIBRE, TDM, WAVE ports). Extension: a WAVE connection is used to indicate a bandwidth-variable connection.
- num_components: Number of cross connections.

ofp_wave_port (in_wport):
- wport: Identifies the port for the cross connection.
- wavelength (lower 32 bits): Flags and experimental fields. Extension: bit 0 marks a wave port; bits 1, 6 and 7 denote the channel spacing.
- central frequency and bandwidth (upper 32 bits): Extension: for flexi-grid connections, the central frequency and bandwidth slot values, 16 bits each.

ofp_wave_port (out_wport): The same as in_wport.

Table 2.6: OF protocol extensions to support optical ToR configuration (1)
For the port modification (Table 2.7), a number of actions on the optical devices need to be supported; for
instance, the attenuation or the frequency slots may be configured. Therefore, the protocol needs to be
extended accordingly.
ofp_cport_mod:
- port_no
- hw_addr: The hardware address is not configurable.
- config: OFPPC flag to change port attributes. Extension: attenuation and filter options.

Table 2.7: OF protocol extensions to support optical ToR configuration (2)
2.2.2.2. NIC
As described previously, the NIC implements hybrid packet/circuit processing capabilities. This processing is
realized according to the entries of the look-up table (LUT). Such entries are updated upon the reception of
Ethernet frames from the agent, which are translated into the actual control plane configuration.
Figure 2.6 depicts an example of flow entry processing in the NIC. The figure shows the basic concept of how
packets are forwarded from the packet domain to a circuit output port. First, thanks to its packet processing
capability, the NIC can match on the headers of different layers (e.g. VLAN id, source or destination MAC, etc.)
and use the result to store the packets into different buffers. Furthermore, the action field contains the
information necessary to send the packets to the appropriate output. Such information includes the output
port, the wavelength, the time slot and the label, and defines the type of service associated with the packet
(i.e. OCS or OPS).
Figure 2.6: NIC configuration example (a flow entry comprising match fields; actions specifying the output port, wavelength, time slot and label; and statistics for the outgoing traffic)
In light of the above, the OFPAT_CKT_OUTPUT and OFPAT_CKT_INPUT actions are extended and used in
LIGHTNESS to support the adaptation of packets into circuit ports and vice versa (i.e. extracting packets from
OCS or OPS connections). Specifically, for OFPAT_CKT_INPUT, the packets extracted from the OCS/OPS flow
can be matched against the packet flow table entries. Table 2.8 shows the OF extensions necessary to support
the NIC configuration.
ofp_flow_mod:
- match: Defines the packet matching fields. Extension: source/destination MAC and VLAN id are supported.
- command: add/modify/delete.
- hard_timeout
- priority: Priority level of the flow entry.
- action OFPAT_CKT_OUTPUT:
  - adaptation: OFPCAT_*.
  - cport: Indicates the output port number.
  - wavelength: Indicates the wavelength to be used.
  - tstart: Starting time slot. Extension: for OPS, used to indicate the time slot.
  - label: Indicates the label to be used. Extension: for OPS, extended to set the label.

ofp_cflow_mod:
- command: drop.
- hard_timeout
- connect
- action OFPAT_CKT_INPUT:
  - adaptation: OFPCAT_*.
  - cport: Indicates the output port number. Extension: uses a virtual port number; the drop port can be defined here.
  - wavelength: Indicates the wavelength to be used.
  - tsignal
  - tstart: Starting time slot.

Table 2.8: OF protocol extensions to support NIC configuration
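The adaptation action fields listed in Table 2.8 can be sketched as follows, with the service type selecting which fields are meaningful. The struct layout, field widths and helper function are assumptions modelled on the table, not the actual agent implementation.

```c
#include <assert.h>
#include <stdint.h>

/* Adaptation action carried in a NIC flow entry, modelled on Table 2.8.
 * Field names follow the table; types and layout are assumptions. */
struct ckt_output_action {
    uint16_t adaptation; /* one of the OFPCAT_* adaptations */
    uint16_t cport;      /* output circuit port */
    uint32_t wavelength; /* wavelength to be used */
    uint16_t tstart;     /* starting time slot (meaningful for OPS) */
    uint16_t label;      /* optical label (meaningful for OPS) */
};

enum service_type { SVC_OCS, SVC_OPS };

/* Fill the action for an OCS or OPS service: only OPS entries carry a
 * time slot and a label. */
struct ckt_output_action make_ckt_output(enum service_type svc, uint16_t cport,
                                         uint32_t wavelength, uint16_t tstart,
                                         uint16_t label)
{
    struct ckt_output_action a = {0};
    a.adaptation = (uint16_t)svc; /* placeholder for an OFPCAT_* code */
    a.cport = cport;
    a.wavelength = wavelength;
    if (svc == SVC_OPS) {
        a.tstart = tstart;
        a.label  = label;
    }
    return a;
}
```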
2.2.2.3. AoD
The AoD needs to be configured when establishing both OPS and OCS connections. This configuration is
conducted by means of the OFPT_CFLOW_MOD message, which needs to be extended similarly to the previous
cases. In particular, Table 2.9 depicts the extensions needed to support flexi-grid connections, which were
introduced in the TOR case.
Besides, similarly to the TOR, Table 2.7 illustrates the port configuration extensions needed to support the
additional capabilities of the components of the optical domain.
ofp_cflow_mod:
- command: The operation to be performed; cross connection setup, teardown and modification are supported (one of the OFPFC_* commands).
- hard_timeout: If the duration of the flow is pre-determined, this field sets the exact duration; if zero, the connection is permanent.

ofp_connect:
- wildcards: Identifies which fields in ofp_connect should be ignored when cross connecting an incoming port to an outgoing port (e.g. FIBRE, TDM, WAVE ports). Extension: for flexi-grid connections, a WAVE connection is used.
- num_components: Number of cross connections.

ofp_wave_port:
- wport: Identifies the port for the cross connection.
- wavelength (lower 32 bits): Flags and experimental fields. Extension: bit 0 marks a wave port; bits 1, 6 and 7 denote the channel spacing.
- central frequency and bandwidth (upper 32 bits): Extension: flexi-grid central frequency and bandwidth slot values, 16 bits each.

Table 2.9: OF protocol extensions to support AoD configuration
2.2.2.4. OPS
As illustrated in [del‐d43], the configuration of the OPS switch basically consists of adding, modifying or
deleting entries in the LUT. By doing this, along with a synchronized NIC, TOR and AoD configuration, OPS‐
based connectivity services (i.e. OPS‐based data flows) can be established in the data plane.
From the OF protocol perspective, the configuration of such flows can be achieved in the OPS by means of the
standard FLOW_MOD message. Note that, although the OPS is optical, it does not use the CFLOW_MOD
message of the OCS-enabled version of the OF protocol. However, as detailed in the previous subsection,
OPS-specific actions (namely OFPAT_SET_OPS_LABEL and OFPAT_SET_OPS_LOAD) need to be defined, since the
flow configuration parameters of an OPS flow differ from those of an electrical packet
switching flow. These two actions are aimed at setting the label and load parameters of the flow. Given that
OPS flows are identified by their label, the matching of an OPS-enabled OF message is done by means of this
label. This makes it necessary to extend the OF protocol by adding the label as a new matching field.
Furthermore, the label and load attributes need to be added to the FLOW_MOD message. In this way, the label
and load values are conveyed in the message and used to:
• Install a new OPS flow: A new data flow with a certain label and load is set up in the OPS switch by
adding a new entry with such label and load values in the LUT. The output port of the flow is set as
well.
• Modify an OPS flow: The parameters of an existing flow (load, output port), identified by the
conveyed label, are modified. This translates into a modification of the LUT entry corresponding to
that flow.
• Uninstall an OPS flow: The existing flow, identified by the conveyed label, is removed; thus,
the LUT entry associated with this flow is removed.
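The three operations above amount to label-keyed add/modify/delete operations on the OPS look-up table. A minimal sketch follows: the OFPFC_* values are the OpenFlow 1.0 flow-mod commands, while the LUT layout and the helper function are assumptions for illustration.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* OpenFlow 1.0 flow-mod commands (real values). */
enum { OFPFC_ADD = 0, OFPFC_MODIFY = 1, OFPFC_DELETE = 3 };

/* One LUT entry of the OPS switch: the label identifies the flow. */
struct lut_entry { int used; uint16_t label; uint8_t load; uint16_t out_port; };

/* Apply a label-matched FLOW_MOD to the LUT. Returns 0 on success, -1 if no
 * free entry (for add) or no matching entry (for modify/delete) is found. */
int ops_flow_mod(struct lut_entry *lut, size_t n, uint16_t command,
                 uint16_t label, uint8_t load, uint16_t out_port)
{
    for (size_t i = 0; i < n; i++) {
        struct lut_entry *e = &lut[i];
        if (command == OFPFC_ADD && !e->used) {
            e->used = 1; e->label = label; e->load = load; e->out_port = out_port;
            return 0;
        }
        if (command != OFPFC_ADD && e->used && e->label == label) {
            if (command == OFPFC_MODIFY) { e->load = load; e->out_port = out_port; }
            else e->used = 0;            /* OFPFC_DELETE */
            return 0;
        }
    }
    return -1;
}
```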
In brief, the OF extensions needed to support the OPS switch configuration are outlined in the table below.
ofp_match:
- ops_label: Contains the matching fields of the flow. Extension: adds the OPS label as a new matching field.

ofp_flow_mod:
- (message structure): The FLOW_MOD message structure. Extension: must support the new OPS label matching and the new OFPAT_SET_OPS_LABEL and OFPAT_SET_OPS_LOAD actions.

Table 2.10: OF protocol extensions to support OPS switch configuration
2.2.3. Collection of statistics
The controller collects the statistics of the different optical data plane devices through the standard
STATS_REQUEST/STATS_REPLY message pair. In particular, the controller sends the device a STATS_REQUEST
message, which may request different kinds of statistics depending on the device; in turn, the device sends
back a STATS_REPLY containing the appropriate type of counters according to the received request.
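This request/reply exchange can be sketched as a dispatch on the type field of the message. The OFPST_* values below are those of OpenFlow 1.0; the bitmap of supported types and the handler itself are illustrative assumptions.

```c
#include <assert.h>
#include <stdint.h>

/* OpenFlow 1.0 statistics types (real values). */
enum ofp_stats_types { OFPST_DESC = 0, OFPST_FLOW = 1, OFPST_AGGREGATE = 2,
                       OFPST_TABLE = 3, OFPST_PORT = 4, OFPST_QUEUE = 5 };

/* Bitmap of the statistics a (hypothetical) device agent can answer,
 * e.g. an optical TOR supporting flow statistics only. */
uint32_t tor_supported_stats(void)
{
    return 1u << OFPST_FLOW;
}

/* Dispatch a STATS_REQUEST: return the type to answer with, or -1 if the
 * device cannot provide that kind of counters. */
int handle_stats_request(uint16_t type, uint32_t supported)
{
    if (type > OFPST_QUEUE)
        return -1;
    return (supported & (1u << type)) ? (int)type : -1;
}
```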
2.2.3.1. TOR
[del-d43] highlights the statistics associated with optical connections that need to be collected by the optical
TOR and sent to the controller. In particular, connection statistics in the time domain help the control plane
gain knowledge about flow and connection characteristics and can thus be used for resource usage optimization.
Table 2.11 shows the extensions needed in the statistics request and reply messages to support the new
requirements. In particular, the type field determines how the body of the message has to be processed.
Furthermore, Table 2.12 details the extensions related to the flow statistics gathering.
ofp_stats_request / ofp_stats_reply:
- type: Indicates the type of statistics carried in the message (e.g. flow, port, queue); one of the OFPST_* flags. Flow statistics are supported.
- flag: OFPSF_REPLY_* flags.
- body: Extension: will be extended to collect connection/flow statistics.

Table 2.11: OF protocol extensions to support TOR statistics collection
- ofp_flow_stats: Extension: supports flow statistics for each connection/flow, both individual and aggregate.

Table 2.12: Potential OF protocol extensions to support flow statistics collection from the TOR
2.2.3.2. NIC
This section describes the OF protocol extensions needed to implement statistics collection from the NIC. First
of all, port statistics accounting for the number of packets received and transmitted by the NIC are needed. In
addition, a counter of the packets belonging to specific flows or connections is maintained in the NIC, so that
the SDN controller can collect it. Finally, optional queue statistics, which can be used for different purposes
such as providing QoS, have been defined in [del-d43]. In this regard, the OF protocol extensions to cope with
additional queue statistics collection have been left open.
As in the previous case, the stats request and reply messages need to be extended. Given that such extensions
are the same as in the TOR case, the reader is referred to Table 2.11. Besides this, Table 2.13 highlights the
structures affected by current and future statistical information collection at the NIC level.
- ofp_port_stats: Extension: supports new NIC port statistics.
- ofp_flow_stats: Extension: supports flow statistics for each entry in the LUT of the NIC.
- ofp_queue_stats: Extension: would be extended to support new queue statistics in the NIC.

Table 2.13: Potential OF protocol extensions to support flow statistics collection from the NIC
2.2.3.3. AoD
The basic statistical information collected from the AoD includes individual connection statistics. These
statistics provide the control plane with a view of which pairs of ports support (or have supported) more flows.
Some optional functionalities have also been defined, such as the collection of aggregated information about
AoD connections, which helps the control plane measure the traffic from a specific TOR or between a given
TOR pair.
From the OF protocol extension and development point of view, the extensions required to implement the
above mentioned functionalities are very much in line with those explained in the TOR section, so no
further detail is given here.
2.2.3.4. OPS
As stated in [del-d43], different types of statistics are planned to be collected from the OPS switch. On the one
hand, port statistics accounting for the number of received and transmitted packets will be collected. Moreover,
a counter of the contentions that occur in each output port is maintained in the OPS switch and can thus be
accessed by the controller. On the other hand, additional statistics (such as flow and aggregated ones) have
been defined in [del-d43] as optional, since their implementation depends on the ability of the OPS switch to
compute them.
At the current point of development, the standard OF port statistics implementation covers the needs of the
controller to collect the required statistical information from the OPS switch. Therefore, no additional
extensions have been defined to this aim. However, the standard OF implementation structures that would be
affected by further extensions have been identified and are depicted in the table below.
- ofp_port_stats (attributes to be defined): Contains the port statistics supported by the OF switch. Extension: would be extended to support new OPS port statistics.
- ofp_flow_stats (attributes to be defined): Contains the flow statistics supported by the OF switch. Extension: would be extended to support new OPS flow statistics.
- ofp_aggregate_stats_reply (attributes to be defined): Contains the aggregated statistics supported by the OF switch. Extension: would be extended to support new aggregated OPS flow statistics.

Table 2.14: OF protocol extensions to support extended OPS switch statistics
2.2.4. Network Status Monitoring
Monitoring is key to keeping the control plane updated about the actual status of the optical data plane. To
this end, the OF protocol defines an update mechanism based on asynchronous messages that are sent by the
data plane devices to the controller. Furthermore, [of-ocs] extends this feature by adding a new
asynchronous message, OFPT_CPORT_STATUS, to notify the controller about the status of the OCS ports.
In light of the above, this section shows the extensions designed for the OF protocol to cope
with the monitoring requirements defined for LIGHTNESS in [del-d43].
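On the controller side, such asynchronous notifications can be sketched as a handler keyed on the reason code. OFPPR_ADD/DELETE/MODIFY carry their OpenFlow 1.0 values; the port view structure and the handler are illustrative assumptions.

```c
#include <assert.h>
#include <stdint.h>

/* OpenFlow 1.0 port-status reasons (real values). */
enum ofp_port_reason { OFPPR_ADD = 0, OFPPR_DELETE = 1, OFPPR_MODIFY = 2 };

/* Minimal controller-side view of a circuit port (illustrative). */
struct cport_view { int present; int link_up; };

/* Apply an asynchronous CPORT_STATUS notification to the controller's
 * view of the data plane. */
void on_cport_status(struct cport_view *view, int reason, int link_up)
{
    switch (reason) {
    case OFPPR_ADD:
        view->present = 1;
        view->link_up = link_up;
        break;
    case OFPPR_DELETE:
        view->present = 0;
        view->link_up = 0;
        break;
    case OFPPR_MODIFY:
        if (view->present)
            view->link_up = link_up; /* e.g. OFPPS_LINK_DOWN on loss of signal */
        break;
    }
}
```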
2.2.4.1. TOR
According to the requirements established in [del-d43], the OF protocol extensions in support of the
collection of network status information refer to the actual port utilization (e.g. the number of wavelengths
used by a port at a given moment) and to changes in the status of the TOR ports. Moreover, the
monitoring of the port power levels is considered, since it contributes to improving the network intelligence
(e.g. for load balancing, restoration, etc.).
From the protocol extension development point of view, all this information, as well as additional monitoring
requirements that may arise during the project, is collected by means of the port status message. Although
further extensions may be needed during the project, Table 2.15 shows the extensions needed
to cope with the current TOR monitoring requirements.
cport_status:
- reason: Port change reason: add, delete or modify.
- ofp_phy_cport (desc): Wavelength, spacing and VOA changes. Extension: supports the flexi-grid option, and also VOA monitoring if the port can tune the attenuation.

Table 2.15: OF protocol extensions to support extended optical ToR monitoring
2.2.4.2. NIC
The monitoring requirements of the NIC are similar to those defined for the TOR with regard to port usage
and status. Moreover, additional monitoring parameters, such as instantaneous buffer utilization and packet
transmission speed, have been defined in [del-d43]. These requirements have been set as optional, since their
availability strongly depends on the available technology (i.e. the FPGA). Notwithstanding, such monitoring
parameters are extremely helpful for the control plane to evaluate the utilization of the connections. As a
practical example, thanks to this kind of monitoring, an underused OCS-based connection could be handed
over to an OPS-based one, thus improving the data plane utilization efficiency.
As in the previous case, the port status message is used to collect the above mentioned network status
parameters.
cport_status:
- reason: Port change reason: add, delete or modify.
- ofp_phy_cport (desc): Wavelength. Extension: the port utilization could be monitored as the ratio of used to supported wavelengths.
- buffer utilization: Extension: the buffer utilization (e.g. a percentage) associated with that port.

Table 2.16: OF protocol extensions to support extended NIC monitoring
2.2.4.3. AoD
[del-d43] defines the required and optional monitoring requirements for the AoD node. More specifically,
some device utilization information and the port status have been set as required functions, while several
optical domain parameters (such as the insertion loss and the power level of each port) have been declared
optional. Furthermore, as stated throughout this document, the door is left open to include any additional
extension that may be needed during the development of LIGHTNESS.
Thanks to this monitored network-level utilization information, the northbound resource optimization
applications will be able to implement optimal resource allocation. In addition, these optical domain
parameters will be used by the northbound enhanced path computation application to implement
impairment-aware routing strategies.
cport_status:
- reason: Port change reason: add, delete or modify.
- ofp_phy_cport (desc): Wavelength.
- optical layer parameters: Optical layer characteristics that could be monitored. Extension: will be extended to support the port insertion loss.

Table 2.17: OF protocol extensions to support extended AoD monitoring
2.2.4.4. OPS
[del-d43] defines the mandatory and optional monitoring requirements for the OPS switch. Since these
requirements are basically related to the ports of the switch, the OFPT_CPORT_STATUS message defined in
[of-ocs] can be used; however, some extra attributes need to be added to the message to convey the
specific information of the OPS switch ports. For example, the OFPPR_BW_MODIFY parameter already
defined for OCS can also be used to notify a change in the bandwidth utilization of an OPS port. Other
parameters, such as optical power, loss, crosstalk (XT) or polarization dependent loss (PDL), require some
extensions. The table below identifies the structures and attributes affected by the extension of the
OF protocol to support the OPS monitoring requirements.
- ofp_cport_status: The OFPT_CPORT_STATUS message structure. Extension: the message must support the new monitoring requirements.
- ofp_port_reason (OFPPR_PWR, OFPPR_LOSS, OFPPR_XT, OFPPR_PDL): Contains the parameters that can be monitored on a switch port. Extension: parameters to be conveyed in the OFPT_CPORT_STATUS message.
- Other structures to be created: Additional data structures may be created to implement other monitoring messaging.

Table 2.18: OF protocol extensions to support extended OPS switch monitoring
3. Conclusions
The LIGHTNESS consortium has designed a control plane architecture based on the Software Defined Networking
(SDN) paradigm to provide dynamic and automated connectivity service procedures for future data centres.
Moreover, the control plane implements functionalities that provide an abstraction of the LIGHTNESS data
plane devices. The high-level LIGHTNESS control plane architecture has been reported in deliverable D4.1,
while deliverable D4.3 reports the definition of the functionalities and procedures of both the northbound and
southbound interfaces. This document reports the OpenFlow (OF) protocol extensions required to implement
the communication between the logically centralised SDN controller and the hybrid optical devices through
the southbound interface. The OF protocol extensions in support of the specific capabilities and features of
the optical data plane devices (AoD, optical TOR, NIC and OPS) have been defined. As a result, a complete
set of extensions has been achieved for the collection of the features and attributes of the data plane devices
to be exposed to the SDN controller, and for the proper configuration of the data plane. Moreover, extensions
for carrying out monitoring functionalities, as well as for the collection of statistics, have also been considered.
As a consequence, the implementation activities in WP4 will rely on a complete design of the control plane
architecture, including functionalities, procedures, interfaces and enabling protocol extensions.
Finally, it is worth mentioning that data centre network virtualization mechanisms and functions are still under
investigation at the time of writing this document. Therefore, the final version of the LIGHTNESS SDN control
plane architecture will be released with deliverable D4.6.
4. References
[del‐d31] “Release of the design and early evaluation results of OPS switch, OCS switch, and TOR switch”, LIGHTNESS deliverable D3.1, November 2013.
[del‐d32] “Implementation results of OPS switch, of the OCS switch and the TOR switch”, LIGHTNESS deliverable D3.2, June 2014.
[del‐d41] “The LIGHTNESS network control plane architecture”, LIGHTNESS deliverable D4.1, September 2013.
[del‐d43] “The LIGHTNESS network control plane interfaces and procedures”, LIGHTNESS deliverable D4.3, June 2014.
[ID‐pce‐initiated‐lsp‐00] E. Crabbe, I. Minei, S. Sivabalan, R. Varga, “PCEP Extensions for PCE‐initiated LSP Setup in a Stateful PCE Model”, IETF Draft, work in progress, December 2013.
[ID‐pce‐stateful‐08] E. Crabbe, J. Medved, I. Minei, R. Varga, “PCEP extensions for stateful PCE”, IETF Draft, work in progress, February 2014.
[of‐ocs] OpenFlow circuit switch specification: http://archive.openflow.org/wk/images/8/81/OpenFlow_Circuit_Switch_Specification_v0.3.pdf .
[of‐proto] “OpenFlow Switch Specification”, Open Networking Foundation, https://www.opennetworking.org/sdn‐resources/onf‐specifications/openflow .
[onf] Open Networking Foundation: https://www.opennetworking.org/.
[onf‐sdn] ONF, “Software‐Defined Networking: The New Norm for Networks”, Open Networking Foundation white paper, April 2012.
[rfc‐3945] E. Mannie, “Generalized Multi‐Protocol Label Switching (GMPLS) Architecture”, IETF RFC 3945, October 2004.
[rfc‐4655] A. Farrel, J.P. Vasseur, J. Ash, “A Path Computation Element (PCE)‐Based Architecture”, IETF RFC 4655, August 2006.
[rfc‐5440] J. P. Vasseur, J.L. Le Roux, “Path Computation Element (PCE) Communication Protocol (PCEP)”, IETF RFC 5440, March 2009.
[rfc‐5520] R. Bradford, J.P. Vasseur, A. Farrel, “Preserving Topology Confidentiality in Inter‐Domain Path Computation Using a Path‐Key‐Based Mechanism”, IETF RFC 5520, April 2009.
[rfc‐6805] D. King, A. Farrel, “The Application of the Path Computation Element Architecture to the Determination of a Sequence of Domains in MPLS and GMPLS”, IETF RFC 6805, November 2012.
5. Acronyms
AoD Architecture on Demand
AWG Arrayed Waveguide Grating
CPU Central Processing Unit
DCN Data Centre Network
DWDM Dense Wavelength Division Multiplexing
FPGA Field Programmable Gate Array
GMPLS Generalized Multi‐Protocol Label Switching
IETF Internet Engineering Task Force
LUT Look‐up Table
MAC Media Access Control
NIC Network Interface Card
OCS Optical Circuit Switching
OF OpenFlow
ONF Open Networking Foundation
OPS Optical Packet Switching
OXC Optical Cross Connect
PCE Path Computation Element
PCEP Path Computation Element Communication Protocol
SDH Synchronous Digital Hierarchy
SDN Software Defined Networking
SONET Synchronous Optical Network
TDM Time Division Multiplexing
TOR Top of Rack
VLAN Virtual Local Area Network
WSS Wavelength Selective Switch