Premium IP – Cluster Meeting & Review, 21-22 Nov 2001, Dresden | www.ist-tequila.org | Danny Goderis, Alcatel


Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 1

www.ist-tequila.org

Danny Goderis, Alcatel

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 2

Tequila Consortium

• Industrial Partners
  – Alcatel, Belgium
  – Algosystems S.A., Greece
  – France Telecom R&D, France
  – Global Crossing, UK

• Universities
  – NTUA – National Technical University of Athens, Greece
  – UCL – University College London, UK
  – UniS – The University of Surrey, UK

• Research Institutes
  – IMEC, Belgium
  – TERENA, Netherlands

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 3

TEQUILA Presentations

• The TEQUILA rationale for QoS delivery – D. Goderis (Alcatel)

• Traffic engineering the multi-service Internet – G. Pavlou (UniS)

• QoS-aware monitoring and measurement – R. Egan (Global Crossing)

• QoS routing over the Internet: a BGP-based approach – C. Jacquenet (France Telecom)

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 4

Danny Goderis, Alcatel

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 5

Outline

• Introduction
• A DiffServ layered service model
• Service provisioning & admission control
• The TEQUILA model illustrated

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 6

The IETF IP QoS Debate...

[Timeline, 1981-2001: IP ToS byte (RFC 791, 1981); ATM Forum created ('91); IETF IntServ with the RSVP protocol ('95), enabling a multiservice network with per-flow guarantees; ETSI TIPHON VoIP ('98); IETF DiffServ PHB ('98), bringing per-class service differentiation; 3GPP UMTS ('99); IETF DiffServ PDB (2001), towards the Next Generation Network (VoIP, MoIP). Best-effort data carries no guarantees.]

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 7

… and the IETF IP QoS Key Issues

• IntServ: scalability problem due to per-flow processing & admission

• DiffServ tackles scalability by per-class processing
  – But only edge-to-edge guarantees for aggregate packet streams
    • no hard per-flow guarantees
  – ...and missing standards
    • Traffic Contracts – Service Level Specifications

TEQUILA proposal:
  – conciliate scalability & per-flow QoS
  – define & map IP services & network QoS

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 8

TEQUILA Key Concepts

              PSTN                 TEQUILA
Technology    Circuit switching    IP DiffServ
Granularity   64 kbps              DSCP, PHB
Service       Voice call           Service Level Specifications
Dimensioning  Erlang-B             Resource Provisioning Cycle
Allocation    CAC                  2-level Admission Control

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 9

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 10

From SLA to Packets

Service Level Agreement (SLA)IP Transport Service (VPN, VLL)

Service Level Specification (SLS)

QoS classPer Domain Behaviour (PDB)

Per Hop Behaviour (PHB)Traffic Conditioning Block

Scheduler (e.g. WFQ)Algorithmic Dropper (e.g. RED)

- vendor & product specific implementation

- Non-technical terms & conditions- technical parameters :{SLS}-set

- IP service traffic characteristics- offered network QoS guarantees

- Network QoS capabilities - DiffServ edge-to-edge aggregates

- Generic Router QoS capabilities - DiffServ core & edge routers

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 11

TEQUILA SLSs

Parameter Group     Description
Customer-user Id    Identifies the customer
Flow descriptor     Packet stream (DSCP, IP addresses, etc.)
Service Scope       Geographical region (ingress-egress)
Service Schedule    Specifies when the contract is applicable
Traffic descriptor  Traffic envelope (e.g. a token bucket)
QoS Parameters      QoS guarantees (delay, jitter, packet loss)
Excess Treatment    Traffic conditioning (dropping, remarking)

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 12

Tequila QoS Classes ~ PDBs

• QoS class = [OA | delay | loss]
  – Ordered Aggregate ~ PHB scheduling class
    • EF, AFx, BE
  – delay
    • edge-to-edge maximum delay
    • worst case or probabilistic (percentile)
    • delay classes (min-max intervals)
  – loss
    • edge-to-edge packet loss probability

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 13

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 14

PSTN Dimensioning

[Figure: N subscribers, each with call arrival rate λ (BHCA) and mean call duration 1/µ (MCD), connected to a Central Office with n circuits.]

Offered load: ρ = Nλ/µ

Erlang-B call blocking probability:

  B(n, ρ) = (ρ^n / n!) / Σ_{i=0}^{n} (ρ^i / i!)
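The Erlang-B blocking probability above can be computed without factorials via the standard recurrence, a useful check when dimensioning the number of circuits:

```python
def erlang_b(circuits, offered_load):
    """Erlang-B blocking probability via the recurrence
    B(0) = 1;  B(n) = rho*B(n-1) / (n + rho*B(n-1)),
    which is numerically stable for large n (no factorials)."""
    b = 1.0
    for n in range(1, circuits + 1):
        b = offered_load * b / (n + offered_load * b)
    return b
```

For example, one circuit offered one Erlang blocks half of the calls, and adding circuits at fixed load strictly reduces blocking.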

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 15

Tequila Approach for IP QoS Delivery

[Architecture diagram: customer IP services are contracted as TEQUILA SLSs with Service Management; Service Management exchanges demand and availability with Traffic Engineering in terms of QoS classes; Monitoring and Policy Management interact with both, all operating over the Data Plane.]

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 16

Service Management

[Diagram: the "Management Plane" holds Traffic Forecast, Network Dimensioning and Service Subscription; the "Control Plane" holds Service Invocation plus SLS-aware Dynamic Route Management and Dynamic Resource Management; the "Data Plane" holds Traffic Conditioning, Routing and the PHBs. The customer interacts with the network provider through Service Subscription, Service Invocation and Data Transmission.]

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 17

Network Dimensioning

[Resource Provisioning Cycle: SLS Subscriptions feed Service Subscription and Traffic Forecast, which produce the Traffic Matrix; Network Dimensioning turns it into the Edge-to-Edge Resource Availability Matrix.]

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 18

Estimating the Traffic Matrix

[Diagram: SLS subscriptions pass through a service mapping algorithm and an aggregation algorithm, keyed by QoS class and ingress-egress pair; SLS monitoring supplies historical data to a forecast algorithm, and an over-subscription policy is applied. The output is the Traffic Matrix, with entries of the form [QoS class | ingress-egress | min-demand - max-demand].]
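The aggregation step can be sketched as summing subscribed rates per (QoS class, ingress, egress) key and applying an over-subscription policy to obtain the min/max demand entries. This is a minimal illustration; the field names and the single-factor policy are assumptions, not the TEQUILA algorithms:

```python
from collections import defaultdict

def build_traffic_matrix(subscriptions, oversub_factor=0.5):
    """subscriptions: iterable of (qos_class, ingress, egress, rate_mbps).
    min-demand assumes only a fraction of subscribers is active at once
    (over-subscription policy); max-demand is the full subscribed rate."""
    totals = defaultdict(float)
    for qos, ingress, egress, rate in subscriptions:
        totals[(qos, ingress, egress)] += rate
    return {key: (oversub_factor * total, total)
            for key, total in totals.items()}
```

Each resulting entry matches the slide's [QoS class | ingress-egress | min-demand - max-demand] form.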

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 19

Generating the Resource Availability Matrix

[Diagram: Network Dimensioning runs a network optimisation algorithm over the Traffic Matrix to produce PHB parameters and QoS routes; an availability algorithm then derives the Resource Availability Matrix, giving the minimum (Rmin) and maximum (Rmax) available resources per edge-to-edge pair.]

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 20

Two-level Admission Control

Goals: maximise resource usage, but do not overwhelm the network.

• Subscription level: SLS subscription requests are negotiated via SrNP against the Resource Availability Matrix (Rmin-Rmax) and local information on anticipated demand, controlling the subscriptions (future offered load).

• Invocation level: SLS invocations are admitted against local information on the admitted load over time; traffic management actions regulate the actual offered load.

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 21

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 22

Multiplexing Multimedia in a VLL

[Diagram: multimedia clients and a multimedia server sit behind CPEs connected through IntServ/DiffServ edge routers over an IP network. A VPN Manager negotiates a Virtual LL with the IP NMS over the {SLS} interface using the SrNP protocol; the NMS configures the routers via CLI, SNMP or COPS. RSVP Path and Resv messages run between the endpoints.]

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 23

Multiplexing Multimedia in a VLL

[Same diagram as the previous slide, with the IntServ/DiffServ edge router annotated "I perform per-flow admission control" and the management plane (Traffic Forecast, Service Subscription, Network Dimensioning) shown.]

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 24

Service Negotiation Protocol (SrNP)

• Client-server based
• Form-fill oriented
• Messaging is content-independent
• Protocol stacks: SrNP directly over TCP/IP, or SrNP over XML over HTTP/SMTP/IIOP over TCP/IP

[Message exchange between client and server: SessionInit, Proposal, Revision, ProposalOnHold, AcceptToHold, Accept, AgreedProposal.]

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 25

Connecting Trunking Gateways

[Diagram: PSTN/ISDN networks connect through Trunking Gateways to a DiffServ IP core. A Signaling Gateway carries SS7 call signaling & control (ISUP over Sigtran) to a Media Gateway Controller, which drives the Trunking Gateway via Megaco and negotiates {SLA}s with the IP NMS via SrNP; the NMS configures the DiffServ edge routers (CLI, SNMP, COPS) to establish a DiffServ Virtual Wire.]

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 26

Connecting Trunking Gateways

[Same diagram as the previous slide, annotated: the Media Gateway Controller performs per-flow admission control, and the IP NMS performs VPN admission control.]

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 27

Conclusions

• Clear separation of service & resource management
  – service system: only an edge-to-edge view on the network
  – resource system: only QoS-class aware (no SLS-awareness)

• Two-level admission control
  – long-term: IP aggregates, based on the resource provisioning cycle
  – short-term: flows, based on the long-term guidelines

[Summary diagram: Service Management (SLS subscription and invocation, Traffic Forecast) and Resource Management (Network Dimensioning) linked via SLSs and QoS classes.]

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 28

Prof. George Pavlou
Centre for Communication Systems Research
University of Surrey, UK
[email protected]

TEQUILA Consortium

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 29

• Internet: the global multi-service network
• Need for scalable Quality of Service (QoS) solutions
• Differentiated Services (DiffServ)
  – Classify, mark and police at the edges
  – Specified "per-hop behaviours" (PHBs) applied to traffic aggregates
  – Scalability compared to per-flow reservation approaches
• Traffic Engineering
  – Control the manner in which traffic is mapped to and treated by the network, to achieve specific performance objectives
  – Of paramount importance in DiffServ, since there are no explicit resource reservations for flows within the network
• Key problem: how to traffic-engineer a DiffServ domain to meet edge-to-edge QoS requirements as dictated by contracted SLSs

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 30

Service Management

[Architecture diagram: Customer/User SLSs enter SLS-aware Service Management; Traffic Engineering below it is QoS-class aware; Monitoring and Policy Management span both, over the Data Plane.]

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 31

• Traffic Trunk (TT)
  – An ingress and a set of egresses
  – pipe (1:1) or hose (1:N) model
  – The hose model is the general case: it results in a tree with a logically associated "tree bandwidth"

[Figure: example topologies contrasting two pipe TTs of 20 Mbps each with a single hose TT aggregating 40 Mbps over a shared tree.]

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 32

[TE architecture diagram: service subscriptions drive Network Planning, Traffic Forecast and Network Dimensioning (time-dependent TE: off-line routing and resource (PHB) requirements); service invocations drive Dynamic Route Management and Dynamic Resource Management (state-dependent TE: dynamic routing and resource (PHB) management). SLS Subscription, SLS Invocation and PHB Enforcement realise the "edge to edge" configuration down to Routing.]

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 33

• Traffic Forecast is the "glue" between the customer-oriented (SM) and resource-oriented (TE) parts

• Estimates expected traffic demand (Traffic Matrix)
  – Derived from service contracts (SLSs) and other information (SLS usage, business policies, forecast/projection)
  – A traffic matrix for every provisioning cycle

• Aggregation required for scalability
  – Maximum N·2^(N-1)·Q entries for N edge nodes and Q QoS classes
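The N·2^(N-1)·Q bound follows from the hose model: each of the N edge nodes can anchor a hose towards any of the 2^(N-1) subsets of the remaining edges, per QoS class. A one-line sketch makes the growth concrete:

```python
def max_matrix_entries(n_edges, n_qos_classes):
    """Upper bound on traffic-matrix entries with hose-model aggregation:
    each of the N edge nodes may anchor a hose towards any of the
    2^(N-1) subsets of the other edges, for each of the Q QoS classes."""
    return n_edges * 2 ** (n_edges - 1) * n_qos_classes
```

Even modest domains make the exponential factor visible, which is why aggregation is needed for scalability.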

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 34

• MPLS-based
  – Explicit routing through LSPs, with alternative LSPs per source-destination combination for load balancing
  – LSPs are not explicitly associated with bandwidth

• IP-based
  – Hop-by-hop routing, OSPF-based, with Equal Cost Multi-Path (ECMP) for load balancing
  – Assignment of link weights

• Loss and delay constraints are translated to route hop-count constraints (PHBs are associated with delay and loss bounds)

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 35

Network Dimensioning (ND) – centralised – maps the expected traffic (Traffic Matrix) onto network resources, producing TT paths/trees with their associated bandwidth.

Dynamic Route Management (DRtM) – distributed, one per edge router – sets up LSPs and the load balancing over them; raises alarms to ND.

Dynamic Resource Management (DRsM) – distributed, one per router – manages dynamic link partitioning (PHB configuration) to meet the local performance targets and expected load set by ND; raises alarms to ND.

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 36

IP TE Management (IPTEM) (ND + DRtM) – centralised – maps the expected traffic onto network resources and determines the link weights (link costs per PHB).

Dynamic Resource Management (DRsM) – distributed, one per router – manages dynamic link partitioning (PHB configuration) to meet local performance targets set by ND (PHB performance targets and expected load); raises alarms.

Routing (OSPF-based) – distributed, one per router – populates the routing and forwarding tables according to the link weights and link status; sends notifications.

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 37

High-level policies may result in the introduction of related policies at lower layers, mirroring the system's hierarchy:

[Diagram: a Policy Management Tool (high-level specification, conflict detection, refinement) stores policies in a Policy Storing Service (O-O format, LDAP schema); Policy Consumers attached to Network Dimensioning, Dynamic Route Management and Dynamic Resource Management execute the policy scripts (ND policies, DRtM/DRsM policies).]

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 38

[Diagram: a Policy Consumer (repository client, script generator, TCL policy interpreter, remote triggering component) retrieves policies from the Policy Storing Service and acts on managed objects (MOs) in the policy-based components (C++): ND, DRtM, DRsM.]

Policies are seen as:
• a means to achieve programmability in the Tequila system
• logic (scripts) downloaded, interpreted and executed on the fly

Target: a TEQUILA system able to sustain requirement changes and evolve through policies, without changing its initial "hard-wired" logic.

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 39

• Setting initial ND parameters, e.g. maximum number of alternative trees, cost function, dimensioning period

• Influencing capacity allocation and LSP creation, e.g. allocation of network/link bandwidth per class of service, explicit setup of LSPs, treatment of spare/over-provisioned capacity

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 40

• Satisfy the QoS requirements of traffic trunks
  – Avoid overloading parts of the network (I)
  – Minimise overall network utilisation (II)

• Assume a cost function f(x) per PHB, depending on the current load, and F(x) the derived cost function per link

  – minimise  max_{l∈E} F(l)  satisfies (I)

  – minimise  Σ_{l∈E} F(l)  satisfies (II)

• Combined objective function

  – minimise  Σ_{l∈E} F^n(l)  satisfies both: (II) for n=1 and (I) for n→∞

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 41

• Solution based on an iterative procedure
  – Start with an initial TT allocation
  – Improve gradually on this solution by moving capacity to "better" paths
  – At each step, a constrained shortest path algorithm is used for pipe trunks and a constrained Steiner tree algorithm for hose trunks
  – The algorithms are constrained due to the delay & loss bounds, translated to a hop-count constraint

• The iterative algorithm works well and converges quickly for pipe trunks; results are presented next

• The constrained Steiner tree algorithm also works well; in the next step we will be integrating the two
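A hop-count-constrained shortest path can be sketched as Bellman-Ford limited to H relaxation rounds; this is an illustrative stand-in for the pipe-trunk step (the graph representation and function name are my own, not the project's code):

```python
def constrained_shortest_path_cost(edges, src, dst, max_hops):
    """edges: list of (u, v, cost) directed links.
    Returns the minimum cost of a path from src to dst using at most
    max_hops links, or None if no such path exists. Each Bellman-Ford
    round extends the best-known paths by one hop."""
    INF = float("inf")
    dist = {src: 0.0}
    for _ in range(max_hops):
        nxt = dict(dist)
        for u, v, cost in edges:
            if u in dist and dist[u] + cost < nxt.get(v, INF):
                nxt[v] = dist[u] + cost
        dist = nxt
    return dist.get(dst)
```

Tightening the hop bound (the translated delay/loss constraint) can force the search onto shorter but more expensive routes.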

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 42

• 10 node network – 4 edge nodes { 0, 2, 4, 5 }

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 43

Link load distribution, lightly (40%) loaded network

[Chart: load (%) per link, links 1-34, comparing step 0 (SP) with the final step.]

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 44

Link load distribution, heavily (70%) loaded network

[Chart: load (%) per link, links 1-34, comparing step 0 (SPF) with the final step.]

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 45

[Chart: standard deviation of link load vs iteration number (1-7), for heavy (70%) and light (40%) load; cost function exponent n = 2.]

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 46

minimise  Σ_{l∈E} F^n(l)

[Charts: maximum link load and average link load vs cost function exponent n (1-6), for heavy (70%) and light (40%) load.]

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 47

Steiner-tree Algorithm Evaluation

[Surface charts: tree weight and running time (in msecs) as functions of the number of nodes (10-200) and the percentage of Steiner points (30-80%).]

Since this experimentation, we have improved the above results by approximately 50% (tree weight/cost) and 20% (running time).

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 48

• Service-driven traffic engineering
• Two-level TE approach
  – long-term --> guidelines for network operation
  – short-term --> handling traffic fluctuations
• Policy-driven TE operation
  – graceful evolution to requirement changes
• Interim results prove the feasibility of the approach

[Summary diagram: Service Management (subscription, invocation, Traffic Forecast) feeding TE (Network Dimensioning, DRtM, DRsM).]

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 49

• Network Dimensioning
  – Input: network topology, traffic matrix, policies, alarms
  – Objective: optimisation problem
    • Maintain low link cost while satisfying QoS objectives
  – Output in the form of configuration directives:
    • Explicitly routed paths (MPLS-based TE) – via DRtM
    • Values for the link cost metrics (IP-based TE) – directly configuring routers through combined DRtM and ND (IPTEM)
    • Configuration of PHB partitioning per link – via DRsM

• Dynamic Route Management (DRtM)
  – Multi-path load distribution at ingress nodes for load balancing
  – Explicit component in MPLS, implicit in OSPF/ECMP

• Dynamic Resource Management (DRsM)
  – Re-configuration of PHBs within allowed bounds, i.e. dynamic link (re-)partitioning among service classes
  – Same functionality in both MPLS and IP-based TE

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 50

• Information Model based on PCIM and PCIMe, with a corresponding LDAP schema

• High-level policy language syntax:
  [Policy ID] [Group ID] [time period condition] [if {condition [and] [or]}] then {action [and]}

• Example:
  PolicyRule1: From Time=0800 to Time=1800 If (OA == EF) then allocateNwBw > 30%

  Description: "At least 30% of network bandwidth should be available for EF traffic between 08:00 and 18:00"
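The example rule's evaluation can be sketched in a few lines. The representation below is invented purely for illustration; it is not the PCIM/LDAP encoding used by the project:

```python
def rule_applies(now_hhmm, ordered_aggregate):
    """PolicyRule1: From Time=0800 to Time=1800 If (OA == EF)
    then allocateNwBw > 30%.  Time is an HHMM integer."""
    return 800 <= now_hhmm <= 1800 and ordered_aggregate == "EF"

def ef_bandwidth_floor(now_hhmm, ordered_aggregate, network_bw_mbps):
    # Action part: reserve at least 30% of network bandwidth for EF.
    if rule_applies(now_hhmm, ordered_aggregate):
        return 0.30 * network_bw_mbps
    return 0.0
```

A policy consumer would evaluate the condition on each trigger and hand the action to the dimensioning component.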

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 51

• Explicit component only in MPLS-based TE
  – One instance in every edge Label Switch Router (LSR); uses the monitoring facilities within that LSR

• Static part gets as input from ND the TT trees (and paths) with their logically associated bandwidth

• Creates LSPs to support paths and trees
  – For trees, one LSP per egress leaf is created (unique path)
  – LSPs are created via LDP from the ingress node with the full explicit path
  – No bandwidth is associated with LSPs, but the edge LSR knows the bandwidth logically associated with each LSP for a path, or with the set of LSPs realising a tree

• Maps "address spaces" from collected prior statistics (and SLS information) to LSPs according to their bandwidth
  – Configures the LSR forwarding tables according to this assignment

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 52

• Re-maps address spaces to alternative LSPs for load balancing according to traffic fluctuations
  – A difficult issue, since existing micro-flows should be left to terminate

• Sends TT over-utilisation alarms to ND

• Sends alarms to SLS Invocation when traffic capacity to particular egress nodes is saturated
  – These act as "congestion notifications" to admission control

• Learns about critical PHB QoS performance at nodes crossed by its LSPs, in order to take proactive measures

• Learns about critical end-to-end LSP QoS performance, in order to take reactive measures
  – Knowledge about PHB and LSP performance is obtained from monitoring
  – Critical performance means persistent conditions, not an instantaneous inability to deliver the specified QoS

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 53

• Common in both MPLS and IP-based TE
  – One instance in every router; uses the monitoring facilities within that router

• Static part configures PHB parameters
  – queue type: drop-tail, RED, etc.
  – queue parameters: buffer size, precedence/drop levels
  – scheduling parameters: RR, WRR, PRI, etc.

• Also configures performance targets set by ND per PHB
  – bandwidth, max loss probability, max delay
  – allowed bounds for dynamic (state-dependent) operation

• Dynamic part manages PHB resources in real time
  – re-distributes (within allowed bounds) bandwidth, buffer and scheduling resources to meet dynamic traffic fluctuations
  – sends over-utilisation alarms to ND

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 54

Richard Egan, Hamid Asgari
Steven Van den Berghe

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 55

Presentation Outline

• Role of Monitoring
• Architecture
• Design
• Results
• Scalability Features
• Outcomes
• Conclusions

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 56

Role of Monitoring - Current

[Diagram: routers feed Network Monitoring, which maintains the Traffic Matrix used by Traffic Engineering.]

TE methodology:
• establish LSPs
• monitor LSP usage
• create Traffic Matrix
• establish LSPs with BW constraint
• update TM via Monitoring

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 57

Role of Monitoring - TEQUILA

[Diagram: routers feed Network Monitoring and SLS Monitoring; SLS Monitoring informs Traffic Forecast, which links SLS Management (holding the SLSs) with Traffic Engineering.]

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 58

Network Monitoring

• Assists Dynamic TE to adapt to congestion / under-utilisation

• Two primary components
  – Node Monitor
    • contains the active/passive measurement agents
    • performs all edge-to-edge measurements
  – Network Monitor
    • builds a physical & logical network view
    • derives path/network measurements from hop-by-hop results

• Relevant metrics
  – OneWayLoss, OneWayDelay
  – PHB Bandwidth Usage, PHB Packet Discard
  – Throughput

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 59

Monitoring Architecture

[Architecture diagram: Node Monitors, each containing an Active Monitoring Agent (AMA) and a Passive Monitoring Agent (PMA), run on the ingress/egress (I/E) and core (C) routers and report to the Network Monitor; an SLS Monitor, a Monitoring Repository and a Monitoring GUI sit on top.]

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 60

SLS Monitoring

• In-service verification of customer services
• Provides SLS usage information to Traffic Forecasting
• SLS Monitoring is a client of Network Monitoring
• SLS Monitoring is a centralised component
• Relevant metrics
  – OneWayLoss
  – OneWayDelay
  – Throughput
  – Offered Load

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 61

SLS Monitoring

[Diagram: within the SLS Monitor, a Data Collect/Aggregate component is configured by SLS Invocation and fed by I/E node monitoring, network monitoring and the Monitoring Repository; an SLS Manager/Contract Checker and a Report Generator produce management and customer reports, drawing on the SLS Repository.]

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 62

Design

• Common interface for Node & Network Monitoring
  – Register
  – Configure
  – Execute
  – Report Results

• A Monitor is created to measure a particular metric (e.g. throughput at a particular node).

• A MonitorJob is created to specify conditions (e.g. threshold crossings) under which notification events are generated.

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 63

Creating a Node Monitor

[Sequence: a client obtains a NodeMonitorSourceFactory from the MonitoringFactory, gets a NodeMonitorSource, adds a NodeMonitor (AddMonitor[XMLSpec]) and a MonitorJob[XMLSpec]; results pass through a filter onto an event channel.]

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 64

Creating an Active Monitor

[Diagram: the Active Monitoring Agent registers with the NodeMonitor, obtains a port, and creates a SyntheticSource and a SyntheticReporter; MonitorHolders collect the results and forward them.]

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 65

Configuring Node Monitoring

• State of the art:
  – Passive monitoring configured through SNMP, (emerging) COPS feedback reports, proprietary polling
  – Active monitoring
    • through signaling (One-Way Delay Protocol)
    • "top-down": through SNMP (emerging from the IETF RMON WG)

• Tequila approach: non-signaling
  – Same configuration methodology for active and passive monitoring
  – Extra features of the OWDP protocol (e.g. inter-domain related) are not a requirement for intra-domain use
  – Same configuration methodology as other low-level Tequila components (e.g. tunnel establishment and PHB configuration)

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 66

Basic Entities

[Diagram: a SyntheticSource (sender) is installed and connected to a SyntheticReporter (receiver); PassiveReporters pull statistics from NE objects (PHB, LSP). A connection is determined by the IP addresses and the destination UDP address.]

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 67

Test Set-up

[Two-node test bed driven by a SmartBits traffic generator:
• generates traffic between 1 and 50 Mb/s (steps of 10 Mb/s)
• no congestion, just functional observation
• test time: 60 s per step]

Traffic Set-up

[Traffic divided into three streams: one EF, two BE.]

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 69

Active Measurement

[Diagram: a SyntheticSource (SSrc) injects probes at 2 packets/sec into the BE traffic alongside the EF and BE streams; a SyntheticReporter (SRep) computes the average delay over 2-second intervals.]
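The probe process can be reproduced in a few lines: synthetic probes at an average rate of 2 packets/s with exponential inter-arrival times, counted per 2-second readout interval (the rate and readout come from the slide; the simulation itself is my own illustration):

```python
import random

def samples_per_readout(rate_pps=2.0, readout_s=2.0, n_intervals=1000, seed=1):
    """Simulate Poisson probe arrivals and count samples per readout
    interval; the expected count per interval is rate_pps * readout_s."""
    rng = random.Random(seed)
    counts = [0] * n_intervals
    t = rng.expovariate(rate_pps)
    while t < n_intervals * readout_s:
        counts[int(t // readout_s)] += 1
        t += rng.expovariate(rate_pps)
    return counts

counts = samples_per_readout()
mean = sum(counts) / len(counts)   # expected around 4 samples per interval
```

The resulting histogram of counts per interval has the Poisson shape shown on the following slide.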

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 70

Active Measurement

[Histogram: frequency of the number of samples per readout interval (0-9), consistent with the exponential inter-arrival times of the probes.]

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 71

Active Measurement

[Chart: measured one-way delay (µs) over time, comparing the Monitor's results against the SmartBits median and SmartBits minimum.]

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 72

Passive Measurement

[Diagram: PassiveReporters (PRep) measure the load per PHB on the EF and BE streams, with a 2-second readout. Queues: 2 x FIFO; scheduler: priority.]

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 73

Passive Measurement

[Chart: bytes per PHB (BE and EF) over time; final ratio BE 66.63%, EF 33.37%.]

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 74

Scalability (1)

• SLS Monitor uses network aggregate measurements
  – Per-LSP monitoring is more scalable than per-SLS monitoring
  – Combined with per-SLS ingress/egress measurements
    • throughput / offered load

• Use hop-by-hop measurements
  – Reduce the volume of synthetic traffic
  – Aggregate hop measurements to get the edge-to-edge (E2E) measurement

[Diagram: active measurements (AM) at IP route/LSP and SLS scope run edge-to-edge; passive measurements (PM) at (link, PHB) scope run hop-by-hop; Node Monitors with AM and PM agents sit on the ingress/egress (I/E) and core (R) routers.]
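Composing E2E metrics from per-hop measurements can be sketched directly: one-way delays add along the path, while loss probabilities combine multiplicatively over the surviving traffic (a standard composition, shown here as an illustrative helper):

```python
def e2e_from_hops(hop_delays_ms, hop_loss_probs):
    """Aggregate per-hop measurements into an edge-to-edge estimate:
    E2E delay is the sum of hop delays; E2E loss is 1 minus the product
    of the per-hop survival probabilities."""
    delay = sum(hop_delays_ms)
    survive = 1.0
    for p in hop_loss_probs:
        survive *= 1.0 - p
    return delay, 1.0 - survive
```

This is what lets one set of hop-by-hop probes serve many edge-to-edge paths, reducing synthetic traffic.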

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 75

Scalability (2)

• Distributed probes
  – Event notification
  – Client interface enables shared use of probes
  – (Near) real-time response time

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 76

Observations

• Intra-domain only
  – Sufficient for most business services (VPN, VLL…)
  – OWDP control signalling not needed

• Access network not addressed
  – a limitation, as the AN adds delay and loss
  – out of scope for TEQUILA

• Node Monitor agents not integrated in routers
• Burden on ingress/egress routers

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 77

Publications

• A Monitoring and Measurement Architecture for Traffic Engineered IP Networks
  – IST 2001, Tehran, Sept 1-3, 2001

• A Framework for Internet Traffic Engineering Measurement
  – <draft-ietf-tewg-measure-01.txt>
  – Official TE WG document
  – Last version before "Final Call" for Informational RFC

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 78

Conclusions

• Solution for
  – Dynamic TE
  – SLS Monitoring

• Design features
  – Common interface for Node & Network Monitoring
  – Common configuration approach for Active & Passive Agents
  – Separation of Monitor & MonitorJob

• Scalability features
  – Flexible SLS Monitoring based on E2E or hop-by-hop measurements
  – Network Monitoring based on event notification

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 79

QoS routing over the Internet: a BGP4-based approach

Christian Jacquenet
France Telecom R&D

[email protected]

Dresden Nov. 21, 2001

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 80

QoS routing over the Internet

• Agenda:
  – Motivation and requirements
  – Proposal
  – Simulation work and preliminary results
  – Issues
  – Ongoing work
  – Conclusion

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 81

Motivation and requirements

• QoS policy enforcement is currently restricted to the scope of an AS

• QoS information needs to be exchanged between domains
  – Existing BGP4 attributes can help in providing some kind of « QoS indication »
    • e.g. the « PREFER_ME » and « AVOID_ME » global values of the COMMUNITIES attribute
  – But a finer granularity would be useful

• Allow for a smooth migration
  – Gradual deployment of QoS route computation over the Internet

• Keep the approach scalable

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 82

Proposal

• Use the BGP4 protocol for conveying QoS-related information between domains to:
  – Enable QoS-based route selection processes
  – Enhance peering agreements for the deployment of value-added IP services across domains
  – Contribute to the enforcement of end-to-end QoS policies

• Introduce a new optional transitive attribute:
  – The QOS_NLRI attribute

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 83

The QOS_NLRI attribute

• Advertise « QoS routes », i.e. routes that can be depicted with specific QoS information
  – e.g. "route to network N1 experiences a 100 ms one-way transit delay"

• Provide QoS information associated with the destination prefixes
  – e.g. "EF-marked datagrams may use this route to network N2"

QOS_NLRI attribute layout:

+---------------------------------------------------+
| QoS Information Code (1 octet)                    |
+---------------------------------------------------+
| QoS Information Sub-code (1 octet)                |
+---------------------------------------------------+
| QoS Information Value (2 octets)                  |
+---------------------------------------------------+
| QoS Information Origin (1 octet)                  |
+---------------------------------------------------+
| Address Family Identifier (2 octets)              |
+---------------------------------------------------+
| Subsequent Address Family Identifier (1 octet)    |
+---------------------------------------------------+
| Network Address of Next Hop (4 octets)            |
+---------------------------------------------------+
| Network Layer Reachability Information (variable) |
+---------------------------------------------------+
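The attribute layout above can be packed at the byte level using the field sizes from the figure (assuming an IPv4 next hop and BGP-style NLRI encoding with a prefix-length octet). This is a sketch of the draft's layout for illustration, not a validated BGP implementation:

```python
import socket
import struct

def pack_qos_nlri(code, subcode, value, origin, afi, safi,
                  next_hop, prefix, prefix_len):
    """Pack the QOS_NLRI attribute value: the fixed fields from the
    figure (1+1+2+1+2+1+4 = 12 octets, network byte order), followed by
    the NLRI as a prefix-length octet plus the needed prefix bytes."""
    fixed = struct.pack("!BBHBHB4s",
                        code, subcode, value, origin, afi, safi,
                        socket.inet_aton(next_hop))
    n_bytes = (prefix_len + 7) // 8
    nlri = bytes([prefix_len]) + socket.inet_aton(prefix)[:n_bytes]
    return fixed + nlri
```

For a /24 route such as 192.0.20.0 from the example, the NLRI portion occupies four octets (length byte plus three prefix bytes).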

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 84

Network modeling

• The network model: a mixed environment
  – QOS_NLRI-enabled and "classical" BGP peers
  – CPE/PE/P node taxonomy
  – Multiple domains

[Topology sketch; legend: BGP peer, access node.]

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 85

Example of route selection

[Topology with link delays of 1-20 simulation units. A route to 192.0.20.0 is originated with Delay 20 ms and the Partial bit unset; as the advertisement propagates, partly via a route reflector, accumulated delays of 23, 35, 36, 37 and 40 are advertised with the Partial bit set, and peers select routes on the advertised delay.]
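The selection rule implied by the example can be sketched as follows; the dictionary representation and the tie-breaking order (complete QoS information first, then lowest accumulated delay) are my own illustration of the figure, not the draft's normative rule:

```python
def select_route(candidates):
    """Among candidate advertisements for the same prefix, prefer
    complete QoS information (Partial bit unset), then the lowest
    accumulated one-way delay. candidates: list of dicts with
    'delay_ms' and 'partial' keys (invented representation)."""
    return min(candidates, key=lambda r: (r["partial"], r["delay_ms"]))
```

With several partial advertisements, the lowest accumulated delay wins; an advertisement with the Partial bit unset is preferred over a lower-delay partial one under this ordering.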

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 86

Example of route selection

[Same route-selection diagram as the previous slide.]

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 87

Simulated network

[Topology: the multi-domain simulated network; most link delays are 1 simulation unit, with a few links of delay 5 and 6.]

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 88

Simulation parameters

• Percentage of BGP speakers that are QOS_NLRI capable
  – 0%: reference network (as a collection of autonomous systems)
  – 0% < x% < 100%: x% of the BGP peers are QOS_NLRI-enabled

• Delay requirements for traffic between a source and a destination
  – The strongest (lowest-delay) requirements have less chance of being satisfied

• Transit delays on the links
  – The higher the delay on the links, the lower the percentage of serviced SLSs

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 89

Preliminary simulation results

• Satisfying delay requirements:

[Chart: percentage of satisfied SLSs vs delay requirement (simulation units, 0-25), for no QOS_NLRI capability, 50% QOS_NLRI-capable peers, and 100% QOS_NLRI-capable peers.]

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 90

Issues

• Scalability:
  – How frequently should UPDATE messages be sent, according to changes of the « bandwidth conditions »?
  – Aggregation capabilities:
    • How to provide QoS indication with aggregated routes, and what would be the aggregation criteria?

• Stability:
  – PHB Id. and delay-related information should not yield flapping conditions
  – Dynamics of bandwidth information remains an issue

• Confidentiality of QoS information:
  – Already made publicly available by most ISPs
    • By means of looking glasses, for example

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 91

Ongoing work

• Convey additional QoS information
• Update the draft for the next IETF meeting
  – See the current draft-jacquenet-qos-nlri-03.txt/pdf on the TEQUILA web site
• Ongoing prototype development
  – Based upon Zebra's code (www.zebra.org)
• Additional simulation results by Q1 2002
  – Submit an applicability draft to the IETF

Premium IP – Cluster Meeting & Review21-22 Nov, 2001, Dresden 92

Conclusion

• On the approach:
  – NO modification of the BGP4 protocol

• On the simulation:
  – Preliminary results are encouraging

• On the remaining issues:
  – Technical feasibility has been demonstrated
  – Further simulation is planned to investigate the scalability aspects