

Project Number : IST-1999-11253-TEQUILA

Project Title : Traffic Engineering for Quality of Service in the Internet, at Large Scale

D3.4: Final System Evaluation

Part A: Tests and Results

CEC Deliverable Nr : 304/Algonet/b1

Deliverable Type: Report

Deliverable Nature: Public

Contractual date: October 30, 2002

Actual date: October 30, 2002

Editor : Takis Damilatis

Contributors :

Alcatel: Danny Goderis, Geoffry Crystallo
Algosystems: P. Georgatsos, T. Damilatis, M. Megalooikonomou
FT-R&D: C. Jacquenet, M. Boucadair
IMEC: S. Van Den Berghe, Pim Van Heuven
NTUA: Eleni Mykoniati, D. Giannakopoulos, M. Maurogiorgis
Global Crossing (Thales): H. Asgari, M. Irons, R. Egan
UCL: D. Griffin, M. Feng, J. Griem
UniS: P. Trimintzios, P. Flegkas, G. Pavlou

Workpackage(s) : WP3

Abstract : This Deliverable includes the results and conclusions obtained by the integration, component performance, and system performance tests carried out in testbeds and simulators.

Keyword List : DiffServ, Traffic Engineering, Monitoring, Measurement, Router, Service Level Specification, Policy Management, Test Suite, Test Purpose, Experimentation.


Version: Final

Date: October 30, 2002

Distribution: WP3

Copyright by the TEQUILA Consortium

The TEQUILA Consortium consists of:

Alcatel, Coordinator, Belgium
Algonet S.A., Principal Contractor, Greece
FT-CNET, Principal Contractor, France
IMEC, Principal Contractor, Belgium
NTUA, Principal Contractor, Greece
Global Crossing (Thales Research), Principal Contractor, United Kingdom
UCL, Principal Contractor, United Kingdom
TERENA, Assistant Contractor, The Netherlands
UniS, Principal Contractor, United Kingdom


Executive Summary

The TEQUILA project studied, specified and validated, through prototype implementation, a complete solution for providing QoS-based services in MPLS/Diffserv IP networks. Starting from the TEQUILA SLS template, a standard template for describing QoS-based IP connectivity services, the solution combines service management, traffic engineering and monitoring functions, all interworking in a hierarchical architecture, at different timescales and levels of abstraction. Distinguishing between service subscriptions and invocations, the service management functions cater for the establishment and admission of SLS-based service requests so as not to overload the network. A two-level traffic engineering approach is adopted. First, offline traffic engineering functions dimension the network appropriately, based on estimates of the anticipated traffic demand. Subsequently, dynamic traffic engineering functions optimise resource utilisation while meeting the QoS guarantees of the transported traffic, using the network dimensioning output as guidelines. The impact of different policy settings on the service management and traffic engineering functions has also been investigated, and an appropriate architecture for conveying policies has been designed.
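To make the two-level interplay concrete, the following minimal sketch shows a dynamic resource manager nudging per-PHB bandwidth allocations towards measured load while staying within [min, max] guidelines produced by offline dimensioning. This is purely illustrative: the function names, the proportional-share heuristic and the 20% headroom are assumptions for exposition, not the project's actual algorithms (which are specified in Part B).

    # Illustrative sketch only: dynamic TE adjusts allocations within
    # offline dimensioning guidelines. Names and values are assumptions.

    def dimension(link_capacity, demands):
        """Offline step: derive per-PHB [min, max] bandwidth guidelines
        from forecast demands (proportional share with +/-20% headroom)."""
        total = sum(demands.values())
        guidelines = {}
        for phb, d in demands.items():
            share = link_capacity * d / total
            guidelines[phb] = (0.8 * share, min(1.2 * share, link_capacity))
        return guidelines

    def adapt(alloc, measured_load, guidelines, step=0.1):
        """Dynamic step: move each PHB's allocation towards its measured
        load, but never outside the dimensioning guidelines."""
        new_alloc = {}
        for phb, (lo, hi) in guidelines.items():
            target = alloc[phb] + step * (measured_load[phb] - alloc[phb])
            new_alloc[phb] = max(lo, min(hi, target))
        return new_alloc

    if __name__ == "__main__":
        guidelines = dimension(100.0, {"EF": 20, "AF1": 50, "BE": 30})
        alloc = {phb: lo for phb, (lo, hi) in guidelines.items()}
        for load in [{"EF": 25, "AF1": 40, "BE": 35}] * 5:
            alloc = adapt(alloc, load, guidelines)
        print(alloc)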

This Deliverable is the final Deliverable of the TEQUILA project, presenting the main findings of the project. It consists of three parts:

• Part-A presents the tests undertaken and the results obtained. Conclusions regarding the performance of the specified functionality, drawn from the experimentation results and design experience, are also presented.

• Part-B presents the functional architecture of the TEQUILA system, and the algorithms of the identified functional components. This part is a refinement of Deliverable D1.4.

• Part-C presents a theoretical analysis of system scalability and stability.

Experimentation was carried out on physical testbeds and simulated networks (described in Deliverable D3.1), and covered both functional validation and performance assessment, the latter in terms of cost/benefit, scalability, stability and usability.

The results of the tests undertaken demonstrate the validity and scalability of the TEQUILA functional model and the proposed algorithms/schemes. Data requirements and response times of the various functions of the system grow linearly with the number of external entities influencing the behaviour of the system, and the measurements obtained follow the expectations of the theoretical scalability analysis. Finally, the results indicate that the specified service admission and traffic engineering functions improve network performance compared to ad hoc configurations, and that, wherever comparison was possible, they yield favourable performance relative to alternative schemes.


Table of Contents

1 INTRODUCTION .......... 10
1.1 TEQUILA PROJECT OBJECTIVES .......... 10
1.2 SCOPE AND ORGANISATION OF THE DELIVERABLE .......... 10
2 TESTING FRAMEWORK .......... 11
2.1 INTRODUCTION .......... 11
2.2 INTEGRATION TESTS .......... 11
2.3 ALGORITHM PERFORMANCE ASSESSMENT TESTS .......... 11
2.4 SYSTEM PERFORMANCE ASSESSMENT TESTS .......... 11
3 INTEGRATION TESTS AND RESULTS .......... 12
3.1 TEST-SUITES .......... 12
3.2 SLS MANAGEMENT INTEGRATION TESTS AND RESULTS .......... 12
3.3 MPLS-TE INTEGRATION TESTS AND RESULTS .......... 14
3.4 MONITORING INTEGRATION TESTS AND RESULTS .......... 15
3.5 IP-TE DYNAMIC PROVISIONING INTEGRATION TESTS AND RESULTS .......... 17
3.6 IP-TE OSPF ENHANCEMENTS INTEGRATION TESTS AND RESULTS .......... 18
3.7 POLICY MANAGEMENT INTEGRATION TESTS .......... 23
3.8 IFT-BASED MPLS/DIFFSERV INTEGRATION TESTS AND RESULTS .......... 24
3.8.1 IFT LSP set-up and traffic mapping with Linux tools .......... 25
3.8.2 Un-map traffic and tear down an IFT LSP with Linux tools .......... 28
3.9 NS INTEGRATION TESTS AND RESULTS .......... 29
4 ALGORITHM PERFORMANCE ASSESSMENT TESTS AND RESULTS .......... 31
4.1 TEST-SUITES .......... 31
4.2 SLS MANAGEMENT ALGORITHM PERFORMANCE TESTS AND RESULTS .......... 31
4.2.1 Performance Assessment of the Traffic Aggregation Algorithm .......... 31
4.3 MPLS-TE ALGORITHM PERFORMANCE TESTS AND RESULTS .......... 35
4.3.1 Theoretical Background: Constrained Steiner Tree Algorithms .......... 37
4.3.2 Steiner Tree Performance Testing: Simple Examples .......... 41
4.3.3 Constrained Steiner Tree Algorithms: Simulation Environment .......... 48
4.3.4 Steiner Tree Performance Assessment: Final Tree's Total Cost .......... 51
4.3.5 Steiner Tree Performance Assessment: Execution Times .......... 58
4.3.6 Steiner Tree Performance Assessment: Maximum End-to-End Delay .......... 66
4.3.7 Steiner Tree Assessment: Conclusions .......... 70
4.3.8 Network Dimensioning Experimentation: Simulation Environment .......... 71
4.3.9 Network Dimensioning Performance Assessment: Link Load Distributions .......... 74
4.3.10 Network Dimensioning Performance Assessment: Scalability .......... 74
4.3.11 Network Dimensioning Performance Assessment: Sensitivity to the Cost Function .......... 76
4.3.12 Network Dimensioning Performance Assessment: Average Delay .......... 77
4.3.13 Network Dimensioning Performance Assessment: Conclusions .......... 78
4.3.14 Dynamic Resource Management Experimentation: Simulation Environment .......... 79
4.3.15 Examples of DRsM Operation .......... 81
4.3.16 DRsM Policy Parameter Experiments .......... 84
4.3.17 DRsM Comparative Experiments: DRsM vs. Static Bandwidth Allocation .......... 91
5 SYSTEM LEVEL PERFORMANCE ASSESSMENT TESTS AND RESULTS .......... 95
5.1 TEST-SUITES .......... 95
5.2 SLS MANAGEMENT PERFORMANCE TESTS AND RESULTS .......... 95
5.2.1 Scalability of Subscription Management – SUITE3_1/NTUA/SLSM/SCAL/1 .......... 96
5.2.2 Benefit/Cost Experimental Set-up .......... 104
5.2.3 Benefit/Cost of Invocation Admission Control – SUITE3_1/NTUA/SLSM/BC/1 .......... 105
5.2.4 Benefit/Cost of Subscription Admission Control – SUITE3_1/NTUA/SLSM/BC/2 .......... 125
5.3 MPLS-TE PERFORMANCE TESTS AND RESULTS .......... 134
5.3.1 Experimentation Set-up .......... 134
5.3.2 Assessing Network Configuration for QoS – SUITE3_2/GC/MPLSTE/BC/1 .......... 136
5.3.3 Comparing TEQUILA Traffic Engineering – SUITE3_2/GC/MPLSTE/BC/2 .......... 138
5.3.4 Assessing QoS with Minimum Demand-based PHB Configuration – SUITE3_2/GC/MPLSTE/BC/3 .......... 142
5.3.5 Assessing QoS with Availability-based PHB Configuration – SUITE3_2/GC/MPLSTE/BC/4 .......... 144
5.3.6 Hoses vs. Pipes – SUITE3_2/GC/MPLSTE/BC/5 .......... 148
5.4 MONITORING PERFORMANCE TESTS AND RESULTS .......... 151
5.4.1 Experimentation Set-up .......... 151
5.4.2 Delay/Loss Accuracy – SUITE3_2/GC/Mon/BC/1 .......... 152
5.4.3 Edge-to-edge vs. Hop-by-hop Accuracy – SUITE3_2/GC/Mon/BC/2 .......... 154
5.4.4 Benefits/Cost – SUITE3_2/GC/Mon/BC/3 and SUITE3_2/GC/Mon/BC/4 .......... 156
5.4.5 Scalability – SUITE3_2/GC/Mon/SCAL/1,2,3,4 .......... 158
5.4.6 Stability – SUITE3_2/GC/MON/STAB/1 .......... 159
5.5 IP-TE PERFORMANCE TESTS AND RESULTS .......... 160
5.5.1 COPS-based Configuration Results .......... 160
5.6 POLICY MANAGEMENT PERFORMANCE TESTS AND RESULTS .......... 161
5.6.1 Benefits/Cost – SUITE3_5/UniS/POL/BC/1 .......... 161
5.6.2 Scalability – SUITE3_5/UniS/POL/SCAL/1 .......... 162
5.6.3 Stability – SUITE3_5/UniS/POL/STAB/1 .......... 163
5.6.4 Usability – SUITE3_4/UniS/POL/USAB/1 .......... 163
5.7 IFT PERFORMANCE TESTS AND RESULTS .......... 164
5.7.1 ATM Interfaces Performance – Suite3_6/FTR&D/MPLSTE/IFT/PERF/00 .......... 165
5.7.2 Network Performance with Looped IFTs – Suite3_6/FTR&D/MPLSTE/IFT/PERF/01 .......... 166
5.7.3 Network Performance with Operational IFTs – Suite3_6/FTR&D/MPLSTE/IFT/PERF/02 .......... 168
5.7.4 Network Performance with Congestion – Suite3_6/FTR&D/MPLSTE/IFT/PERF/03 .......... 169
5.7.5 Performance of Linux Data Path – Suite3_6/FTR&D/MPLSTE/IFT/COMP/01 .......... 172
5.7.6 Performance of IFT Data Path – Suite3_6/FTR&D/MPLSTE/IFT/COMP/02 .......... 174
6 CONCLUSIONS .......... 177
6.1 OVERVIEW .......... 177
6.2 SERVICE MANAGEMENT CONCLUSIONS .......... 177
6.3 MPLS-BASED TRAFFIC ENGINEERING CONCLUSIONS .......... 178
6.4 MONITORING CONCLUSIONS .......... 180
6.5 IP-BASED TRAFFIC ENGINEERING CONCLUSIONS .......... 182
6.6 POLICY MANAGEMENT CONCLUSIONS .......... 182
6.7 IFT-BASED NETWORK ELEMENT CONCLUSIONS .......... 182
7 REFERENCES .......... 184
8 APPENDIX A TESTBED PLATFORM ENVIRONMENT .......... 185
8.1 UK TESTBED SPECIFICATION .......... 185
8.1.1 Hardware .......... 185
8.1.2 Operating Systems .......... 186
8.1.3 Essential Software Packages .......... 186
8.1.4 Optional Network Services .......... 187
8.1.5 Cisco Router Functionality Limitation .......... 187
8.2 FRENCH TESTBED SPECIFICATION .......... 190
8.2.1 TEQUILA Network .......... 190
8.2.2 COPS Testing Set-up .......... 190
8.2.3 Common Open Policy Service (COPS) .......... 192
8.3 FRENCH IFT TESTBED SPECIFICATION .......... 194
8.3.1 Hardware architecture .......... 194
8.3.2 Software architecture .......... 194
8.3.3 Experimental Platform Set-up .......... 197
8.3.4 Measurement tools .......... 198
8.3.5 Configuration for Integration Tests .......... 200
8.3.6 Configuration for Performance Tests .......... 200
8.4 GREEK DEVELOPMENT PLATFORM SPECIFICATION .......... 201
8.4.1 PCs and Operating Systems .......... 201
8.4.2 Software Packages .......... 201
8.4.3 Traffic and Network Emulation Tool .......... 202


List of Figures

Figure 4-1: Traffic Aggregation results (test 1) .......... 33
Figure 4-2: Traffic Aggregation results (test 2) .......... 33
Figure 4-3: Traffic Aggregation results (test 3) .......... 34
Figure 4-4: Effect of TTs in Traffic Aggregation performance .......... 34
Figure 4-5: Effect of number of SLSs per SSS in Traffic Aggregation performance .......... 35
Figure 4-6: Steiner tree example 1 – network with cost in every link .......... 42
Figure 4-7: Steiner tree example 1 – multicast tree of MWHCT algorithm (both versions) .......... 42
Figure 4-8: Steiner tree example 2 – network with link costs .......... 43
Figure 4-9: Steiner tree example 2 – multicast tree of the MWHCT algorithm – version 1 .......... 43
Figure 4-10: Steiner tree example 2 – multicast tree of the MWHCT algorithm – version 2 .......... 44
Figure 4-11: Steiner tree example 3 – network with link costs and multicast tree .......... 44
Figure 4-12: Example 4 – network with link costs and multicast tree .......... 45
Figure 4-13: Steiner tree example 5 – network with asymmetric links .......... 46
Figure 4-14: Steiner tree example 5 – network with reversed links .......... 47
Figure 4-15: Example 5 – Steiner tree of the MWHCT algorithm – version 1 .......... 47
Figure 4-16: Steiner tree example 5 – Steiner tree of the MWHCT algorithm – version 2 .......... 48
Figure 4-17: Total cost of MC tree, 20 nodes, Bmin=45 Mbps, Bmax=85 Mbps .......... 53
Figure 4-18: Total cost of MC tree, 20 nodes, Bmin=5 Mbps, Bmax=125 Mbps .......... 54
Figure 4-19: Total cost of MC tree, 50 nodes, Bmin=45 Mbps, Bmax=85 Mbps .......... 55
Figure 4-20: Total cost of MC tree, 50 nodes, Bmin=5 Mbps, Bmax=125 Mbps .......... 56
Figure 4-21: Total cost of MC tree, 100 nodes, Bmin=45 Mbps, Bmax=85 Mbps .......... 57
Figure 4-22: Total cost of MC tree, 100 nodes, Bmin=5 Mbps, Bmax=125 Mbps .......... 58
Figure 4-23: Execution time of algorithms (including COPT), 20 nodes, Bmin=45 Mbps, Bmax=85 Mbps .......... 59
Figure 4-24: Execution time, 20 nodes, Bmin=45 Mbps, Bmax=85 Mbps .......... 61
Figure 4-25: Execution time, 20 nodes, Bmin=5 Mbps, Bmax=125 Mbps .......... 62
Figure 4-26: Execution times, 50 nodes, Bmin=45 Mbps, Bmax=85 Mbps .......... 63
Figure 4-27: Execution times, 50 nodes, Bmin=5 Mbps, Bmax=125 Mbps .......... 64
Figure 4-28: Execution time, 100 nodes, Bmin=45 Mbps, Bmax=85 Mbps .......... 65
Figure 4-29: Execution time, 100 nodes, Bmin=5 Mbps, Bmax=125 Mbps .......... 66
Figure 4-30: Maximum end-to-end delay (hops), 20 nodes, Bmin=45 Mbps, Bmax=85 Mbps .......... 68
Figure 4-31: Maximum end-to-end delay (hops), 50 nodes, Bmin=45 Mbps, Bmax=85 Mbps .......... 69
Figure 4-32: Maximum end-to-end delay (hops), 100 nodes, Bmin=45 Mbps, Bmax=85 Mbps .......... 70
Figure 4-33: Fixed topology used for experimentation .......... 72
Figure 4-34: Illustration of the first hop capacity and total throughput of the simulated network .......... 73
Figure 4-35: Link load distribution after the first and the last step .......... 74
Figure 4-36: Average and standard deviation of link load per iteration .......... 74
Figure 4-37: Maximum, average and standard deviation of link utilisation for various network sizes .......... 75
Figure 4-38: The effect of the exponent for 10, 50 and 100 node networks .......... 77
Figure 4-39: Average overall network delay .......... 78
Figure 4-40: Topology used for DRsM experimentation .......... 79
Figure 4-41: Simple scenario case traffic model for DRsM experiments .......... 80
Figure 4-42: Complex case traffic model for DRsM experiments .......... 81
Figure 4-43: Random VBR traffic model example for DRsM experiments .......... 81
Figure 5-1: Total subscription processing time (test 1) .......... 97
Figure 5-2: Subscription admission logic response time (test 1) .......... 98
Figure 5-3: Total subscription processing time (test 2) .......... 98
Figure 5-4: Subscription admission logic response time (test 2) .......... 99
Figure 5-5: Total subscription processing time (test 3) .......... 99
Figure 5-6: Subscription admission logic response time (test 3) .......... 100
Figure 5-7: Total subscription processing time (test 4) .......... 101
Figure 5-8: Total subscription processing time (test 4) .......... 101
Figure 5-9: Effect of TTs in total subscription processing time .......... 102
Figure 5-10: Effect of TTs in subscription admission logic .......... 102
Figure 5-11: Effect of number of SLSs per SSS on total subscription processing time .......... 103
Figure 5-12: Effect of h/w platform on total subscription processing time .......... 103
Figure 5-13: Effect of implementation on total subscription processing time .......... 104
Figure 5-14: Emulation Topology .......... 105
Figure 5-15: Test Case TC-R-1 results .......... 109
Figure 5-16: Test Case TC-R-2 / Test-0 results .......... 110
Figure 5-17: Test Case TC-R-2 / Test-1 results .......... 111
Figure 5-18: Test Case TC-R-2 / Test-2 results .......... 112
Figure 5-19: Test Case TC-R-2 / Test-3 results .......... 113
Figure 5-20: Test Case TC-R-2 comparative results .......... 114
Figure 5-21: Test Case TC-R-3 / Test-0 results .......... 115
Figure 5-22: Test Case TC-R-3 / Test-1 results .......... 116
Figure 5-23: Test Case TC-R-3 / Test-2 results .......... 117
Figure 5-24: Test Case TC-R-3 / Test-3 results .......... 118
Figure 5-25: Test Case TC-R-3 comparative results .......... 119
Figure 5-26: Test Case TC-R-4 / Test-0 results .......... 120
Figure 5-27: Test Case TC-R-4 / Test-1 results .......... 121
Figure 5-28: Test Case TC-R-4 / Test-2 results .......... 122
Figure 5-29: Test Case TC-R-4 / Test-3 results .......... 123
Figure 5-30: Test Case TC-R-4 comparative results .......... 124
Figure 5-31: Test Case TC-S-1 / Test-2 results .......... 126
Figure 5-32: Test Case TC-S-2 / Test-1 results .......... 127
Figure 5-33: Test Case TC-S-2 / Test-2 results .......... 128
Figure 5-34: Test Case TC-S-3 / Test-1 results .......... 129
Figure 5-35: Test Case TC-S-3 / Test-2 results .......... 130
Figure 5-36: Test Case TC-S-4 / Test-1 results .......... 131
Figure 5-37: Test Case TC-S-4 / Test-2 results .......... 132
Figure 5-38: SUITE3_0/Ntua/SLSM/BC/2 comparative results .......... 133
Figure 5-39: One-way delay experienced by EF, AF1, and BE traffic .......... 137
Figure 5-40: One-way delay experienced by EF, AF1, and BE traffic when 155 ms of delay was introduced in link 5 (P1-P3) .......... 138
Figure 5-41: One-way delay experienced by EF traffic in TEQUILA and Cisco MPLS-TE approaches .......... 140
Figure 5-42: One-way delay experienced by EF, AF1, and BE traffic carried by three LSPs (no forecast) .......... 143
Figure 5-43: One-way delay for EF, AF1, and BE PHBs in Hop2 (P1-P2) – no forecast .......... 143
Figure 5-44: Symmetrical bandwidth reserved for PHBs (EF, AF1, BE) of the outgoing interfaces connected to each end of network links (in both cases: no forecast and with forecast) .......... 145
Figure 5-45: One-way delay experienced by LSPs carrying EF, AF1, and BE traffic (with forecast) .......... 146
Figure 5-46: Packet losses experienced by LSPs carrying EF, AF1, and BE traffic (with forecast) .......... 147
Figure 5-47: One-way delay for EF, AF1, and BE PHBs in Hop2 (P1-P2) – with forecast .......... 148
Figure 5-48: Bandwidth reserved for EF PHBs of the outgoing interfaces in Hose and Pipe (no forecast) .......... 150
Figure 5-49: Testbed configuration for Monitoring sub-system tests .......... 152
Figure 5-50: One-way delay accuracy result .......... 153
Figure 5-51: One-way packet loss accuracy result .......... 154
Figure 5-52: Edge-to-edge and hop-by-hop one-way delay accuracy results .......... 155
Figure 5-53: Edge-to-edge and hop-by-hop one-way packet loss accuracy result .......... 156
Figure 5-54: One-way delay observed on three different LSPs .......... 157
Figure 5-55: Throughput measured on three different LSPs .......... 157
Figure 5-56: Scalability of hop-by-hop versus edge-to-edge methods .......... 158
Figure 5-57: Memory allocation with regard to number of policies .......... 162
Figure 5-58: Effect of the cost function exponent on the maximum link load utilisation .......... 164
Figure 5-59: TE-GUI snapshots (a) before and (b) after the enforcement of policies P1 and P2 .......... 164
Figure 5-60: Throughput by flows (IFT test perf/00) .......... 166
Figure 5-61: Throughput by flows (IFT test perf/01) .......... 167
Figure 5-62: Throughput by flows (IFT test/perf/02) .......... 169
Figure 5-63: Throughput by flows (IFT test/perf/03); additional SmartBits flow of 62.5 Mbit/s with low priority (BE) .......... 170
Figure 5-64: Throughput by flows (IFT test/perf/03); additional SmartBits flow of 62.5 Mbit/s with medium priority (AF12) .......... 171
Figure 5-65: Throughput by flows (IFT test/perf/03); additional SmartBits flow of 62.5 Mbit/s with high priority (AF11) .......... 171
Figure 5-66: Throughput by flows (IFT test/perf/03); additional SmartBits flow of 62.5 Mbit/s with very high priority (EF) .......... 172
Figure 5-67: Ethernet packet loss test .......... 173
Figure 5-68: Ethernet packet loss results (IFT test/comp/01) .......... 174
Figure 5-69: IFT packet loss test .......... 175
Figure 5-70: IFT packet loss results (IFT test/comp/02) .......... 175
Figure 5-71: IFT packet loss comparison .......... 176
Figure 8-1: TEQUILA UK Testbed .......... 189
Figure 8-2: French TEQUILA Testbed for IP TE experiments .......... 190
Figure 8-3: The ospf command mode in IPINFUSION .......... 191
Figure 8-4: The Zebos command mode in IPINFUSION .......... 191
Figure 8-5: COPS test set-up .......... 192
Figure 8-6: The COPS packet encapsulated in an IP packet .......... 192
Figure 8-7: COPS Header .......... 192
Figure 8-8: A COPS object .......... 193
Figure 8-9: Overview of IFT hardware architecture .......... 194
Figure 8-10: Overview of TEQUILA IFT software architecture .......... 195
Figure 8-11: Overview of the experimental IFT testbed .......... 197
Figure 8-12: Original and modified UDP data frame example .......... 199
Figure 8-13: Overview of IFT integration tests configuration .......... 200
Figure 8-14: Overview of IFT performance tests configuration .......... 200
Figure 8-15: Traffic and Network Emulation Platform Architecture .......... 202


List of Tables

Table 4-1: Performance Tests of the Traffic Aggregation algorithm .......... 32
Table 4-2: Loading profiles for each SLS type .......... 73
Table 4-3: Average running times in seconds for the various network sizes .......... 75
Table 4-4: Physical queue configuration for DRsM experiments .......... 79
Table 4-5: Default Policy setting for DRsM experiments .......... 80
Table 4-6: Non-DRsM related parameter setting .......... 80
Table 5-1: Subscription Generation Settings .......... 105
Table 5-2: RSIM Cost/Benefit Test Cases .......... 106
Table 5-3: Customer subscriptions and their bandwidth demands in Pipe model .......... 135
Table 5-4: LLQ-CBWFQ configuration parameters .......... 135
Table 5-5: Experimental scenarios for MPLS-TE tests .......... 136
Table 5-6: Customers' average bandwidth demand for different types of traffic .......... 136
Table 5-7: Explicit routes for LSP tunnels calculated by TEQUILA system .......... 136
Table 5-8: Subscription demand and PHB bandwidth allocations in Kbps .......... 137
Table 5-9: Explicit LSP tunnels calculated by TEQUILA when 155 ms of delay introduced in link 5 .......... 138
Table 5-10: Explicit and dynamic routes for LSP tunnels calculated by TEQUILA and Cisco MPLS-TE approaches (155 ms of delay was introduced in link 5) .......... 139
Table 5-11: Average bandwidth demand of customer for different types of traffic .......... 140
Table 5-12: Explicit routes for LSP tunnels calculated by TEQUILA system .......... 141
Table 5-13: Subscription demands and PHB bandwidth allocations in Kbps .......... 141
Table 5-14: LSP tunnels calculated by Cisco .......... 142
Table 5-15: Traffic demands and PHB bandwidth allocations in Kbps .......... 144
Table 5-16: Customer subscriptions and their bandwidth demands in Hose model .......... 149
Table 5-17: Traffic demands and PHB bandwidth allocations in Kbps .......... 149
Table 5-18: Experimental scenarios for Monitoring sub-system tests .......... 152
Table 5-19: COPS configuration times .......... 161
Table 5-20: SmartBits interface looped .......... 176
Table 5-21: IFT packet loss results, 125 rules .......... 176
Table 5-22: IFT packet loss results, 250 rules .......... 176
Table 8-1: Values of the OpCode field .......... 193
Table 8-2: COPS Classes .......... 193
Table 8-3: Tree fields .......... 196
Table 8-4: Set-up phases of an IFT LSP .......... 196
Table 8-5: DSCP to EXP mapping .......... 196
Table 8-6: Service Types and Configuration Parameters .......... 203


1 INTRODUCTION

1.1 TEQUILA Project Objectives

The overall objective of TEQUILA is to study, specify, implement and validate service definition and traffic engineering tools for the Internet. The TEQUILA system should provide qualitative and close-to-quantitative service guarantees through planning, dimensioning and dynamic control of qualitative traffic management techniques based on Diffserv. TEQUILA addresses static and dynamic intra-domain Service Level Specifications (SLSs) for both fixed and nomadic users, together with the protocols and mechanisms for negotiating, monitoring and enforcing them. The other main dimension of the project studies intra-domain traffic engineering schemes that ensure the network can cope with the contracted SLSs.
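For orientation, the sketch below renders the main attribute groups of the SLS template as a simple data structure. The grouping (scope, flow identification, traffic envelope, excess treatment, performance guarantees, schedule) follows the TEQUILA template; the concrete field names and types are illustrative assumptions, not the normative definition.

    from dataclasses import dataclass
    from typing import Optional

    # Illustrative rendering of the SLS template's attribute groups.
    # Field names and types are assumptions; the template specification
    # gives the normative definition.

    @dataclass
    class TrafficEnvelope:
        rate_kbps: float            # token-bucket style conformance
        bucket_depth_bytes: int

    @dataclass
    class PerformanceGuarantees:
        one_way_delay_ms: Optional[float] = None
        packet_loss_ratio: Optional[float] = None
        throughput_kbps: Optional[float] = None

    @dataclass
    class SLS:
        scope: tuple                # (ingress point, egress point(s))
        flow_id: dict               # e.g. DSCP, src/dst address prefixes
        envelope: TrafficEnvelope
        excess_treatment: str       # e.g. "drop", "remark", "shape"
        guarantees: PerformanceGuarantees
        schedule: str = "always"    # service availability window

    # Hypothetical example instance:
    sls = SLS(scope=("PoP-A", "PoP-B"),
              flow_id={"dscp": 46, "dst_prefix": "10.1.0.0/16"},
              envelope=TrafficEnvelope(2000, 15000),
              excess_treatment="drop",
              guarantees=PerformanceGuarantees(one_way_delay_ms=50,
                                               packet_loss_ratio=1e-3))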

The overall work in the TEQUILA project is split over three workpackages (WPs) and follows a phased approach: a theoretical phase, followed by a design/implementation phase, and then an experimentation and dissemination phase.

WP1 - Functional Architecture and Algorithms - specifies the system architecture and related protocols and algorithms.
WP2 - System Design and Implementation - develops the system components and simulators.
WP3 - Integration, Validation, Assessment and Experimentation - configures the testbeds and conducts experiments on the TEQUILA system through the testbed prototypes and the simulators.

1.2 Scope and Organisation of the Deliverable

This part of Deliverable D3.4 is organised as follows. Chapter 2 presents the principles underlying our testing approach. Chapter 3 presents the integration tests, and Chapters 4 and 5 the performance assessment tests at the algorithm and system levels respectively. Chapter 6 presents the conclusions drawn from the tests undertaken and the results yielded. Finally, Chapter 8 is Appendix A, describing the testing platforms used.


2 TESTING FRAMEWORK

2.1 Introduction

The tests carried out are divided into the following types:

• Integration Tests

• Algorithm Performance Assessment Tests

• System Performance Assessment Tests

The tests undertaken have been guided by the results and insight gained from an analysis of system scalability and stability (presented in Part C of this Deliverable). The specifications of the functionality under test are presented in Part B of this Deliverable.

2.2 Integration Tests

Software components delivered by Work Package 2 are integrated in Work Package 3, and the resulting combinations are subjected to Integration Tests to ensure that their functionality complies with the TEQUILA Functional Architecture. The emphasis of the Integration Tests is on proving the functionality of the components under test; their performance is not addressed here but in the assessment tests discussed later in this document.

2.3 Algorithm Performance Assessment Tests

Specific algorithms within the TEQUILA system are subjected to Algorithm Performance Assessment Tests. These tests examine the efficacy of the algorithms and measure the resources (e.g. execution time, memory) consumed during their execution; a minimal measurement harness of this kind is sketched after the list below.

The following TEQUILA algorithms were subjected to Algorithm Performance Assessment Tests:

• Traffic Aggregation

• Network Dimensioning
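As a rough illustration of how such measurements can be collected (purely a sketch; the actual experiments and their instrumentation are described in Chapter 4, and the stand-in "algorithm" below is a placeholder), a harness of this shape suffices:

    import time
    import tracemalloc

    # Minimal measurement harness (illustrative): time an algorithm and
    # record its peak memory over increasing problem sizes.

    def measure(algorithm, inputs):
        results = []
        for x in inputs:
            tracemalloc.start()
            t0 = time.perf_counter()
            algorithm(x)
            elapsed = time.perf_counter() - t0
            _, peak = tracemalloc.get_traced_memory()
            tracemalloc.stop()
            results.append((x, elapsed, peak))
        return results

    # Stand-in workload whose cost grows with the problem size n:
    for n, secs, peak in measure(lambda n: sorted(range(n, 0, -1)),
                                 [10_000, 100_000, 1_000_000]):
        print(f"n={n}: {secs:.4f}s, peak={peak} bytes")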

2.4 System Performance Assessment Tests

The purpose of the TEQUILA system-level tests is to determine whether the objectives of the TEQUILA system have been realised. The System Performance Assessment Tests have been carried out in the following groups:

• SLS Management

• Traffic Engineering (MPLS-TE and IP-TE)

• Monitoring

• Policy

• IFT-based network elements

Additionally, the tests carried out in these groups are classified into four categories according to the aspect assessed: the benefits/costs, scalability, stability, and usability of the entity (algorithm or sub-system) under test.


3 INTEGRATION TESTS AND RESULTS

3.1 Test-Suites

Integration Tests have been organised and carried out per sub-system of the overall TEQUILA architecture. They comprise the following test-suites.

SUITE1_1/NTUA/SLSM: Validate the correct behaviour of the integrated SLS Management sub-system components and their interactions with the rest of the TEQUILA sub-system components.

SUITE1_2/GC/MPLSTE: Validate the correct behaviour of the integrated MPLS-based TE sub-system components and their interactions with the rest of the TEQUILA sub-system components.

SUITE1_3/GC/Mon: Validate the correct behaviour of the integrated Monitoring sub-system components.

SUITE1_4/FTR&D/IPTE/DP: Validate the COPS-PR-based (Common Open Policy Service for Provisioning purposes, [COPS-PR]) dynamic provisioning capabilities of the IP-based TE sub-system, for providing the routers with the appropriate configuration information, as far as the enforcement of an IP TE policy is concerned.

SUITE1_5/FTR&D/IPTE/OSPF: Validate the IP-based TE capabilities of the TEQUILA system as far as the traffic engineering extensions of the OSPF (Open Shortest Path First, [OSPF]) protocol are concerned.

SUITE1_6/UniS/POL: Validate the correct behaviour and interactions of the components of the Policy Management sub-system.

SUITE1_7/FTR&D/MPLSTE/IFT: Validate the MPLS/Diffserv capabilities of an IFT-based network.

SUITE1_8/UniS/NS: Validate the correct behaviour of the components and enhancements implemented in the NS simulation platform for supporting the experimentation needs of the TEQUILA functionality.
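SUITE1_4 exercises COPS-PR exchanges between the policy server and the routers. For orientation, every COPS message starts with the common header defined in RFC 2748 (a version/flags byte, an op code, a client-type and the message length); a minimal encoder is sketched below. The client-type value used is a placeholder, not asserted to be the one used in the testbed.

    import struct

    # COPS common header per RFC 2748: 4-bit version, 4-bit flags,
    # 8-bit op code, 16-bit client-type, 32-bit message length in octets
    # (header included). Network byte order throughout.

    OP_REQ, OP_DEC, OP_RPT, OP_OPN, OP_KA = 1, 2, 3, 6, 9

    def cops_header(op_code, client_type, body_len, solicited=False):
        version, flags = 1, (1 if solicited else 0)
        return struct.pack("!BBHI",
                           (version << 4) | flags,
                           op_code,
                           client_type,
                           8 + body_len)  # the header itself is 8 octets

    # Example: a Client-Open (OPN) with an empty body and a placeholder
    # client-type (0x8001 is illustrative only).
    print(cops_header(OP_OPN, 0x8001, 0).hex())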

It should be noted that the above integration tests also validate the correct behaviour of the Router sub-system components: the Generic Adaptation Layer (GAL), providing a uniform, vendor-independent interface to MPLS/Diffserv network elements; the related interface drivers for the routers considered (Cisco and Linux-based); and the TE enhancements to RSVP for Linux routers.
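The shape of such a vendor-independent adaptation layer is sketched below: one abstract interface, one driver per router family, so that callers such as RSIM stay vendor-agnostic. The interface and method names here are invented for illustration; the actual GAL interface is specified in Part B.

    from abc import ABC, abstractmethod

    # Illustrative adaptation-layer shape. Method names are hypothetical.

    class RouterDriver(ABC):
        @abstractmethod
        def install_conditioner(self, interface, dscp, rate_kbps): ...

        @abstractmethod
        def setup_lsp(self, path, bandwidth_kbps): ...

    class CiscoDriver(RouterDriver):
        def install_conditioner(self, interface, dscp, rate_kbps):
            print(f"IOS CLI: police {rate_kbps}k on {interface} (dscp {dscp})")

        def setup_lsp(self, path, bandwidth_kbps):
            print(f"IOS CLI: mpls tunnel via {path}, bw {bandwidth_kbps}k")

    class LinuxDriver(RouterDriver):
        def install_conditioner(self, interface, dscp, rate_kbps):
            print(f"tc: add policer {rate_kbps}kbit on {interface}, dscp {dscp}")

        def setup_lsp(self, path, bandwidth_kbps):
            print(f"RSVP-TE: signal LSP {path} with {bandwidth_kbps} kbps")

    def provision(driver: RouterDriver):
        # Callers (e.g. RSIM) remain vendor-agnostic:
        driver.install_conditioner("eth0", dscp=46, rate_kbps=2000)

    provision(LinuxDriver())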

The individual tests carried out within each of the above test-suites, and the results they yielded, are presented in the following sections.

3.2 SLS Management Integration Tests and Results

SUITE1_1/NTUA/SLSM/NetMon/1
Purpose: Integrate the RSIM component, incorporating the invocation admission logic and its dynamic management, with Network Monitoring, and validate that the latter indeed produces and forwards to RSIM the GREEN and RED network state alarms per TT, as appropriate to actual network conditions.
Platform: Greek Dev. Platform (see A.4), UK Testbed (see A.1)
Result: Passed (see note 3.2.1)


SUITE1_1/NTUA/SLSM/NetMon/2
Purpose: Integrate RSIM with Network Monitoring and validate that the latter indeed produces and forwards to RSIM, every reporting time period, measures of the traffic injected into the network for a particular TT, provided RSIM has subscribed to such a service.
Platform: Greek Dev. Platform (see A.4), UK Testbed (see A.1)
Result: Passed (see notes 3.2.1, 3.2.2)

SUITE1_1/NTUA/SLSM/SSRep/1
Purpose: Integrate SSM, incorporating the subscription management logic, with SSRep and validate that once a subscription is accepted the appropriate SLSs are extracted and stored in the SSRep.
Platform: Greek Dev. Platform (see A.4)
Result: Passed

SUITE1_1/NTUA/SLSM/SSRep/2
Purpose: Integrate RSIM with SSRep and validate that RSIM retrieves the appropriate information in the event of an explicit invocation in line with an existing SSS.
Platform: Greek Dev. Platform (see A.4)
Result: Passed

SUITE1_1/NTUA/SLSM/D-Plane/1
Purpose: Integrate RSIM with GAL and validate that the appropriate traffic conditioning elements are installed in the router sub-system after a configuration request from RSIM, based on the SLS invoked, or after a traffic regulation action.
Platform: Greek Dev. Platform (see A.4), UK Testbed (see A.1), Belgian Dev. Platform
Result: Passed (see note 3.2.3)

SUITE1_1/NTUA/SLSM/D-Plane/2
Purpose: Integrate SSRep with GAL and validate that the appropriate traffic conditioning elements are installed in the router sub-system after a configuration request from SSRep, based on the implicitly invoked SLS requirements.
Platform: Greek Dev. Platform (see A.4), UK Testbed (see A.1), Belgian Dev. Platform
Result: Passed (see notes 3.2.3, 3.2.4)

Note 3.2.1: This test has been performed using the dummy GAL; it has been verified that the events it artificially generates are successfully forwarded to RSIM through Monitoring. The integration of Monitoring and GAL has been tested separately (see section 3.4).

Note 3.2.2: Since NetMon does not support adjustment of the reporting period, RSIM receives load updates every default reporting period.

Note 3.2.3: This test has been performed with GAL controlling Cisco routers (UK Testbed) and Linux routers (Belgian Dev. Platform).

Note 3.2.4: It has been verified that SSRep delegates the enforcement of an implicitly invoked service to the involved RSIM instances at the edge routers where the service is to be provided. The RSIM instances in turn establish, through the controlling GAL instances, the appropriate traffic conditioning elements in the routers.


3.3 MPLS-TE Integration Tests and Results

SUITE1_2/GC/MPLSTE/Comp/1
Purpose: Integrate ND and DRtM and verify and validate their inter-related functions. The test takes a Traffic Matrix as input and provides the resulting logical topology, in terms of explicitly routed LSPs that preserve the QoS of the traffic to be carried, to the Network Repository (NetRep). The test is to verify:
• ND's access to NetRep to fetch the physical network topology.
• Passing the "Tree_Update" operation to DRtM, which includes the computed optimum trees and related information (bandwidth, PHB treatment of tree traffic, e2e delay and loss bounds). This command is issued during the initialisation phase and after ND computation in calculating traffic trees. On receipt of this command, DRtM produces LSPs and the mapping of flows, as specified in the SLSs, to the LSPs.
• The "Alarm_trigger" command, issued when either ND or DRtM is unable to satisfy the traffic demand.
• Storing the LSP and flow mapping information into NetRep.
Platform: UK testbed (see A.1)
Result: Passed

SUITE1_2/GC/MPLSTE/Comp/2
Purpose: Integrate ND and DRsM and verify and validate their interactions. The test takes a Traffic Matrix as input and provides a logical topology, in terms of the PHB scheduling (service rate) parameter settings, to the Network Repository (NetRep). The test is to verify:
• ND's access to NetRep to fetch the physical topology.
• Passing the scheduling parameters from ND to DRsM using the "Set-Resource_Configuration" operation. This command is issued after ND computation of the resource configuration information.
• The "Notify_Resource_Configuration_Error" command, issued by DRsM to ND when DRsM is unable to satisfy ND's configuration settings.
Platform: UK testbed (see A.1)
Result: Passed

SUITE1_2/GC/MPLSTE/SLSM/1
Purpose: Integrate the Traffic Forecast function and ND and verify and validate their interactions. The test is to verify that the interface between Traffic Forecast and Network Dimensioning supports the "Traffic_Matrix_Push" operation.
Platform: UK testbed (see A.1)
Result: Passed

SUITE1_2/GC/MPLSTE/SLSM/2
Purpose: Verify the interaction of the Network Dimensioning and SLS Management functions by checking that the Resource Availability Matrix, as calculated by ND, is passed to SSM and is interpreted correctly.
Platform: UK testbed (see A.1)
Result: Passed

SUITE1_2/GC/MPLSTE/D-Plane/1
Purpose: Integrate the static part of DRtM and GAL and verify that the router is configured according to DRtM configuration requests. The test is to verify that GAL creates LSPs and maps flows to the LSPs as instructed by DRtM.
Platform: UK testbed (see A.1)
Result: Passed (see note 3.3.1.1)

SUITE1_2/GC/MPLSTE/D-Plane/2
Purpose: Integrate the static part of DRsM and GAL and verify that the router is configured according to DRsM configuration requests. The test is to verify that GAL sets the scheduling management parameters for the queues serving PHBs as instructed by DRsM.
Platform: UK testbed (see A.1)
Result: Passed (see note 3.3.1.2)

Note 3.3.1.1: This test has been performed for the static part of DRtM and GAL with Cisco drivers.

Note 3.3.1.2: This test has been performed for the static part of DRsM and GAL with Cisco drivers.
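As an illustration of the ND-DRtM interaction exercised by test Comp/1 above, the following sketch models the "Tree_Update" and "Alarm_trigger" operations. The operation names come from the table; the payload fields and all class names are assumptions made for the example, not the project's code.

    from dataclasses import dataclass

    @dataclass
    class Tree:
        """Assumed payload of a "Tree_Update" operation (fields illustrative)."""
        root: str
        leaves: list
        bandwidth_bps: int
        phb: str
        delay_bound_ms: float
        loss_bound: float

    class DRtMStub:
        """Toy stand-in for DRtM: turns ND's computed trees into LSPs and
        flow mappings, as exercised by test Comp/1."""

        def __init__(self):
            self.lsps = []

        def tree_update(self, trees):
            for t in trees:
                for leaf in t.leaves:
                    # One explicitly routed LSP per root-to-leaf branch.
                    self.lsps.append((t.root, leaf, t.phb, t.bandwidth_bps))

        def alarm_trigger(self, reason):
            # Issued when the traffic demand cannot be satisfied.
            print("Alarm_trigger:", reason)

    drtm = DRtMStub()
    drtm.tree_update([Tree("PE1", ["PE2", "PE3"], 2_000_000, "EF", 20.0, 1e-4)])
    print(drtm.lsps)  # [('PE1', 'PE2', 'EF', 2000000), ('PE1', 'PE3', 'EF', 2000000)]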


3.4 Monitoring Integration Tests and Results

SUITE1_3/GC/Mon/NodeMon/1
Purpose: Integrate NodeMon, the Monitoring Agents for Passive Measurements (PM), GAL and the Data Plane, and verify and validate their inter-related functions. The test requires emulating a client request for the creation of a NodeMon Monitor for sampling a metric (throughput, load, etc.) and for adding a Monitor Job. The test is to verify and validate the following sequence of dependent actions:
• The Monitor request to a Monitoring Agent to perform PM
• The Monitoring Agent's interaction with GAL by requesting "get-statistics"
• The interaction of GAL with the network element, using CLI or SNMP to poll statistics from MIBs/PIBs, etc.
• Passing the statistics (data) to the Monitoring Agent
• Passing the data to the Monitor Job via the Monitor
• Pushing the data into the NodeMon Event Channel by the Monitor Job via the Monitor.
Platform: UK testbed (see A.1)
Result: Passed (see notes 3.4.1.1a, 3.4.1.1b)

SUITE1_3/GC/Mon/NodeMon/2
Purpose: Integrate NodeMon, the Monitoring Agents for Active Measurements (AM), GAL and the Data Plane, and verify and validate their inter-related functions. The test requires emulating a client request for the creation of a NodeMon Monitor for sampling a metric and adding a Monitor Job. The test is to verify and validate the following sequence of dependent actions:
• The Monitor request to a Monitoring Agent to perform AM (loss, delay)
• The Monitoring Agent's interaction with GAL, providing the LSP-id to GAL and getting the destination end-point IP address of the LSP from GAL
• The creation of a Recipient Agent at the receiver side of the test traffic by the Monitoring Agent
• The interaction of GAL with the network element to set up a classifier for the test traffic (the Monitoring Agent provides the source and destination IP addresses and port number to GAL)
• Injecting test traffic into the network by the Monitoring Agent, and calculating the measurement results by the Recipient Agent
• Passing the measurement results back to the Monitor Job through the Monitor
• Pushing the results into the NodeMon Event Channel by the Monitor Job through the Monitor.
Platform: UK testbed (see A.1)
Result: Passed (see notes 3.4.1.2a, 3.4.1.2b)

SUITE1_3/GC/Mon/NodeMon/3
Purpose: Integrate NodeMon with the rest of the Monitoring components, including NetMon, SLSMon and MonRep. The test is to verify and validate the following sequence of dependent actions:
• The NetMon/SLSMon (client) request for the creation of a Monitor for sampling a single metric and adding a Monitor Job
• Performing the monitoring task (PM/AM) by NodeMon, and pushing the measurement results to the client by the Notification Service
• Storing the measurement results into the Monitoring Repository (MonRep) by the Monitor.
Platform: UK testbed (see A.1)
Result: Passed


SUITE1_3/GC/Mon/NodeMon/4
Purpose: Integrate NodeMon with the rest of the TEQUILA components (clients), including DRtM and DRsM. The test is to verify and validate the following sequence of dependent actions:
• A client request for the creation of a Monitor for sampling a single metric and adding a Monitor Job
• Performing the monitoring task (PM/AM) by NodeMon
• Pushing the measurement results to the clients by the Notification Service.
Platform: UK testbed (see A.1)
Result: Passed (see note 3.4.1.3)

SUITE1_3/GC/Mon/NodeMon/5
Purpose: Verify and validate the correct functioning of the Release Monitor action issued to NodeMon by the TEQUILA components (clients), including SLSMon and NetMon. The test is to provide a client request for the release of the Monitor; subsequently, no relevant measurement results must be provided to the client via the Notification Service.
Platform: UK testbed (see A.1)
Result: Passed (see note 3.4.1.4)

SUITE1_3/GC/Mon/NetMon/1
Purpose: Integrate NetMon with NodeMon, MonRep and the external components for performing LSP hop-by-hop measurements. The test is to verify and validate the following sequence of dependent actions:
• A client request for the creation of an LSP hop-by-hop Monitor for sampling a single metric and adding a NetMon Monitor Job
• Accessing the Network Repository to get the edge-to-edge route information and the relevant PHBs
• Establishing, by NetMon, client requests for the NodeMons attached to the hops along the route
• Performing the monitoring task (PM/AM) by the NodeMons along the route
• Pushing the measurement results to NetMon by the Monitor Jobs through the Monitors at the NodeMons
• Performing, by NetMon, the mathematical sum of the discrete measurement results to get the e2e measurement value, and pushing the e2e measurement value to the NetMon Event Channel
• Pushing the e2e measurement value to the client by the Notification Service
• Storing it into the Monitoring Repository by the NetMon Monitor.
Platform: UK testbed (see A.1)
Result: Passed

SUITE1_3/GC/Mon/NetMon/2
Purpose: Verify and validate the correct functioning of the Release Monitor action issued to NetMon by the TEQUILA components (clients). The test is to provide a client request for the release of the Monitor; subsequently, no relevant measurement results must be provided to the clients via the Notification Service.
Platform: UK testbed (see A.1)
Result: Passed (see note 3.4.1.5)

Note 3.4.1.1a: The accuracy of the retrieved statistics depends on how accurately the router provides them.

Note 3.4.1.1b: GAL only interacts with Cisco routers through the CLI. This has been verified and validated.

Note 3.4.1.2a: DRtM populates the NetRep, and the Monitoring Agent gets the LSP-id from NetRep.

Note 3.4.1.2b: The transmitting Agent requests the creation of a recipient port at the receiver side of the test traffic. This has been verified and validated.

Note 3.4.1.3: This has been performed for the RSIM interaction with Monitoring.

Note 3.4.1.4: This test has been performed for NetMon and DRsM.

Note 3.4.1.5: This has been performed through interaction with the GUI. No other component asks for NetMon information.
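The NodeMon tests above all exercise the same push pattern: a client creates a Monitor and a Monitor Job, measurement samples are pushed into an Event Channel, and a Release Monitor stops the flow. A minimal sketch of that pattern is given below; the class names and the dummy measurement agent are invented for the example.

    class EventChannel:
        """Toy notification service: clients subscribe, monitors push."""

        def __init__(self):
            self.subscribers = []

        def subscribe(self, callback):
            self.subscribers.append(callback)

        def push(self, sample):
            for cb in self.subscribers:
                cb(sample)

    class Monitor:
        """A NodeMon-style monitor sampling one metric via an agent callable."""

        def __init__(self, metric, agent, channel):
            self.metric, self.agent, self.channel = metric, agent, channel
            self.released = False

        def poll(self):
            if not self.released:            # Release Monitor stops all pushes
                self.channel.push((self.metric, self.agent()))

    channel = EventChannel()
    channel.subscribe(lambda sample: print("client received", sample))
    mon = Monitor("throughput", lambda: 42_000, channel)  # dummy agent
    mon.poll()            # -> client received ('throughput', 42000)
    mon.released = True   # after Release Monitor, nothing more is delivered
    mon.poll()            # -> (no output)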


3.5 IP-TE Dynamic Provisioning Integration Tests and Results

All tests in this suite were carried out on the French testbed (see A.2), and all passed.

SUITE1_4/FTR&D/IPTE/DP/COPS/1: Verify that when the PEP and the PDP are mutually authenticated, the PEP can send an OPN message.

SUITE1_4/FTR&D/IPTE/DP/COPS/2: Verify that the port used is 3288.

SUITE1_4/FTR&D/IPTE/DP/COPS/3: Verify that if SUITE1_6/FTR&D/DP/COPS/CAP/1 is valid, an OPN message must contain the address of the PDP most recently used and for which the PEP still keeps decisions in its cache.

SUITE1_4/FTR&D/IPTE/DP/COPS/4: Verify that if SUITE1_6/FTR&D/DP/COPS/CAP/1 is valid and no decision has been received from any PDP, LastPDP must not appear in OPN messages.

SUITE1_4/FTR&D/IPTE/DP/COPS/5: Verify that if SUITE1_6/FTR&D/DP/COPS/CAP/1 is valid, the OPN message must contain at least one client-type number supported by the PEP.

SUITE1_4/FTR&D/IPTE/DP/COPS/6: Verify that if SUITE1_6/FTR&D/DP/COPS/CAP/1 is valid, the PDP must answer each received request of a client-type supported by both the PEP and the PDP.

SUITE1_4/FTR&D/IPTE/DP/COPS/7: Verify that if SUITE1_6/FTR&D/DP/COPS/CAP/1 is valid and a client-type is not supported, a CC message must be sent.

SUITE1_4/FTR&D/IPTE/DP/COPS/8: Verify that if SUITE1.0/FTRD/COPS/GENERAL-COPS/1 and the client-type are valid, the PDP sends a CAT message and specifies the timer for KA messages.

SUITE1_4/FTR&D/IPTE/DP/COPS/9: Verify that the CAT message contains a timer specifying the minimum period to use for accounting purposes.

SUITE1_4/FTR&D/IPTE/DP/COPS/10: Verify that, on reception of a malformed CAT, the PEP must send a CC message specifying the error that occurred.

SUITE1_4/FTR&D/IPTE/DP/COPS/12: Verify that the CAT message contains a timer specifying the minimum period to use for accounting purposes.

SUITE1_4/FTR&D/IPTE/DP/COPS/13: Verify that CAT messages can be generated only after the reception of an OPN message.

SUITE1_4/FTR&D/IPTE/DP/COPS/14: Check that the PEP sends a notification message to the PDP in the case of a state change.

SUITE1_4/FTR&D/IPTE/DP/COPS/15: Verify that if the PDP wants to inform the PEP of a configuration change, or if part of the configuration information previously sent is no longer usable, the PDP sends a DEC message containing a specific flag.

SUITE1_4/FTR&D/IPTE/DP/COPS/16: Verify that the PEP sends an RPT after the application of a decision.

SUITE1_4/FTR&D/IPTE/DP/COPS/17: Check that an RPT message can be used for periodic updates in an accounting context.

SUITE1_4/FTR&D/IPTE/DP/COPS/18: Verify that RPT messages are sent in the same order as the corresponding DEC messages were received.


SUITE1_4/FTR&D/IPTE/DP/COPS/19: Verify that if KA messages are not exchanged between the PEP and the PDP, the connection must be closed.

SUITE1_4/FTR&D/IPTE/DP/COPS/20: Verify that if a PDP goes down, a CC message must be sent for each connected client-type.

SUITE1_4/FTR&D/IPTE/DP/PIB/1: Verify that the format of the PRID is in conformance with [RFC3084].

SUITE1_4/FTR&D/IPTE/DP/PIB/2: Verify that the format of the PPRID is in conformance with [RFC3084].

SUITE1_4/FTR&D/IPTE/DP/PIB/3: Verify that the format of the EPD is in conformance with [RFC3084].

SUITE1_4/FTR&D/IPTE/DP/PIB/4: Verify that the format of the GPERR is in conformance with [RFC3084].

SUITE1_4/FTR&D/IPTE/DP/PIB/5: Verify that the format of the CPERR is in conformance with [RFC3084].

SUITE1_4/FTR&D/IPTE/DP/PIB/6: Verify that the PRCs exchanged between an IP-TE PEP and a PDP conform to those defined in [IPTEPIB].

SUITE1_4/FTR&D/IPTE/DP/COPSPR/1: Verify that after the establishment of a COPS connection, the PEP sends a REQ containing configuration information.

SUITE1_4/FTR&D/IPTE/DP/COPSPR/2: Verify that after the reception of the configuration REQ, the PDP sends the decisions related to this PEP.

SUITE1_4/FTR&D/IPTE/DP/COPSPR/3: Verify that if changes occur on the PDP side, the PDP informs the PEP by sending new DECs.

SUITE1_4/FTR&D/IPTE/DP/COPSPR/4: Verify that the REQ message sent by the PEP contains the resources it has.

SUITE1_4/FTR&D/IPTE/DP/COPSPR/5: Verify that the DEC messages sent by the PDP are compatible with the formats of the objects defined for the IP TE client-type.

SUITE1_4/FTR&D/IPTE/DP/COPSPR/6: Verify that the COPS objects IN-Int, Out-Int and LPDPDecision are not included in a COPS-PR REQ.

SUITE1_4/FTR&D/IPTE/DP/COPSPR/7: Check that the 0x02 flag triggers the sending of a new REQ message with a new handle.
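The tests above follow the message flow of [COPS-PR]: the PEP opens with OPN on port 3288, the PDP accepts with CAT (or refuses with CC), the PEP sends a REQ with its configuration state, the PDP answers with DEC, and the PEP reports back with RPT while KA messages keep the session alive. The toy model below walks through that decision logic; it is a sketch of the protocol roles only, not the FT-R&D implementation.

    from enum import Enum, auto

    class Msg(Enum):
        OPN = auto(); CAT = auto(); CC = auto(); KA = auto()
        REQ = auto(); DEC = auto(); RPT = auto()

    COPS_PORT = 3288   # TCP port checked by test COPS/2

    def pdp_handle(msg, client_type_supported=True):
        """Toy PDP: accepts OPN with CAT (or refuses with CC when the
        client-type is unsupported) and answers REQ with DEC, mirroring
        tests COPS/6..8 and COPSPR/1..2."""
        if msg is Msg.OPN:
            return Msg.CAT if client_type_supported else Msg.CC
        if msg is Msg.REQ:
            return Msg.DEC
        if msg is Msg.KA:
            return Msg.KA   # keep-alives are echoed; silence ends the session
        return Msg.CC

    assert pdp_handle(Msg.OPN) is Msg.CAT
    assert pdp_handle(Msg.OPN, client_type_supported=False) is Msg.CC
    assert pdp_handle(Msg.REQ) is Msg.DEC   # the PEP then reports back with RPT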

3.6 IP-TE OSPF Enhancements Integration Tests and Results

All tests in this suite were carried out on the French testbed (see A.2), and all passed.

SUITE1_5/FTR&D/IPTE/OSPF/OPA/1: Verify that a Type-9 Opaque LSA message generated by a router attached to the backbone area and connected to a specific IP sub-network is received by the DUT, where the DUT is an OSPF neighbour attached to the same area and connected to the above-mentioned IP sub-network.


SUITE1_5/FTR&D/IPTE/OSPF/OPA/2: Verify that a Type-9 Opaque LSA message generated by a router attached to the backbone area and connected to a specific IP sub-network is received by the DUT, where the DUT is an OSPF neighbour attached to the same area but not connected to the above-mentioned IP sub-network.

SUITE1_5/FTR&D/IPTE/OSPF/OPA/3: Verify that a Type-9 Opaque LSA message generated by a router attached to a given area (different from the backbone area) and connected to a specific IP sub-network is received by the DUT, where the DUT is an OSPF neighbour attached to the same area and connected to the above-mentioned IP sub-network.

SUITE1_5/FTR&D/IPTE/OSPF/OPA/4: Verify that a Type-9 Opaque LSA message generated by a router attached to a given area (different from the backbone area) and connected to a specific IP sub-network is rejected by the DUT, where the DUT is an OSPF neighbour attached to the same area but not connected to the above-mentioned IP sub-network.

SUITE1_5/FTR&D/IPTE/OSPF/OPA/5: Verify that a Type-9 Opaque LSA message generated by a router attached to the backbone area and connected to a specific IP sub-network is rejected by the DUT, where the DUT is an OSPF neighbour attached to a different area and connected to the above-mentioned IP sub-network.

SUITE1_5/FTR&D/IPTE/OSPF/OPA/6: Verify that a Type-9 Opaque LSA message generated by a router attached to the backbone area and connected to a specific IP sub-network is rejected by the DUT, where the DUT is an OSPF neighbour attached to a different area and not connected to the above-mentioned IP sub-network.

SUITE1_5/FTR&D/IPTE/OSPF/OPA/7: Verify that a Type-9 Opaque LSA message generated by a router attached to a given area (different from the backbone area) and connected to a specific IP sub-network is rejected by the DUT, where the DUT is an OSPF neighbour attached to a different area (than the one where the Type-9 Opaque LSA message has been generated) and connected to the above-mentioned IP sub-network.

SUITE1_5/FTR&D/IPTE/OSPF/OPA/8: Verify that a Type-9 Opaque LSA message generated by a router attached to a given area (different from the backbone area) and connected to a specific IP sub-network is rejected by the DUT, where the DUT is an OSPF neighbour attached to a different area (than the one where the Type-9 Opaque LSA message has been generated) and not connected to the above-mentioned IP sub-network.

SUITE1_5/FTR&D/IPTE/OSPF/OPA/9: Verify that a Type-10 Opaque LSA message generated by a router attached to the backbone area is received by the DUT, where the DUT is an OSPF neighbour in the same area.

SUITE1_5/FTR&D/IPTE/OSPF/OPA/10: Verify that a Type-10 Opaque LSA message generated by a router attached to a given area (different from the backbone area) is received by the DUT, where the DUT is an OSPF neighbour in the same area.


SUITE1_5/FTR&D/IPTE/OSPF/OPA/11: Verify that a Type-10 Opaque LSA message generated by a router attached to the backbone area is rejected by the DUT, where the DUT is an OSPF neighbour attached to a different area.

SUITE1_5/FTR&D/IPTE/OSPF/OPA/12: Verify that a Type-10 Opaque LSA message generated by a router attached to a given area (different from the backbone area) is rejected by the DUT, where the DUT is an OSPF neighbour attached to a different area than the one where the Type-10 Opaque LSA message has been generated.

SUITE1_5/FTR&D/IPTE/OSPF/OPA/13: Verify that a Type-11 Opaque LSA message generated by a router in the backbone area is discarded and not acknowledged by the DUT, where the DUT is a router of a stub area.

SUITE1_5/FTR&D/IPTE/OSPF/OPA/14: Verify that the DUT does not generate Type-11 Opaque LSA messages when the DUT is attached to a stub area.

SUITE1_5/FTR&D/IPTE/OSPF/OPA/15: Verify that a Type-11 Opaque LSA message generated by a router attached to the backbone area is received by the DUT attached to the backbone area, where the DUT is an OSPF neighbour.

SUITE1_5/FTR&D/IPTE/OSPF/OPA/16: Verify that a Type-11 Opaque LSA message generated by a router attached to a transit area is received by the DUT attached to the backbone area, where the DUT is an OSPF neighbour.

SUITE1_5/FTR&D/IPTE/OSPF/OPA/17: Verify that a Type-11 Opaque LSA message generated by a router attached to the backbone area is received by the DUT attached to a transit area, where the DUT is an OSPF neighbour.

SUITE1_5/FTR&D/IPTE/OSPF/OPA/18: Verify that a Type-11 Opaque LSA message generated by a router attached to a transit area is received by the DUT attached to the same transit area, where the DUT is an OSPF neighbour.

SUITE1_5/FTR&D/IPTE/OSPF/OPA/19: Verify that a Type-11 Opaque LSA message generated by a router attached to a transit area is received by the DUT attached to another transit area, different from the one where the LSA message has been generated, where the DUT is an OSPF neighbour.

SUITE1_5/FTR&D/IPTE/OSPF/OPA/20: Verify that the O-bit in the Options field of the Database Description packets sent by the DUT is set, when the DUT is an Opaque-LSA-aware OSPF router.

SUITE1_5/FTR&D/IPTE/OSPF/OPA/21: Verify that the O-bit in the Options field of packets other than Database Description packets sent by the DUT is not set, when the DUT is an Opaque-LSA-aware OSPF router.

SUITE1_5/FTR&D/IPTE/OSPF/OPA/22: Verify that the DUT does not send Type-9 Opaque LSA messages to non-Opaque-LSA-aware OSPF neighbours, when the DUT is an Opaque-LSA-aware OSPF router.

SUITE1_5/FTR&D/IPTE/OSPF/OPA/23: Verify that the DUT does not send Type-10 Opaque LSA messages to non-Opaque-LSA-aware OSPF neighbours, when the DUT is an Opaque-LSA-aware OSPF router.

SUITE1_5/FTR&D/IPTE/OSPF/OPA/24: Verify that the DUT does not send Type-11 Opaque LSA messages to non-Opaque-LSA-aware OSPF neighbours, when the DUT is an Opaque-LSA-aware OSPF router.


SUITE1_5/FTR&D/IPTE/OSPF/OPA/25: Verify that the DUT discards and does not acknowledge a Type-9 Opaque LSA message when the DUT is a non-Opaque-LSA-aware OSPF neighbour.

SUITE1_5/FTR&D/IPTE/OSPF/OPA/26: Verify that the DUT discards and does not acknowledge a Type-10 Opaque LSA message when the DUT is a non-Opaque-LSA-aware OSPF neighbour.

SUITE1_5/FTR&D/IPTE/OSPF/OPA/27: Verify that the DUT discards and does not acknowledge a Type-11 Opaque LSA message when the DUT is a non-Opaque-LSA-aware OSPF neighbour.

SUITE1_5/FTR&D/IPTE/OSPF/EXT/1: Verify that the DUT sends Type-10 TE LSA messages as described in [KATZ] to its OSPF neighbours, where the DUT is an OSPF router attached to a given area.

SUITE1_5/FTR&D/IPTE/OSPF/EXT/2: Verify that the DUT discards and does not acknowledge Type-9 TE LSA messages as described in [KATZ], when the DUT is an OSPF neighbour attached to the same area where the Type-9 TE LSA message has been generated.

SUITE1_5/FTR&D/IPTE/OSPF/EXT/3: Verify that the DUT discards and does not acknowledge Type-11 TE LSA messages as described in [KATZ], when the DUT is an OSPF neighbour attached to the same area where the Type-11 TE LSA message has been generated.

SUITE1_5/FTR&D/IPTE/OSPF/EXT/4: Verify that the TE LSA messages sent by the DUT are Type-10 Opaque LSA messages whose type field is valued to 1, where the DUT is an Opaque-LSA-aware OSPF router.

SUITE1_5/FTR&D/IPTE/OSPF/EXT/5: Verify that the DUT discards and does not acknowledge type-10 TE LSA messages whose type field has an encoded value different from "1", where the DUT is an Opaque-LSA-aware OSPF router.

SUITE1_5/FTR&D/IPTE/OSPF/EXT/6: Verify that the DUT sends exactly one type-10 TE LSA message whose top-level TLV is valued to "1", where the DUT is an Opaque-LSA-aware OSPF router.

SUITE1_5/FTR&D/IPTE/OSPF/EXT/7: Verify that the DUT sends exactly one type-10 TE LSA message in which the sub-TLVs of types 1 and 2 each appear exactly once, where the DUT is an Opaque-LSA-aware OSPF router.

SUITE1_5/FTR&D/IPTE/OSPF/EXT/8: Verify that the DUT discards and does not acknowledge type-10 TE LSA messages that contain more than one (1) top-level TLV, where the DUT is an Opaque-LSA-aware OSPF router.

SUITE1_5/FTR&D/IPTE/OSPF/EXT/9: Verify that the DUT discards and does not acknowledge a type-10 TE LSA message that contains more than one (1) sub-TLV of type 1, where the DUT is an Opaque-LSA-aware OSPF router.

SUITE1_5/FTR&D/IPTE/OSPF/EXT/10: Verify that the DUT discards and does not acknowledge a type-10 TE LSA message that contains more than one (1) sub-TLV of type 2, where the DUT is an Opaque-LSA-aware OSPF router.


SUITE1_5/FTR&D/IPTE/OSPF/EXT/11: Verify that the DUT sends a type-10 TE LSA message with a sub-TLV of type 5 when a TE metric has been assigned to a physical interface of the DUT that participates in the SPF calculation, and that the value of this TE metric is correctly encoded in the 4-octet Type 5 sub-TLV field, where the DUT is an Opaque-LSA-aware OSPF router.

SUITE1_5/FTR&D/IPTE/OSPF/EXT/12: Verify that the DUT sends a type-10 TE LSA message with a sub-TLV of type 5 when a TE metric has been assigned to a logical interface of the DUT that participates in the SPF calculation, and that the value of this TE metric is correctly encoded in the 4-octet Type 5 sub-TLV field, where the DUT is an Opaque-LSA-aware OSPF router.

SUITE1_5/FTR&D/IPTE/OSPF/EXT/13: Verify that the DUT sends a type-10 TE LSA message with a sub-TLV of type 6 when the physical capacity (expressed in bytes per second) of the link to which the local interface (identified in the sub-TLV of type 3 conveyed in the same TE LSA message) is attached needs to be notified to its Opaque-LSA-aware OSPF neighbours, and that the value of this sub-TLV is correctly encoded in the 4-octet Type 6 sub-TLV field, where the DUT is an Opaque-LSA-aware OSPF router. Note: this elementary test could be declined on a per-interface-type basis, i.e. verify that, for each kind of physical interface (Ethernet, Fast Ethernet, STM-1, STM-4, OC-3, etc.), the DUT sends the appropriate TE LSA message conveying both Type 3 and Type 6 sub-TLVs, the latter being encoded according to the maximum physical capacity of the link, e.g. 10 Mbit/s for an Ethernet interface, 100 Mbit/s for a Fast Ethernet interface, etc.

SUITE1_5/FTR&D/IPTE/OSPF/EXT/14: Verify that the DUT (an Opaque-LSA-aware OSPF router) sends a type-10 TE LSA message with a sub-TLV of type 9 when the local interface (identified in the sub-TLV of type 3 conveyed in the same TE LSA message) is restricted to the SPF calculation related to the forwarding of IP datagrams serviced by a specific class (possibly identified by a specific DSCP value), and when this information needs to be notified to its Opaque-LSA-aware OSPF neighbours. Verify also that the value of this sub-TLV is correctly encoded in the 4-octet Type 9 sub-TLV field.

SUITE1_5/FTR&D/IPTE/OSPF/EXT/15: Verify that the DUT (an Opaque-LSA-aware OSPF router) updates its TE database according to the type-10 TE LSA messages it receives, as specified in section 3 of [KATZ].

SUITE1_5/FTR&D/IPTE/OSPF/EXT/16: Verify that, for a given destination prefix N, a TE metric (as advertised in a Type 5 sub-TLV of a Type-10 TE LSA message) whose value n is strictly smaller than the value p of the link metric associated with the same physical interface leads the DUT (an Opaque-LSA-aware OSPF router) to add an iteration in the candidate list of next hops that may be used to reach destination N from the DUT, according to the Dijkstra calculation algorithm.


SUITE1_5/FTR&D/IPTE/OSPF/EXT/17: Verify that, for a given destination prefix N, a TE metric (as advertised in a Type 5 sub-TLV of a Type-10 TE LSA message) whose value n is strictly greater than the value p of the link metric associated with the same physical interface leads the DUT (an Opaque-LSA-aware OSPF router) to add an iteration in the candidate list of next hops that may be used to reach destination N from the DUT, according to the Dijkstra calculation algorithm.

SUITE1_5/FTR&D/IPTE/OSPF/EXT/18: Verify that, for a given destination prefix N, different colour metric values (as advertised in a Type 9 sub-TLV of a Type-10 TE LSA message) lead the DUT (an Opaque-LSA-aware OSPF router) to add as many iterations in the candidate list of next hops that may be used to reach destination N from the DUT, according to the Dijkstra calculation algorithm.
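Collectively, the OPA tests above enumerate the flooding-scope rules for Opaque LSAs: Type-9 LSAs never cross area boundaries (and, outside the backbone, stay on the originating sub-network), Type-10 LSAs are area-local, Type-11 LSAs are AS-wide but discarded by stub-area routers, and nothing opaque is accepted by a non-Opaque-LSA-aware neighbour. The sketch below is a compact model of that acceptance decision, written to mirror the expected outcomes in the table; it is a simplification for illustration, not normative OSPF.

    def accepts_opaque_lsa(lsa_type, same_area, same_link,
                           origin_in_backbone, dut_in_stub, dut_opaque_aware):
        """Acceptance decision mirroring the expected outcomes of OPA/1..27
        (a simplified model of the table above)."""
        if not dut_opaque_aware:                       # OPA/25..27
            return False
        if lsa_type == 9:                              # OPA/1..8: never across
            if not same_area:                          # areas; flooded area-wide
                return False                           # from the backbone, else
            return origin_in_backbone or same_link     # only on the originating
        if lsa_type == 10:                             # sub-network
            return same_area                           # OPA/9..12: area-local
        if lsa_type == 11:                             # OPA/13..19: AS-wide,
            return not dut_in_stub                     # but stub areas discard
        return False

    # OPA/2: Type-9 from the backbone, same area, different sub-network: received
    assert accepts_opaque_lsa(9, same_area=True, same_link=False,
                              origin_in_backbone=True, dut_in_stub=False,
                              dut_opaque_aware=True)
    # OPA/4: Type-9 from a non-backbone area, same area, different sub-network: rejected
    assert not accepts_opaque_lsa(9, same_area=True, same_link=False,
                                  origin_in_backbone=False, dut_in_stub=False,
                                  dut_opaque_aware=True)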

3.7 Policy Management Integration Tests

SUITE1_6/UniS/POL/PolMT/1
Purpose: Validate the implementation of the part of the Policy Management Tool that parses and syntactically checks the policies entered according to a high-level format.
Platform: Computer-based
Result: Passed (see note 3.6.1)

SUITE1_6/UniS/POL/PolMT/2
Purpose: Validate the implementation of the part of the Policy Management Tool that checks the validity of policy rules, by checking whether it can detect rules that do not make sense.
Platform: Computer-based
Result: Passed (see note 3.6.2)

SUITE1_6/UniS/POL/PolMT/3
Purpose: Validate the implementation of the translation function of the Policy Management Tool, by checking that it can correctly translate a policy rule entered in high-level format into valid LDAP operations towards the PolSS.
Platform: Computer-based
Result: Passed

SUITE1_6/UniS/POL/PolMT/4
Purpose: Integrate the Policy Management Tool with the Policy Storing Service (LDAP-like server), by adding or retrieving specific policies and making sure that they are received exactly as they are held in the server, or stored as intended according to the policy schema.
Platform: Computer-based
Result: Passed

SUITE1_6/UniS/POL/PolMT/5
Purpose: Integrate the Policy Management Tool with the Policy Consumer, by making sure that all notifications are sent correctly to the appropriate Policy Consumer for the policy just entered.
Platform: Computer-based
Result: Passed

SUITE1_6/UniS/POL/PolSS/1
Purpose: Validate the implementation of the Policy Storing Service, by checking that it correctly implements all specified LDAP operations.
Platform: Computer-based
Result: Passed

SUITE1_6/UniS/POL/PolSS/2
Purpose: Integrate the Policy Storing Service with the Policy Management Tool, by requesting specific policies and making sure that they are received exactly as they are held in the server.
Platform: Computer-based
Result: Passed

SUITE1_6/UniS/POL/PolSS/3
Purpose: Integrate the Policy Storing Service with the Policy Consumer, by requesting specific policies and making sure that they are received exactly as they are held in the server.
Platform: Computer-based
Result: Passed


SUITE1_6/UniS/POL/PolC/1
Purpose: Validate the implementation of the Policy Consumer, by checking the validity of the tcl code produced for simple policy examples.
Platform: Computer-based
Result: Passed

SUITE1_6/UniS/POL/PolC/2
Purpose: Integrate the Policy Consumer with the Policy Storing Service (LDAP-like server), by requesting specific policies and making sure that they are received exactly as they are held in the server.
Platform: Computer-based
Result: Passed

SUITE1_6/UniS/POL/PolC/3
Purpose: Integrate the Policy Consumer with the Policy Management Tool, by making sure that all notifications are received correctly.
Platform: Computer-based
Result: Passed

SUITE1_6/UniS/POL/PolC/4
Purpose: Integrate with the static part of the MPLS-based TE system (Network Dimensioning) and validate that the interface between them is valid and that the resulting ND adjustments are the ones required by the policy.
Platform: NS Simulator
Result: Passed (see note 3.6.3)

SUITE1_6/UniS/POL/PolC/5
Purpose: Integrate with the dynamic part of the MPLS-based TE system (DRsM/DRtM) and validate that the interface between them is valid and that the resulting DRsM/DRtM adjustments are the ones required by the policy.
Platform: NS Simulator
Result: Passed

Note 3.6.1: The implementation of the parser was done using the SableCC compiler-compiler tool.

Note 3.6.2: Logic specific to our implementation has been included in the system to check whether the combination of conditions and actions in a policy rule makes sense.

Note 3.6.3: A Tcl interface to the TE components has been implemented, because of the ease with which Tcl interfaces with the C/C++ used to implement the policy-influenced components.
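Note 3.6.3 says the Policy Consumer drives the TE components through generated Tcl. Purely as an illustration of that design choice, a consumer might render a high-level rule into a Tcl command string along the following lines; the rule fields and the generated command name (nd-adjust) are invented for the example and do not come from the project's code.

    def render_rule_as_tcl(condition, action):
        """Hypothetical rendering of one policy rule into a Tcl call for a
        TE component; the 'nd-adjust' command name is invented."""
        cond = " ".join("-%s %s" % kv for kv in condition.items())
        act = " ".join("-%s %s" % kv for kv in action.items())
        return "nd-adjust %s %s" % (cond, act)

    print(render_rule_as_tcl({"oa": "EF", "load": ">0.8"},
                             {"spare_capacity": "+10%"}))
    # -> nd-adjust -oa EF -load >0.8 -spare_capacity +10%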

3.8 IFT-based MPLS/Diffserv Integration Tests and Results

SUITE1_7/FTR&D/MPLSTE/IFT/Funct/01
Purpose: Set up an IFT LSP and map traffic onto it with Linux tools, using the enhanced RSVP-TE daemon.
Platform: French IFT testbed (see A.3)
Result: Passed; an IFT path can be set up and traffic mapped onto it with Linux tools (see section 3.8.1).

SUITE1_7/FTR&D/MPLSTE/IFT/Funct/02
Purpose: Un-map traffic and tear down an LSP with Linux tools, using the enhanced RSVP-TE daemon.
Platform: French IFT testbed (see A.3)
Result: Passed; traffic can be un-mapped and paths torn down with Linux tools (see section 3.8.2). The restriction is that a pause must be observed between orders, so that the monitor can process the iptables rule-reordering messages caused by each modification. The number of iptables deletion/re-creation messages depends on the rank of the deleted rule: all rules with a greater priority number are deleted and re-created. The monitor currently supports around twenty deletions/re-creations per second without loss in the netlink channels. An improvement would be to recognise and ignore these messages, which are not useful in IFT.


3.8.1 IFT LSP set-up and traffic mapping with Linux tools
With reference to the topology of the IFT-based French testbed (see A.3, Figure 8-11), the LSP is to be set up between LER(1)-LSR-LER(2). The correct establishment of the path and of the traffic mappings is reflected in the statistics provided by IFT. The following output verifies the correctness of the test.

1. The routed daemon is enabled for RSVP protocol communication:
[root#] /etc/init.d/routed start

2. The IFT monitor is enabled:
[root#] /etc/init.d/iftkmon start
[root#] ps -ef | grep kmon
root 314 1 0 10:22 pts/0 00:00:02 /usr/local/bin/iftkmon -d -v -c

3. The rapi_recvauto tool is enabled (wrapper script):
[root#] /etc/init.d/autopath start
[root#] ps -ef | grep rapi
root 32699 1 0 10:22 pts/0 00:00:42 /usr/local/bin/rapirecv_auto

4. The RSVP-TE daemon is enabled (wrapper script). The Linux PHB is not set up, since data will not be sent over the Ethernet interfaces:
[root#] /etc/init.d/rsvpte start
[root#] ps -ef | grep rsvpd
root 330 1 0 10:22 ? 00:00:08 /usr/local/bin/rsvpd

On the LER(1) side:

5. Path 102 from LER(1) to LER(2) via the LSR is set up Shared Explicit (SE) with the rtest2 tool:
[root#] /usr/local/bin/rtest2 -f <datafile> >/dev/null 2>>/dev/null &
[root#] cat <datafile>
11.102.5.153 11.101.5.175 102 0 11.101.5.130:0 1

6. Map traffic onto it, to check the syntax and to enable ping from hostA to hostB:
[root#] /usr/local/bin/tunnel -v -m -p tcp/23:26 -d 192.168.25.2/32 -x 0x2e -l102
[root#] /usr/local/bin/tunnel -v -m -p icmp -d 192.168.25.2/32 -l102

7. Show the Linux tunnel statistics:
[root#] /usr/local/bin/tunnel -L
LSPID Destination (type label/exp/ viface)
102 11.102.5.153 ( gen 168/ 0/ T168eth1)
| Destination DSCP Proto
\-> 192.168.25.2 EF tcp
\-> 192.168.25.2 BE icmp

8. Dump the IFT statistics (wrapper script that sends the USR2 signal to iftkmon):
[root#] /usr/local/bin/skmon

9. Look at the IFT statistics file /var/log/ift/iftkusr2.log to find the egress pattern added at iftkmon start-up for the 192.168.5.0 network (-> hostA) and the ingress patterns added by the tunnel commands (extract):


n°=00000005 cpt=00000000 [0055/16][00-ff/4][00-ff/4][40/4][50/4][00-ff/4][00-ff/4][00-ff/4][c0a805/24][00-ff/4]
n°=00000009 cpt=00000000 [0053/16][0800/16][40/4][50/4][b8/6][06/8][00-ff/4][c0a81902/32][00-ff/4][00-ff/4][0017-001a/16]
n°=00000010 cpt=00000000 [0053/16][0800/16][40/4][50/4][00-ff/4][01/8][00-ff/4][c0a81902/32][00-ff/4][00-ff/4][00-ff/4]

10. Since the source address of the ping command will be the ATM interface of hostA, which is ignored by the monitor, add a new route command to forward frames to the IP interface of hostA. This shows that route commands are taken into account by iftkmon; it could be avoided by translating the source address of the ping command (NAT).
[root#] /sbin/ip route add 11.1.5.0/24 via 192.168.5.167

In the statistics file /var/log/ift/iftkusr2.log we find the new egress pattern:
n°=00000011 cpt=00000000 [0055/16][00-ff/4][00-ff/4][40/4][50/4][00-ff/4][00-ff/4][00-ff/4][0b0105/24][00-ff/4]

On the LER(2) side:

11. Path 201 from LER(2) to LER(1) via the LSR is set up Shared Explicit (SE) with the rtest2 tool:
[root#] /usr/local/bin/rtest2 -f <datafile> >/dev/null 2>>/dev/null &
[root#] cat <datafile>
11.101.5.175 11.102.5.153 201 0 11.102.5.130:0 1

12. Map traffic onto it to enable the ping return from hostB to hostA. The source address will be the ATM interface of hostA:
[root#] /usr/local/bin/tunnel -v -m -a -d 11.1.5.167/32 -l201

13. Show the Linux tunnel statistics:
[root#] /usr/local/bin/tunnel -L
LSPID Destination (type label/exp/ viface)
201 11.101.5.175 ( gen 268/ 0/ T268eth2)
| Destination DSCP Proto
\-> 11.1.5.167 BE nop

14. Dump the IFT statistics (wrapper script that sends the USR2 signal to iftkmon):
[root#] /usr/local/bin/skmon

15. Look at the IFT statistics file /var/log/ift/iftkusr2.log to find the egress pattern added at iftkmon start-up for the 192.168.25.0 network (-> hostB) and the ingress pattern added by the tunnel command (extract):
n°=00000004 cpt=00000000 [00d5/16][00-ff/4][00-ff/4][40/4][50/4][00-ff/4][00-ff/4][00-ff/4][c0a819/24][00-ff/4]
n°=00000007 cpt=00000000 [00d3/16][0800/16][40/4][50/4][00-ff/4][00-ff/4][00-ff/4][0b0105a7/32][00-ff/4][00-ff/4][00-ff/4]


On the LSR side:

16. Look at the Linux MPLS system files:
[root#] cat /proc/net/mpls_in
0x4002a000 4/352/0 gen 168 0 5 POP FWD(0x0000000a)
0x40043000 19/1672/0 gen 268 0 20 POP FWD(0x0000000b)
[root#] cat /proc/net/mpls_out
0x0000000a 4/336/0 2 EXP2TC( EXP(0)->TC(ffff) EXP(1)->TC(00b8) EXP(2)->TC(0028) EXP(3)->TC(0030) EXP(4)->TC(0048) EXP(5)->TC(0050) EXP(6)->TC(0068) EXP(7)->TC(0070) ) PUSH(gen 268) SET(eth2,11.102.5.153)
0x0000000b 19/1596/0 2 EXP2TC( EXP(0)->TC(ffff) EXP(1)->TC(00b8) EXP(2)->TC(0028) EXP(3)->TC(0030) EXP(4)->TC(0048) EXP(5)->TC(0050) EXP(6)->TC(0068) EXP(7)->TC(0070) ) PUSH(gen 168) SET(eth1,11.101.5.175)

17. Dump the IFT statistics (wrapper script that sends the USR2 signal to iftkmon):
[root#] /usr/local/bin/skmon

18. Look at the IFT statistics file /var/log/ift/iftkusr2.log to find the core patterns added in the IFT memory for each path and priority. The output label is part of the action end status and is not visible here. (Extract)
n°=00000002 cpt=00000000 [0094/16][000a80/20][00/3]
n°=00000004 cpt=00000000 [0094/16][000a80/20][20/3]
n°=00000005 cpt=00000000 [0094/16][000a80/20][40/3]
n°=00000006 cpt=00000000 [0094/16][000a80/20][60/3]
n°=00000007 cpt=00000000 [0094/16][0010c0/20][00/3]
n°=00000008 cpt=00000000 [0094/16][0010c0/20][20/3]
n°=00000009 cpt=00000000 [0094/16][0010c0/20][40/3]
n°=00000010 cpt=00000000 [0094/16][0010c0/20][60/3]

On the hostA side:

19. Forward IP frames to hostB via the ATM interface:
[root#] /sbin/ip route add 192.168.25.0/24 via 11.1.5.1

On the hostB side:

20. Routing of 11.1.5.0/24 via the ATM interface is done at boot (extract):
[root#] netstat -r
Routing Table:
Destination Gateway Flags Ref Use Interface
-------------------- -------------------- ----- ----- ------ ---------
139.100.198.0 helios U 3 39 qe0
192.168.25.0 he-hme0 U 6 4 hme0
11.1.5.0 he-ba0 U 2 426 ba0

On the hostA side:

21. Ping hostB:
[root#] ping 192.168.25.2
PING 192.168.25.2 (192.168.25.2) from 11.1.5.167 : 56(84) bytes of data.
Warning: time of day goes back, taking countermeasures.
64 bytes from 192.168.25.2: icmp_seq=0 ttl=255 time=977 usec
64 bytes from 192.168.25.2: icmp_seq=1 ttl=255 time=622 usec
64 bytes from 192.168.25.2: icmp_seq=2 ttl=255 time=620 usec


On the 3 nodes supporting the IFT network:

22. Dump the IFT statistics (wrapper script that sends the USR2 signal to iftkmon):
[root#] /usr/local/bin/skmon

23. Look at the IFT statistics file /var/log/ift/iftkusr2.log and verify that the counters increment.
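The bracketed fields in the iftkusr2.log extracts of steps 9, 15 and 18 read as hex values paired with their bit widths, so that, for instance, [c0a81902/32] is the IPv4 destination address 192.168.25.2 and [00-ff/4] is a wildcard. The small sketch below decodes such pattern lines; the parsing rules are inferred from the extracts shown above, not from IFT documentation.

    import re

    def parse_ift_pattern(line):
        """Split a pattern such as '[0053/16][0800/16][c0a81902/32]' into
        (hex value, bit width) pairs; ranges like '00-ff' are wildcards."""
        return [(value, int(width))
                for value, width in re.findall(r"\[([0-9a-f-]+)/(\d+)\]", line)]

    def as_ipv4(hexval):
        """Render an 8-hex-digit field as a dotted IPv4 address."""
        return ".".join(str(int(hexval[i:i + 2], 16)) for i in range(0, 8, 2))

    fields = parse_ift_pattern("[0053/16][0800/16][c0a81902/32]")
    print(fields)               # [('0053', 16), ('0800', 16), ('c0a81902', 32)]
    print(as_ipv4("c0a81902"))  # 192.168.25.2, i.e. hostB in the steps above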

3.8.2 Un-map traffic and tear down an IFT LSP with Linux tools
With reference to the topology of the IFT-based French testbed (see A.3, Figure 8-11), the LSP previously set up between LER(1)-LSR-LER(2) is now un-mapped and torn down. The effects on the path and the traffic mappings are reflected in the statistics provided by IFT. The following output verifies the correctness of the test.

On the LER(1) side:

24. Un-map the traffic on path 102, to check the syntax. A delay must be respected between successive un-mapping commands, because iptables reorganises the order of all its rules and generates as many messages as individual operations, which can cause an overrun from the monitor's point of view.
[root#] /usr/local/bin/tunnel -v -u -p tcp/23:26 -d 192.168.25.2/32 -x 0x2e -l102
[root#] /usr/local/bin/tunnel -v -u -p icmp -d 192.168.25.2/32 -l102

25. Show the Linux tunnel statistics:
[root#] /usr/local/bin/tunnel -L
LSPID Destination (type label/exp/ viface)
102 11.102.5.153 ( gen 168/ 0/ T168eth1)

26. Check the main log file /var/ift/iftklog.log to see the iptables deletion messages (extract):
DELIPTABLE DelIPT:292, IPv4: MANGLE : OUTPUT : 1
target prot opt source destination
MARK tcp -- anywhere helios-hme0 tcp spt:0 dpts:telnet:26 MARK set 0x1
message NOT chained
id=275 mid=1 st=6673ffff [int r:0003ffff d:28 m:3 c:0 c:3] g=000 h=1 mask=16 len=02 ( 00 53 ) ATM_VC
id=276 mid=2 st=8683ffff [int r:0003ffff d:32 m:3 c:0 c:4] g=004 h=0 mask=16 len=02 ( 08 00 ) SNAP_TYPE
id=277 mid=3 st=0687ffff [int r:0003ffff d:33 m:3 c:0 c:0] g=008 h=0 mask=04 len=01 ( 40 ) IPV4_VERS
id=278 mid=4 st=068bffff [int r:0003ffff d:34 m:3 c:0 c:0] g=009 h=0 mask=04 len=01 ( 50 ) IPV4_IHL
id=279 mid=5 st=06cbffff [int r:0003ffff d:50 m:3 c:0 c:0] g=010 h=0 mask=04 len=01 ( 00 ) ( ff ) IPV4_DSCP
id=280 mid=6 st=06e3ffff [int r:0003ffff d:56 m:3 c:0 c:0] g=011 h=0 mask=08 len=01 ( 06 ) IPV4_PROT
id=281 mid=7 st=0703ffff [int r:0003ffff d:64 m:3 c:0 c:0] g=062 h=0 mask=04 len=01 ( 00 ) ( ff ) IPV4_SA
id=282 mid=8 st=06c3ffff [int r:0003ffff d:48 m:3 c:0 c:0] g=063 h=0 mask=32 len=04 ( c0 a8 19 02 ) IPV4_DA
id=283 mid=9 st=0723ffff [int r:0003ffff d:72 m:3 c:0 c:0] g=071 h=0 mask=04 len=01 ( 00 ) ( ff ) IPV4_TTL
id=284 mid=10 st=0733ffff [int r:0003ffff d:76 m:3 c:0 c:0] g=072 h=0 mask=04 len=01 ( 00 ) ( ff ) TCP_SPORT
id=285 mid=11 st=34400058 [final mpls p:00000058 l:0 s:1 e:0 p:5 c:1] g=073 h=0 mask=16 len=02 ( 00 17 ) ( 00 1a ) TCP_DPORT
message processed [0053] -> [final mpls p:00000075 l:368 s:1 e:3 p:5 c:1]

27. Look at the IFT statistics file /var/log/ift/iftkusr2.log to find that the counters of the related traffic have disappeared.

28. Ping hostB and find that there is no communication.

29. Find the pid of the rtest2 command and kill the process: this tears down path 102.

On the LSR side:

30. Check the main log file /var/ift/iftklog.log to see that the automatically generated path tear-down message has been caught by the monitor (extract):
DELXC [auto generated] inet connect incoming (gen lab|key 168|a8 lspace 0 exp 168) with outgoing (gen lab|key 269|10d lspace 0 exp 269)


Look at the IFT statistics file /var/log/ift/iftkusr2.log to find that the counters of the related path have disappeared.

3.9 NS Integration Tests and Results

All tests in this suite were carried out on the NS simulator, and all passed.

SUITE1_8/UniS/NS/TE/MPLS/SP/1: Validate the correct behaviour of the Steiner tree algorithm implemented. Networks for which the Steiner tree can be known in advance are used.

SUITE1_8/UniS/NS/TE/MPLS/SP/2: Validate the correct behaviour of the optimisation algorithm implemented for the off-line Network Dimensioning component (cf. section 4.1 in D1.2 and Part B of this Deliverable). Appropriate network topologies and numbers of trunks are used so that the desired result can be calculated beforehand.

SUITE1_8/UniS/NS/TE/MPLS/SP/3: Integrate and validate the Topology Generation (TopoGen) tool with the static MPLS system.

SUITE1_8/UniS/NS/TE/MPLS/SP/4: Integrate and validate the TopoGen tool with the Bulk Service Request Generator (BSRG). This requires checking the interface between them and that the topology semantics of the two tools are compatible.

SUITE1_8/UniS/NS/TE/MPLS/SP/5: Validate the formation of NS configuration commands from Network Dimensioning for: a) topology, b) LSP configuration, c) DiffServ queue configuration, d) overall functions (finish, total time, etc.).

SUITE1_8/UniS/NS/TE/MPLS/SP/6: Validate that the traffic sources produced by BSRG in the form of NS commands are correct, by running appropriate scripts.

SUITE1_8/UniS/NS/TE/MPLS/SP/7: Integrate and validate the behaviour with the dynamic part of the MPLS-based TE (see the next series of tests). This test checks whether the tcl commands that configure Dynamic Routing and Resource Management work as required by the implementation in NS.

SUITE1_8/UniS/NS/TE/MPLS/SP/8: Integrate and validate the behaviour of ND with the appropriate Policy Consumer, checking whether the tcl code provided by the Policy Consumer for TE policies results in valid Network Dimensioning behavioural adjustments. This test is related to SUITE1_7/UniS/NS/TE/MPLS/PO/4 from a Network Dimensioning point of view.

SUITE1_8/UniS/NS/TE/MPLS/DP/1: Validate the correct behaviour of the DSCP-aware routing, using small topologies and only a few DSCPs so that the result can be calculated beforehand.

SUITE1_8/UniS/NS/TE/MPLS/DP/2: Validate the correct behaviour of the Dynamic Resource Management component in NS, using a two-node network and operating DRsM on a single link where the load is controlled and the behaviour can therefore be predicted.

SUITE1_8/UniS/NS/TE/MPLS/DP/3: Integrate and validate the configuration commands produced by the static TE part. This is related to the SUITE1_8/UniS/NS/TE/MPLS/SP/7 test described above, but is the validation required from the point of view of the dynamic components.


SUITE1_8/UniS/NS/TE/MPLS/DP/4: Integrate and validate the behaviour of the dynamic MPLS-based TE components with the appropriate Policy Consumer, checking whether the tcl code provided by the Policy Consumer for simple policies results in valid DRsM behavioural adjustments. This test is related to SUITE1_8/UniS/NS/TE/MPLS/PO/4, but from the DRsM point of view.


4 ALGORITHM PERFORMANCE ASSESSMENT TESTS AND RESULTS

4.1 Test-Suites
Algorithm assessment tests have been carried out to assess aspects of the performance of specific algorithms of the SLS Management and MPLS-TE sub-systems. The performance of an algorithm is assessed in terms of its configuration parameters and/or the uncontrolled variables influencing it. The tests comprise the following test-suites.

SUITE2_1/ALGO/SLSM: Assess the performance of the algorithms of the SLS Management sub-system.
SUITE2_2/UniS/MPLSTE: Assess the performance of the algorithms of the MPLS-based TE sub-system.

4.2 SLS Management Algorithm Performance Tests and Results

SUITE2_1/ALGO/SLSM/TraffAggr
Purpose: Performance assessment of the traffic aggregation algorithm under the various network and traffic cases influencing its performance.
Platform: Greek Dev. Platform (see A.4)
Results: See section 4.2.1.

4.2.1 Performance Assessment of the Traffic Aggregation Algorithm
The Traffic Aggregation algorithm is executed within the TEQUILA system at Resource Provisioning Epochs, when the network is to be dimensioned by the off-line TE functions on the basis of anticipated demand estimates. The Traffic Aggregation algorithm calculates the anticipated network demand from the population of established subscriptions.

The metric used to capture the performance of the Traffic Aggregation algorithm is its execution time.

As shown by the scalability analysis undertaken (see Part C of this Deliverable), the complexity of the Traffic Aggregation algorithm depends on:

1. The number of established SLSs (|SLS|); note that this number is generally greater than the number of established subscriptions, as a subscription may contain a number of SLSs.

2. The number of QoS-classes or equivalently Ordered Aggregates (OAs) (|H|), corresponding to the PHB capabilities of the network interfaces, supported by the network infrastructure.

3. The number of the network's TTs (|T|); a TT is a QoS-class in a certain topological scope. Therefore, the maximum number of TTs equals |H|*E*(E-1), where E denotes the number of network edges.

4. The number of service classes (|SC|). A service class denotes a category of services and/or a type of customers using these services, for which (category) statistical usage patterns have been observed. For each service class, multiplexing and aggregation factors are produced, which are taken as input by the Traffic Aggregation algorithm. Service classes are the outcome of traffic analysis activities, which are outside the scope of the project.

According to the theoretical scalability analysis, the complexity of demand aggregation is O(|T|*|SC| + |SLS|*|H|). The purpose of this test-suite is to verify the theoretical expectations with operational measurements.
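To make the origin of the two terms concrete, the following Python-style sketch mirrors the loop structure implied by the analysis: one pass over the established SLSs, each matched against the |H| QoS-classes, and one pass over the TTs, each scaled by the |SC| service-class factors. The data model and all names are illustrative assumptions, not the TEQUILA implementation.

from collections import defaultdict

def aggregate_demand(slss, tts, mux_factors, qos_classes):
    """slss: list of (ingress, egress, requested_oa, rate) tuples.
    tts: list of (ingress, egress, oa) trunk identifiers.
    mux_factors: dict service class -> multiplexing factor (hypothetical).
    qos_classes: the |H| OA identifiers derived from the PHB capabilities."""
    per_tt = defaultdict(float)
    # Pass 1, O(|SLS|*|H|): match each SLS against the QoS-classes and
    # accumulate its rate on the corresponding traffic trunk.
    for ingress, egress, requested_oa, rate in slss:
        for oa in qos_classes:
            if oa == requested_oa:
                per_tt[(ingress, egress, oa)] += rate
                break
    # Pass 2, O(|T|*|SC|): scale every trunk's raw demand by the
    # per-service-class multiplexing/aggregation factors.
    return {tt: sum(f * per_tt[tt] for f in mux_factors.values())
            for tt in tts}

demand = aggregate_demand(
    slss=[("e1", "e2", "EF", 2.0), ("e1", "e2", "EF", 1.0)],
    tts=[("e1", "e2", "EF")],
    mux_factors={"voice": 0.5, "data": 0.8},
    qos_classes=["EF", "AF1"])
print(demand)  # {('e1', 'e2', 'EF'): ~3.9}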

4.2.1.1 Experimentation Set-up

The following tests were conducted:


Test1 – moderate net

Platform: OS: SuSE Linux 8.0; DB: Oracle 9.2 DB Server; CPU: dual Xeon 2.4 GHz with Hyper-Threading on; Memory: 4 GByte; HD: SCSI Ultra 3, 15,000 rpm

Customers: 200 customers with 5 sites per customer

Service Classes: 20

Network: 6 edge routers, 3 core routers, 5 QoS classes, 150 TTs

Subscriptions: Permanent / On-Demand / Managed Bandwidth Peer 2 Peer services, 2 SLSs per SSS

Number of established SLSs: 1,000, 10,000, 20,000

Measurement: Traffic Aggregation execution time

Test2 – large net

Platform: As above

Customers: As above

Service Classes: As above

Network: 100 edge routers, 3 core routers, 5 QoS classes, 49,500 TTs

Subscriptions: As above

Number of established SLSs: 1,000, 2,000, 4,000, 6,000, 8,000, 10,000, 12,000, 14,000, 16,000, 18,000, 20,000

Measurement: Traffic Aggregation execution time

Test3 – large net, large number of SLSs per SSS

Platform: As above

Customers: As above

Service Classes: As above

Network: As above

Subscriptions: Permanent / On-Demand / Managed Bandwidth Peer 2 Peer services, 20 SLSs per SSS

Number of established SLSs: As above

Measurement: Traffic Aggregation execution time

Effect of TTs - Comparison Test1-Test2

Comparing the Traffic Aggregation algorithm execution time for a small network (150 TTs, Test 1) and a large network (49,500 TTs, Test 2).

Effect of SLSs/SSS -Comparison Test2-Test3

Comparing the Traffic Aggregation algorithm execution time for different subscription profiles: 2 SLSs per SSS (Test 2) and 20 SLSs per SSS (Test 3).

Table 4-1: Performance Tests of the Traffic Aggregation algorithm.


4.2.1.2 Moderate Network (Test 1) Results

[Chart: Demand Aggregation execution time, 0-0.8 sec, vs. number of established SLSs (1,000, 10,000, 20,000) - Test 1]

Figure 4-1: Traffic Aggregation results (test 1).

The comparison between the Traffic Aggregation execution times for different populations of established SLSs (1,000, 10,000 and 20,000 SLSs) shows a linear increase in execution time, demonstrating that the algorithm scales with the number of established SLSs.

4.2.1.3 Large Network (Test 2) Results

[Chart: Demand Aggregation execution time, 0-120 sec, vs. number of established SLSs (1,000-20,000) - Test 2]

Figure 4-2: Traffic Aggregation results (test 2).

Similarly to the previous test, the linear growth of the Traffic Aggregation execution time is verified at a finer granularity of established SLSs (from 1,000 to 20,000), which demonstrates the scalability of the algorithm with the number of SLSs.


4.2.1.4 Large Network, Large Number of SLSs per SSS (Test 3) Results

[Chart: Demand Aggregation execution time, 0-30 sec, vs. number of established SLSs (1,000-20,000) - Test 3]

Figure 4-3: Traffic Aggregation results (test 3).

The linear growth of the Traffic Aggregation execution time with respect to the number of established SLSs is also verified in this test case.

4.2.1.5 Effect of TTs - Comparing Test 1 and Test 2

[Chart: comparison of Demand Aggregation execution times (sec) for different networks]

Execution time (sec) per number of established SLSs (1,000 / 10,000 / 20,000):

Test1: 0.109 / 0.284 / 0.726
Test2: 8.604 / 57.582 / 105.019

Figure 4-4: Effect of TTs on Traffic Aggregation performance.

The above results show that the Traffic Aggregation execution time increases as the number of TTs increases. The increase is observed to be almost linear, which demonstrates the scalability of the Traffic Aggregation algorithm in terms of TTs.


Compared with the results of the previous tests (see Figures 4-1 to 4-3), the effect of the number of TTs on the execution time is much greater than the effect of the number of established SLSs. It should be noted that even in the case of a large network (49,500 TTs), the total process execution time remains at reasonable levels (about 100 sec), considering that the algorithm runs off-line.

4.2.1.6 Effect of number of SLSs per SSS - Comparing Test 3 and Test 2

[Chart: comparison of Demand Aggregation execution times (sec) for different numbers of SLSs per SSS]

Execution time (sec) per number of established SLSs (1,000 / 10,000 / 20,000):

Test3: 2.607 / 14.435 / 28.014
Test2: 8.604 / 57.582 / 105.019

Figure 4-5: Effect of the number of SLSs per SSS on Traffic Aggregation performance.

The results show that the Traffic Aggregation algorithm execution time is shorter in the case of more SLSs per SSS. This is expected, because the current implementation iterates over each established SSS, and with more SLSs per SSS fewer SSSs are needed for the same number of SLSs. In any case, this effect is implementation-dependent.

4.3 MPLS-TE Algorithm Performance Tests and Results

The following tests were carried out to assess the MPLS-TE algorithms. The algorithms assessed were:

• The Network Dimensioning (ND) algorithm, which dimensions the network on the basis of anticipated demand estimates; it runs off-line.

• The Dynamic Resource Management (DRsM) algorithm, which manages the scheduling parameters of the PHBs according to actual load conditions; it runs in a distributed fashion, residing at each network router.

Test ID – Purpose – Platform – Test Results

SUITE2_2/UniS/TE/MPLS/ND/1 Performance assessment of the Steiner Tree algorithms on simple scenarios/topologies, so that: a) the functionality of our algorithms is demonstrated, and b) the validity of our implementation is tested, since the simple scenarios can also be verified by hand.

NS Simulator

Sections 4.3.2 and 4.3.1


SUITE2_2/UniS/TE/MPLS/ND/2 Performance assessment of constrained Steiner tree algorithms. This test will assess the performance of the alternative algorithms for finding constraint-based Steiner trees in terms of the total cost of the resulting tree. Complex, large random topologies and different Steiner point percentages will be tested in order to evaluate scalability.

NS Simulator

Sections 4.3.4, 4.3.2, 4.3.1

SUITE2_2/UniS/TE/MPLS/ND/3 Performance assessment of constrained Steiner tree algorithms. Complex, large random topologies and different Steiner point percentages will be tested in order to evaluate scalability. This test will assess the performance of the alternative algorithms for finding constraint-based Steiner trees in terms of the execution time of the corresponding algorithms.

NS Simulator

Sections 4.3.5, 4.3.3, 4.3.1

SUITE2_2/UniS/TE/MPLS/ND/4 Performance assessment of constrained Steiner tree algorithms. This test will assess the performance of the alternative algorithms for finding constraint-based Steiner trees in terms of the maximum end-to-end delay (or maximum hop-count) of the corresponding algorithms.

NS Simulator

Sections 4.3.6, 4.3.3, 4.3.1

SUITE2_2/UniS/TE/MPLS/ND/5 Performance assessment of optimisation algorithms. This test assesses the quality of the solution given by the dimensioning algorithms. It demonstrates their step-wise functionality on a standard, well-known backbone topology, and how they iteratively achieve lower maximum link utilisation and a balanced network.

NS Simulator

Sections 4.3.9, 4.3.8

SUITE2_2/UniS/TE/MPLS/ND/6 With this test we check the scalability of the Dimensioning algorithm. We check the link load distributions for random large network topologies. Execution time is another metric used in these experiments.

NS Simulator

Sections 4.3.10, 4.3.8

SUITE2_2/UniS/TE/MPLS/ND/7 This test will assess the impact of the performance parameters (e.g. exponent n) of the optimisation algorithm under various topology and load scenarios. The impact will be measured as a function of the link load distribution as well as the ability to satisfy the hop count constraints.

NS Simulator

Sections 4.3.11, 4.3.8


SUITE2_2/UniS/TE/MPLS/ND/8 Assessment of the degree to which QoS requirements (average delay) are satisfied in a network after applying the LSP and PHB configuration produced by ND.

NS Simulator

Sections 4.3.12, 4.3.8

SUITE2_2/UniS/TE/MPLS/DRsM/1 Simple tests will demonstrate the operation of DRsM. These will show that DRsM changes the rate allocation based on the offered load and throughput. Activity plots will be the main tools for these tests.

NS Simulator

Sections 4.3.15, 4.3.14

SUITE2_2/UniS/TE/MPLS/DRsM/2 With this series of tests we try to assess the impact of the various policy parameters which influence the behaviour of the DRsM algorithm.

NS Simulator

Sections 4.3.16, 4.3.14

SUITE2_2/UniS/TE/MPLS/DRsM/3 Performance tests for assessing the performance improvement of solutions involving Dynamic Resource Management versus those where such behaviour is absent. Total packet loss versus offered load is the main performance metric, with various traffic scenarios (simple, complex, random VBR).

NS Simulator

Sections 4.3.17, 4.3.14

4.3.1 Theoretical Background: Constrained Steiner Tree Algorithms

Constrained Steiner tree algorithms construct trees connecting the source to all destination nodes without violating the constraint imposed by an upper bound on the hop-count. In the literature these algorithms have been studied in the context of multicasting, since Steiner trees can be directly used as multicast trees, and the hop-count constraint can be seen as a QoS guarantee, because by constraining the number of hops we bound the maximum delay. Multicast algorithms that create trees by minimizing the cost of each path from the source to a multicast member node are known as shortest path algorithms, while algorithms whose objective is to minimize the total cost of the multicast tree are called Steiner tree algorithms [Sala97]. The Steiner tree problem is known to be NP-complete [Karp72].

Several efficient Steiner tree heuristics have been proposed in the literature [Ram96], [KMB81], [Steiner83]. Below we describe the algorithms we used as the basis for comparison with the TEQUILA Minimum Weight Hop-Count Constraint Tree algorithm [D1.4] (also described below for completeness).

4.3.1.1 The TEQUILA Minimum Weight Hop-Count Constraint Tree (MWHCT) algorithm

First of all, the algorithm checks whether a feasible solution exists for the given network and hop-count constraint K by using Dijkstra's shortest path algorithm [Dijkstra52]. If no tree that satisfies the hop-count constraint can be found, there is no feasible solution and the algorithm terminates. If a feasible solution exists, the algorithm initially calculates all the least weight trees spanning from each of the egress nodes and then, at each iteration, selects the least-weight path from one egress node to the tree and joins it to the tree. This path must satisfy the hop-count constraint and have the least weight among all the paths that do so. If there is more than one such path, the algorithm selects the one with the minimum hop count and adds it to the tree.

For all egress nodes for which the algorithm fails to find a least weight path satisfying the hop-count constraint, the shortest path found at the beginning is considered: the algorithm finds the minimum-weight shortest path from the ingress to each of these egress nodes and adds it to the tree.


The combination of the partial Steiner tree with the shortest path found at the beginning of the algorithm can possibly result in loops, so it is necessary to call a loop-breaking procedure to detect and break them. The loop-breaking procedure proposed in [D1.4] first finds the crossing nodes of the tree (those having more than one incoming link) and then, starting from the node furthest from the ingress, deletes all excessive links contained in the tree but not in the path. This pruning continues until one of the following is encountered: another egress node, a node of the shortest path, or a non-crossing node connected to more than two nodes.

The pseudo-code of this algorithm is presented below [D1.4].

Notation

G = (V, E, W) – input weighted graph

v_ingress ∈ V – ingress node (root of the tree)

V_egress ⊆ V – {v_ingress} – set of egress nodes (leaves of the requested tree)

V_e – set of all egress nodes plus the ingress node

V_T, E_T – nodes and links of the requested tree T

V_L – set of crossing nodes

K – the hop-count constraint

K_P(u, v) – the number of hops between nodes u and v along path P

W_P(u, v) – weight of the path from node u to v along path P

nodes(P) – function that returns the set of nodes that constitute path P

links(P) – function that returns the set of links that constitute path P

degree(v) – function that returns the degree of node v

predecessor_P(v) – function that returns the predecessor node of v along path P

MWHCT algorithm

1. V_T = {v_ingress}, E_T = ∅, V′_egress = V_egress

2. Compute the shortest path tree spanning from v_ingress

3. If max_{v ∈ V_egress} K_P(v_ingress, v) > K, EXIT: no feasible solution

4. Compute the least weight trees spanning from each v ∈ V_egress

5. For all v ∈ V_egress and u ∈ V_T, find the path set P = {P(v, u)} that satisfies K_P(v, u) + K_P(u, v_ingress) ≤ K, such that W_P(v, u) = min_{i ∈ V_egress, j ∈ V_T} W_P(i, j)

6. If |P| ≥ 1, choose the P(u, v) ∈ P with min_{u ∈ V_T} K_P(u, v)

7. Add the nodes and links of P(v, u) to the tree T = (V_T, E_T), i.e. V_T = V_T ∪ nodes(P(v, u)) – {u} and E_T = E_T ∪ links(P(v, u))

8. else if |P| = 0

9. P = {P(v_ingress, v): v ∈ V_egress}, using the shortest paths from step 2

10. For each P ∈ P, find the total weight W_P, excluding links in E_T

11. Find the path P(v_ingress, v) ∈ P such that W_P = min_{v ∈ V_egress} W_P(v_ingress, v)

12. Add the nodes and links of P(v_ingress, v) to the tree T = (V_T, E_T), i.e. V_T = V_T ∪ nodes(P(v_ingress, v)) – {v_ingress} and E_T = E_T ∪ links(P(v_ingress, v))

13. If loops are detected, call Loop-Breaking Procedure (P, T)

14. V_egress = V_egress – {v}

15. If V_egress = ∅, FINISH; else GOTO step 4
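As an illustration of the feasibility test in steps 2-3, the sketch below assumes that the hop counts are taken along minimum-hop paths, in which case a breadth-first search suffices (every link counts as one hop). The graph representation and all names are illustrative, not the project's code.

from collections import deque

def feasible(adj, v_ingress, v_egress, K):
    """adj: dict node -> iterable of neighbour nodes (directed links).
    Returns True iff every egress node is reachable from v_ingress within
    at most K hops, i.e. a constraint-satisfying tree can exist at all."""
    hops = {v_ingress: 0}
    queue = deque([v_ingress])
    while queue:  # plain BFS: every link counts as one hop
        u = queue.popleft()
        for w in adj.get(u, ()):
            if w not in hops:
                hops[w] = hops[u] + 1
                queue.append(w)
    return all(v in hops and hops[v] <= K for v in v_egress)

adj = {0: [1, 7], 1: [2, 5], 2: [3], 7: [8]}
print(feasible(adj, 0, {3, 5, 8}, 3))  # True: the worst egress (3) is 3 hops away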

Below we present the pseudo-code of the Loop-Breaking Procedure [D1.4] used by the MWHCT algorithm in step 13.

Loop-Breaking Procedure (P, T)

1. Find the crossing nodes: V_L = {v ∈ V_T : (u, v) ∈ E_T, (w, v) ∈ E_T, u ≠ w}

2. Find the crossing node v ∈ V_L furthest from the ingress, i.e. such that K_P(v_ingress, v) = max_{j ∈ V_L} K_P(v_ingress, j)

3. V_L = V_L – {v}

4. Let edge (u, v) ∈ E_T and (u, v) ∉ P

5. While (u ∉ (V_e ∩ V_T)) and (u ∉ nodes(P)) and (degree(u) = 2)

6. Prune (u, v)

7. v = u, u = predecessor_{E_T}(u)

8. If V_L = ∅, FINISH; else GOTO step 2
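The pruning walk of steps 4-7 can be pictured with the following sketch (hypothetical data structures, not the deliverable's code). Consistent with the worked examples in Section 4.3.2 below, the link leading into the node that stops the walk is pruned as well.

def prune_from_crossing(tree_edges, predecessor, crossing, path_nodes,
                        terminals, degree):
    """tree_edges: set of directed tree links (upstream, downstream).
    predecessor: dict node -> its upstream node along the redundant branch.
    crossing: crossing node where the pruning starts.
    path_nodes: nodes of the newly added path (must not be pruned).
    terminals: ingress and egress nodes already in the tree.
    degree: dict node -> degree of the node in the tree."""
    v = crossing
    u = predecessor[v]
    # Walk upstream while u is a removable relay node.
    while u not in terminals and u not in path_nodes and degree[u] == 2:
        tree_edges.discard((u, v))  # prune the redundant link
        v, u = u, predecessor[u]
    tree_edges.discard((u, v))      # also drop the link into the stop node
    return tree_edges

# Reproduces example 1 of Section 4.3.2.1: crossing node 3, links (4,3)
# and (5,4) pruned, pruning stopped at egress node 5.
edges = {(0, 1), (1, 5), (5, 4), (4, 3), (0, 10), (10, 3)}
prune_from_crossing(edges, {3: 4, 4: 5, 5: 1}, crossing=3,
                    path_nodes={0, 10, 3}, terminals={0, 5, 8},
                    degree={4: 2, 5: 3})
print(sorted(edges))  # [(0, 1), (0, 10), (1, 5), (10, 3)]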

An alternative way to find a path, in case it is not possible to find a path from the least weight trees of all egress nodes not included in the tree, is also described in [D1.4]. In this case, steps 8-10 of the MWHCT algorithm's pseudo-code are replaced by the following procedure:

First, the shortest path trees according to weights, spanning from all remaining egress nodes to all the other nodes, are computed. This step needs to be performed only once for all the egress nodes not yet in the tree. From this set of shortest paths from all v ∈ V_egress to the nodes of the tree, it is possible to calculate the paths that satisfy the hop-count constraint, i.e.

P = {P(v, u): v ∈ V_egress, u ∈ V_T} such that K_P(v, u) + K_P(u, v_ingress) ≤ K

This set will not be empty, as it includes at least all the paths computed at the beginning (when checking whether a feasible solution exists). Next, the weight of each of these paths is computed as follows. First, the algorithm adds the weights of all the links in the path but not in the tree, and then calls the loop-breaking procedure in order to detect and break possible loops. When this is done, it subtracts from each path's weight the sum of the weights of the links that would be deleted from the tree if this path were included in it. Finally, the path with minimum weight is chosen and added to the tree.
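The adjusted path weight used to rank candidates in this modification can be sketched compactly (illustrative names; links_pruned would be obtained by a dry run of the loop-breaking procedure). The usage line reproduces the numbers of example 4 in Section 4.3.2.4 below.

def adjusted_weight(path_links, tree_links, link_weight, links_pruned):
    """Weight of a candidate path in the modified (v2) selection: count
    only the links not already in the tree, then credit the links that
    the loop-breaking procedure would delete if the path were added."""
    w_new = sum(link_weight[e] for e in path_links if e not in tree_links)
    w_freed = sum(link_weight[e] for e in links_pruned)
    return w_new - w_freed

w = adjusted_weight(path_links=[(5, 2), (2, 12)], tree_links=set(),
                    link_weight={(5, 2): 6, (2, 12): 1, (1, 2): 3},
                    links_pruned=[(1, 2)])
print(w)  # 4: initial weight 7 minus the cost 3 of the pruned link (1,2)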

From now on, we will refer to the MWHCT algorithm as described by the pseudo-code above as version 1 of the MWHCT algorithm (MWHCT-v1), and to the MWHCT algorithm with the modification of steps 8-10 as version 2 (MWHCT-v2).


4.3.1.2 Bounded Shortest Multicast Algorithm (BSMA)

In [BSMA] a heuristic is presented for constructing minimum-cost multicast trees with delay constraints. The Bounded Shortest Multicast Algorithm (BSMA) consists of two major steps. First, the algorithm constructs an initial tree T0, which is a minimum delay Steiner tree with respect to the multicast source, using Dijkstra's shortest path algorithm. If the delay bounds are not met even in the minimum delay tree, the algorithm terminates. Otherwise, the second step iteratively refines tree T0 for low cost. The refinement from tree Tj to Tj+1 (initially j=0) is accomplished by an operation called delay-bounded path switching.

During this operation, every superedge of the tree (a path between two branching nodes, between two multicast group members, or between a branching node and a multicast member) represents a path for possible path switching. The algorithm iteratively deletes superedges from the tree Tj, resulting in two sub-trees Tj1 and Tj2, and replaces them with lower cost superedges not in the tree, without violating the delay bound, until the total cost of the tree cannot be further reduced. In order to find a superedge with lower cost, BSMA uses a k-shortest path algorithm. The construction of shortest paths starts with the first shortest path, then the second and so on; k is not known a priori but is determined when a shortest path is found which results in a delay-bounded tree. The incremental construction of the k-shortest path between Tj1 and Tj2 consecutively calls Dijkstra's shortest path algorithm, modified here to construct the shortest path between two sub-trees instead of two nodes. The k-shortest path algorithm always terminates because at least the deleted path is found again. The pseudo-code of BSMA is available in [BSMA].

4.3.1.3 Kompella, Pasquale and Polyzos algorithm (KPP)

In [KPP] another heuristic for the constrained Steiner tree problem is proposed by Kompella, Pasquale and Polyzos, known as the KPP algorithm from the surnames of its authors. KPP assumes that the link delays and the delay bound ∆ are integers. First, the following terms must be defined:

• A constrained cheapest path between nodes u and w is the least cost path from u to w that has delay less than the delay bound. The cost of such a path is PC(u, w) and the delay is PD(u, w).

• A closure graph G on a set of nodes N is a complete graph on the nodes in N with edge cost between nodes u, w∈N equal to PC(u, w) and edge delay PD(u, w).

In order to compute the closure graph G, the algorithm first computes the constrained cheapest paths between all pairs of nodes in the set containing the destination nodes and the source node. The computation is done using Floyd's shortest path algorithm [Floyd62] and has to loop over all pairs of nodes and over all intermediate nodes, as well as over all possible delay values from 1 to ∆. If there are multiple cheapest constrained paths, the one with the least delay is chosen. The second step is to construct a constrained spanning tree of G. In [KPP] two selection functions are proposed for deciding whether to include in the tree an edge adjacent to one already included. The first function uses only cost and therefore tries to construct the cheapest possible tree, provided that the delay bound is met, while the second uses both cost and delay and consequently chooses low cost edges, but modulates the choice by picking edges that maximize the residual delay. Finally, the third step of the algorithm is to expand the edges of the constrained spanning tree into the constrained cheapest paths they represent and to remove any loops created by this expansion. The pseudo-code of KPP is available in [KPP].
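The closure-graph step can be rendered as a Floyd-style dynamic program over the integer delay values up to ∆, as described above. The sketch below is an illustrative reading of that step, not the published KPP code; pc[d][u][v] holds the least cost of a u→v path with total delay at most d.

import itertools

def constrained_cheapest_paths(nodes, edges, delta):
    """edges: dict (u, v) -> (cost, integer_delay). Returns pc, where
    pc[d][u][v] is the least cost of a u->v path with delay at most d."""
    INF = float("inf")
    pc = [{u: {v: (0 if u == v else INF) for v in nodes} for u in nodes}
          for _ in range(delta + 1)]
    for (u, v), (c, dl) in edges.items():  # direct links
        for d in range(dl, delta + 1):
            pc[d][u][v] = min(pc[d][u][v], c)
    for k in nodes:  # Floyd-style relaxation with a delay split
        for u, v in itertools.product(nodes, nodes):
            for d in range(delta + 1):
                best = min(pc[d1][u][k] + pc[d - d1][k][v]
                           for d1 in range(d + 1))
                pc[d][u][v] = min(pc[d][u][v], best)
    return pc

nodes = ["s", "a", "t"]
edges = {("s", "a"): (1, 2), ("a", "t"): (1, 2), ("s", "t"): (5, 1)}
pc = constrained_cheapest_paths(nodes, edges, delta=4)
print(pc[4]["s"]["t"], pc[3]["s"]["t"])  # 2 (via a), 5 (direct link only)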

4.3.1.4 QoS Dependent Multicast Routing (QDMR) algorithm

In [QDMR] a fast algorithm for generating delay-constrained low-cost multicast trees is presented. The QoS Dependent Multicast Routing (QDMR) algorithm is based on the Destination-Driven MultiCasting (DDMC) algorithm proposed by Shaikh and Shin [DDMC]. DDMC tries to make the destination nodes of the multicast group appear as new sources. This way, paths through destination nodes are favoured, as the path to a new node via a destination node is likely to have lower cost than other paths and is thus added to the tree. Consequently, the constructed tree can have some very long branches, which look like a "chain" of destination nodes, and this can of course lead to violation of the delay bound [QDMR].

Page 41:  · D3.4: Final System Evaluation Page 1 of 204 TEQUILA Consortium – October 2002 Project Number : IST-1999-11253-TEQUILA Project Title : Traffic Engineering for Quality of Serv

D3.4: Final System Evaluation Page 41 of 204

TEQUILA Consortium – October 2002

In order to obtain delay-bounded trees, QDMR modifies the original DDMC algorithm so that it dynamically adjusts its construction policy based on how far a destination node is from the delay bound. More specifically, when the total delay of a path is far from the delay bound, QDMR behaves as DDMC and produces long paths containing several destination nodes. However, when the delay constraint is about to be violated, QDMR gives less priority to paths through destination nodes and thus reduces the likelihood of violating the delay bound. Finally, QDMR has a "merging phase", which is used when a feasible low-cost path is not found for a destination node. In this case, the destination node is included in the tree by merging the least-delay path from the source to that node into the partial tree constructed by QDMR. The pseudo-code of QDMR is available in [QDMR].

4.3.1.5 Constrained Dijkstra algorithm (CDKS)

In [CDKS] a heuristic for constructing delay-constrained low-cost multicast trees is proposed, based on Dijkstra's shortest path algorithm. We refer to this algorithm as the Constrained Dijkstra algorithm (CDKS) [Sala97].

In order to construct the routing tree, the algorithm performs the following steps. First, a low-cost spanning tree T1 = (V1, E1), rooted at the source node and spanning as many destination nodes as possible, is computed using Dijkstra's shortest path algorithm, with the delay bound ∆ satisfied for every generated path. Then, a shortest delay path tree T2 = (V2, E2) that spans all the destination nodes not included in V1 is computed, again using Dijkstra's shortest path algorithm. Finally, the two trees T1 and T2 are merged to give the multicast routing tree. Because of the combination of the two trees, loops may appear; they can be detected by checking whether the two trees contain nodes in common. If so, loops can be removed by deleting the appropriate edges of tree T1. The pseudo-code of this algorithm is available in [CDKS].
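The first step can be pictured as a Dijkstra run in which a relaxation is discarded whenever the accumulated delay would exceed ∆. The sketch below is illustrative of this heuristic, not the code of [CDKS]; like CDKS itself, greedily minimising cost under a delay cut-off does not guarantee an optimal constrained tree.

import heapq

def delay_bounded_cheap_paths(adj, src, delta):
    """adj: dict u -> list of (v, cost, delay). Grows least-cost paths
    from src, skipping any relaxation whose accumulated delay exceeds
    delta. Returns (parent, cost) for every node reached within the bound."""
    cost, delay, parent = {src: 0.0}, {src: 0.0}, {src: None}
    heap = [(0.0, 0.0, src)]
    while heap:
        c, d, u = heapq.heappop(heap)
        if c > cost.get(u, float("inf")):
            continue  # stale queue entry
        for v, ec, ed in adj.get(u, ()):
            if d + ed > delta:
                continue  # would violate the delay bound
            if c + ec < cost.get(v, float("inf")):
                cost[v], delay[v], parent[v] = c + ec, d + ed, u
                heapq.heappush(heap, (c + ec, d + ed, v))
    return parent, cost

adj = {"s": [("a", 1, 5), ("b", 4, 1)], "a": [("t", 1, 5)], "b": [("t", 4, 1)]}
parent, cost = delay_bounded_cheap_paths(adj, "s", delta=4)
print(cost["t"])  # 8: the cheap route via "a" is delay-infeasible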

4.3.1.6 Constrained Optimal Minimum Steiner tree algorithm

As stated earlier, the constrained minimum Steiner tree problem is NP-complete, but for small topologies the running time of the constrained optimal minimum Steiner tree algorithm (COPT) [Sala97] can be acceptable. We therefore used COPT, for small topologies only, as the basis for comparing the results of the other algorithms. This algorithm always finds the minimum cost solution to the multicast routing problem, subject to a given delay constraint. However, because its execution time is very large, we applied it only to the smallest of the networks we examined, i.e. the one with 20 nodes.

4.3.2 Steiner Tree Performance Testing: Simple Examples

Below, we demonstrate a number of example networks we created in the simulator in order to apply the two Steiner tree algorithms we proposed. The networks have a small number of nodes, so that we can verify the results (multicast trees) of the algorithms. For simplicity, in the first four examples we assume that all networks have symmetrical links, while in the fifth example we present a simple network with asymmetric links.

4.3.2.1 Steiner Tree Example 1

In this example we will demonstrate a case where both versions of the MWHCT algorithm construct the same multicast tree.



Figure 4-6: Steiner tree example 1 - network with cost in every link.

Let's assume that the hop-count constraint is 4 for this network. All the destination nodes have a hop count to the source of at most 4, so a feasible solution exists (steps 2-3). First, egress nodes 8 and 5 are added to the tree, as their paths to the source, 8-7-0 and 5-1-0 respectively, taken from their least weight trees, have the least weight among all other such paths (steps 4-7). Then, egress node 3 joins the tree via the path 3-4-5, taken again from its least weight tree calculated in step 4. Destination nodes 9 and 10 cannot be added to the tree from paths of their least weight trees, as the hop-count constraint is not met. For example, according to the least weight tree of node 10, this node could be connected to node 3, but its distance from the ingress node 0 along the path 10-3-4-5-1-0 would then be 5 hops, which is greater than the constraint. So, both these nodes are included in the tree via their shortest paths to the source. For node 10, this is the path 0-10, while for node 9 it is the path 0-10-3-9. When adding this last path to the tree, a loop is formed among nodes 0-1-5-4-3-10-0, which must be broken by the loop-breaking procedure. The crossing node is node 3 and the first link to be pruned is (4,3), as it belongs to the tree but not to the shortest path. Then link (5,4) is pruned and the pruning stops there, as node 5 is an egress node already in the tree.

In the second version of the MWHCT algorithm, in addition to the shortest paths from nodes 9 and 10 to the source, we also consider the shortest paths to all the other nodes of the tree. However, for node 10 the only path that satisfies the hop-count constraint is the one to the source, while for node 9 the shortest path 1-2-3-9 to node 1 is also examined but is rejected because its weight is greater than the weight of the path to the source. So, both versions of the MWHCT algorithm finally construct the same multicast tree, with total cost 22, shown in Figure 4-7.


Figure 4-7: Steiner tree example 1 – multicast tree of MWHCT algorithm (both versions)


4.3.2.2 Steiner Tree Example 2

In this example we show how the second version of the MWHCT algorithm constructs a different tree with lower total cost than the first version.


Figure 4-8: Steiner tree example 2 - network with link costs

Let's again assume that the hop-count constraint is 4 for this network. All destination nodes can be reached from the source within a number of hops less than or equal to the hop-count constraint, so a feasible solution exists. First, nodes 8 and 5 are added to the tree via the paths 8-7-0 and 5-1-0, taken from their least weight trees calculated in step 4. Then, node 3 is connected to node 5 of the tree via the path 3-4-5, taken again from its least weight tree. Node 10 can also be connected to node 1 of the tree via the path 10-2-1 of its least weight tree. These steps are common to both versions of the MWHCT algorithm. Since there exists no path from the least weight tree of egress node 9 that satisfies the hop-count constraint, the shortest paths are examined.

In the first version of the MWHCT algorithm, the only path examined is the shortest path from node 9 to the source node 0, 0-10-3-9, which has weight 21. This path is added to the tree and two loops are formed. The first loop is among nodes 0-10-3-4-5-1-0 and the second among nodes 0-10-2-1-0. The two crossing nodes are 3 and 10 respectively, with node 3 being the furthest from the ingress node. So, links (4,3) and (5,4) are deleted first, with the pruning stopping because node 5 is an egress node already in the tree; then link (2,10) is deleted and the pruning stops because node 2 has degree 3, larger than 2. Link (1,2) must eventually also be pruned from the tree, as it does not lead to a destination node, so there is no point in keeping it.

The final multicast tree is shown in Figure 4-9 and has total cost = 25.


Figure 4-9: Steiner tree example 2 - multicast tree of the MWHCT algorithm-version 1.


In the second version of the algorithm, the shortest path 0-10-3-9 mentioned above is examined first; after excluding the links of the path that are already in the tree (in this case none) and accounting for the loops that must be broken if the path is included in the tree (the loops described above), it is found to have total path weight 17. However, the shortest path from node 9 to node 2 of the tree, 9-3-2, has less weight than the previous path: the initial weight of this path is 10, and after subtracting the cost of links (4,3) and (5,4), which will be deleted by the loop-breaking procedure, the weight of the path becomes 8 (<17). So, path 9-3-2 is inserted into the tree.

The final multicast tree of this version of the algorithm is shown in Figure 4-10 and has total cost 16, which is less than the cost of the tree constructed by the first version of the MWHCT algorithm.


Figure 4-10: Steiner tree example 2 - multicast tree of the MWHCT algorithm-version 2.

4.3.2.3 Steiner Tree Example 3

In this example we demonstrate a network that requires special attention when applying the second version of the MWHCT algorithm to it.


Figure 4-11: Steiner tree example 3 - network with link costs and multicast tree.

Let's assume that the hop-count constraint is 5 for this network. Egress node 3 has already been added to the tree via the path 0-1-2-3 of its least weight tree. For egress node 7 there is no path from its least weight tree to any of the nodes of the current multicast tree that satisfies the hop-count constraint. Therefore, the shortest paths from this node are examined. In this simple example, it is obvious that the path from node 7 to node 1 is the best path to add to the tree. However, when version 2 of the MWHCT algorithm starts examining the shortest paths from node 7 to the other nodes of the network, the path 7-1-3 will also be considered, as it is the shortest path from node 7 to node 3 and satisfies the hop-count constraint (according to this path, node 7 is 5 hops away from the source node 0).


The special characteristic of this path is that it contains two nodes already in the tree, connected by a link not included in the tree, but the tree node we are actually connecting to (node 3) is farther from the ingress node than the other tree node on the path (node 1). Consequently, there is no point in examining it, as the path from the egress node (node 7) to the node closest to the source (node 1) will certainly have a better weight, because it includes fewer links. Moreover, it will have fewer hops to the source, so it will definitely satisfy the hop-count constraint (provided that the "long path" satisfies it, as in our case). However, we should make sure that this path is rejected without being examined, because otherwise the loop-breaking procedure will fail to break the loop that is formed.

If we assume that this path is examined, then at some point the loop-breaking procedure will be called to detect the loops. Node 1 is a crossing node because it has two incoming links: link (0,1) and link (3,1), which has just been added with the path. Link (0,1) will then be wrongly pruned by the loop-breaking procedure, resulting in an incorrect multicast tree that is not connected to the source.

4.3.2.4 Steiner Tree Example 4

This example demonstrates why it is necessary in version 2 of the MWHCT algorithm to check whether the hop-count constraint is still met after the construction of the multicast tree has been completed. It shows that a destination node can initially be inserted into the tree via a path that satisfies the hop-count constraint but, after the procedure of steps 8-10 of this version of the algorithm takes place, be reached through another path, so that the constraint is no longer satisfied.


Figure 4-12: Steiner tree example 4 – network with link costs and multicast tree.

Let's assume that the hop-count constraint for this network is 4. Egress node 9 is inserted into the tree via the path 9-1-0 of its least weight tree, and egress node 12 via the path 12-11-0 of its least weight tree. Both these paths have the same weight as well as the same number of hops from the destination to the source. Next, node 10 is added to the tree via the path 10-2-1-0 of its least weight tree, and then node 8 is connected to node 2 of the tree via the path 8-7-2. The path that connects the ingress node 0 to the egress node 8 is 0-1-2-7-8, which satisfies the hop-count constraint (in this case the number of hops along the path equals the constraint). For node 5 there is no path to any of the nodes of the tree that meets the hop-count constraint. For example, according to node 5's least weight tree, node 5 should be connected to node 2 of the tree via the path 5-4-3-2, but the total path from the source node to this destination node, 0-1-2-3-4-5, has 5 hops, more than this network's constraint. Therefore, according to version 2 of the MWHCT algorithm, all the shortest paths from node 5 to all the nodes of the tree are examined and the one with the least weight is chosen.


The first path examined is path 5-2, the shortest path to node 2. The final weight of this path, after excluding links already in the tree and loops that must be broken if the path is included in the tree (in this case none), is 6. Then, the shortest path to node 12, 5-2-12, is considered. The initial weight of this path is 7, which is greater than the first path's weight. However, when adding this path to the tree, a loop is formed among nodes 0-1-2-12-11-0. The crossing node is node 2, and the link that must be pruned is link (1,2) with cost 3, which belongs to the tree but not to the path. So, the final weight of this path is 7-3=4, which is less than the first path's weight. Therefore, as there exists no other path to examine, node 5 is connected to node 12 via the path 5-2-12 and link (1,2) is deleted. However, node 8, which was initially connected to the source via the path 0-1-2-7-8 that satisfied the hop-count constraint, is now connected via the path 0-11-12-2-7-8, which does not meet the constraint. Consequently, the algorithm should check, after the completion of the tree, whether the constraint is satisfied from the ingress node to all destination nodes, and if not, as in this case, the multicast tree should be rejected.

The first version of the MWHCT algorithm does not have to perform this check, because the path added in steps 8-10 of the algorithm is the shortest path to the source, 0-1-2-5, which cannot cause such problems. Finally, we should mention that this case occurs very rarely.

4.3.2.5 Steiner Tree Example 5

In this example, we demonstrate a simple network with asymmetric links and explain how both versions of the algorithm work.


Figure 4-13: Steiner tree example 5 – network with asymmetric links

Let's assume that the hop-count constraint for this network is 3. Initially, both versions of the MWHCT algorithm construct the shortest path tree using Dijkstra's shortest path algorithm, in order to check whether the hop-count constraint is satisfied. For this simple network, it is obvious that the constraint is met for both egress nodes 4 and 5. For the first version of the MWHCT algorithm, it is absolutely necessary to keep the shortest path tree constructed at this step, as well as the total weights of the paths from the ingress to each of the destination nodes, because the network's links will subsequently be reversed. These weights are used in steps 8-10 of the first version of the algorithm. Next, we call the appropriate function and reverse the links of the network. The new network, as now seen by the algorithm, is shown in Figure 4-14.



Figure 4-14: Steiner tree example 5 – network with reversed links

Both versions of the MWHCT algorithm now construct the least weight trees from each egress node (4, 5) to all the other nodes of the network. Then, the paths from each egress node to all the nodes already in the multicast tree, according to these trees, are examined. Node 4 is added to the tree first, via the path 4-3-6-0, with total weight equal to the sum of the costs of links (4,3), (3,6) and (6,0), which is 3. Because the link costs have been reversed, the weight of this path is equal to the weight of the reversed path, through which the source will actually be connected to node 4. If the network had not been reversed (Figure 4-13), the weight of this path would have been 15, and thus different from the actual path's weight. If another egress node were under examination, this could have led to wrong decisions about which node should be included in the tree first.

Next, egress node 5 is examined, but since it cannot be connected to the partial tree via a path of its least weight tree that satisfies the hop-count constraint, steps 8-10 of the algorithm take place. In the first version, the shortest path from the source to destination node 5, 0-7-5, which was already found at the beginning of the algorithm, is added to the tree. The weight of this path is 16 and, as mentioned earlier, was computed and kept in step 2 of the algorithm. This is because, if we tried to calculate the path's weight now, we would wrongly find it to be 10, since the link costs have been reversed. The total cost of the final multicast tree, shown in Figure 4-15, is 19.


Figure 4-15: Example 5 – Steiner tree of the MWHCT algorithm – version 1


In the second version of the algorithm, the shortest path tree from the remaining egress node 5 not included in the tree is computed using Dijkstra's shortest path algorithm. Since the tree computed at this step has node 5 as the source and spans all the nodes of the network, the weights of the tree's paths are correct and equal to the weights of the reversed paths of the initial network, through which node 5 will actually be connected to the tree. There are two shortest paths to examine: the shortest path from node 5 to node 0, 5-7-0, with weight 16, and the path from node 5 to node 6 of the tree, 5-8-6, with weight 5. The path with the least weight is obviously the second one, which is added to the tree. If the network had not been reversed (Figure 4-13), both paths would have weight equal to 10, so the first path could wrongly have been chosen as the one to add to the tree. The multicast tree of this version of the MWHCT algorithm, shown in Figure 4-16, has total cost 8, which is less than the cost of the tree constructed by the first version of the algorithm.


Figure 4-16: Steiner tree example 5 – Steiner tree of the MWHCT algorithm –version 2

4.3.3 Constrained Steiner Tree Algorithms: Simulation Environment

The implementation and performance evaluation of the multicast routing algorithms was performed by extending a multicast network simulator designed at the Center for Advanced Computing and Communication, North Carolina State University [MCRSIM]. In the following we briefly describe the most important features of the simulator, and then illustrate the key steps in the implementation of the algorithms described in the previous section. Finally, we present some example networks we created in the simulator in order to test the validity of our implementation of the algorithms and demonstrate how the algorithms work.

4.3.3.1 The simulator environment

The simulator was implemented as a package for creating and editing computer network graphs, applying multicast algorithms on these graphs, and simulating the flow of cells on the resulting trees in order to measure costs and delays. It is written in C++ and uses Motif and X library functions to implement the graphical user interface. However, simulations can also be performed without the graphical interface (batch simulations), and this is the way we worked.

4.3.3.1.1 Nodes

Each node is characterized by its id (number) and its x and y coordinates, which are specified when the node is created as part of a network. Moreover, the degree of the node and its adjacent nodes in the network are defined. Each node keeps a list of the sources connected to it and of its adjacent nodes, as well as a look-up table for routing. Each entry of this table is keyed by the address of a multicast group and the name of the source transmitting to this group, and provides the corresponding destination nodes from the node under examination for the given multicast group. So, each time a node is added to a multicast tree, the corresponding entry of the routing table of the node that the new node connects to needs to be updated.
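The look-up table described above can be modelled as a map keyed by (group address, source); the following minimal sketch uses hypothetical field names, not the simulator's actual classes.

from collections import defaultdict

class SimNode:
    """Minimal model of a simulator node's multicast routing state."""
    def __init__(self, node_id):
        self.node_id = node_id
        # (group_address, source_name) -> set of next-hop nodes of the group
        self.routing_table = defaultdict(set)

    def add_downstream(self, group_addr, source, next_hop):
        """Called when a new node joins the tree through this node: the
        entry for that (group, source) gains one more downstream branch."""
        self.routing_table[(group_addr, source)].add(next_hop)

n7 = SimNode(7)
n7.add_downstream(group_addr=0xE0000001, source="src0", next_hop=8)
print(dict(n7.routing_table))  # {(3758096385, 'src0'): {8}}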


4.3.3.1.2 Multicast groups

The simulator allows the creation of multicast (MC) groups by defining the number of nodes in the group, their id, the source node and the address of the group, which characterizes the group uniquely. It is possible to modify a multicast group by adding and removing nodes and also generate a random multicast group, which consists of nodes of the network chosen at random. In this case, the user needs to specify the number of nodes of the multicast group.

4.3.3.1.3 Traffic Sources

The simulator allows the use of voice and video traffic sources, which both have variable bit rates suitable for multimedia applications. For each traffic source, the peak rate of the traffic, the average-to-peak ratio and the mean burst size need to be specified. The user can choose the node to which the source is attached as well as the multicast group to which the source transmits.

It also supports random, uniformly distributed background traffic, which defines the amount of bandwidth reserved on each link (the cost of the link). The peak rate, the average-to-peak ratio and the mean burst size of the background traffic on each link are uniformly distributed between the maximum and minimum values that the user specifies. Background traffic can be symmetric, meaning that the two directed links existing in opposite directions between two nodes have the same bandwidth utilization, or asymmetric.

4.3.3.1.4 Networks

Networks are described according to a special network description file format, so that the program can read and parse the input network file. In the network description file, the user designs the network by first specifying the number of nodes and, for each node, its id and its x, y coordinates, which define the distance of the node from the others. Moreover, the user specifies the number of outgoing links of each node and, for each link, the adjacent node, the capacity and the peak and average utilization. In this file, the user can also specify the multicast groups and the traffic sources of the network by defining the parameters mentioned above appropriately.

We generate random links between the nodes of an input network based on the Waxman model [Zegura96]. By calling the random link generator function, the previously existing links are deleted and new links are generated. The output of this function is a connected network in which each node's degree is equal to or greater than 2, so the resulting network has at least two paths between any pair of nodes. By adjusting the parameters of this function accordingly, the user can select the average node degree of the resulting network as well as increase or decrease the probability of short links relative to long ones.
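For reference, the Waxman model connects a pair of nodes with a probability that decays with their distance, P(u, v) = α·exp(−d(u, v)/(β·L)), where L is the maximum inter-node distance; a larger α raises the average degree and a larger β favours long links. The sketch below shows only this probabilistic step; the generator's retry logic that guarantees connectivity and a minimum degree of 2 is omitted, and the parameter values are illustrative.

import math
import random

def waxman_links(coords, alpha=0.2, beta=0.15, rng=random.random):
    """coords: dict node -> (x, y). Adds the directed link pair u<->v with
    probability alpha * exp(-d(u, v) / (beta * L))."""
    nodes = list(coords)
    dist = lambda u, v: math.dist(coords[u], coords[v])
    L = max(dist(u, v) for u in nodes for v in nodes if u != v)
    links = set()
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if rng() < alpha * math.exp(-dist(u, v) / (beta * L)):
                links.add((u, v))
                links.add((v, u))
    return links

coords = {i: (random.uniform(0, 100), random.uniform(0, 100)) for i in range(20)}
print(len(waxman_links(coords)) // 2, "undirected links")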

4.3.3.2 Experimental Set-up

In the simulations we performed, we used networks with 20, 50 and 100 nodes. All these networks were full duplex, with homogeneous link capacities of 155 Mbps (OC3). The links connecting the nodes of the networks were generated using the random link generator function, based on the model defined by Zegura et al. in [Zegura96].

In all the experiments we conducted, our purpose was to evaluate the algorithms proposed in [D1.4] (see also Part B of this Deliverable). These algorithms constitute part of the traffic engineering algorithm proposed in [D1.4], as that algorithm at some point needs to find a minimum weight Steiner tree under a given hop-count constraint. However, as already mentioned, this is an off-line algorithm implemented by Network Dimensioning. Therefore, there are no sources actually transmitting information to the members of the multicast group; instead, the expected traffic is known a priori and provided to the ND module by the traffic forecast module. The algorithm proposed in [D1.4] tries to find a cost-effective allocation of the resources, and consequently the algorithms proposed in [D1.4] have to find the minimum weight constrained tree subject to the given link costs.


As a result, all the networks we simulated were initially loaded with traffic, so that every link has a cost: the link's average bandwidth utilization. This cost corresponds to the first derivative of the link cost function as calculated by ND (see [D1.4]). The multicast algorithms try to find the constrained multicast tree with the least cost, subject to the given link costs. The sources of the multicast groups do not send extra traffic to the destination nodes, but are only used to determine the root of the multicast tree.

In order to define the weights of the links, which would normally be given by ND as the first derivatives of the link cost function, we added asymmetric background traffic using the appropriate function. The bandwidth of each link's background traffic was a random variable distributed between Bmin and Bmax. As the difference between Bmin and Bmax increases, the range of link loads increases and, consequently, so does the asymmetry of the network's links, because the load on a link is independent of the load on the link in the opposite direction. For each of the networks with 20, 50 and 100 nodes we performed two experiments, the first with Bmin=45 Mbps and Bmax=85 Mbps and the second with Bmin=5 Mbps and Bmax=125 Mbps. This is because, as the range of link loads increases, an efficient algorithm should be able to find links with lower costs to include in the multicast tree, so the differences in the total cost of the trees will increase.

In each simulation, we provided as input a network with fixed nodes, deleted the existing links and generated random links. Then, we generated random background traffic and created random multicast groups with random sources and variable sizes. Each experiment was repeated until confidence intervals of 5%, at a 95% confidence level, were achieved for the total cost of the multicast tree. We repeated the experiments for multicast groups with sizes between 20% and 80% of the number of nodes of the input network.
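The stopping rule can be sketched as follows: repeat the randomized experiment until the half-width of the 95% confidence interval of the mean tree cost falls within 5% of the mean. Here run_once stands in for one complete simulation run, and the normal-approximation z value is an illustrative simplification (a t-quantile would be slightly more conservative for few runs).

import math
import random
import statistics

def run_until_confident(run_once, rel_halfwidth=0.05, z=1.96, min_runs=10):
    """Repeat run_once() until the confidence half-width of the sample
    mean drops below rel_halfwidth * mean; returns (mean, half, runs)."""
    samples = []
    while True:
        samples.append(run_once())
        if len(samples) >= min_runs:
            mean = statistics.fmean(samples)
            half = z * statistics.stdev(samples) / math.sqrt(len(samples))
            if half <= rel_halfwidth * abs(mean):
                return mean, half, len(samples)

mean, half, n = run_until_confident(lambda: random.gauss(100, 10))
print(f"mean tree cost {mean:.1f} +/- {half:.1f} after {n} runs")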

All the algorithms we examined are delay-constrained algorithms, and in their implementation the delay from the source to each destination node is measured and compared to the constraint. The delay of a link is dominated by the propagation delay, while the queuing delay is neglected. For the propagation delay, the distance between the nodes is calculated from the nodes' x and y coordinates and divided by the propagation speed, taken to be two thirds of the speed of light. However, the constraint in the algorithms we implemented refers to the number of hops from the source to each destination node. In order to compare their performance with the other algorithms, we set the delay of every link equal to 1, so that when measuring the delay we are actually measuring the number of hops from the source to the destination node. This, however, required some modifications to some of the algorithms' code [KPP, QDMR].

In general, the hop-count constraint is determined according to the diameter of the network. However, since all the networks we simulated were generated randomly, we could not know their diameter a priori. For our experiments, we tried to find reasonable hop-count constraints, according to the hop distances in the networks. For the 20 and 50 node networks, we repeated the experiments with hop-count constraints of 5, 8 and 15, while for the networks with 100 nodes we ran simulations with constraints of 8 and 15, as for these large networks it is almost impossible to have all the nodes of the multicast group satisfying a constraint of 5 hops, especially as the size of the multicast group increases. The reason we repeated the experiments with different hop-count constraints is that this gives us the opportunity to examine how a tight or easily met constraint influences the algorithms' performance when constructing a minimum cost tree.

In order to evaluate the algorithms, we measured the following quantities:

• The total cost of the multicast tree constructed by an algorithm, which reveals the ability of the algorithm to locate and use low-cost links.

• The average execution time of the algorithm, which indicates how fast the algorithm can construct a multicast tree.

• The maximum end-to-end delay from the source to any node-member of the multicast group. This indicates the ability of the algorithm to satisfy the constraint imposed.

In the following, we will present the results of the experiments and analyse them.


In the first part of this section, we show the total costs of the multicast trees constructed by the algorithms and comment on their ability to find low-cost trees. In the second part, we present the execution times of the algorithms and, finally, in the third part, we present the results concerning the maximum end-to-end delays, measured as the maximum number of hops from the source to the destination nodes. In each part, we first provide the analysis of the results and then present the corresponding graphs.

4.3.4 Steiner Tree Performance Assessment: Final Tree's Total Cost

The results for the network with 20 nodes, which is the smallest of the networks we examined, can be seen in figures 4-17 and 4-18, for background loads of 45-85 Mbps and 5-125 Mbps respectively. In this case, as mentioned earlier, we applied the optimal constrained algorithm (COPT) to all the networks we simulated, so we calculated the excess cost of the multicast trees constructed by all the other algorithms relative to the cost of the optimal algorithm. Because of the very large execution time of COPT, this was not possible for the networks with 50 and 100 nodes; there, the excess total cost of the trees was calculated relative to BSMA, which had the best cost performance in most of the cases. The results for the network of 50 nodes, for background loads of 45-85 Mbps and 5-125 Mbps, are shown in figures 4-19 and 4-20 respectively, while those for the network of 100 nodes, for the same loads, are shown in figures 4-21 and 4-22. In each of these figures, the excess costs of the trees are presented for all the different hop-count constraints that we applied to the networks, as described in section 4.3.1. From the graphs, it is obvious that, in all the networks, the difference in performance between the algorithms increases as the range of link loads increases, which is why the results are clearer when Bmin = 5 Mbps and Bmax = 125 Mbps.

CDKS has the worst performance of all the algorithms, in all the networks and for all the hop-count constraints. For the network with 20 nodes, we can see that CDKS constructs trees with excess cost up to approximately 25% more than COPT when Bmin=45 Mbps and Bmax=85 Mbps (figure 4-17) and up to 35% more when Bmin=5 Mbps and Bmax=125 Mbps (figure 4-18). For the networks with 50 and 100 nodes, we can see that CDKS generates trees with excess costs approximately 25% more than BSMA.

In all cases, the difference between the performance of CDKS and the other algorithms is quite large. This difference becomes smaller in large networks with a small range of link loads and when the number of nodes in the multicast tree is large, as shown for example in figure 4-17. This can be explained if we consider that, as the size of the multicast group increases, more destination nodes have to be added to the tree via the shortest paths to the source. However, as the range of the link loads increases, this effect can no longer be seen and CDKS's performance remains almost the same for all sizes of multicast groups.

The only case in which the performance of CDKS becomes comparable to that of the other algorithms is when the hop-count constraint is stringent for the network, especially as the size of the multicast group increases. This can be seen in figure 4-20a. A constraint of 5 hops is quite tight for the network of 50 nodes; therefore it is likely that many nodes will be included in the tree via the shortest path to the source, especially if the multicast group is large. CDKS's performance is closer to MWHCT-v1 than to any other algorithm, as MWHCT-v1 also includes the nodes that do not satisfy the constraint via the shortest path to the source, according to Dijkstra's shortest-path algorithm. So, if many nodes do not satisfy the constraint, the weights of the two multicast trees will be similar.

The remaining algorithms perform very close to each other. Among them, BSMA has the best performance in most of the cases. For the network of 20 nodes, its trees are less than 5% more expensive than the optimal ones when Bmin=45 Mbps and Bmax=85 Mbps (figure 4-17) and less than 7% more expensive in the second case (figure 4-18). QDMR and KPP construct more expensive trees than BSMA, although their performance is closer to BSMA's when the hop-count constraint is easier to satisfy. In general, KPP constructs less expensive trees than QDMR, as illustrated in figures 4-17 and 4-18, but QDMR can perform better than KPP when the size of the network and the size of the multicast group increase. This can be seen in figures 4-21a and 4-22a.


The two versions of the MWHCT algorithm also yield cost performance very close to BSMA's. In general, we can say that when the number of nodes of the multicast tree is up to 50% of the network's nodes and the constraint is not too stringent, both versions of the MWHCT algorithm perform better than QDMR and KPP. In particular, they can perform even better than BSMA (especially the second version of the MWHCT algorithm), as shown in figures 4-17b/c, 4-19c and 4-21c, or almost equally well, as shown in figures 4-18b/c and 4-20c. On the other hand, as the multicast group becomes larger, and especially when the bound on the number of hops cannot be easily satisfied, their performance deteriorates and can be equal to or slightly worse than that of KPP and QDMR, as shown for example in figures 4-21a and 4-22a.

From these figures, we can also notice the difference in the cost performance of the two versions of the MWHCT algorithm. This difference becomes more obvious when the hop-count constraint is tight, as for example when the constraint is 5 for the network of 50 nodes, shown in figure 4-20a. The second version of the algorithm generates trees with lower cost than the first version because it uses an alternative, more efficient procedure in steps 8-10 in order to add to the tree nodes that cannot be added via paths of their least-weight trees. On the other hand, when the constraint is loose we can see, from figure 4-18c for example, that both versions of the MWHCT algorithm have the same cost performance, as most of the destination nodes are added to the multicast tree by the procedure of steps 3-6, which is the same for both versions.

The experimental results concerning the cost of the multicast trees are presented below.


[Figure panels: a) 20 nodes, K=5; b) 20 nodes, K=8; c) 20 nodes, K=15 (Bmin=45 Mbps, Bmax=85 Mbps). Axes: % nodes in MC group relative to network nodes vs. % excess cost relative to COPT. Curves: MWHCT-v1, MWHCT-v2, QDMR, BSMA, KPP, CDKS.]

Figure 4-17: Total cost of MC tree, 20 nodes, Bmin=45 Mbps, Bmax=85 Mbps.


[Figure panels: a) 20 nodes, K=5; b) 20 nodes, K=8; c) 20 nodes, K=15 (Bmin=5 Mbps, Bmax=125 Mbps). Axes: % nodes in MC group relative to network nodes vs. % excess cost relative to COPT. Curves: MWHCT-v1, MWHCT-v2, QDMR, BSMA, KPP, CDKS.]

Figure 4-18: Total cost of MC tree, 20 nodes, Bmin=5 Mbps, Bmax=125 Mbps.


[Figure panels: a) 50 nodes, K=5; b) 50 nodes, K=8; c) 50 nodes, K=15 (Bmin=45 Mbps, Bmax=85 Mbps). Axes: % nodes in MC group relative to network nodes vs. % excess cost relative to BSMA. Curves: MWHCT-v1, MWHCT-v2, QDMR, KPP, CDKS.]

Figure 4-19: Total cost of MC tree, 50 nodes, Bmin=45 Mbps, Bmax=85 Mbps.


[Figure panels: a) 50 nodes, K=5; b) 50 nodes, K=8; c) 50 nodes, K=15 (Bmin=5 Mbps, Bmax=125 Mbps). Axes: % nodes in MC group relative to network nodes vs. % excess cost relative to BSMA. Curves: MWHCT-v1, MWHCT-v2, QDMR, KPP, CDKS.]

Figure 4-20: Total cost of MC tree, 50 nodes, Bmin=5 Mbps, Bmax=125 Mbps.


[Figure panels: a) 100 nodes, K=8; b) 100 nodes, K=15 (Bmin=45 Mbps, Bmax=85 Mbps). Axes: % nodes in MC group relative to network nodes vs. % excess cost relative to BSMA. Curves: MWHCT-v1, MWHCT-v2, QDMR, KPP, CDKS.]

Figure 4-21: Total cost of MC tree, 100 nodes, Bmin=45 Mbps, Bmax=85 Mbps.


[Figure panels: a) 100 nodes, K=8; b) 100 nodes, K=15 (Bmin=5 Mbps, Bmax=125 Mbps). Axes: % nodes in MC group relative to network nodes vs. % excess cost relative to BSMA. Curves: MWHCT-v1, MWHCT-v2, QDMR, KPP, CDKS.]

Figure 4-22: Total cost of MC tree, 100 nodes, Bmin=5 Mbps, Bmax=125 Mbps.

4.3.5 Steiner Tree Performance Assessment: Execution Times

The optimal constrained algorithm has very large execution times, as stated earlier. In Figure 4-23 we show an example of the difference between the execution time of COPT and the other algorithms, for the network with 20 nodes and a hop-count constraint of 5. We can see that COPT's execution time is up to 2.5 seconds, while all the other algorithms have execution times of less than 0.1 sec. This difference in execution times was consistently large for all the constraints applied to the network of 20 nodes.


[Figure: 20 nodes, K=5, Bmin=45 Mbps, Bmax=85 Mbps. Axes: % nodes in MC group relative to network nodes vs. execution time (sec). Curves: MWHCT-v1, MWHCT-v2, QDMR, BSMA, KPP, CDKS, COPT.]

Figure 4-23: Execution time of algorithms (including COPT), 20 nodes, Bmin=45 Mbps, Bmax=85 Mbps.

COPT's execution time is not included in Figure 4-24 and Figure 4-25, which show the execution times of the algorithms for the network of 20 nodes and for the different link loads, as this allows us to examine better the differences between the execution times of all the other algorithms. The execution times for the networks with 50 nodes are shown in Figure 4-26 and Figure 4-27 for Bmin=45 Mbps, Bmax=85 Mbps and Bmin=5 Mbps, Bmax=125 Mbps respectively, and those for the 100-node networks in Figure 4-28 and Figure 4-29. From these figures we can see that the qualitative performance of the algorithms is almost the same for the two different link loads. There are of course some small differences, because the range of the link loads can affect, together with other parameters, the searches that the algorithms perform in order to detect links or paths with smaller weights.

From the execution times of the algorithms in all the networks, we can see that QDMR and CDKS are both very fast, with QDMR slightly faster than CDKS. This is very important because QDMR also has a good cost performance, as explained in section 4.3.4, which means that this algorithm scales well to large networks. CDKS, on the other hand, may be faster than the remaining algorithms, but it is still slower than QDMR and generates the most expensive trees of all.

KPP has a quite large running time compared to the other algorithms only for small networks, in our case those with 20 nodes, as shown in Figure 4-24 and Figure 4-25. More specifically, we can see that KPP requires more time than BSMA in these networks for all the constraints, while for the constraints of 5 and 8 hops it is slower than the MWHCT algorithm (both versions) for multicast groups with sizes of up to 70% of the network's size. When the hop-count constraint is 15, KPP has a very large execution time, larger than all the other algorithms. This increase in the running time of KPP with the constraint can be seen in all the networks and is due to the fact that KPP has to loop over all possible values of the delay, from 1 to the constraint, when computing the constrained cheapest paths among all nodes. From Figure 4-26 and Figure 4-27 (or Figure 4-28 and Figure 4-29), we can see that KPP becomes the third fastest algorithm, after QDMR and CDKS, as the network size increases. In these cases, KPP is always much faster than BSMA, and faster than the MWHCT algorithm for multicast groups with sizes up to approximately 45% of the network's size for the network with 50 nodes, or up to 30% for the one with 100 nodes, depending in each case on the hop-count constraint.


The execution times of the two versions of the MWHCT algorithm are very close, with the second version being a little slower than the first, especially when the hop-count constraint is tighter and the multicast group larger. This can be seen clearly in Figure 4-25a and Figure 4-26a. The fact that the second version requires more time than the first in these cases can be explained by their different ways of adding, in steps 8-10, a node that does not satisfy the hop-count constraint to the tree. The procedure followed by the second version of the MWHCT algorithm yields better results in terms of the total cost of the tree, but is more complicated and time-consuming. The running times of both versions of the MWHCT algorithm increase as the size of the multicast group increases and can be quite large for networks and groups with many nodes, compared to the other algorithms' execution times (Figure 4-28).

The execution time of BSMA, on the other hand, has the opposite behaviour. For small and medium multicast groups, BSMA has worse execution times than MWHCT, while for large ones it performs better. For small multicast groups, the number of nodes outside the multicast tree is large and BSMA has to search all the alternative paths in order to replace superedges, according to the k-shortest-path algorithm. For large multicast groups, however, the number of these alternative paths is small, as most of the network nodes are included in the tree, so the k-shortest-path algorithm is fast. The execution time of MWHCT becomes larger as the size of the multicast group increases, mainly because the number of least-weight trees computed in step 3 increases and the algorithm has to search for least-weight paths in all of these trees. What is more, the chances of adding an egress node to the multicast tree via the shortest path of steps 8-10 also increase, in which case the time-consuming loop-breaking procedure has to be called.

The difference in the execution times of these algorithms also depends on the network size. For the network of 20 nodes, BSMA has running times greater than or equal to those of the MWHCT algorithm for multicast groups whose number of nodes is up to 50% of the number of network nodes (Figure 4-24). This percentage can increase up to approximately 65% for the networks with 50 and 100 nodes, as illustrated in Figure 4-25a and Figure 4-27a. In particular, BSMA requires significantly more time than the MWHCT algorithm for multicast groups whose members are 30-40% of the network nodes. We should also note that the MWHCT algorithm has to call the reverse-link procedure for all the networks, which requires more time in large networks than in small ones.

The fact that the running time of the MWHCT algorithm is less than that of BSMA for small to medium multicast groups is useful if we consider that, for these sizes of multicast groups, the MWHCT algorithm has a cost performance very close to BSMA's and, depending on the bound on the number of hops, can be equal or even better.

Finally, we should mention that the execution times of BSMA and of both versions of MWHCT also depend on the hop-count constraint. This is shown best in the case of the networks with 100 nodes (Figure 4-28a/b). When the hop-count constraint is stringent, BSMA is slower, as it has to perform more searches in order to find superedges that have lower cost than the ones already in the tree and do not violate the bound on the number of hops. In contrast, when the constraint is loose BSMA may be faster, but the MWHCT algorithm's running times are slightly increased, because there are then more paths from the least-cost trees computed in step 3 that satisfy the constraint and must be examined and compared in order to add the one with the least cost to the tree.

The experimental results for the execution times of the algorithms are shown in the figures below.


[Figure panels: a) 20 nodes, K=5; b) 20 nodes, K=8; c) 20 nodes, K=15 (Bmin=45 Mbps, Bmax=85 Mbps). Axes: % nodes in MC group relative to network nodes vs. execution time (sec). Curves: MWHCT-v1, MWHCT-v2, QDMR, BSMA, KPP, CDKS.]

Figure 4-24: Execution time, 20 nodes, Bmin=45 Mbps, Bmax=85 Mbps.


[Figure panels: a) 20 nodes, K=5; b) 20 nodes, K=8; c) 20 nodes, K=15 (Bmin=5 Mbps, Bmax=125 Mbps). Axes: % nodes in MC group relative to network nodes vs. execution time (sec). Curves: MWHCT-v1, MWHCT-v2, QDMR, BSMA, KPP, CDKS.]

Figure 4-25: Execution time, 20 nodes, Bmin=5 Mbps, Bmax=125 Mbps.


[Figure panels: a) 50 nodes, K=5; b) 50 nodes, K=8; c) 50 nodes, K=15 (Bmin=45 Mbps, Bmax=85 Mbps). Axes: % nodes in MC group relative to network nodes vs. execution time (sec). Curves: MWHCT-v1, MWHCT-v2, QDMR, BSMA, KPP, CDKS.]

Figure 4-26: Execution times, 50 nodes, Bmin=45 Mbps, Bmax=85 Mbps.


[Figure panels: a) 50 nodes, K=5; b) 50 nodes, K=8; c) 50 nodes, K=15 (Bmin=5 Mbps, Bmax=125 Mbps). Axes: % nodes in MC group relative to network nodes vs. execution time (sec). Curves: MWHCT-v1, MWHCT-v2, QDMR, BSMA, KPP, CDKS.]

Figure 4-27: Execution times, 50 nodes, Bmin=5 Mbps, Bmax=125 Mbps.


[Figure panels: a) 100 nodes, K=8; b) 100 nodes, K=15 (Bmin=45 Mbps, Bmax=85 Mbps). Axes: % nodes in MC group relative to network nodes vs. execution time (sec). Curves: MWHCT-v1, MWHCT-v2, QDMR, BSMA, KPP, CDKS.]

Figure 4-28: Execution time, 100 nodes, Bmin=45 Mbps, Bmax=85 Mbps.


[Figure panels: a) 100 nodes, K=8; b) 100 nodes, K=15 (Bmin=5 Mbps, Bmax=125 Mbps). Axes: % nodes in MC group relative to network nodes vs. execution time (sec). Curves: MWHCT-v1, MWHCT-v2, QDMR, BSMA, KPP, CDKS.]

Figure 4-29: Execution time, 100 nodes, Bmin=5 Mbps, Bmax=125 Mbps.

4.3.6 Steiner Tree Performance Assessment: Maximum End-to-End Delay

We measured the maximum end-to-end delay of the multicast trees constructed by the algorithms as the maximum number of hops from the source node to the destinations. All these constrained algorithms meet the hop-count requirement. In general, it is sufficient simply to satisfy the imposed constraint; it is therefore not a big advantage for an algorithm to provide a smaller end-to-end delay than the constraint, especially if, because of this, it does not manage to construct low-cost multicast trees.

The performance of the algorithms relative to each other, with respect to the maximum end-to-end delay, was similar for the two experiments we conducted with different link loads, so we provide here the results for link loads with Bmin=45 Mbps and Bmax=85 Mbps. The maximum end-to-end delay of all the algorithms is shown in Figure 4-30 (including COPT) for the networks with 20 nodes, in Figure 4-31 for the networks with 50 nodes and in Figure 4-32 for the networks with 100 nodes.

From all the figures, we can see that CDKS always gives the smallest end-to-end delay of all the algorithms. This is mainly because it replaces the paths of the least-weight tree that do not meet the constraint with entire paths taken from the shortest-path tree to the source.


QDMR also generates trees with significantly smaller end-to-end delay than the other algorithms (except, of course, CDKS), especially when the hop-count constraint is tight. This can be seen, for example, in Figure 4-30a/b and Figure 4-32a. QDMR dynamically adjusts its construction policy according to how far an egress node is from the constraint. Consequently, when the constraint is strict, QDMR searches for paths to the destination node with a small number of hops, so that the constraint will be satisfied. On the other hand, when the constraint is loose, QDMR tries to include destination nodes in the tree via paths going through other destination nodes, which can result in long paths. The maximum end-to-end delay is therefore increased, as shown in Figure 4-30b and Figure 4-31b. This change in the construction policy of QDMR has a direct impact on its cost performance too, which is better in the second case.

The maximum end-to-end delays of all the other algorithms are very close to each other, and to COPT's, for the network of 20 nodes, as illustrated in Figure 4-30. For the network of 50 nodes and the hop-count constraint of 5, which is quite strict, all algorithms (except CDKS) again have approximately the same performance, giving an end-to-end delay almost equal to the constraint. This is shown in Figure 4-31a.

However, when the upper bound on the number of hops can be more easily satisfied, BSMA constructs trees with slightly smaller end-to-end delay than KPP and the MWHCT algorithm, as we can see from Figure 4-31b/c and Figure 4-32a/b. For large hop-count constraints (Figure 4-31c and Figure 4-32b) we notice that KPP generates trees with slightly larger end-to-end delay than the MWHCT algorithms. This is because KPP uses an objective function that tends to maximise the residual delay, as mentioned in section 4.3.2. Finally, we should mention that both versions of the MWHCT algorithm have almost identical performance in terms of the maximum end-to-end delay. The second version in some cases presents slightly larger end-to-end delay than the first, as shown in Figure 4-30a/b, because the paths inserted by steps 8-10 of the algorithm are not always the shortest paths from the source, as they are in version 1.

In the following, we provide the results concerning the maximum end-to-end delay of the multicast trees generated by the algorithms.


[Figure panels: a) 20 nodes, K=5; b) 20 nodes, K=8; c) 20 nodes, K=15 (Bmin=45 Mbps, Bmax=85 Mbps). Axes: % nodes in MC group relative to network nodes vs. maximum end-to-end delay (hops). Curves: MWHCT-v1, MWHCT-v2, QDMR, BSMA, KPP, CDKS, COPT.]

Figure 4-30: Maximum end-to-end delay (hops), 20 nodes, Bmin=45 Mbps, Bmax=85 Mbps.


[Figure panels: a) 50 nodes, K=5; b) 50 nodes, K=8; c) 50 nodes, K=15 (Bmin=45 Mbps, Bmax=85 Mbps). Axes: % nodes in MC group relative to network nodes vs. maximum end-to-end delay (hops). Curves: MWHCT-v1, MWHCT-v2, QDMR, BSMA, KPP, CDKS.]

Figure 4-31: Maximum end-to-end delay (hops), 50 nodes, Bmin=45 Mbps, Bmax=85 Mbps.


[Figure panels: a) 100 nodes, K=8; b) 100 nodes, K=15 (Bmin=45 Mbps, Bmax=85 Mbps). Axes: % nodes in MC group relative to network nodes vs. maximum end-to-end delay (hops). Curves: MWHCT-v1, MWHCT-v2, QDMR, BSMA, KPP, CDKS.]

Figure 4-32: Maximum end-to-end delay (hops), 100 nodes, Bmin=45 Mbps, Bmax=85 Mbps.

4.3.7 Steiner Tree Assessment: Conclusions

The purpose of our experiments was to determine the performance of the algorithms we implemented, which constitute part of the traffic-engineering algorithm proposed in [D1.4], compared to several already-implemented algorithms.

In order to use a constrained multicast routing algorithm as part of the traffic-engineering algorithm we are examining, the algorithm must satisfy several requirements. In particular, it is necessary that the algorithm construct trees of minimum cost that, of course, meet the imposed upper bound on the number of hops. Moreover, the algorithm should also have a reasonable running time, as it will be part of a complex traffic-engineering algorithm. However, the requirement of low execution time is not strict, as the traffic-engineering algorithm is computed off-line and does not require re-computation unless the network information changes.


From the analysis of the results, we can see that although optimal constrained multicast routing is intractable, there exist heuristics that produce good solutions. However, it is obvious that no single algorithm gives the best performance in terms of cost of the multicast tree, execution time and maximum end-to-end delay at the same time. In general, we can say that there is a trade-off between the cost of the multicast tree generated by an algorithm and its execution time. Simple heuristics keep the computational cost low but yield worse cost performance than heuristics that use a more complicated search in order to detect paths with lower costs.

Moreover, the performance of the algorithms also depends on the constraints imposed on the number of hops. For less stringent constraints, the performance of some of the algorithms gets very close to that of the more "sophisticated" ones, while their execution time remains lower. The size of the multicast group can also influence the cost performance of an algorithm, as some algorithms tend to produce better results, compared to the others, for small multicast groups, while others perform better when dealing with large groups.

When deciding which routing algorithm to use for Traffic Engineering, we should also take the size of the network into account. As the experimental results show, for small networks all algorithms are quite fast, so it is preferable to use the algorithms with the best cost performance. However, as the size of the network increases, some of these algorithms become very slow, so we should choose other algorithms that offer adequate cost performance at a reasonable execution time.

The two versions of the TEQUILA MWHCT algorithm we examined have good cost performance in most of the cases. In particular, the second version has better cost performance than the first, with a negligible increase in execution time, when the hop-count constraint is tight for the network and more nodes must be added via steps 8-10 of the algorithm. For small to medium multicast groups, relative to the network's size, the performance of both versions is equal to, or sometimes better than, that of the sub-optimal BSMA when the hop-count constraint is not tight for the network. What is more important is that, in these cases, they have much lower execution times than BSMA, so they would be suitable for the traffic-engineering algorithm we are studying. Nevertheless, for large multicast groups (more than 50-60% of the total network nodes), their performance deteriorates, especially when the hop-count constraint becomes stringent. A traffic-engineering algorithm that uses them would therefore produce less satisfactory results and require more time than in the previous case.

To summarise, there is no algorithm that presents the best overall performance. We therefore propose that a traffic-engineering algorithm have at its disposal a library of a few of the Steiner algorithms we examined and, according to the number of Steiner points relative to the network size and the hop-count constraint, choose the appropriate one based on the analysis of the above results; a selector of this kind is sketched below. Our MWHCT together with BSMA or QDMR would form a good minimum set of algorithms.
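
A hypothetical selector implementing this recommendation follows; the algorithm choices and thresholds are rough readings of the results in sections 4.3.4-4.3.6, not values fixed by the project:

```c
/* Illustrative selector for the Steiner-tree library. */
typedef enum { ALG_MWHCT_V2, ALG_BSMA, ALG_QDMR } steiner_alg_t;

static steiner_alg_t choose_algorithm(int group_size, int net_nodes,
                                      int hop_limit, int diameter_estimate)
{
    double frac  = (double)group_size / net_nodes;
    int    tight = hop_limit <= diameter_estimate;  /* constraint hard to meet? */

    if (frac <= 0.5 && !tight)
        return ALG_MWHCT_V2;  /* near-BSMA cost at a fraction of the time */
    if (net_nodes <= 50)
        return ALG_BSMA;      /* best cost; affordable on small networks  */
    return ALG_QDMR;          /* scales best to large networks and groups */
}
```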

4.3.8 Network Dimensioning Experimentation: Simulation Environment

In this section we investigate the performance of the network dimensioning algorithm described in [D1.4] (see Part B of this Deliverable) in terms of long- to medium-term average link load and total network average delay.

The ND algorithm described in [D1.4] was implemented in ANSI C, and a random SLS/forecast generator in Java (see later). For the experimental results presented in this section we used a 2×450 MHz ULTRA SPARC II with 1024 MB of memory running Solaris 7.

4.3.8.1 Topologies

When evaluating a system like the one proposed here, care is required regarding the simulation conditions, including the topologies used, the traffic demand generation procedure and the intensity of the expected load.


Figure 4-33: Fixed topology used for experimentation

The main topology used is shown in Figure 4-33; it has 10 nodes and 34 unidirectional links. This is a rather small topology that resembles the early NSFNet backbone. More complex and bigger topologies (up to 300 nodes) were also used, generated according to the widely used models for random topology generation presented in [Zegura96]. We used both random and transit-stub networks; the latter represent the hierarchical structure of the AS system better than purely random graphs. The topologies used had 50, 100, 200 and 300 nodes with an average node degree between 3 and 6 (bi-directional), which is close to the typical values for real networks. In general, for the results we opted for a 90% confidence level with a confidence interval of 8-10% of the corresponding values.

4.3.8.2 Initial conditions

The initial feasible solution (step 0) of the network dimensioning iterative procedure described in [D1.4] is set to be the same as if the traffic trunks were routed with a shortest-path-first (SPF) algorithm. This corresponds to the case where all the traffic of a particular class from one ingress to one egress is routed through the same shortest path according to some weight (routing metric). If more than one such shortest path exists, the load is distributed equally among them. The metric we use for the SPF algorithm is set to be inversely proportional to the physical link capacity. The step 0 scenario described above is the norm in today's operational networks.
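
A minimal sketch of this step 0 convention, assuming an arbitrary scaling constant K for the inverse-capacity metric:

```c
#define K_METRIC 1.0e9   /* arbitrary scaling constant (assumption) */

/* Routing metric inversely proportional to link capacity:
 * higher-capacity links get lower metrics and attract the traffic. */
static double spf_metric(double capacity_bps)
{
    return K_METRIC / capacity_bps;
}

/* With npaths equal-cost shortest paths found for a trunk, each path
 * carries an equal share of the trunk's demand. */
static double share_per_path(double demand, int npaths)
{
    return demand / npaths;
}
```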

The experiments shown in this section correspond to only one forecasting period, i.e. one run of the provisioning algorithm. It is straightforward to get results for the whole provisioning period by concatenating the results of more than one forecasting period. The termination parameter ε is set equal to 0.0008.
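
A plausible reading of the termination test is sketched below; we assume ε bounds the relative cost improvement between successive iterations (the precise criterion is specified in [D1.4]), and improve_solution() stands in for one dimensioning iteration:

```c
#define EPSILON 0.0008

extern double improve_solution(void);   /* one ND iteration (assumed API) */

static void dimension(double initial_cost)
{
    double prev = initial_cost;
    for (;;) {
        double cur = improve_solution();      /* refine the current solution */
        if ((prev - cur) / prev <= EPSILON)   /* relative improvement small? */
            break;
        prev = cur;
    }
}
```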

In this work we are using the following cost function:

$$
f_l^k\!\left(x_l^k\right)=
\begin{cases}
\dfrac{x_l^k}{C_l-x_l^k}, & x_l^k \le h_l^k C_l \\[3mm]
\dfrac{h_l^k C_l}{C_l-h_l^k C_l}+\dfrac{C_l}{\left(C_l-h_l^k C_l\right)^{2}}\left(x_l^k-h_l^k C_l\right), & x_l^k > h_l^k C_l
\end{cases}
\qquad (1)
$$

This per-link cost function, with the total cost taken as the sum of all link costs, gives as overall cost the average number of packets in the system, based on the hypothesis that each queue behaves as an M/M/1 queue [Berts92].
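
A sketch of the per-link cost as we reconstruct equation (1): the M/M/1 occupancy x/(C−x) up to the threshold h·C, extended above it so the cost stays finite (the first-order linear extension is our assumption):

```c
/* Per-link cost: M/M/1 average occupancy below the threshold h*cap,
 * linear continuation (value plus slope at the threshold) above it. */
static double link_cost(double x, double cap, double h)
{
    double xt = h * cap;                        /* threshold load */
    if (x <= xt)
        return x / (cap - x);                   /* M/M/1: x/(C - x) */
    /* f(xt) + f'(xt)*(x - xt), where f'(x) = C/(C - x)^2 */
    double d = cap - xt;
    return xt / d + cap / (d * d) * (x - xt);
}
```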


4.3.8.3 Traffic load generation

In order to be able to experiment with a wide variety of traffic loads, we implemented a random SLS and traffic trunk generator in Java. This takes as input a topology file with a specified number of edge nodes. In our experiments we selected the edge nodes to be in the range of 30-50% of the total network nodes. We defined a number of SLS types offered by the ISP to its customers. These are network-oriented SLS types (see Table 4-2) defining video, audio and control traffic.

SLS type      | Video | Audio | Control | Traffic
Medium load   | 40%   | 30%   | 9%      | 1%
High load     | 70%   | 50%   | 18%     | 2%

Table 4-2: Loading profiles for each SLS type

We define the total throughput of a network as the sum of the capacities of the first-hop links of all the edge nodes. This is actually an upper bound on the throughput; in reality it is much greater than the total throughput the network can handle. This is because, although the sum of the first-hop link capacities imposes this limit, the rest of the backbone might not be able to handle so much traffic, which in our case is particularly true due to the random nature of the topologies used. Therefore, in our experiments we used a 70% load of the total throughput of the network as the highly loaded condition, and a 40% load as the medium loaded one. The set of SLS instances that adhere to the high or medium load profile (70% or 40%) is assigned randomly to the edge node pairs and placed into a file, which corresponds to the SLS repository. Because of the random nature of the assignment to edge nodes, we can produce any number of such files for a particular load profile and a particular network topology. Each SLS file is then formatted into a traffic trunk file (corresponding to the expected traffic matrix) according to the procedure described in [D1.4]: SLSs are aggregated per ingress-egress pair into CoS classes. The set of traffic trunks is generated with this procedure, resulting in one matrix per SLS file for each loading profile.
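
As a small illustration of these definitions (the function and variable names are ours):

```c
/* Upper bound on throughput: sum of first-hop link capacities over
 * all edge nodes. */
static double total_throughput(const double *first_hop_cap, int n_edge)
{
    double sum = 0.0;
    for (int i = 0; i < n_edge; i++)
        sum += first_hop_cap[i];
    return sum;
}

/* Target offered load as a fraction of the bound:
 * 0.4 for the medium and 0.7 for the high loading profile. */
static double target_offered_load(const double *first_hop_cap, int n_edge,
                                  double fraction)
{
    return fraction * total_throughput(first_hop_cap, n_edge);
}
```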

[Figure: edge routers attached to the network core; the sum of the first-hop link capacities of all edge nodes gives the upper bound on the total throughput.]

Figure 4-34: Illustration of the first-hop capacity and total throughput of the simulated network.


4.3.9 Network Dimensioning Performance Assessment: Link Load Distributions

Figure 4-35 shows the link loads for the 10-node topology of Figure 4-33 at the first step and after the algorithm has run. It is clear that at the step 0 solution, which corresponds to SPF with equal-cost multi-path distribution enabled, parts of the network are over-utilised while others carry no traffic at all. The final step, which corresponds to the final output of our provisioning algorithm, balances the traffic over the whole network. Note that the most over-utilised link at step 0 is not necessarily the most utilised in the final solution.

[Figure panels: link load distribution for the lightly (40%) and heavily (70%) loaded network. Axes: links 1-34 vs. load (%). Series: step 0 (SPF), final step.]

Figure 4-35: Link load distribution after the first and the last step.

[Figure panels: standard deviation and average of link load per iteration (iterations 1-7). Series: heavy load (70%), light load (40%).]

Figure 4-36: Average and standard deviation of link load per iteration

Figure 4-36 shows the mean and standard deviation of the link loads for the 10-node network after each iteration of the algorithm. As we can see, the load becomes more balanced over the network with each iteration (the standard deviation decreases). We ran these experiments with the exponent n = 2. This value is a compromise between minimising the total (sum of) link costs and minimising the maximum link load. These two objectives generally lead to different solutions, the first favouring path solutions with the least number of links, while the other does not care about the number of links but only about the maximum link load (and therefore the deviation from the mean). We can observe the effect of these competing objectives in the various ups and downs over the steps of the algorithm.

4.3.10 Network Dimensioning Performance Assessment: Scalability

The same behaviour observed with the 10-node network is also observed with larger random and transit-stub topologies. We experimented with topologies of up to 300 nodes. The maximum, average and standard deviation of link utilisation after the first step (SPF) and after the algorithm finished are shown in Figure 4-37.


[Figure panels: maximum, average and standard deviation of link utilisation vs. number of nodes (50, 100, 200, 300), for medium (40%) and high (70%) load. Series: first step (SPF), final step.]

Figure 4-37: Maximum, average and standard deviation of link utilisation for various network sizes

In Table 4-3 we provide the average running times of the various experiments conducted in this work. We can see that even for quite large networks the running times are acceptable. For example, for the 300-node networks the running time is about 17 minutes for medium load and about 25 minutes for high load. These times are perfectly acceptable taking into account the timescale of the dimensioning system operation.

Number of Nodes | Medium load | High load
10              | 0.055       | 0.061
50              | 9.761       | 10.164
100             | 123.989     | 302.079
200             | 529.532     | 1002.245
300             | 981.175     | 1541.937

Table 4-3: Average running times in seconds for the various network sizes.


4.3.11 Network Dimensioning Performance Assessment: Sensitivity to the Cost Function

As a reminder about the cost function, we have two objectives, (a) and (b), in mathematical terms:

(a) Avoid overloading parts of the network:

$$ \text{minimise } \max_{l \in E} F_l(\mathbf{x}) = \max_{l \in E} \sum_{h \in H} \frac{f_l^h\!\left(x_l^h\right)}{C_l} \qquad (2) $$

(b) Minimise overall network utilisation:

$$ \text{minimise } F(\mathbf{x}) = \sum_{l \in E} \sum_{h \in H} \frac{f_l^h\!\left(x_l^h\right)}{C_l} \qquad (3) $$

We provide an objective that compromises between the previous two. More specifically,

$$ \text{minimise } F^{(n)}(\mathbf{x}) = \sum_{l \in E} \left( \sum_{h \in H} \frac{f_l^h\!\left(x_l^h\right)}{C_l} \right)^{\!n}, \quad n \ge 1 \qquad (4) $$

When $n = 1$, objective (4) reduces to (3), while as $n \to \infty$ it can be shown to reduce to (2). The exponent $n$ is therefore important for the results presented here. By using (4), even if we use a linear cost function for $f_l^h(x_l^h)$, the problem becomes a non-linear optimisation problem.
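
A direct transcription of objective (4), assuming the normalised per-link costs (the inner sums over the PHBs h, divided by C_l) have already been computed:

```c
#include <math.h>

/* Objective (4): sum over links of the n-th power of the normalised
 * link cost. n = 1 gives objective (3); large n approaches (2). */
static double objective_n(const double *link_cost_norm, int nlinks, double n)
{
    double total = 0.0;
    for (int l = 0; l < nlinks; l++)
        total += pow(link_cost_norm[l], n);
    return total;
}
```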

We now look at the effect of the exponent n of the cost function, as defined in (4). This parameter mediates between the two objectives, (2) and (3), i.e. minimising the maximum link load and minimising the overall cost. Figure 4-38 shows the results of the experiment with varying cost function exponent for the 10- and 50-node networks. We can see that the maximum link load decreases as n increases, while the mean link load slightly increases, since minimising the maximum link utilisation results in solutions whose paths have more links. The solution with n = 1 means that we optimise only for the total cost and do not take the maximum link load objective into account. We can observe that the reduction of the maximum link load, and the corresponding increase of the average link load, is significant at the first increase of n from 1 to 2, while further increments make only a small difference. This is a consequence of the hop-count constraint, which limits the number of links per path; very long paths, which would help reduce the maximum link load, are therefore prohibited. Finally, we can see that the same behaviour persists for the 10- and 50-node networks, as well as for all the larger topologies we used for experimentation.

[Figure panels: maximum and average link load vs. cost function exponent n (1-6), for the 10-node network (heavy 70% and light 40% load), the 50-node network (40% load), and the maximum link load for the 100-node network.]

Figure 4-38: The effect of the exponent for 10, 50 and 100 node network

4.3.12 Network Dimensioning Performance Assessment: Average Delay

We now examine the average delay over the network. We calculated the average delay as a function of the link utilisation according to the following formula:

$$ \bar{T} = \frac{1}{\gamma} \sum_{l \in E} \sum_{h \in H} \frac{x_l^h}{C_l - x_l^h} $$

where $\gamma = F_{all}/s_{avg}$ is the average total rate of incoming packets in the network, $F_{all}$ is the total expected incoming rate and $s_{avg}$ is the mean packet size (set to 1 KB for our experiments). The above formula is based on the assumption that each queue (PSC) at each link can be modelled as an M/M/1 queue. Although this is not always true, we use only the qualitative nature of the result and not the exact values. Note that the only way to find the exact impact on delay and loss is by monitoring the network after applying the provisioning configuration. As a continuation of this work, we plan to obtain such results from the simulator and from the PC-based router testbeds we maintain in our laboratories.
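
A sketch of this estimate (our illustration; it assumes every link load x[l] is strictly below its capacity cap[l], otherwise the M/M/1 term diverges):

```c
/* Average network delay: Kleinrock-style M/M/1 formula. Loads and
 * capacities in bps; gamma = F_all / s_avg in packets per second. */
static double avg_delay(const double *x, const double *cap, int nlinks,
                        double f_all_bps, double s_avg_bits)
{
    double gamma = f_all_bps / s_avg_bits;   /* total packet arrival rate */
    double sum = 0.0;
    for (int l = 0; l < nlinks; l++)
        sum += x[l] / (cap[l] - x[l]);       /* avg packets held on link l */
    return sum / gamma;                      /* average delay, in seconds */
}
```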


[Figure: average delay (msec) vs. number of nodes (10, 50, 100, 200, 300). Series: high load (70%), medium load (40%).]

Figure 4-39: Average overall network delay

Figure 4-39 shows the average total delay of the solutions provided for the various network sizes, according to the formula given above. The important observation is that the algorithm manages to keep the average total delay at about the same level for all network sizes. This is particularly important since at the first step we have many links with utilisation of more than 100%, which yields a very high (almost infinite) total average delay. Another important observation is that the provisioning algorithm results in average delays for the medium and highly loaded networks that are very close together, almost proportional to their relative load difference. This is an indication that even for high loads the provisioning algorithm manages to restrain the growth of the average delay.

4.3.13 Network Dimensioning Performance Assessment: Conclusions

Supporting demanding application traffic has traditionally required dedicated networks with high switching capacity. In this work we investigated the possibility of using a common IP-based packet network infrastructure, with Differentiated Services and MPLS as the key QoS technologies, in order to support such traffic with the appropriate Service Level Agreements and network provisioning.

Our Network Dimensioning algorithm performs excellently, reducing the maximum link load in all our experiments to well below 100%, while shortest-path solutions give utilisations of more than 300%. This means that the traffic we can handle when using Network Dimensioning can be as much as three times that of simple shortest-path routing schemes.

Network Dimensioning succeeds in lowering the maximum link load, load-balancing the network and achieving an overall low-cost solution. These objectives can be fine-tuned, as we demonstrated with the cost function exponent. The exponent can reflect the policies of the administrator who, bearing in mind the assessment performed in the previous sections, can make the appropriate decisions regarding the load-balancing policy.
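The deliverable does not restate the exact link cost function at this point, so the sketch below simply assumes a convex per-link cost f_n(u) = u^n of link utilisation u, purely to illustrate how the exponent shifts the optimum from a low-cost solution towards a load-balanced one.

# Illustrative sketch only: assumed cost function, made-up utilisations.
def network_cost(utilisations, n):
    # total cost of a routing solution under the assumed per-link cost u**n
    return sum(u ** n for u in utilisations)

balanced = [0.40, 0.40, 0.40]    # load-balanced solution, longer paths
unbalanced = [0.10, 0.10, 0.85]  # shortest-path style solution, one hot link

for n in (1, 2, 6):
    print(n, round(network_cost(balanced, n), 3),
             round(network_cost(unbalanced, n), 3))
# n=1: 1.2 vs 1.05   -> the unbalanced (shortest-path) solution is cheaper
# n=2: 0.48 vs 0.743 -> the balanced solution is already cheaper
# n=6: 0.012 vs 0.377 -> the hot link dominates; balancing wins decisively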

We tested the performance of the algorithm on large networks, and we saw that the good behaviour observed on small networks persists; most importantly, the execution time stays within acceptable timescales (less than 20 minutes) even for very large networks (up to 300 nodes). This demonstrates the scalability of our approach.

Finally, we saw that the average end-to-end delay achieved by our algorithm is very small, which makes it appropriate for demanding real-time traffic. In conclusion, we believe it is possible to support demanding Service Level Agreements over IP networks with DiffServ and MPLS, as long as the appropriate SLSs are defined and agreed. These can then be used to calculate the anticipated traffic, while the dimensioning algorithm takes into account the expected traffic demand and QoS constraints. Using IP networks to transport demanding traffic will result in lower cost and greater flexibility.


4.3.14 Dynamic Resource Management Experimentation: Simulation Environment

In this section we will describe the simulation environment, the topologies, queue configuration, the policy parameters and the traffic scenarios we used for the performance analysis experiments for Dynamic Resource Management. The functionality of this component and the details of our implementation are described in [D1.4].

4.3.14.1 Topology

The topology of the simulation model is presented in Figure 4-40. A traffic aggregate with two classes (EF, AF1x) is transmitted to the destination node D via A. The capacity of the links from the individual sources to node A is 100 Mbps, while the bottleneck link between nodes A and D is configured to be DiffServ-capable, i.e. it runs the TEQUILA NS implementation of the Per-Hop-Behaviours (PHBs), also called the Tequila queue (tq_queue), with a total capacity of 10 Mbps. The tq_queue may hold up to 6 physical queues, namely EF, AF1-4 and a BE queue. In our simulations we configured and used only the EF and AF1x physical queues. The detailed configuration of the EF and AF1 queues is given in Table 4-4. Additionally, we restricted the total AF (AF1-4) maximum rate to 5 Mbps. When DRsM is active, the maximum rate varies according to the decisions taken dynamically by DRsM.

[Figure: EF and AF11 sources feed node A over 100 Mb drop-tail access links; A connects to D over the 10 Mb Tequila queue bottleneck link.]

Figure 4-40: Topology used for DRsM experimentation.

Name | Type | Buffer Size | Maximum rate | Other configuration
EF | Drop-tail | 50 | 5 Mbps | N/A
AF1 | Random Early Detection (RED) | 50 | 5 Mbps | RED early-drop threshold: 10; RED early-drop probability: 0.02

Table 4-4: Physical queue configuration for DRsM experiments.
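For reference, the Table 4-4 settings can be captured as plain data. The sketch below is our own representation; the field names are not the NS/tq_queue configuration syntax.

from dataclasses import dataclass
from typing import Optional

# Our reading of Table 4-4 as plain Python data (hypothetical field names).
@dataclass
class PhysicalQueue:
    name: str
    qtype: str                             # "drop-tail" or "RED"
    buffer_pkts: int                       # buffer size in packets
    max_rate_bps: float
    red_threshold: Optional[int] = None    # RED early-drop threshold (packets)
    red_drop_prob: Optional[float] = None  # RED early-drop probability

EF = PhysicalQueue("EF", "drop-tail", 50, 5e6)
AF1 = PhysicalQueue("AF1", "RED", 50, 5e6, red_threshold=10, red_drop_prob=0.02)
print(EF, AF1)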

4.3.14.2 Default Policy/Parameter Settings

Unless otherwise specified, the default policies and parameters shown in Table 4-5 and Table 4-6 apply to all experiments.

Policy name | Policy setting | Parameter value
Bandwidth request policy | Proportional request | rinc = +20% (of offered load on an upper threshold-crossing alarm); rdec = -20% (of offered load on a lower threshold-crossing alarm)
Spare bandwidth allocation policy | Proportional split | N/A
Threshold updating policy | Relative update | percU = +10% (of throughput); percL = -10% (of throughput)
Crosswarning update policy | Group PHB update | N/A
Time step policy | Minimum time duration between any two DRsM bandwidth re-allocations | 0 sec

Table 4-5: Default Policy setting for DRsM experiments.

Parameter name | Description | Value
Winlength | Window size for queue rate calculation (Time Sliding Window average smoothing algorithm) | 0.05 sec
Mon-interval | Interval for queue monitoring output | 0.2 sec

Table 4-6: Non-DRsM related parameter setting.

4.3.14.3 Traffic Scenarios

In order to comprehensively evaluate the performance of DRsM, three sets of experiments were conducted, as described below:

Simple scenario:

In this set of experiments we provide 2 on/off data sources for each PHB. Initially, one EF source and both AF11 sources are active for 2 seconds; after that, the second EF source starts sending data while one of the AF11 sources becomes silent, as illustrated in Figure 4-41. We investigate several situations by varying the total offered load (sum of EF and AF11 traffic) between experiments. For example, with the overall traffic aggregate fixed at 60% and 120% of the link capacity, the offered load l from each source is equal to 2 Mbps and 4 Mbps respectively (we assume that the sending rates of all sources are identical). Without loss of generality, we consider both CBR and VBR traffic for this experiment, with a mean packet size of 500 bytes. When VBR sources are used, the offered load from each source refers to the average source rate, and the standard deviation is set to 0.5 Mbps.
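The schedule can be written down compactly; the following sketch encodes our reading of Figure 4-41, with l the per-source rate in Mbps (names are ours).

# Sketch of the simple-scenario on/off schedule as we read Figure 4-41.
def simple_profile(t, l):
    """Return (EF, AF11) offered load in Mbps at time t."""
    if 0.0 <= t < 2.0:
        return l, 2 * l    # one EF source, two AF11 sources active
    elif 2.0 <= t <= 4.0:
        return 2 * l, l    # second EF source on, one AF11 source silent
    return 0.0, 0.0

# 120% total load on the 10 Mbps bottleneck -> l = 4 Mbps per source
for t in (1.0, 3.0):
    print(t, simple_profile(t, 4.0))   # (4.0, 8.0) then (8.0, 4.0)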

[Figure: EF offered load (Mbps) steps from l up to 2l at t = 2.0 s; AF11 offered load steps from 2l down to l at t = 2.0 s, over the 0-4 s run.]

Figure 4-41: Simple-case traffic model for DRsM experiments.

Complex scenario:

In this set of experiments four on/off data sources are associated with each of the two PHBs; the behaviour of each data source is presented in Figure 4-42. Specifically, from time 0 one new EF source becomes active every second until time 3.0, and then, starting from time 4.0, they stop sending data one by one at 1-second intervals until all of them are inactive at time 7.0. Conversely, all the AF11 sources are initially active, and one of them becomes silent every second until time 3.0; from time 4.0 they start sending data one by one at 1-second intervals, and finally at time 7.0 all the AF11 sources are active again. We define a number of cases where the overall traffic aggregate is fixed for an experiment but varies between experiments. For total aggregate loads of 80%, 100% and 120% of link capacity, for example, the corresponding rate of each data source is 1.6 Mbps, 2.0 Mbps and 2.4 Mbps. From Figure 4-41 and Figure 4-42 it can be seen that the complex case is in effect an extended version of the simple case.


[Figure: EF offered load (Mbps) steps up from l to 4l between t = 0 and 3 s and back down to zero between t = 4 and 7 s; AF11 offered load steps down from 4l to l and back up to 4l over the same intervals.]

Figure 4-42: Complex case traffic model for DRsM experiments.

Random VBR scenario:

In this third scenario we used random VBR traffic sources to emulate realistic network traffic. For each PHB, the average rate of the first VBR source is fixed at 0.5 Mbps, with a standard deviation of 0.5 Mbps. We change the aggregate traffic load by modifying the average rate of the second VBR source, which fluctuates significantly every 2 seconds. We repeat the simulation with the average rate of the second VBR source ranging from 2.5 Mbps up to 6.0 Mbps, with the standard deviation fixed at 1.5 Mbps. In this case the minimum and maximum values of the aggregated load are 7 Mbps and 12 Mbps respectively. The duration of each experiment is set to 20 seconds. Figure 4-43 shows an example plot of offered load against time where the mean rate of both the EF and AF11 traffic has been configured to 2.5 Mbps.
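A hedged sketch of the source model as we read it: each source rate is Gaussian and re-drawn every 2 seconds. The parameter names are ours, not the simulator's.

import random

# Sketch of the random-VBR source model (our reading, hypothetical names).
def vbr_rate_trace(mean_mbps, stddev_mbps, duration_s=20.0, step_s=2.0):
    """One Gaussian rate sample per 2 s interval, floored at zero."""
    return [max(0.0, random.gauss(mean_mbps, stddev_mbps))
            for _ in range(int(duration_s / step_s))]

random.seed(1)
fixed = vbr_rate_trace(0.5, 0.5)    # first VBR source of a PHB
varied = vbr_rate_trace(2.5, 1.5)   # second source; its mean is swept 2.5-6.0
print([round(f + v, 2) for f, v in zip(fixed, varied)])  # per-PHB aggregate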

[Figure: data rate (Mbit/s, 0-8) against time (0-20 s) for the EF, AF11 and aggregate traffic.]

Figure 4-43: Random VBR traffic model example for DRsM experiments.

4.3.15 Examples of DRsM Operation

This section illustrates the operation of DRsM over time in the complex and random VBR traffic scenarios. In these examples the cross-warning time was 0, and the bandwidth request policy was set to relative, at 20% of offered load. The threshold updating policy used relative increments/decrements of 10% of throughput. The spare bandwidth distribution policy was set to equal distribution between the queues.


[Figure: DRsM activity plots for the complex traffic scenario, AF queue and EF queue: allocated bandwidth, upper threshold, throughput and lower threshold (Mb/s) against time (0-7 s).]

The above figures show the bandwidth allocated to the AF and EF queues respectively in a single run of the complex traffic experiment, with a total aggregate load (AF+EF traffic) of 70%. It can be seen that DRsM follows the offered load as the upper and lower thresholds are crossed, and therefore ensures that sufficient capacity is allocated to the queues for the traffic offered to them.


[Figure: DRsM activity plots for the random VBR traffic scenario, AF and EF queues: allocated bandwidth, upper threshold, throughput and lower threshold (Mb/s) against time (0-20 s).]

The above figures show similar plots of activity against time for the random VBR traffic scenario where the aggregate offered load was 80%. Again the plots show that DRsM correctly allocated the queue capacities to follow the offered load of each traffic type.


4.3.16 DRsM Policy Parameter Experiments

This set of experiments examines the performance of DRsM under various values of its policy parameters. Experiments were undertaken to tune DRsM with respect to the bandwidth request policy (proportional versus extrapolation), the threshold updating policy (constant versus relative updating), the spare bandwidth allocation policy (equal split between PHBs versus proportional split according to their relative consumption of capacity), and the cross-warning time (the minimum interval between DRsM actions). The results are presented in the graphs in the following subsections.
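To make the policy vocabulary concrete, the sketch below encodes one possible DRsM reaction to a threshold-crossing alarm under the default proportional-request and relative-threshold policies of Table 4-5. The structure is our reading of the policies, not the implementation described in [D1.4].

# Hedged sketch of a single DRsM reaction; names and structure are ours.
def drsm_step(offered_load, allocated, upper, lower,
              r_inc=0.20, r_dec=0.20, perc=0.10):
    """Proportional bandwidth request plus relative threshold update."""
    if offered_load > upper:                    # upper threshold crossed
        allocated = offered_load * (1 + r_inc)  # request 20% above offered load
    elif offered_load < lower:                  # lower threshold crossed
        allocated = offered_load * (1 - r_dec)  # release down to 20% below
    else:
        return allocated, upper, lower          # no alarm, nothing to do
    # relative threshold update: +/-10% around the current throughput
    upper = offered_load * (1 + perc)
    lower = offered_load * (1 - perc)
    return allocated, upper, lower

print(drsm_step(offered_load=4.0, allocated=3.0, upper=3.3, lower=2.7))
# -> (4.8, 4.4, 3.6): allocation raised, thresholds re-centred on the load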

4.3.16.1 Bandwidth request policy

4.3.16.1.1 Proportional bandwidth request policy

[Figure: packet loss against the increase/decrease of required bandwidth (0-60%) for the proportional bandwidth request policy; simple traffic at 60% and 120% load (CBR and VBR), and complex traffic at 80%, 100% and 120% load (CBR and VBR).]


4.3.16.1.2 Extrapolated bandwidth request policy

[Figure: packet loss against the extrapolation multiplier (0-12) for the extrapolation bandwidth request policy; simple traffic at 60% and 120% load (CBR and VBR), and complex traffic at 80%, 100% and 120% load (CBR and VBR).]


4.3.16.2 Threshold updating policy

4.3.16.2.1 Constant increment/decrement

[Figure: packet loss against the constant threshold (0-1200 kbit/s) with proportional spare bandwidth distribution; simple and complex traffic at 80%, 100% and 120% load (CBR and VBR).]


4.3.16.2.2 Proportional increment/decrement

[Figure: packet loss against the threshold increment/decrement (0-35%) with proportional spare bandwidth distribution; simple and complex traffic at 80%, 100% and 120% load (CBR and VBR).]

4.3.16.3 Spare bandwidth allocation policy

The previous set of tests is repeated here, but with the spare bandwidth allocation policy set to equal (the two split rules are sketched below). By comparing the results with those of the corresponding experiments where the spare bandwidth allocation policy was set to proportional, it is possible to see which policy performs better under different load and threshold-crossing conditions.
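The two split rules can be stated in a few lines; the arithmetic below is our reading of the policies, not the DRsM code.

# Sketch of the equal and proportional spare-bandwidth split rules.
def split_spare(spare, consumed):
    """Return the per-PHB share of spare bandwidth under each policy."""
    n = len(consumed)
    equal = [spare / n] * n                          # equal split between PHBs
    total = sum(consumed)
    proportional = [spare * c / total for c in consumed]  # by consumption
    return equal, proportional

# 2 Mb/s spare on the link; EF and AF1 currently consume 6 and 2 Mb/s
print(split_spare(2.0, [6.0, 2.0]))   # ([1.0, 1.0], [1.5, 0.5])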


4.3.16.3.1 Constant increment/decrement

[Figure: packet loss against the constant threshold (0-1000 kbit/s) with equal spare bandwidth distribution; simple and complex traffic at 80%, 100% and 120% load (CBR and VBR).]


4.3.16.3.2 Proportional increment/decrement

[Figure: packet loss against the threshold increment/decrement (0-35%) with equal spare bandwidth distribution; simple and complex traffic at 80%, 100% and 120% load (CBR and VBR).]


4.3.16.4 Time step policy

[Figure: packet loss against the cross-warning time (0-1.0 s); simple traffic at 60% and 120% load (CBR and VBR), and complex traffic at 80%, 100% and 120% load (CBR and VBR).]

4.3.16.5 Conclusions

According to the results presented above, the best parameter values for the traffic cases considered were the following:

• cross-warning time = 0

• bandwidth request policy = extrapolated, with parameter 2

• threshold policy = proportional, 10%

• spare bandwidth distribution policy = equal

• cross-warning update policy = all PHBs

These values were used in the subsequent experiments to compare QoS with and without DRsM, see section 4.3.17 below.

4.3.17 DRsM Comparative Experiments: DRsM vs. Static Bandwidth Allocation

The following graphs plot the results of a series of experiments in which the same traffic was offered in two cases: one where 50% of the link capacity was statically allocated to each of the two queues, and one where the initial 50% allocation to each queue was dynamically modified by DRsM.

4.3.17.1 Simple traffic scenario

[Figure: comparison of DRsM versus static bandwidth allocation, simple traffic, CBR: packet loss against total offered load (60-110% of link capacity), with and without DRsM.]


[Figure: comparison of DRsM versus static bandwidth allocation, simple traffic, VBR: packet loss against total offered load (60-110% of link capacity), with and without DRsM.]

4.3.17.2 Complex traffic scenario

[Figure: comparison of DRsM versus static bandwidth allocation, complex traffic, CBR: packet loss against total offered load (60-110% of link capacity), with and without DRsM.]


[Figure: comparison of DRsM versus static bandwidth allocation, complex traffic, VBR: packet loss against total offered load (60-110% of link capacity), with and without DRsM.]

4.3.17.3 Random VBR traffic scenario

[Figure: comparison of DRsM versus static bandwidth allocation, random VBR traffic: packet loss against total offered load (40-120% of link capacity), with and without DRsM.]

4.3.17.4 Conclusions

The previous five graphs show that DRsM improves QoS, in terms of reduced packet losses, in all traffic scenarios and under all aggregate average load levels.


[Figure: loss improvement for DRsM over static bandwidth allocation: loss improvement (0-16% of link capacity) against total offered load (60-120% of link capacity) for the simple CBR/VBR, complex CBR/VBR and random VBR scenarios.]

The previous figure shows the improvement in packet losses when DRsM is used compared to static bandwidth allocation in all five sets of experiments. A loss improvement of 10% means that there were 10% (of link capacity) less packet losses when DRsM is used compared to a static bandwidth allocation. There is no improvement up to 75% average load in the case of the simple traffic scenarios, and up to 60% average aggregate load in the complex and random VBR traffic scenarios due to the fact that zero packet losses were seen in both DRsM and non DRsM cases. The improvement in the simple and complex CBR/VBR cases is more marked than in the random VBR traffic scenario because the fluctuations in offered load on each queue were more extreme. One conclusion to be drawn from this graph is that DRsM can provide up to 13% (of link capacity) less packet losses in cases where offered load fluctuates significantly and total offered load is as high as 95% of link capacity. The improvements are less marked under low load conditions and our simulations show no improvement when the average aggregate load was below around 60% of link capacity.
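As a worked example of this metric, with made-up loss figures:

# Loss improvement as defined above: both loss figures are expressed
# as a fraction of link capacity; the values here are illustrative only.
loss_static = 0.145   # 14.5% of link capacity lost without DRsM
loss_drsm = 0.015     # 1.5% of link capacity lost with DRsM
improvement = loss_static - loss_drsm
print(f"loss improvement: {improvement:.1%} of link capacity")   # 13.0%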

As seen in section 4.3.16 the performance of DRsM varies significantly with the values assigned to its policy parameters. These were tuned to the traffic conditions in our experiments and may have to be tuned again to best cope with different traffic patterns. This demonstrates that the TEQUILA policy management system has a role in tuning the operation of DRsM according to actual experiences of traffic fluctuations. Whether these parameters need to be continually tuned or whether the values will reach a steady state offering acceptable performance under all reasonable traffic scenarios is for further study.


5 SYSTEM LEVEL PERFORMANCE ASSESSMENT TESTS AND RESULTS

5.1 Test-Suites

System-level performance assessment tests have been organised and carried out per sub-system of the overall TEQUILA architecture. They comprise the following test-suites.

Test Suite | Purpose
SUITE3_1/NTUA/SLSM | Performance assessment of the TEQUILA system, focusing on SLS Management sub-system functionality.
SUITE3_2/GC/MPLSTE | Performance assessment of the TEQUILA system, focusing on MPLS-based TE sub-system functionality.
SUITE3_3/GC/Mon | Performance assessment of the TEQUILA system, focusing on Monitoring sub-system functionality.
SUITE3_4/FTR&D/IPTE/DP | Performance assessment of the COPS-PR (Common Open Policy Service for Provisioning purposes, [COPS-PR]) based dynamic provisioning capabilities of the IP-based TE sub-system, for providing the routers with the appropriate configuration information as far as the enforcement of an IP TE policy is concerned.
SUITE3_5/UniS/POL | Performance assessment of the TEQUILA system, focusing on Policy Management sub-system functionality.
SUITE3_6/FTR&D/MPLSTE/IFT | Assessment of the performance of the IFT-based network element, compared to a pure Linux-based one, thereby assessing the concepts developed in the "IP Fast Translator (IFT) – TEQUILA white paper": implementing the control path of a network element in the same way as in regular PC-based Linux routers, through decoupling of the control and transfer planes.

5.2 SLS Management Performance Tests and Results

The following table provides references for the tests conducted on the Service Management sub-system.

Test ID | Purpose | Platform | Results
SUITE3_1/NTUA/SLSM/SCAL/1 | Assessment of the scalability of the subscription process. Note that this test also serves for invocation processing, as the subscription processing times provide upper bounds on the invocation processing times. | Greek Dev. Platform (see A.4) | Section 5.2.1
SUITE3_1/NTUA/SLSM/BC/1 | Assessment of the benefits/cost of the RSIM component in terms of its configuration variables. | Greek Dev. Platform (see A.4) | Sections 5.2.2, 5.2.3
SUITE3_1/NTUA/SLSM/BC/1 | Assessment of the benefits/cost of the SSM component with respect to its configuration variables. | Greek Dev. Platform (see A.4) | Sections 5.2.2, 5.2.4


5.2.1 Scalability of Subscription Management – SUITE3_1/NTUA/SLSM/SCAL/1

5.2.1.1 Experimentation Set-up

The purpose of this test-suite is to show that the response times of service subscription management processing grow linearly with the external factors influencing its behaviour. As indicated by the theoretical scalability analysis, subscription response times are influenced by:

• The size of the tables storing relevant data (data size grows mainly with the number of the established SLSs and the number of network TTs)

• The processing characteristics of the platform that hosts the subscription management functionality and its very implementation.

The response time of the subscription management is the sum of the execution times of its two main functions, which are executed sequentially:

• The 'Validation and Translation' function, which is responsible for analysing and translating a received SSS, based on the TEQUILA SLS template, from XML format to the appropriate data format (database table field value types, types of Corba/Java/C++ function arguments) assumed by the related service admission and provisioning components of the TEQUILA system. Furthermore, this function translates the received SSS from customer parlance to the parlance understood by the service admission, provisioning and TE components. This translation involves: analysis into the SLSs included in the SSS, mapping of customer edges to network sites, mapping of quality levels as understood by the customers to QoS-classes (corresponding to the PHBs supported by the network), and mapping of customer flow address groups to extended IP access lists. Furthermore, this function performs a number of validity checks on SSS data, primarily to ensure uniqueness of customer/user identification and flow specifications amongst subscriptions. This task is essential for a fully automated service provisioning cycle, substantiating the benefits of the proposed TEQUILA SLS template.

• The 'Admission Logic' function, which determines whether the received subscription can be accepted in the network based on its current availability, or whether negotiations will be initiated. The function operates on availability estimates per TT, determined by the off-line TE functions at the beginning of a Resource Provisioning Cycle (a minimal sketch of this check is given below).
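The sketch assumes the availability estimates simply map each TT (ingress, egress, QoS-class) to an available rate; all names and the data layout are hypothetical.

# Hypothetical availability estimates: available rate (Mbps) per TT.
ram = {("E1", "E3", "EF"): 20.0, ("E1", "E4", "AF1"): 35.0}

def admit_subscription(sls_list, avail):
    """Accept the SSS only if every SLS fits its TT's remaining availability."""
    remaining = dict(avail)
    for tt, rate in sls_list:             # tt = (ingress, egress, QoS-class)
        if remaining.get(tt, 0.0) < rate:
            return False                  # rejection triggers negotiation
        remaining[tt] -= rate
    avail.update(remaining)               # commit the reservation on acceptance
    return True

print(admit_subscription([(("E1", "E3", "EF"), 5.0),
                          (("E1", "E4", "AF1"), 10.0)], ram))   # True
print(admit_subscription([(("E1", "E3", "EF"), 40.0)], ram))    # False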

The following tests were carried out in this suite.

Test1 – moderate net

Platform: OS: SuSE Linux 8.0; DB: Oracle 9.2 DB Server; CPU: dual Xeon 2.4 GHz with Hyper-Threading on; Memory: 4 GByte; HD: SCSI Ultra 3, 15000 rpm
Customers: 200 customers with 5 sites per customer
Service Classes: 20
Network: 6 edge routers, 3 core routers, 5 QoS classes, 150 TTs
Subscriptions: Permanent / On-Demand / Managed Bandwidth peer-to-peer services, 10,000 SSSs, 2 SLSs per SSS
Measurement: Total subscription processing time, subscription admission logic response time

Test2 – large net

Platform: As above
Customers: As above
Service Classes: As above
Network: 100 edge routers, 3 core routers, 5 QoS classes, 49,500 TTs
Subscriptions: As above
Measurement: Total subscription processing time, subscription admission logic response time

Test3 – large net, large SLSs/SSS

Platform: As above
Customers: As above
Service Classes: As above
Network: As above
Subscriptions: Permanent / On-Demand / Managed Bandwidth peer-to-peer services, 1,000 SSSs, 20 SLSs per SSS
Measurement: Total subscription processing time, subscription admission logic response time

Test4 – implem1

Platform: OS: SuSE Linux 7.2; DB: Oracle 9.1 DB Server; CPU: AMD Athlon 1.2 GHz; Memory: 1.5 GByte; HD: ATA 100
Implementation: Using cursor loops to implement flow-ID validation
Customers: 200 customers with 5 sites per customer
Service Classes: 20
Network: 6 edge routers, 3 core routers, 5 QoS classes, 150 TTs
Subscriptions: Permanent / On-Demand / Managed Bandwidth peer-to-peer services, 4,600 SSSs, 2 SLSs per SSS
Measurement: Total subscription processing time

Test5 – implem2

Platform: As above
Implementation: Using a single query to implement flow-ID validation
Customers: 200 customers with 5 sites per customer
Service Classes: 20
Network: 6 edge routers, 3 core routers, 5 QoS classes, 150 TTs
Subscriptions: Permanent / On-Demand / Managed Bandwidth peer-to-peer services, 1,000 SSSs, 2 SLSs per SSS
Measurement: Total subscription processing time

Effect of TTs - Comparison Test1-Test2: Comparing total subscription processing time and subscription admission logic response time for a small network (150 TTs, Test 1) and a large network (49,500 TTs, Test 2).

Effect of SLSs/SSS - Comparison Test2-Test3: Comparing total subscription processing time for differently sized subscriptions: 2 SLSs per SSS (Test 2) and 20 SLSs per SSS (Test 3).

Effect of h/w platform - Comparison Test2-Test5: Comparing total subscription processing time for different execution platforms: a desktop PC (Test 5) and a server (Test 2).

Effect of implementation - Comparison Test4-Test5: Comparing total subscription processing time for different algorithm implementations: using cursor loops (Test 4) and using a single query (Test 5).

5.2.1.2 Moderate Network -Test 1- Results

[Figure: total subscription time (sec, up to about 1.2) against the number of established SLSs (0 to about 19,400), test 1.]

Figure 5-1: Total subscription processing time (test 1).


As can be seen, the total subscription processing time grows linearly with the number of subscriptions already established; therefore it scales.

[Figure: subscription admission logic time (sec, up to about 0.06) against the number of established SLSs (0 to about 19,600), test 1.]

Figure 5-2: Subscription admission logic response time (test 1).

As can be seen from the above figure, the subscription admission logic response time is independent of the number of established subscriptions (it depends only on the number of network TTs, as will be seen in the following).

Furthermore, it can be deduced from Figures 5-1 and 5-2 that the function incurring the heaviest delay in subscription processing is 'Validate and Translate'; the 'Admission Control' function is very light. This is attributed to the 'first plan and then take care' feature of the TEQUILA system, substantiated through the concept of the Resource Provisioning Cycle.

5.2.1.3 Large Network -Test 2- Results

[Figure: total subscription time (sec, up to about 2) against the number of established SLSs (0 to about 19,800), test 2.]

Figure 5-3: Total subscription processing time (test 2).


[Figure: subscription admission logic time (sec, up to about 0.3) against the number of established SLSs (0 to about 19,900), test 2.]

Figure 5-4: Subscription admission logic response time (test 2)

The results of this test confirm the results of the previous test: subscription processing time grows linearly with the number of SLSs already established, and therefore scales with them; the admission logic is not affected by the number of established SLSs; and subscription processing time is mainly consumed in the 'Validate and Translate' function.

5.2.1.4 Large Network, Large Number of SLSs per SSS -Test 3- Results

[Figure: total subscription time (sec, up to about 10) against the number of established SLSs (0 to 19,500), test 3.]

Figure 5-5: Total subscription processing time (test 3).


[Figure: subscription admission logic time (sec, up to about 0.3) against the number of established SLSs (0 to about 19,700), test 3.]

Figure 5-6: Subscription admission logic response time (test 3).

The above results confirm, yet again, the results of the previous tests: subscription processing time grows linearly with the number of SLSs already established, and therefore scales with them; the admission logic is not affected by the number of established SLSs; and subscription processing time is mainly consumed in the 'Validate and Translate' function.

Moreover, it is worth noting that in this test, where the number of SLSs per SSS is significantly larger than in the previous tests, the results show less variability than before. This is because the occasional peaks (due to database or OS activity) are hidden by the inevitably larger response times of this test.


5.2.1.5 Cursor-based Implementation -Test 4- Results

[Figure: total subscription time (sec, up to about 7) against the number of established SLSs (0 to about 9,200), test 4.]

Figure 5-7: Total subscription processing time (test 4).

Again, the results of this test verify the linear growth of subscription processing time with respect to the number of already established SLSs.

5.2.1.6 Single Query-based Implementation -Test 5- Results

[Figure: total subscription time (sec, up to about 2.5) against the number of established SLSs (0 to 1,960), test 5.]

Figure 5-8: Total subscription processing time (test 5).


Again, the results of this test verify the linear growth of subscription processing time with respect to the number of already established SLSs.

5.2.1.7 Effect of TTs - Comparing Test 1 and Test 2

[Figure: comparison of total subscription time (sec, up to about 1.8) against the number of established SLSs for test 1 and test 2.]

Figure 5-9: Effect of TTs in total subscription processing time.

[Figure: comparison of subscription admission logic time (sec, up to about 0.3) against the number of established SLSs for test 1 and test 2.]

Figure 5-10: Effect of TTs in subscription admission logic.

The above figures indicate the monotonic increase of subscription times (total and admission logic) as a function of the number of TTs (150 TTs versus 49,500 TTs).


5.2.1.8 Effect of number of SLSs per SSS - Comparing Test 2 and Test 3

[Figure: maximum, average and minimum total subscription time per SLS (sec, up to about 1) for 2 SLSs per SSS (test 2) versus 20 SLSs per SSS (test 3).]

Figure 5-11: Effect of number of SLSs per SSS on total subscription processing time.

When comparing the total subscription time divided by the number of SLSs per subscription for subscriptions with 2 SLSs (Test 2) and subscriptions with 20 SLSs (Test 3), one would generally expect the times in the second case to be greater. This is because of the time 'Validate and Translate', the dominant function, takes to validate 20 SLSs rather than 2. This expectation is verified by the figure above.

The fact that the maximum value of the total subscription processing time is bigger in the first case than in the second is attributed to the occasional database or OS peaks, which are taken into account as-is when calculating the maximum, whereas they are smoothed out in the calculation of the average and minimum values.

5.2.1.9 Effect of h/w platform - Comparing Test 2 and Test 5

[Figure: total subscription time (sec, up to about 2.5) against the number of established SLSs (0 to 1,960) for test 5 (desktop PC) and test 2 (server).]

Figure 5-12: Effect of h/w platform on total subscription processing time.


As can be seen, there is a difference in processing times between platforms: a normal desktop PC (Test 5) is slower than a normal server (Test 2). For very large networks and numbers of subscriptions, a clustering solution might be required to achieve favourable processing times.

5.2.1.10 Effect of implementation - Comparing Test 4 and Test 5

[Figure: total subscription time (sec, up to about 5) against the number of established SLSs (0 to about 1,970) for test 5 (single query) and test 4 (cursor loops).]

Figure 5-13: Effect of implementation on total subscription processing time.

The above results show that the implementation can significantly affect subscription processing times. When using cursor loops (PL/SQL's kind of for loop, test 4) to validate the subscription flow identification amongst those of the already established SLSs of the same customer, the processing time grows strongly with the number of established SLSs (see the triangles in the above figure). If we stick to plain SQL functionality (table joins, analytic functions, 'case when' statements, test 5) and implement the flow validation with a single query, the processing time improves dramatically; the sketch below illustrates the two styles.
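The contrast can be reproduced in miniature. The sketch below uses Python and SQLite rather than PL/SQL and Oracle, with a hypothetical schema, but the row-by-row versus set-based structure is the point being made.

import sqlite3

# Hypothetical schema: one row per established SLS of a customer.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sls (customer TEXT, flow_id TEXT)")
db.executemany("INSERT INTO sls VALUES (?, ?)",
               [("custA", f"flow{i}") for i in range(10000)])

def valid_cursor_loop(customer, flow_id):
    # test 4 style: iterate row by row over the customer's established SLSs
    for (existing,) in db.execute(
            "SELECT flow_id FROM sls WHERE customer=?", (customer,)):
        if existing == flow_id:
            return False
    return True

def valid_single_query(customer, flow_id):
    # test 5 style: push the whole uniqueness check into one set-based query
    (count,) = db.execute(
        "SELECT COUNT(*) FROM sls WHERE customer=? AND flow_id=?",
        (customer, flow_id)).fetchone()
    return count == 0

print(valid_cursor_loop("custA", "flow42"))    # False: flow already exists
print(valid_single_query("custA", "flow42"))   # False, via a single query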

5.2.2 Benefit/Cost Experimental Set-up

The service management cost/benefit tests are performed using the developed emulation platform (see section 8-3). Network dimensioning is static; no dynamic TE functionality is applied. The network topology depicted in Figure 5-14 has been used for all the tests performed, and service subscription requests have been generated using BSRGen according to the settings of Table 5-1. The supported service types entail traffic for the EF, AF1 and AF3 OAs.


[Figure: emulated topology with core nodes C0-C2 and edge nodes E1-E5; access links have infinite capacity, low-capacity core links are 35 Mbps and high-capacity core links are 52 Mbps.]

Figure 5-14: Emulation Topology

Service Types | OA | Percentage | Upstream Rate (min-max) | Downstream Rate (min-max)
Permanent VPN | EF | 20% | 1000-2000 | 1000-2000
Flexible VPN | AF3 | 25% | 500-500 | 500-500
Flexible Peer-to-Peer | AF1 | 15% | 500-1000 | 500-1000
On-Demand Peer-to-Peer | AF1 | 30% | 64-64 | 64-64

Subscriptions number: 455

Table 5-1: Subscription Generation Settings

5.2.3 Benefit/Cost of Invocation Admission Control – SUITE3_1/NTUA/SLSM/BC/1

This test suite aims at assessing the benefits and costs of the RSIM component in terms of its configuration parameters. However, the behaviour of the RSIM component, besides being regulated by its configuration parameters, also depends largely on the accuracy of the guidelines provided by the off-line TE functions in the form of the Resource Availability Matrix (RAM). These guidelines are set on the basis of the anticipated demand estimates and reflect the actual network dimensioning in place; hence, they influence both the algorithm's state transitions and the effect of the actions taken at each state. The accuracy of the RAM entails two aspects:

• the deviation of the actual traffic demand from the resource availabilities in the RAM, that is, how intense the demand is compared to what has been provisioned in the network, and

• the diversity of this deviation among different QoS-classes, that is, how homogeneous the intensity of the demand is among QoS-classes compared to what has been provisioned per QoS-class.


Taking into account the above considerations and the issues inherent to admission control, the test cases depicted in Table 5-2 have been identified.

Test Case Id | Description | Intensity: Target Overall | Intensity: At Minimum Over Availability in RAM (EF / AF1 / AF3) | Weight to Remaining from Target (EF / AF1 / AF3)
TC-R-1 | Low intensity, symmetrically distributed among QoS-classes. The target is to assess cost in the case of over-provisioned networks with traffic demand in line with network dimensioning and RSIM guidelines. Overall load is under total availability and evenly distributed among QoS-classes. | 0.6 | - / - / - | 1 / 1 / 1
TC-R-2 | Low intensity, asymmetrically distributed among QoS-classes. The target is to assess cost under various settings of the precaution level in the case of over-provisioned networks with traffic demand not in conformance with network dimensioning and RSIM guidelines. Overall load is under total availability, with EF and AF1 significantly over-provisioned and AF3 under-provisioned. | 0.7 | 0.1 / 0.1 / - | - / - / 1
TC-R-3 | High intensity, symmetrically distributed between AF1 and AF3, keeping EF over-provisioned. The target is to assess RSIM cost and benefits in handling high loads under various settings of the precaution level. | 1.25 | 0.4 / - / - | - / 1 / 1
TC-R-4 | Very high intensity, asymmetrically distributed between AF1 and AF3, keeping EF over-provisioned. The target is to assess RSIM cost and benefits in handling asymmetrically high loads under various settings of the precaution level. This particular setting results in traffic demands for EF, AF1 and AF3 corresponding to half, the whole and double of the provisioned resources respectively. | 1.5 | 0.5 / - / - | - / 1 / 2

Table 5-2: RSIM Cost/Benefit Test Cases

RSIM aims at minimising QoS degradation while maximising admitted traffic. Given the offered invocation requests, RSIM controls invocation admission. Then, given the admitted invocations and their offered load, RSIM controls their admitted rate, and therefore the traffic injected into the network and consequently the level of congestion. Network congestion is identified by detecting traffic whose QoS requirements are violated, called red traffic hereafter; the traffic enjoying its target QoS is called green traffic.

If RSIM admits traffic that the network cannot gracefully sustain, QoS degradation increases, and hence red traffic grows at the expense of green. If RSIM rejects traffic that the network could gracefully sustain, green traffic is rejected and hence decreases. It becomes clear that the target of RSIM is to maximise the green traffic, and that the benefit gained by its operation can be assessed from the proportion of green traffic serviced with and without RSIM under the same load conditions. The proportion of rejected over requested invocations and the level of policing below contractual rates are the grounds on which the cost of RSIM operation can be assessed.

Different settings of RSIM affect how conservative its operation is, and hence the related benefit and cost; a minimal sketch of these figures follows.
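The benefit and cost figures described above reduce to simple ratios; the sketch below computes them for made-up traffic and invocation counts (the aggregation windows and names are ours, not the test harness's).

# Sketch of the RSIM benefit/cost metrics for illustrative values.
def rsim_metrics(green, red, requested, rejected):
    """green/red in traffic units; requested/rejected in invocation counts."""
    injected = green + red
    benefit = green / injected if injected else 0.0    # share of traffic at QoS
    cost = rejected / requested if requested else 0.0  # share of rejected calls
    return benefit, cost

with_rsim = rsim_metrics(green=150000, red=5000, requested=400, rejected=60)
without_rsim = rsim_metrics(green=110000, red=60000, requested=400, rejected=0)
print(with_rsim, without_rsim)   # higher benefit with RSIM, at a rejection cost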


5.2.3.1.1 SUITE3_0/Ntua/SLSM/BC/1 – TC-R-1

The traffic patterns used for this test case are depicted in the following table.

Permanent VPN (EF) - Invocation load: inflation factor 0.46. Traffic demand generation: Pareto, mean ON time 30, mean OFF time 3, variance N/A.

Flexible VPN (AF3) - Invocation load: inflation factor 1.95, mean interarrival time 10, Gen Exp distribution, mean holding time 100, variance 78.2, max simultaneous users 5. Traffic demand generation: Constant.

Flexible Peer-to-Peer (AF1) - Invocation load: inflation factor 0.65, mean interarrival time 8, Gen Exp distribution, mean holding time 1200, variance 4015, max simultaneous users 5. Traffic demand generation: Pareto, mean ON time 30, mean OFF time 39, variance N/A.

On-Demand Peer-to-Peer (AF1) - Invocation load: inflation factor 0.60, mean interarrival time 11, Gen Exp distribution, mean holding time 240, variance 600, max simultaneous users 5. Traffic demand generation: Gen Exp, mean ON time 15, mean OFF time 45, variance 8.

One test is performed, to assess the cost of RSIM operating at its most conservative (precaution level 0). The results are depicted in Figure 5-15.

5.2.3.1.2 SUITE3_0/Ntua/SLSM/BC/1 – TC-R-2

The traffic patterns used for this test case are depicted in the following table.

Permanent VPN (EF) - Invocation load: inflation factor 0.09. Traffic demand generation: Pareto, mean ON time 30, mean OFF time 0.5, variance N/A.

Flexible VPN (AF3) - Invocation load: inflation factor 0.60, mean interarrival time 10, Gen Exp distribution, mean holding time 50, variance 183, max simultaneous users 5. Traffic demand generation: Constant.

Flexible Peer-to-Peer (AF1) - Invocation load: inflation factor 1.40, mean interarrival time 8, Gen Exp distribution, mean holding time 1200, variance 1543, max simultaneous users 5. Traffic demand generation: Pareto, mean ON time 30, mean OFF time 53, variance N/A.

On-Demand Peer-to-Peer (AF1) - Invocation load: inflation factor 1.08, mean interarrival time 19, Gen Exp distribution, mean holding time 240, variance 600, max simultaneous users 5. Traffic demand generation: Gen Exp, mean ON time 15, mean OFF time 45, variance 8.

Four tests are performed:

Test-0 RSIM disabled

Test-1 RSIM enabled, precaution level 0.5

Test-2 RSIM enabled, precaution level 0.2

Test-3 RSIM enabled, precaution level 0

The results for each test are depicted in Figure 5-16 to Figure 5-19. Comparative results are depicted in Figure 5-20.

5.2.3.1.3 SUITE3_0/Ntua/SLSM/BC/1 – TC-R-3

The traffic patterns used for this test case are depicted in the following table.

Permanent VPN (EF) - Invocation load: inflation factor 0.33. Traffic demand generation: Pareto, mean ON time 30, mean OFF time 2.1, variance N/A.

Flexible VPN (AF3) - Invocation load: inflation factor 3.17, mean interarrival time 10, Gen Exp distribution, mean holding time 600, variance 173, max simultaneous users 5. Traffic demand generation: Constant.

Flexible Peer-to-Peer (AF1) - Invocation load: inflation factor 3.20, mean interarrival time 8, Gen Exp distribution, mean holding time 1200, variance 338, max simultaneous users 5. Traffic demand generation: Pareto, mean ON time 30, mean OFF time 137, variance N/A.

On-Demand Peer-to-Peer (AF1) - Invocation load: inflation factor 1.08, mean interarrival time 19, Gen Exp distribution, mean holding time 240, variance 600, max simultaneous users 5. Traffic demand generation: Gen Exp, mean ON time 15, mean OFF time 45, variance 8.

Four tests are performed:

Test-0 RSIM disabled

Test-1 RSIM enabled, precaution level 1

Test-2 RSIM enabled, precaution level 0.5

Test-3 RSIM enabled, precaution level 0

The results for each test are depicted in Figure 5-21 to Figure 5-24. Comparative results are depicted in Figure 5-25.

5.2.3.1.4 SUITE3_0/Ntua/SLSM/BC/1 – TC-R-4

The traffic patterns used for this test case are depicted in the following table.

Permanent VPN (EF) - Invocation load: inflation factor 0.43. Traffic demand generation: Pareto, mean ON time 30, mean OFF time 2.8, variance N/A.

Flexible VPN (AF3) - Invocation load: inflation factor 2.20, mean interarrival time 10, Gen Exp distribution, mean holding time 600, variance 382, max simultaneous users 5. Traffic demand generation: Constant.

Flexible Peer-to-Peer (AF1) - Invocation load: inflation factor 4.90, mean interarrival time 8, Gen Exp distribution, mean holding time 1200, variance 12.2, max simultaneous users 5. Traffic demand generation: Pareto, mean ON time 30, mean OFF time ###, variance N/A.

On-Demand Peer-to-Peer (AF1) - Invocation load: inflation factor 1.08, mean interarrival time 19, Gen Exp distribution, mean holding time 240, variance 600, max simultaneous users 5. Traffic demand generation: Gen Exp, mean ON time 15, mean OFF time 45, variance 8.

Four tests are performed:

Test-0 RSIM disabled

Test-1 RSIM enabled, precaution level 1

Test-2 RSIM enabled, precaution level 0.5

Test-3 RSIM enabled, precaution level 0

The results for each test are depicted in Figure 5-26 to Figure 5-29. Comparative results are depicted in Figure 5-30.


[Figure: two columns of EF, AF1 and AF3 traffic panels over time, showing Sustainable, Offered, Red, Green and Minimum Sustainable at Congestion traffic.]

(a) Traffic per QoS-class in time

[Figure: bar chart per QoS-class of Minimum Sustainable at Congestion, Offered, Total Injected, Green and Red traffic.]

(b) Traffic per QoS-class on average (offset at 500 time units)

Figure 5-15: Test Case TC-R-1 results


[Figure: EF, AF1 and AF3 traffic panels, same layout as Figure 5-15.]

(a) Traffic per QoS-class in time

(b) Traffic per QoS-class on average (offset at 500 time units)

Figure 5-16: Test Case TC-R-2 / Test-0 results


[Figure: traffic per QoS-class (a) in time and (b) on average (offset at 500 time units), with the same panels and series as Figure 5-15.]

Figure 5-17: Test Case TC-R-2 / Test-1 results


[Figure: traffic per QoS-class (a) in time and (b) on average (offset at 500 time units), with the same panels and series as Figure 5-15.]

Figure 5-18: Test Case TC-R-2 / Test-2 results


[Figure: traffic per QoS-class (a) in time and (b) on average (offset at 500 time units), with the same panels and series as Figure 5-15.]

Figure 5-19: Test Case TC-R-2 / Test-3 results


[Figure: (a) overall traffic on average (offset at 500 time units) per test (No Control, Optimistic, Cautious, Conservative), broken down into Red, Green, Injected, Sustainable, Offered by Admitted Calls and Total Offered; (b) normalised cost (Invocation Rejection Rate, Policing) plotted against the normalised benefit in GREEN traffic per test.]

Figure 5-20: Test Case TC-R-2 comparative results


[Figure: traffic per QoS-class (a) in time and (b) on average (offset at 500 time units), with the same panels and series as Figure 5-15.]

Figure 5-21: Test Case TC-R-3 / Test-0 results


[Figure: traffic per QoS-class (a) in time and (b) on average (offset at 500 time units), with the same panels and series as Figure 5-15.]

Figure 5-22: Test Case TC-R-3 / Test-1 results


[Figure: traffic per QoS-class (a) in time and (b) on average (offset at 500 time units), with the same panels and series as Figure 5-15.]

Figure 5-23: Test Case TC-R-3 / Test-2 results


[Figure: traffic per QoS-class (a) in time and (b) on average (offset at 500 time units), with the same panels and series as Figure 5-15.]

Figure 5-24: Test Case TC-R-3 / Test-3 results


[Figure: (a) overall traffic on average (offset at 500 time units) per test and (b) normalised cost against normalised benefit in GREEN traffic per test, with the same layout and series as Figure 5-20.]

Figure 5-25: Test Case TC-R-3 comparative results


[Figure: traffic per QoS-class (a) in time and (b) on average (offset at 500 time units), with the same panels and series as Figure 5-15.]

Figure 5-26: Test Case TC-R-4 / Test-0 results


[Figure: traffic per QoS-class (a) in time and (b) on average (offset at 500 time units), with the same panels and series as Figure 5-15.]

Figure 5-27: Test Case TC-R-4 / Test-1 results


[Figure: traffic per QoS-class (a) in time and (b) on average (offset at 500 time units), with the same panels and series as Figure 5-15.]

Figure 5-28: Test Case TC-R-4 / Test-2 results


[Figure: traffic per QoS-class (a) in time and (b) on average (offset at 500 time units), with the same panels and series as Figure 5-15.]

Figure 5-29: Test Case TC-R-4 / Test-3 results


[Figure: (a) overall traffic on average (offset at 500 time units) per test and (b) normalised cost against normalised benefit in GREEN traffic per test, with the same layout and series as Figure 5-20.]

Figure 5-30: Test Case TC-R-4 comparative results


5.2.4 Benefit/Cost of Subscription Admission Control – SUITE3_1/NTUA/SLSM/BC/2

This test suite aims at assessing the benefits and costs of the SSM component in terms of its configuration parameters. SSM aims at protecting the network from being overloaded at the subscription level. Its behaviour depends on the accuracy of the resource availability estimates provided by the offline TE functions in RAM, and on the validity of the source models used to deduce and aggregate the demand implied by the existing and the requested subscriptions.

When the network can gracefully sustain the total offered load, it is RSIM and the dynamic TE components that ensure resources are appropriately shared and that occasional fluctuations are gracefully smoothed out. However, when congestion occurs and tends to persist, this is partly because SSM has admitted an excessive number of subscriptions.

In line with the above observations, the SSM tests use an intense traffic offer, so that the impact of SSM on traffic can be assessed under various settings of the satisfaction level. Moreover, traffic is offered asymmetrically, so as to assess the effect of inaccuracies in the source models and the resource availability estimates.

To this end, the tests entail two steps. First, subscription requests are offered to SSM under various settings of the satisfaction level. Then, for each resulting subscription population, test case TC-R-4 is executed at the minimum and the maximum conservative precaution level.

The considered test cases and tests are as follows:

TC-S-1 Satisfaction level 1, Test-1 precaution level 0

TC-S-2 Satisfaction level 0.6, Test-1 precaution level 1, Test-2 precaution level 2

TC-S-3 Satisfaction level 0.3, Test-1 precaution level 1, Test-2 precaution level 2

TC-S-4 Satisfaction level 0.1, Test-1 precaution level 1, Test-2 precaution level 2

The benefit and cost brought to the system by the operation of SSM are equivalent to the benefits and costs of RSIM operation for the different sets of services admitted by SSM, under the same source traffic patterns. Note that when SSM operates at satisfaction level –1 it admits all incoming subscription requests; hence, the results of test case TC-R-4 (see section 5.2.3.1.4) correspond to SSM operation configured at satisfaction level –1.

The results for each test are depicted in Figure 5-31 to Figure 5-37. Comparative results are presented in Figure 5-38.


[Figure: traffic per QoS-class (a) in time and (b) on average (offset at 500 time units), with the same panels and series as Figure 5-15.]

Figure 5-31: Test Case TC-S-1 / Test-2 results


[Figure: traffic per QoS-class (a) in time and (b) on average (offset at 500 time units), with the same panels and series as Figure 5-15.]

Figure 5-32: Test Case TC-S-2 / Test-1 results


[Figure: traffic per QoS-class (a) in time and (b) on average (offset at 500 time units), with the same panels and series as Figure 5-15.]

Figure 5-33: Test Case TC-S-2 / Test-2 results


[Figure: traffic per QoS-class (a) in time and (b) on average (offset at 500 time units), with the same panels and series as Figure 5-15.]

Figure 5-34: Test Case TC-S-3 / Test-1 results


[Figure: traffic per QoS-class (a) in time and (b) on average (offset at 500 time units), with the same panels and series as Figure 5-15.]

Figure 5-35: Test Case TC-S-3 / Test-2 results


[Figure: traffic per QoS-class (a) in time and (b) on average (offset at 500 time units), with the same panels and series as Figure 5-15.]

Figure 5-36: Test Case TC-S-4 / Test-1 results


[Figure: traffic per QoS-class (a) in time and (b) on average (offset at 500 time units), with the same panels and series as Figure 5-15.]

Figure 5-37: Test Case TC-S-4 / Test-2 results


[Figure: RSIM performance (Invocation Rejection Rate, Policing, Normalised Benefit) as a function of the satisfaction level (-1 to 1), (a) at the maximum conservative and (b) at the minimum conservative precaution level.]

Figure 5-38: SUITE3_1/NTUA/SLSM/BC/2 comparative results


5.3 MPLS-TE Performance Tests and Results

The following table provides references for the tests conducted for MPLS-TE.

Test ID | Purpose | Platform | Results
SUITE3_2/GC/MPLSTE/BC/1 | Assessment of the operation of the subsystem consisting of Network Dimensioning and the static parts of DRtM and DRsM. The aim of this test is to show that, if real-time traffic is injected into the system around the predictions, the system can provide the QoS for which it was designed. | UK testbed (see A.1) | Sections 5.3.1, 5.3.2
SUITE3_2/GC/MPLSTE/BC/2 | To compare the performance of the static MPLS-TE system against configurations employed by other means. | UK testbed (see A.1) | Sections 5.3.1, 5.3.3
SUITE3_2/GC/MPLSTE/BC/4 | Assessment of the operation of the functional triangle (SLS Subscription, Traffic Forecast, and Network Dimensioning). | UK testbed (see A.1) | Sections 5.3.1, 5.3.4
SUITE3_2/GC/MPLSTE/BC/5 | Assessment of the operation of the functional triangle (SLS Subscription, Traffic Forecast, and Network Dimensioning) assuming traffic forecast. | UK testbed (see A.1) | Sections 5.3.1, 5.3.5
SUITE3_2/GC/MPLSTE/BC/6 | Assessment of network engineering with hoses vs. pipes. | UK testbed (see A.1) | Sections 5.3.1, 5.3.6

5.3.1 Experimentation Set-up

This section presents the test environment, followed by the assessment of the benefits/costs of MPLS-TE. To conduct these tests, the TEQUILA system, including the SLS Management, MPLS-TE (TF, ND, RtM), and Monitoring sub-systems, has been used. Based on the network physical topology and customer site locations shown in Figure 8-1, a set of customer subscriptions has been specified for PIPE SLSs, as shown in Table 5-3. Three classes of service have been defined for these subscriptions: Expedited Forwarding (EF), Assured Forwarding (AF1), and Best Effort (BE).

Ingress point | Egress point | Traffic Source | Traffic Destination | Traffic Class | Delay Req. in ms (1) | Rate limit in Kbps (2) | Avg. Bandwidth Demand in Kbps (3)
PE1 | PE2 | LAN1: 194.166.11.10 | 194.166.11.19 | EF | <150 | 1000 | Different for each test
PE2 | PE1 | LAN9: 194.166.11.19 | 194.166.11.10 | EF | <150 | 1000 | Different for each test
PE1 | PE2 | LAN1: 194.166.11.14 | 194.166.11.21 | AF1 | <400 | 1000 | Different for each test
PE2 | PE1 | LAN9: 194.166.11.21 | 194.166.11.14 | AF1 | <400 | 1000 | Different for each test
PE1 | PE2 | LAN1: 194.166.11.1 | 194.166.11.22 | BE | - | 1000 | -
PE2 | PE1 | LAN9: 194.166.11.22 | 194.166.11.1 | BE | - | 1000 | -
PE3 | PE2 | LAN3: 194.166.11.37 | 194.166.11.23 | EF | <150 | 1000 | Different for each test
PE2 | PE3 | LAN9: 194.166.11.23 | 194.166.11.37 | EF | <150 | 1000 | Different for each test
PE3 | PE2 | LAN3: 194.166.11.38 | 194.166.11.24 | AF1 | <400 | 1000 | Different for each test
PE2 | PE3 | LAN9: 194.166.11.24 | 194.166.11.38 | AF1 | <400 | 1000 | Different for each test
PE3 | PE2 | LAN3: 194.166.11.39 | 194.166.11.25 | BE | - | 1000 | -
PE2 | PE3 | LAN9: 194.166.11.25 | 194.166.11.39 | BE | - | 1000 | -
PE1 | PE3 | LAN1: 194.166.11.2 | 194.166.11.40 | EF | <150 | 1000 | Different for each test
PE3 | PE1 | LAN3: 194.166.11.40 | 194.166.11.2 | EF | <150 | 1000 | Different for each test
PE1 | PE3 | LAN1: 194.166.11.3 | 194.166.11.41 | AF1 | <400 | 1000 | Different for each test
PE3 | PE1 | LAN3: 194.166.11.41 | 194.166.11.3 | AF1 | <400 | 1000 | Different for each test
PE1 | PE3 | LAN1: 194.166.11.4 | 194.166.11.42 | BE | - | 1000 | -
PE3 | PE1 | LAN3: 194.166.11.42 | 194.166.11.4 | BE | - | 1000 | -

Table 5-3: Customer subscriptions and their bandwidth demands in the Pipe model.

(1) The SLS delay requirements are specified for EF and AF1: EF and AF1 traffic must not experience more than 150 ms and 400 ms of delay respectively.
(2) In order to make use of available bandwidth not used by other classes, this is the maximum rate a traffic class may generate during non-congestion periods. During congestion periods, the bandwidth assigned to a class (PHB) is the guaranteed bandwidth delivered to that class.
(3) The average bandwidth demand in Kbps is configured differently for each test; it is specified in the section describing each test.

Using the network physical topology given in Figure 8-1, the network repository has been populated, and a nominal link propagation delay of 10 ms has been specified for each link. The link delay characteristics are used for the ND calculations. The actual one-way delay per hop, measured under normal network conditions, is around 1.5 ms.

In the system-level performance experiments, the TEQUILA system set up unidirectional LSP tunnels for carrying traffic between the edge routers (PE1, PE2, and PE3). Cisco's Low Latency Queuing (LLQ) has been automatically configured at the output interfaces of all routers through which tunnels transit. The LLQ feature brings strict priority queuing to Class-Based Weighted Fair Queuing (CBWFQ). Strict priority queuing gives preferential treatment to one traffic class over the others: data belonging to the priority class (i.e., the EF class) is sent before packets in the other queues are treated. CBWFQ extends the standard WFQ functionality with support for user-defined traffic classes. A queue is reserved for each class, and traffic belonging to a class is directed to that class's queue. To characterise a class, a bandwidth, a weight, and a maximum packet limit are assigned to it (see Table 5-4). The bandwidth assigned to a class is the guaranteed bandwidth delivered to that class during congestion. Three traffic classes (PHBs) have been defined (EF, AF1, and BE). Unless otherwise stated, each of the three classes has been given 333 Kbps of bandwidth. EF traffic is directed to the priority queue, while AF1 and BE traffic use normal CBWFQ.

Queue type | Maximum queue limit in packets | Committed burst size in bytes for priority traffic (4) | Assigned bandwidth to PHB (queue) in Kbps | Exponential weight factor for WRED (5)
Priority - EF | - | 8325 | 333 | -
CBWFQ - AF1 | 64 | - | 333 | 9
CBWFQ - BE | 64 | - | 333 | 9

Table 5-4: LLQ-CBWFQ configuration parameters.
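A minimal Python sketch may help picture the LLQ semantics just described: a strict-priority EF queue in front of a deficit-round-robin approximation of the CBWFQ classes, with quanta proportional to the assigned bandwidths. This is an illustration under assumed names and parameters, not the routers' implementation.

    from collections import deque

    PKT_SIZE = 128  # bytes, the packet size used in the experiments

    class LLQ:
        """Strict-priority queue for EF plus deficit-round-robin over the
        CBWFQ classes; quanta are proportional to the assigned bandwidth."""
        def __init__(self, cbwfq_kbps):
            self.ef = deque()
            self.classes = {c: deque() for c in cbwfq_kbps}
            self.quantum = dict(cbwfq_kbps)
            self.deficit = {c: 0 for c in cbwfq_kbps}

        def enqueue(self, cls, pkt):
            (self.ef if cls == "EF" else self.classes[cls]).append(pkt)

        def dequeue(self):
            if self.ef:                    # priority class always goes first
                return self.ef.popleft()
            for cls, q in self.classes.items():
                if not q:
                    self.deficit[cls] = 0  # an empty class keeps no credit
                    continue
                self.deficit[cls] += self.quantum[cls]
                if self.deficit[cls] >= PKT_SIZE:
                    self.deficit[cls] -= PKT_SIZE
                    return q.popleft()
            return None                    # no class had enough credit yet

    sched = LLQ({"AF1": 333, "BE": 333})   # 333 Kbps each, as in Table 5-4
    for cls, pkt in [("BE", "be-1"), ("EF", "ef-1"), ("AF1", "af-1")]:
        sched.enqueue(cls, pkt)
    print(sched.dequeue())                 # -> ef-1: EF leaves first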

Traffic generation sources have been used to transmit EF, AF1, and BE traffic with Poisson-distributed arrivals. The packets from all traffic generation sources have been set to 128 bytes long, including headers. The synthetic (measurement) packets are configured to be of similar size to the user traffic produced by the traffic generators, ensuring that CBWFQ gives the same treatment to both; the delay measured for synthetic traffic is therefore similar to the delay experienced by user traffic.
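The shape of such a source is easy to sketch: a Poisson stream is obtained by drawing exponentially distributed inter-packet gaps at a rate matching the target load. A minimal sketch with illustrative parameter values, not the traffic generators actually used:

    import random

    def poisson_source(rate_kbps, pkt_bytes=128, duration_s=10.0, seed=1):
        """Arrival times of a Poisson packet stream offering rate_kbps."""
        random.seed(seed)
        pps = rate_kbps * 1000 / (8 * pkt_bytes)   # mean packets per second
        t, arrivals = 0.0, []
        while t < duration_s:
            t += random.expovariate(pps)           # exponential gap => Poisson
            arrivals.append(t)
        return arrivals

    arrivals = poisson_source(333)                 # one 333 Kbps class
    print(len(arrivals) / 10.0, "pps on average")  # ~325 pps at 128-byte packets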

Active monitoring jobs have been set up to monitor the one-way delay and packet loss experienced by traffic using the LSP tunnels and the related hops. The assessment tests have been conducted using the test scenarios specified in Table 5-5, which shows the mean rate of synthetic traffic in packets per second (PPS) injected by the active monitoring agents, the data summarisation period (see footnote 6), and the link delay in milliseconds (ms) introduced by the Data Channel Simulator (DCS).
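The summarisation step (footnote 6 below) amounts to averaging the raw samples per period; a minimal Python sketch of the assumed behaviour, with illustrative sample values:

    def summarise(samples, period_s=None):
        """samples: time-ordered (timestamp_s, delay_ms) pairs."""
        if period_s is None:
            return samples                       # raw data, no averaging
        buckets = {}
        for t, d in samples:
            buckets.setdefault(int(t // period_s), []).append(d)
        return [((k + 0.5) * period_s, sum(v) / len(v))  # window midpoint, mean
                for k, v in sorted(buckets.items())]

    raw = [(0.2, 15.1), (3.7, 16.4), (8.9, 15.9), (12.4, 16.1)]
    print(summarise(raw, period_s=10))           # one averaged value per window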

4 Default committed burst size in bytes = assigned bandwidth in bps × specified time interval Tc (i.e., 200 ms) / 8.
5 Weighted Random Early Detection (WRED) is used as a congestion avoidance mechanism, randomly dropping packets when congestion occurs. The exponential weight factor is used by WRED in the average queue size calculation for the queue reserved for the class.
6 NodeMon averages the individual raw data measured by the monitoring agents over the summarisation period. If no summarisation period is used, the raw measurements are not averaged and are shown in the figures as measured.
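As a check on footnote 4: for the 333 Kbps priority queue, Bc = 333,000 bps × 0.2 s / 8 = 8,325 bytes, which is exactly the committed burst size shown in Table 5-4.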


Test scenario | Synthetic traffic mean injection rate | Summarisation period | Programmed delay in DCS-1 attached to Link 5 (P1 to P3)
1 | 2 PPS | - | 0
2 | 2 PPS | - | 155 ms
3 | - | - | 0
4 | 4 PPS | 10 seconds | 0

Table 5-5: Experimental scenarios for MPLS-TE tests.

5.3.2 Assessing Network Configuration for QoS - SUITE3_2/GC/MPLSTE/BC/1

Table 5-6 shows the average bandwidth requested by customers for the different types of traffic entering the TEQUILA network (testbed) at PE1 and exiting at PE2 and PE3 for the following two tests.

Ingress - Egress Pair | EF traffic | AF1 traffic | BE traffic
PE1-PE2 | 333 | 333 | 333
PE1-PE3 | 333 | 333 | 333

Table 5-6: Customers' average bandwidth demand (bi-directional, in Kbps) for different types of traffic.

Traffic generation sources have been used to transmit traffic from PE1 to PE2 and PE3 at the rates specified in Table 5-6, with the exception of the BE traffic forwarded from PE1 to PE3, which has been set to 383 Kbps (i.e., more BE traffic is injected into the network than the bandwidth reserved for it, in order to bring the link to congestion, so BE traffic is subject to loss). This caused congestion at link 1 (the PE1-P1 link) only. This arrangement has been used for the following two tests.
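The overload can be read off the tables: link 1 carries both the PE1-PE2 and the PE1-PE3 aggregates, i.e. 3 × 333 + 333 + 333 + 383 = 2,048 Kbps offered against the 1,998 Kbps allocated on that interface (Table 5-8), so only the roughly 50 Kbps of excess BE traffic is exposed to loss.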

5.3.2.1 Assessing the QoS (Test Scenario 1)

This experiment was conducted to verify that the customer QoS requirements are met under normal network delay conditions. The TEQUILA system was run to completion. The simplified logical configuration of the network (i.e., the LSP set-ups) shown in Table 5-7 is the result of the system run.

Tunnel IDs | Classes of traffic | Explicit route of tunnels
PE1-1, PE1-3, PE1-4 | AF1, EF, BE | PE1-P1-P3-PE3
PE1-2, PE1-5, PE1-6 | AF1, BE, EF | PE1-P1-P2-PE2
PE2-1, PE2-3, PE2-5 | AF1, EF, BE | PE2-P2-P1-PE1
PE2-2, PE2-4, PE2-6 | EF, BE, AF1 | PE2-P2-P3-PE3
PE3-1, PE3-3, PE3-5 | EF, BE, AF1 | PE3-P3-P2-PE2
PE3-2, PE3-4, PE3-6 | EF, BE, AF1 | PE3-P3-P1-PE1

Table 5-7: Explicit routes for LSP tunnels calculated by the TEQUILA system.

The bandwidth allocated to the PHBs of all router interfaces is shown in Table 5-8.


Router's Output Interface | Traffic Class (OA) | Avg. Subscription Demand | Bandwidth allocated to PHBs | Link Bandwidth used
PE3_Serial 3/0:0 | EF, BE, AF1 | 666, 666, 666 | 666, 666, 666 | 1998
PE2_Serial 3/1:0 | EF, BE, AF1 | | |
PE2_Serial 3/0:0 | EF, BE, AF1 | 666, 666, 666 | 666, 666, 666 | 1998
PE1_Serial 3/0:0 | EF, BE, AF1 | | |
PE1_Serial 3/1:0 | EF, BE, AF1 | 666, 666, 666 | 666, 666, 666 | 1998
P3_Serial 3/1:0 | EF, BE, AF1 | 666, 666, 666 | 666, 666, 666 | 1998
P3_Serial 3/2:0 | EF, BE, AF1 | | |
P3_Serial 3/0:0 | EF, BE, AF1 | | |
P3_Serial 4/0 | EF, BE, AF1 | 333, 333, 333 | 333, 333, 333 | 999
P3_Serial 4/1 | EF, BE, AF1 | 333, 333, 333 | 333, 333, 333 | 999
P2_Serial 0/0 | EF, BE, AF1 | 666, 666, 666 | 666, 666, 666 | 1998
P2_Serial 0/2 | EF, BE, AF1 | 333, 333, 333 | 333, 333, 333 | 999
P2_Serial 0/1 | EF, BE, AF1 | 333, 333, 333 | 333, 333, 333 | 999
P1_Serial 0/1 | EF, BE, AF1 | 666, 666, 666 | 666, 666, 666 | 1998
P1_Serial 1/0 | EF, BE, AF1 | 333, 333, 333 | 333, 333, 333 | 999
P1_Serial 0/0 | EF, BE, AF1 | 333, 333, 333 | 333, 333, 333 | 1110

Table 5-8: Subscription demand and PHB bandwidth allocations in Kbps.
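These allocations follow directly from the demands: each ingress-egress pair requests 333 Kbps per class (Table 5-6), so an interface carrying the aggregates of two pairs is allocated 2 × 333 = 666 Kbps per PHB (1,998 Kbps in total), while an interface on a single-pair path gets 333 Kbps per PHB (999 Kbps).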

One-way delays have been measured for the tunnels between PE1 and PE3 (i.e., the PE1-1, PE1-3, and PE1-4 LSP tunnels shown in Table 5-7). The one-way delay results are shown in Figure 5-39. Based on these results, the SLS delay requirements specified in Table 5-3 are satisfied.

[Figure: one-way delay in ms against time of day for the EF, AF1, and BE traffic carried by the three LSPs; LSP3-EF (mean = 15.7 ms), LSP1-AF1 (mean = 76.1 ms), LSP4-BE (mean = 102.2 ms).]

Figure 5-39: One-way delay experienced by EF, AF1, and BE traffic.

5.3.2.2 Introducing Excessive Link Delay (Test Scenario 2)

This test has been carried out by introducing a link delay of 155 ms in link 5 (P1-P3) using the DCS. The TEQUILA system was run to completion again. Table 5-9 shows the calculated explicit routes for the tunnels.


Tunnel IDs | Classes of traffic | Tunnel explicit route
PE1-1, PE1-4 | AF1, BE | PE1-P1-P3-PE3
PE1-3 | EF | PE1-P1-P2-P3-PE3
PE1-2, PE1-5, PE1-6 | AF1, BE, EF | PE1-P1-P2-PE2
PE2-1, PE2-3, PE2-5 | AF1, EF, BE | PE2-P2-P1-PE1
PE2-2, PE2-4, PE2-6 | EF, BE, AF1 | PE2-P2-P3-PE3
PE3-1, PE3-3, PE3-5 | BE, AF1 | PE3-P3-P2-PE2
PE3-4, PE3-6 | BE, AF1 | PE3-P3-P1-PE1
PE3-2 | EF | PE3-P3-P2-P1-PE1

Table 5-9: Explicit LSP tunnels calculated by TEQUILA when 155 ms of delay is introduced in link 5.

Table 5-9 shows that the tunnels carrying EF traffic use different explicit routes from those of the previous test shown in Table 5-7. The tunnel carrying EF traffic bypassed link 5 because of its excessive delay (150 ms being the maximum delay tolerated by EF traffic, see Table 5-3), whereas link 5 was still used for the tunnels carrying AF1 and BE traffic. Figure 5-40 shows the one-way delay measurements on the tunnels from PE1 to PE3 carrying the three types of traffic. The measurements show that the specified SLS delay requirements are met.

[Figure: one-way delay in ms against time of day for the EF, AF1, and BE traffic carried by the three LSPs; LSP3-EF (mean = 16.6 ms), LSP1-AF1 (mean = 231.2 ms), LSP4-BE (mean = 257.2 ms).]

Figure 5-40: One-way delay experienced by EF, AF1, and BE traffic when 155 ms of delay was introduced in link 5 (P1-P3).

5.3.3 Comparing TEQUILA Traffic Engineering - SUITE3_2/GC/MPLSTE/BC/2

This test compares the static MPLS-TE of the TEQUILA system with an ad-hoc approach (i.e., Cisco MPLS Traffic Engineering).


5.3.3.1 Introducing Excessive Link Delay (Test Scenario 2)

The Cisco MPLS traffic engineering approach determines the routes for traffic flows across a network based on the resources the traffic flow requires and the resources available in the network. It employs "constraint-based routing", in which the path for a traffic flow is the shortest path that meets the resource requirements (constraints) of the flow. Cisco MPLS-TE automatically establishes and maintains LSPs across the backbone by using RSVP. The paths that LSPs use are determined by the LSP resource requirements and network resources, such as bandwidth. Available resources are flooded by means of extensions to a link-state based IGP. Traffic engineering tunnels are calculated at the LSP head-end based on a fit between the required and available resources, and the IGP automatically routes the traffic onto these LSPs.

The TEQUILA system uses two methods for route calculation: (1) hop count, which is similar to Cisco's shortest-path approach, and (2) taking the link delays into account.
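The effect of the two metrics can be reproduced with a small sketch: Dijkstra run over a simplified version of the testbed topology with link 4 (PE1-P3) shut down, as in the tests below. The topology and delay values here are illustrative assumptions; 165 ms stands for link 5's 10 ms nominal delay plus the 155 ms added by the DCS.

    import heapq

    def dijkstra(adj, src, dst, metric):
        # Shortest path under a pluggable link metric (hop count or delay).
        dist, prev = {src: 0}, {}
        pq = [(0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == dst:
                break
            if d > dist.get(u, float("inf")):
                continue
            for v, delay_ms in adj[u]:
                nd = d + metric(delay_ms)
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
        path = [dst]
        while path[-1] != src:
            path.append(prev[path[-1]])
        return list(reversed(path))

    links = [("PE1", "P1", 10), ("P1", "P2", 10), ("P1", "P3", 165),
             ("P2", "P3", 10), ("P3", "PE3", 10)]    # link 4 (PE1-P3) is down
    adj = {}
    for a, b, d in links:
        adj.setdefault(a, []).append((b, d))
        adj.setdefault(b, []).append((a, d))

    print(dijkstra(adj, "PE1", "PE3", lambda d: 1))  # hop count: via P1-P3
    print(dijkstra(adj, "PE1", "PE3", lambda d: d))  # delay-aware: via P2, as EF does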

We conducted the two following tests:

• The TEQUILA system using the link-delay method

• Cisco MPLS-TE using its constraint-based routing method

In both tests, we shut down link 4 (PE1 to P3) in order to make the shortest path from PE1 to PE3 the PE1-P1-P3-PE3 route, and left the rest of the network unchanged; the link delay of 155 ms in link 5 was still in place. In the Cisco MPLS-TE test, the tunnels have been configured to find a dynamic route. The tunnel paths resulting from both tests are shown in Table 5-10. As shown there, a different route is used for EF traffic in the TEQUILA approach in order to bypass link 5, whereas in the Cisco MPLS-TE approach dynamic routing used the shortest path and did not take the link delay into consideration. Figure 5-41 compares the two in terms of one-way delay measurements for EF traffic, and shows the advantage of the TEQUILA approach in taking the link delay characteristics into account in its configuration calculations.

Method | Tunnel IDs | Classes of traffic (OA) | Explicit route of tunnels
TEQUILA | PE1-1, PE1-4 | AF1, BE | PE1-P1-P3-PE3
TEQUILA | PE1-3 | EF | PE1-P1-P2-P3-PE3
TEQUILA | PE3-4, PE3-6 | BE, AF1 | PE3-P3-P1-PE1
TEQUILA | PE3-2 | EF | PE3-P3-P2-P1-PE1
Cisco | PE1-1, PE1-3, PE1-4 | AF1, EF, BE | PE1-P1-P3-PE3
Cisco | PE3-1, PE3-3, PE3-5 | EF, BE, AF1 | PE3-P3-P2-PE2

Table 5-10: Explicit and dynamic routes for LSP tunnels calculated by the TEQUILA and Cisco MPLS-TE approaches (155 ms of delay was introduced in link 5).


[Figure: line chart; x-axis: time of day (08:55:00-09:35:01); y-axis: one-way delay in msec (0-200); series: TEQUILA - LSP3-EF (mean = 16.6 ms), Cisco - LSP3-EF (mean = 170.6 ms).]

Figure 5-41: One-way delay experienced by EF traffic in the TEQUILA and Cisco MPLS-TE approaches.

5.3.3.2 LSP Creation and PHB Bandwidth Allocation (Test Scenario 3)

Table 5-9 shows the average bandwidth requested by customers for the different types of traffic entering and exiting the testbed at all edge routers.

Ingress-Egress Pairs | EF traffic | AF1 traffic | BE traffic
PE1-PE2 & PE2-PE1 | 600 | 600 | 600
PE1-PE3 & PE3-PE1 | 144 | 144 | 600
PE2-PE3 & PE3-PE2 | 144 | 144 | 600

Table 5-11: Average bandwidth demand (in Kbps) of customers for the different types of traffic.

The next two sections show how the TEQUILA and Cisco solutions address the above customer demand.

5.3.3.2.1 LSP creation and PHB bandwidth allocation by TEQUILA system

The TEQUILA system was run to completion. The calculated explicit routes for the tunnels are shown in Table 5-10.


Tunnel Global IDs assigned by GAL | Classes of traffic (OA) | Explicit route of tunnels
1, 13, 25 | EF, AF1, BE | PE1-P1-P2-PE2
2, 14, 26 | EF, AF1, BE | PE1-P3-PE2
3, 15, 27 | EF, AF1, BE | PE1-P1-P3-PE3
4, 16, 28 | EF, AF1, BE | PE1-P3-PE3
5, 17, 29 | EF, AF1, BE | PE2-P2-P1-PE1
6, 18, 30 | EF, AF1, BE | PE2-P3-PE1
7, 19, 31 | EF, AF1, BE | PE2-P2-P3-PE3
8, 20, 32 | EF, AF1, BE | PE2-P3-PE3
9, 21, 33 | EF, AF1, BE | PE3-P3-P1-PE1
10, 22, 34 | EF, AF1, BE | PE3-P3-PE1
11, 23, 35 | EF, AF1, BE | PE3-P3-P2-PE2
12, 24, 36 | EF, AF1, BE | PE3-P3-PE2

Table 5-12: Explicit routes for LSP tunnels calculated by the TEQUILA system.

The bandwidth allocated to the PHBs of all router interfaces is shown in Table 5-11.

Router's Output Interface | Traffic Classes (OA) | Avg. Subscription Demand | Bandwidth allocated to PHBs (no forecast) | Link Bandwidth used
PE3_Serial 3/0:0 | EF, BE, AF1 | 288, 1200, 288 | 288, 1200, 288 | 1776
PE2_Serial 3/1:0 | EF, BE, AF1 | 304, 456, 304 | 304, 456, 304 | 1064
PE2_Serial 3/0:0 | EF, BE, AF1 | 439, 743, 439 | 439, 743, 439 | 1621
PE1_Serial 3/0:0 | EF, BE, AF1 | 304, 456, 304 | 304, 456, 304 | 1064
PE1_Serial 3/1:0 | EF, BE, AF1 | 439, 743, 439 | 439, 743, 439 | 1621
P3_Serial 3/1:0 | EF, BE, AF1 | 288, 1200, 288 | 288, 1200, 288 | 1776
P3_Serial 3/2:0 | EF, BE, AF1 | 304, 456, 304 | 304, 456, 304 | 1064
P3_Serial 3/0:0 | EF, BE, AF1 | 304, 456, 304 | 304, 456, 304 | 1064
P3_Serial 4/0 | EF, BE, AF1 | 68, 372, 68 | 68, 372, 68 | 508
P3_Serial 4/1 | EF, BE, AF1 | 68, 372, 68 | 68, 372, 68 | 508
P2_Serial 0/0 | EF, BE, AF1 | 439, 743, 439 | 439, 743, 439 | 1621
P2_Serial 0/2 | EF, BE, AF1 | 68, 372, 68 | 68, 372, 68 | 508
P2_Serial 0/1 | EF, BE, AF1 | 370, 370, 370 | 370, 370, 370 | 1110
P1_Serial 0/1 | EF, BE, AF1 | 439, 743, 439 | 439, 743, 439 | 1621
P1_Serial 1/0 | EF, BE, AF1 | 68, 372, 68 | 68, 372, 68 | 508
P1_Serial 0/0 | EF, BE, AF1 | 370, 370, 370 | 370, 370, 370 | 1110

Table 5-13: Subscription demands and PHB bandwidth allocations in Kbps.

Table 5-10 and Table 5-11 show that the TEQUILA system distributes the customers' bandwidth requirements along all the available paths by creating LSP tunnels and configuring PHBs, while maintaining the QoS requirements.


5.3.3.2.2 LSP creation and PHB Bandwidth allocation in Cisco MPLS-TE

TEQUILA does not allocate bandwidth to LSP tunnels but to the PHBs. In Cisco MPLS-TE, by contrast, bandwidth must be allocated to the tunnels, based on the bandwidth requirements, in order to create traffic-engineered paths for the LSPs. If no bandwidth is allocated to the tunnels and tunnel establishment is requested, Cisco MPLS-TE will create an unbounded number of tunnels along the shortest path. As a test, we created more than 60 tunnels from PE1 to PE3 using dynamic routes; all of them were established over the PE1-P3-PE3 route.

In Cisco MPLS-TE, the tunnels are created dynamically. The bandwidth allocated to the tunnels is based on the requests. The tunnel routes and bandwidth allocated to them are shown in Table 5-12.

Tunnel IDs | Classes of traffic (OA) | Bandwidth allocated to tunnel (Kbps) | Explicit route of tunnels
PE1-1, PE1-2, PE1-3 | BE, EF, AF1 | 600, 144, 144 | PE1-P3-PE3
PE1-4 | EF | 600 | PE1-P3-PE2
PE1-5, PE1-6 | AF1, BE | 600, 600 | PE1-P1-P2-PE2
PE2-1, PE2-2, PE2-5 | EF, AF1, BE | 600, 144, 144 | PE2-P3-PE3
PE2-3 | EF | 600 | PE2-P3-PE1
PE2-4, PE2-6 | BE, AF1 | 600, 600 | PE2-P2-P1-PE1
PE3-1, PE3-4, PE3-5 | BE, AF1, EF | 600, 144, 144 | PE3-P3-PE1
PE3-2, PE3-3, PE3-6 | BE, AF1 | 600, 144, 144 | PE3-P3-PE2

Table 5-14: LSP tunnels calculated by Cisco.

In Cisco, tunnel explicit routes can also be configured manually, and in the Cisco MPLS-TE approach the PHBs along the tunnel routes must be configured manually in order to obtain differentiated services. The manual configuration of LSPs and PHBs is cumbersome even for very small networks such as the UK testbed.

In conclusion, the two approaches use different techniques to utilise the network bandwidth capacity. TEQUILA uses an automatic method to map the customer demands to the network resources, whereas in Cisco MPLS-TE this must be done manually or through external applications.

5.3.4 Assessing QoS with Minimum Demand-based PHB Configuration - SUITE3_2/GC/MPLSTE/BC/3

In Table 5-9, we specified the average bandwidth requested by customers for different types of traffic entering and exiting the testbed from all edge routers. The same bandwidth requirements have been used for this test.

Here, we ran the TEQUILA system to allocate resources to customer requests up to the subscribed limit (i.e., no traffic forecast was performed). The LSP configuration and PHB bandwidth allocations are the same as shown in Table 5-10 and Table 5-11 respectively. All links were loaded according to the subscription demands shown in Table 5-11 (e.g., EF = 439, AF1 = 439, BE = 743 Kbps on the PE1 to PE2 route).

Figure 5-17 shows the edge-to-edge one-way delays measured from PE1 to PE2 for the three tunnels carrying EF, AF1 and BE traffic passing through their respective PHBs. As the network was not congested, the one-way delays experienced by the three traffic classes were very low, and a similar pattern was observed in all three cases.

As expected, no packet loss was experienced by any type of traffic in the LSP routes and PHBs.


[Figure: line chart; x-axis: time of day (9:27:00-10:07:19); y-axis: one-way delay in msec (0-30); series: LSP1-EF (mean = 4.8 ms), LSP25-BE (mean = 5.2 ms), LSP13-AF1 (mean = 5.1 ms).]

Figure 5-42: One-way delay experienced by EF, AF1, and BE traffic carried by three LSPs (no forecast).

Figure 5-18 shows one-way delay results in Hop2 (P1-P2) for the EF, AF1 and BE traffic.

[Figure: line chart; x-axis: time of day (09:10:00-09:53:12); y-axis: one-way delay in msec (0-5); series: PHB1-EF (mean = 1.51 ms), PHB2-AF1 (mean = 1.53 ms), PHB3-BE (mean = 1.52 ms).]

Figure 5-43: One-way delay for EF, AF1, and BE PHBs in Hop2 (P1-P2) - no forecast.

As the network was not congested, the one-way per-hop delays shown in Figure 5-18 were very low for the three traffic classes. The one-way per-hop delay comprises the synthetic packet transmission delay on the link's interface (0.51 msec), the link propagation delay, and a very low queuing delay (about 1.0 msec).
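The 0.51 msec transmission component is consistent with serialising a 128-byte synthetic packet onto a 2 Mbps serial link:

\[ t_{tx} = \frac{128 \times 8\ \text{bits}}{2\ \text{Mbit/s}} = 0.512\ \text{ms} \]

Adding the propagation and queuing components then yields per-hop means of about 1.5 ms, matching the figure.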


5.3.5 Assessing QoS with Availability-based PHB Configuration - SUITE3_2/GC/MPLSTE/BC/4

In Table 5-9, we showed the average bandwidth requested by customers for the different types of traffic entering and exiting the testbed at all edge routers. The same bandwidth requirements were used for this test.

In this test, the TEQUILA system was run to allocate resources not only to the subscribed demands but up to the forecast limit, taking availability estimates into account once the anticipated minimum traffic demand had been fulfilled. The LSP configurations are the same as shown in Table 5-10; the PHB bandwidth allocations are shown in Table 5-13. All links were fully loaded. The amount of BE traffic injected into the network from PE1 to PE2 was set slightly higher than the bandwidth reserved for the queue belonging to the BE PHB, in order to bring the link into congestion (i.e., EF = 561, AF1 = 561, BE = 950 Kbps on the PE1 to PE2 route).

Traffic Classes (OA) | Router's Output Interface | Avg. Subscription Demand | Bandwidth allocated to PHBs (with forecast) | Link Bandwidth allocated (with forecast)
EF, BE, AF1 | PE3_Serial 3/0:0 | 288, 1200, 288 | 324, 1351, 324 | 1999
EF, BE, AF1 | PE2_Serial 3/1:0 | 304, 456, 304 | 493, 645, 493 | 1631
EF, BE, AF1 | PE2_Serial 3/0:0 | 439, 743, 439 | 561, 923, 561 | 2045
EF, BE, AF1 | PE1_Serial 3/0:0 | 304, 456, 304 | 493, 645, 493 | 1631
EF, BE, AF1 | PE1_Serial 3/1:0 | 439, 743, 439 | 561, 923, 561 | 2045
EF, BE, AF1 | P3_Serial 3/1:0 | 288, 1200, 288 | 324, 1351, 324 | 1999
EF, BE, AF1 | P3_Serial 3/2:0 | 304, 456, 304 | 493, 645, 493 | 1631
EF, BE, AF1 | P3_Serial 3/0:0 | 304, 456, 304 | 493, 645, 493 | 1631
EF, BE, AF1 | P3_Serial 4/0 | 68, 372, 68 | 86, 448, 86 | 620
EF, BE, AF1 | P3_Serial 4/1 | 68, 372, 68 | 86, 448, 86 | 620
EF, BE, AF1 | P2_Serial 0/0 | 439, 743, 439 | 561, 923, 561 | 2045
EF, BE, AF1 | P2_Serial 0/2 | 68, 372, 68 | 86, 448, 86 | 620
EF, BE, AF1 | P2_Serial 0/1 | 370, 370, 370 | 475, 475, 475 | 1425
EF, BE, AF1 | P1_Serial 0/1 | 439, 743, 439 | 561, 923, 561 | 2045
EF, BE, AF1 | P1_Serial 1/0 | 68, 372, 68 | 86, 448, 86 | 620
EF, BE, AF1 | P1_Serial 0/0 | 370, 370, 370 | 475, 475, 475 | 1425

Table 5-15: Traffic demands and PHB bandwidth allocations in Kbps.


Figure 5-19 gives a graphical representation of Table 5-11 and Table 5-13: the PHB bandwidth allocation and the total bandwidth used on each network link.

As can be seen in Table 5-13, the maximum link capacity has been allocated to PHBs on the interfaces PE2_Serial 3/0:0 and P1_Serial 0/1. We learned that some spare bandwidth must be left for housekeeping traffic such as routing protocols, CORBA communications, etc., and that bandwidth should not be forecast and allocated to PHBs up to the maximum link capacity. This has been incorporated in the latest version of the TEQUILA system.

[Figure: bar chart; x-axis: link number (L1-L8) as specified in Figure A.1; y-axis: bandwidth reserved in Kbps (0-2000); series: EF/AF1, BE, and total, both without and with forecast.]

Figure 5-44: Symmetrical bandwidth reserved for PHBs (EF, AF1, BE) of the outgoing interfaces connected to each end of the network links (in both cases: no forecast and with forecast).


Figure 5-20 shows the edge-to-edge one-way delay results for the three PE1 to PE2 tunnels carrying the EF, AF1 and BE traffic passing through their respective PHBs. As the network was fully congested, the one-way delays were high in all three cases compared with those shown in Figure 5-17. Extra delay may also be caused by the penultimate hop popping approach used by Cisco, which means that in the last hop the data streams may not receive differentiated treatment. In the MPLS environment, packets are directed to the PHBs using the MPLS EXP field; as the MPLS tunnels are terminated at the last hop before the egress point, IP packets are directed to the congested outgoing interface of the last hop, where no packet differentiation is performed.

[Figure: line chart; x-axis: time of day (12:27:00-13:07:19); y-axis: one-way delay in msec (0-250); series: LSP1-EF (mean = 20.8 ms), LSP13-AF1 (mean = 134.5 ms), LSP25-BE (mean = 141.5 ms).]

Figure 5-45: One-way delay experienced by LSPs carrying EF, AF1, and BE traffic (with forecast).


Figure 5-21 shows one-way packet losses experienced by the three LSPs.

[Figure: line chart; x-axis: time of day (12:27:00-13:07:19); y-axis: % one-way packet loss (0-100); series: LSP1-EF (mean = 0.2%), LSP13-AF1 (mean = 1.6%), LSP25-BE (mean = 19.6%).]

Figure 5-46: Packet losses experienced by LSPs carrying EF, AF1, and BE traffic (with forecast).

The packet loss was very low in the tunnel carrying the EF traffic and high in the LSP carrying the BE traffic. The latter is because we injected more traffic into LSP-BE than the bandwidth reserved for the BE PHBs.


Figure 5-22 shows the one-way delay results at Hop2 (P1-P2) for the EF, AF1 and BE traffic. As the network was congested, the one-way delays in all three cases were higher than those experienced in the non-fully-loaded network. The one-way delay comprises the synthetic packet transmission delay on the link's interface (0.51 msec), the link propagation delay, and the queuing delay (about 5.1 msec for AF1 traffic), cf. Figure 5-18. We can deduce from Figure 5-20 and Figure 5-22 that AF1 and BE traffic experienced more delay at the network edge than at any other hop along the tunnel routes.

[Figure: line chart; x-axis: time of day (12:30:09-12:57:05); y-axis: one-way delay in ms (0-20); series: PHB1-EF (mean = 5.7 ms), PHB2-AF1 (mean = 6.6 ms), PHB3-BE (mean = 7.9 ms).]

Figure 5-47: One-way delay for EF, AF1, and BE PHBs in Hop2 (P1-P2) - with forecast.

No packet loss for EF, AF1, BE traffic has been reported in Hop2 (P1-P2). This shows that almost all of the packet losses occurred at the edge where incoming traffic competed for the resources.

5.3.6 Hoses vs. Pipes - SUITE3_2/GC/MPLSTE/BC/5

This test compares Hoses with Pipes. A set of customer subscriptions has been specified as Hose SLSs, as shown in Table 5-14. Three classes of service have been defined for these subscriptions, i.e., Expedited Forwarding (EF), Assured Forwarding (AF1), and Best Effort (BE).


Hose No. | Ingress point | Egress point | Traffic Source | Traffic Destinations | Traffic Class | Delay Req. (ms) | Rate limit (Kbps) | Avg. Bandwidth Demand (Kbps)
1 | PE1 | PE2 or PE3 | LAN1: 194.166.11.10 | 194.166.11.19, 194.166.11.37 | EF | <150 | 1000 | 300
2 | PE2 | PE1 or PE3 | LAN9: 194.166.11.19 | 194.166.11.10, 194.166.11.37 | EF | <150 | 1000 | 300
3 | PE3 | PE1 or PE2 | LAN3: 194.166.11.37 | 194.166.11.10, 194.166.11.19 | EF | <150 | - | 300
4 | PE1 | PE2 or PE3 | LAN1: 194.166.11.14 | 194.166.11.21, 194.166.11.38 | AF1 | <400 | 1000 | 300
5 | PE2 | PE1 or PE3 | LAN9: 194.166.11.21 | 194.166.11.14, 194.166.11.38 | AF1 | <400 | 1000 | 300
6 | PE3 | PE1 or PE2 | LAN3: 194.166.11.38 | 194.166.11.14, 194.166.11.21 | AF1 | <400 | - | 300
7 | PE1 | PE2 or PE3 | LAN1: 194.166.11.1 | 194.166.11.22, 194.166.11.39 | BE | - | 1000 | 300
8 | PE2 | PE1 or PE3 | LAN9: 194.166.11.22 | 194.166.11.1, 194.166.11.39 | BE | - | 1000 | 300
9 | PE3 | PE1 or PE2 | LAN3: 194.166.11.39 | 194.166.11.1, 194.166.11.22 | BE | - | - | 300

Table 5-16: Customer subscriptions and their bandwidth demands in the Hose model.

The TEQUILA system was run to completion for both the Hose and the Pipe model used to accommodate the demands. Two separate tests were performed, one on Hoses and one on Pipes.

• In the first test VPNs were created using the hose model. Nine unidirectional Hose VPNs (shown in Table 5-14) have been constructed to transport EF, AF1 and BE traffic from PE1 to PE2 and PE3, PE2 to PE1 and PE3, and PE3 to PE1 and PE2.

• In the second test VPNs were created using the Pipe model. Eighteen unidirectional pipes have been constructed to transport EF, AF1 and BE traffic between PE1, PE2, and PE3.

The bandwidth allocated to the PHBs of all router interfaces in the first and second tests is shown in Table 5-15.

Router's Output Interface | Traffic Classes (OA) | Bandwidth allocated to PHBs (Hose) | Bandwidth allocated to PHBs (Pipe)
PE3_Serial 3/0:0 | EF, BE, AF1 | 300, 300, 300 | 600, 600, 600
PE2_Serial 3/0:0 | EF, BE, AF1 | 300, 300, 300 | 600, 600, 600
PE1_Serial 3/1:0 | EF, BE, AF1 | 300, 300, 300 | 600, 600, 600
P3_Serial 3/1:0 | EF, BE, AF1 | 600, 600, 600 | 600, 600, 600
P1_Serial 0/1 | EF, BE, AF1 | 600, 600, 600 | 600, 600, 600
P2_Serial 0/0 | EF, BE, AF1 | 600, 600, 600 | 600, 600, 600
P3_Serial 4/0 | EF, BE, AF1 | 420, 420, 420 | 300, 300, 300
P3_Serial 4/1 | EF, BE, AF1 | 420, 420, 420 | 300, 300, 300
P1_Serial 1/0 | EF, BE, AF1 | 300, 300, 300 | 300, 300, 300
P2_Serial 0/2 | EF, BE, AF1 | 300, 300, 300 | 300, 300, 300
P2_Serial 0/1 | EF, BE, AF1 | 180, 180, 180 | 300, 300, 300
P1_Serial 0/0 | EF, BE, AF1 | 180, 180, 180 | 300, 300, 300

Table 5-17: Traffic demands and PHB bandwidth allocations in Kbps.


Figure 5-23 gives a graphical representation of Table 5-15 for the allocation of bandwidth to the EF PHBs in the Hose and Pipe cases. The bandwidth allocations to the AF1 and BE PHBs are the same as for the EF PHBs and are not drawn (Figure 8-1 depicts the UK testbed). With regard to the bandwidth demand, and taking the experimental testbed into account, Figure 5-23 shows that 16.7% more bandwidth is allocated to EF PHBs when the Pipe case is used. The bandwidth saving in the Hose case is gained in the shared parts of the hose paths (e.g., Link 1: PE1_Serial 3/1:0 for Hose 1, Link 3: PE2_Serial 3/0:0 for Hose 2, and Link 7: PE3_Serial 3/0:0 for Hose 3); this is shown in the first part of the figure. Although the second part of Figure 5-23 shows some differences between the two cases in the bandwidth allocated to the non-shared parts of the hose paths (i.e., the core network: P1, P2, P3), in total the same amount of bandwidth is used there in both cases; the differences arise because different routes are used in the Hose case than in the Pipe case to transport the traffic.

Therefore, the Hose approach can provide a significant reduction in bandwidth allocation. With ad-hoc approaches such as Cisco's, hoses need to be broken down into pipes and each pipe LSP must be given a certain amount of bandwidth; this over-allocates bandwidth on the links shared by the pipe LSPs constituting the hose.
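The 16.7% figure can be checked directly against the EF columns of Table 5-15. Summing the Hose column gives 3 x 300 + 3 x 600 + 2 x 420 + 2 x 300 + 2 x 180 = 4500 Kbps, while the Pipe column gives 6 x 600 + 6 x 300 = 5400 Kbps, so

\[ \frac{5400 - 4500}{5400} = \frac{900}{5400} \approx 16.7\% \]

more EF bandwidth is reserved in the Pipe case.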

[Figure: bar chart; x-axis: interface number; y-axis: bandwidth reserved in Kbps (0-800); series: Hose: EF and Pipe: EF. First part: edge interfaces PE1-S3/1:0, PE2-S3/0:0, PE3-S3/0:0; second part: core interfaces P1-S0/0, P1-S0/1, P1-S1/0, P2-S0/0, P2-S0/1, P2-S0/2, P3-S3/1:0, P3-S4/0, P3-S4/1.]

Figure 5-48: Bandwidth reserved for EF PHBs of the outgoing interfaces in Hose and Pipe (no forecast).


5.4 Monitoring Performance Tests and Results

This section presents the test environment followed by the assessment of the Monitoring sub-system. The following table provides references for the tests conducted for the Monitoring sub-system.

Test Id. | Purpose | Platform | Results
SUITE3_2/GC/Mon/BC/1 | Accuracy test on delay and loss measured between two nodes. | UK testbed (see A.1) | Sections 5.4.1, 5.4.2
SUITE3_2/GC/Mon/BC/2 | Accuracy test on delay and loss measured between two edge nodes by performing hop-by-hop measurements. | UK testbed (see A.1) | Sections 5.4.1, 5.4.3
SUITE3_2/GC/Mon/BC/3 | Benefits of the monitoring process granularity at the LSP level. | UK testbed (see A.1) | Sections 5.4.1, 5.4.4.1
SUITE3_2/GC/Mon/BC/4 | Cost of the Monitoring sub-system. | UK testbed (see A.1) | Sections 5.4.1, 5.4.4.2
SUITE3_2/GC/Mon/SCAL/1 | Scalability of Monitoring (in terms of synthetic traffic volume) in the case of edge-to-edge measurements. | UK testbed (see A.1) | Sections 5.4.1, 5.4.5.1
SUITE3_2/GC/Mon/SCAL/2 | Scalability of Monitoring (NetMon) in the case of hop-by-hop measurements. | - | Analytical considerations; Section 5.4.5.2
SUITE3_2/GC/Mon/SCAL/3 | Scalability of ingress/egress NodeMons. | - | Analytical considerations; Section 5.4.5.3
SUITE3_2/GC/Mon/STAB/1 | Stability of the Monitoring subsystem. | - | Analytical considerations; Section 5.4.6

5.4.1 Experimentation Set-up

This section presents experimental results obtained using the UK testbed. Figure 5-13 shows in detail the part of the UK testbed that was used to obtain the monitoring results; the complete map of the UK testbed is depicted in Figure 8-1. All routers are connected to each other via 2 Mbps serial links. Two SX/12 Data Channel Simulators (DCS) are used to introduce delay and loss into the links. A commercial traffic generator (SmartBits-2000) is used to generate a separate set of synthetic traffic in loopback form; the one-way delay results measured by the traffic generator are used to verify the results measured by our monitoring system. A NodeMon with its associated Active Monitoring Agent (AMA) and Passive Monitoring Agent (PMA), together with NetMon, are hosted on a PC attached to PE1. This PC also acts as the NTP server, which is time-synchronized via a GPS time receiver; this provides an accurate, stable clock source. A PC hosting a Node Monitor and its associated AMA and PMA is attached through Ethernet to each of the other three routers. These three PCs are synchronized to the NTP server in client/server mode with an accuracy of 50 microseconds.


[Figure: testbed diagram. Edge routers PE1 and PE2 and core routers P1 and P2 are connected in a chain by 2 Mbps serial links, with DCS-1 and DCS-2 inserted on the core links; NodeMonitors 1-4 with their agents (AMA1/PMA1 to AMA4/PMA4) are hosted on Pentium-4 PCs attached to the routers via 10 Mbps Ethernet; the NetworkMonitor and the NTP server (connected via RS232 to a GPS time receiver and antenna) are co-hosted with NodeMonitor 1; the traffic generator is attached in edge-to-edge loopback. Measurement scopes: edge-to-edge (LSP, IP route, and SLS scope) and hop-by-hop over Hop 1, Hop 2, and Hop 3 (link/PHB scope). AMA: Active Monitoring Agent; PMA: Passive Monitoring Agent; DCS: Data Channel Simulator.]

Figure 5-49: Testbed configuration for Monitoring sub-system tests.

The performance assessment tests are conducted using the test scenarios specified in Table 5-16. A single PHB along the paths is used for scenarios 5 to 8. It should be noted that there is a small amount of background traffic in the network for test scenarios 5 to 8.

Test scenario | IP packet size of synthetic traffic | Mean injection rate of synthetic traffic by AMA | Data summarization period (7) | Delay introduced by DCS-1 | Delay introduced by DCS-2 | % mean packet loss introduced by DCS-1 | % mean packet loss introduced by DCS-2
5 | 128 bytes | 2 PPS (8) | - | 3 ms | 7 ms | 0.0 | 0.0
6 | 128 bytes | 4 PPS | 5 minutes | 0 | 0 | 0.0 | 3.1
7 | 128 bytes | 4 PPS | 5 seconds | 3 ms | 7 ms | 0.0 | 0.0
8 | 128 bytes | 4 PPS | 5 minutes | 0 | 0 | 2.0 | 3.1
9 | 128 bytes | 4 PPS | 2 seconds | 0 | 0 | 0 | 0
10 | 128 bytes | 4 PPS | - | 0 | 0 | 0 | 0

Table 5-18: Experimental scenarios for Monitoring sub-system tests.

5.4.2 Delay/Loss Accuracy – SUITE3_2/GC/Mon/BC/1

As the network operation relies on the monitoring system, the monitoring information must be accurate and reliable; i.e., traffic and performance-related metrics must be measured with great accuracy.

(7) NodeMon averages the individual raw data measured by the AMA during the summarization period, in order to comply with the principle set out in Section 3.6 of D1.4 - Scalability and Stability Analysis.
(8) PPS: Packets Per Second.


5.4.2.1 One-way Delay Accuracy (Test Scenario 5)

A test was set up to inject synthetic traffic from AMA1 on NodeMon 1 to AMA4 on NodeMon 4 (i.e., edge-to-edge from PE1 to PE2) to measure one-way delay using test scenario 5 specified in Table 5-16. AMA1 timestamps the packets it transmits and AMA4 timestamps the packets it receives. At the same time, the traffic generator is used in loopback form to generate synthetic traffic, which is directed to PE1 and received from PE2 (i.e., edge-to-edge from PE1 to PE2). The traffic generator is configured to inject synthetic traffic with a profile similar to the traffic injected by AMA1, and it timestamps the packets it transmits and receives using its internal clock. Figure 5-25 shows the raw one-way delay measurements as reported by the monitoring subsystem and by the traffic generator. The two result sets largely overlap and are almost identical, which shows the good accuracy of the monitoring results even when the monitoring agents are located outside the routers. Based on these results, one-way delays resulting from active measurements are suitable for use in the short- and long-term dynamic traffic engineering operations of the network as well as for SLS monitoring.

[Figure: line chart; x-axis: time of day (9:50:00-10:50:00); y-axis: one-way delay in ms (0-20); series: Traffic Generator, Monitoring System.]

Figure 5-50: One-way delay accuracy result.

5.4.2.2 One-way Packet Loss Accuracy (Test Scenario 6)

The second test was set up to inject edge-to-edge synthetic traffic from AMA 1 to AMA 4 (PE1 to PE2) to measure one-way packet loss using test scenario 6 specified in Table 5-16. The summarization period for calculating the sampled packet loss ratio was defined to be 5 minutes. Figure 5-26 shows the one-way packet loss as measured by our system (for 5-minute intervals and as an overall average), and as programmed in the DCS. The oscillation of the measured values across intervals is due to the fact that synthetic packets are generated randomly and losses are introduced randomly, giving different packet loss values in different intervals. Overall, the measured mean loss (3.28%) is close to the mean loss introduced by DCS-2 (3.1%).


[Figure: line chart; x-axis: time of day (16:48:00-9:36:00); y-axis: % packet loss (0-10); series: % packet loss measured per 5-minute interval; mean loss measured: 3.28%, mean loss introduced: 3.10%.]

Figure 5-51: One-way packet loss accuracy result.

Based on the results obtained, it can be inferred that one-way packet loss resulting from active measurements is suitable only for longer-term path monitoring and SLS monitoring, because of these oscillations over shorter time periods. We recommend using passive monitoring of packet discards when short-term dynamic management of PHBs is required.
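The interval-to-interval oscillation can be reproduced with a minimal simulation (Python; illustrative only, not the monitoring code): packets are dropped independently with the 3.1% probability programmed into DCS-2, and the loss ratio is summarised over 5-minute periods at 4 PPS, as in scenario 6.

import random

random.seed(1)      # fixed seed for repeatability
P_LOSS = 0.031      # mean loss programmed into DCS-2
SENT = 4 * 300      # 4 PPS over a 5-minute summarization period

for interval in range(6):
    lost = sum(random.random() < P_LOSS for _ in range(SENT))
    print(f"interval {interval}: {100 * lost / SENT:.2f}% loss")

The per-interval values scatter around 3.1% even though the underlying loss probability is constant, which is why short summarization periods are unsuitable for short-term PHB management.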

5.4.3 Edge-to-edge vs. Hop-by-hop Accuracy - SUITE3_2/GC/Mon/BC/2

5.4.3.1 Edge-to-edge versus Hop-by-hop One-way Delay Accuracy (Test Scenario 7)

The third test was set up to inject edge-to-edge synthetic traffic from AMA 1 to AMA 4 (PE1 to PE2) to measure one-way delay using test scenario 7 specified in Table 5-16. At the same time, measurement tests were also set up between all node pairs, that is Hop 1 (PE1 to P1), Hop 2 (P1 to P2), and Hop 3 (P2 to PE2), measuring one-way delay using the hop-by-hop method. The Network Monitor aggregates the hop-by-hop results in order to calculate edge-to-edge aggregated results. Figure 5-27 shows the one-way delay experienced by packets over each hop, edge-to-edge, and as aggregated hop-by-hop results; the hop-by-hop, aggregated hop-by-hop, and edge-to-edge measurements were all performed by our monitoring system.


[Figure: line chart; x-axis: time of day (16:48:00-09:36:00); y-axis: one-way delay in ms (0-20); series: Hop 1, Hop 2, Hop 3, Edge-to-Edge, Aggregated Hop-by-Hop.]

Figure 5-52: Edge-to-edge and hop-by-hop one-way delay accuracy results.

The mean difference between the edge-to-edge and the aggregated hop-by-hop result is 1.1 milliseconds. For the edge-to-edge measurement, synthetic packets traverse Ethernet segments twice and are therefore processed twice (in AMA1 and AMA4), while in the hop-by-hop measurements synthetic packets traverse Ethernet segments six times and are therefore processed six times; this introduces the observed 1.1 millisecond delay difference. If the AMAs were embedded in the routers, the delay difference would be considerably reduced. Overall, comparable results are obtained by both methods, which makes the proposed hop-by-hop measurement method more attractive because it enhances monitoring scalability.

5.4.3.2 Edge-to-edge versus Hop-by-hop One-way Packet Loss Accuracy (Test Scenario 8)

We conducted similar tests to measure the impact of the hop-by-hop approach on the accuracy of the one-way packet loss measurements, using scenario 8 as specified in Table 5-16. Figure 5-18 shows the one-way packet loss experienced over each hop, edge-to-edge, and as aggregated hop-by-hop results. The averages of the measured results were 5.26% for edge-to-edge, 2.04% for Hop 1, 0.0% for Hop 2, 3.19% for Hop 3, and 5.16% for the aggregated hop-by-hop result. The difference of 0.1% is negligible and can be attributed to rounding errors. Overall, the hop-by-hop method gave results comparable to the edge-to-edge method, with the advantage of enhancing the monitoring system's scalability.
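A minimal sketch of how per-hop results can be composed into edge-to-edge estimates (Python; assuming independent per-hop losses, which is not necessarily NetMon's exact aggregation rule):

def aggregate_delay_ms(hop_delays):
    # one-way delays simply add along the path
    return sum(hop_delays)

def aggregate_loss(hop_loss_ratios):
    # survival probabilities multiply when losses are independent
    survive = 1.0
    for p in hop_loss_ratios:
        survive *= 1.0 - p
    return 1.0 - survive

# per-hop loss ratios measured in test scenario 8
print(aggregate_loss([0.0204, 0.0, 0.0319]))  # -> 0.0516, i.e. the 5.16% aggregate

Applied to the measured per-hop losses, this reproduces the 5.16% aggregated hop-by-hop figure reported above.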


[Figure: line chart; x-axis: time of day (16:48:00-9:36:00); y-axis: % packet loss (0-10); series: Edge-to-edge, Hop 1, Hop 2, Hop 3, Aggregate Hop-by-hop.]

Figure 5-53: Edge-to-edge and hop-by-hop one-way packet loss accuracy result.

5.4.4 Benefits/Cost – SUITE3_2/GC/Mon/BC/3 and SUITE3_2/GC/Mon/BC/4

Experiments were conducted at the system level to determine the benefits of our monitoring subsystem in the operation of traffic-engineered networks, that is, the degree of improvement in network resource utilization attributable to the existence of a real-time monitoring system. A test was conducted to show the ability of the monitoring subsystem to perform measurements at LSP granularity. The costs associated with the monitoring subsystem are also discussed below.

5.4.4.1 The Benefit of Monitoring Process Granularity at the LSP Level - One-way Delay and Throughput on LSPs - SUITE3_2/GC/Mon/BC/3

This experiment was conducted in order to measure one-way delay and throughput on different LSPs. For this experiment, three LSP tunnels were set up to carry Expedited Forwarding (EF), Assured Forwarding (AF1), and Best Effort (BE) traffic from PE1 to PE2. Low Latency Queuing (LLQ) was configured at the output interfaces of the routers that the tunnels transit. Two classes were defined with 600 Kbps of bandwidth each, and a third class with 700 Kbps. EF traffic is directed to the priority queue, while AF1 and BE traffic use normal CBWFQ.

Three traffic generation sources are used to transmit EF and AF1 traffic at 600 Kbps and BE traffic at 800 Kbps (i.e., the BE traffic is subjected to loss, since more BE traffic is injected into the network than the bandwidth reserved for it, in order to bring the link near congestion). The packets from all three traffic generation sources are 128 bytes long including headers. Traffic belonging to these classes is not regulated at the ingress point. The synthetic packets are configured to be of similar size to the user traffic generated by the traffic generators; this ensures that CBWFQ gives the same treatment to both the synthetic traffic and the user traffic, so the delay measured for synthetic traffic is similar to the delay experienced by user traffic. Three active monitor jobs were set up to monitor one-way delay on these LSP tunnels (test scenario 9 in Table 5-16). As shown in Figure 5-29, on average EF, AF1, and BE traffic experienced 6.0, 42.7, and 144.9 ms of delay respectively. This experiment shows the benefit of our monitoring system and its ability to monitor the configured classes of service at the LSP level.


[Figure: line chart; x-axis: time of day (16:30:00-22:30:00); y-axis: one-way delay (ms) (0-200); series: LSP1-EF (mean = 6.0 ms), LSP2-AF1 (mean = 42.7 ms), LSP3-BE (mean = 144.9 ms).]

Figure 5-54: One-way delay observed on three different LSPs.

At the same time, three passive monitor jobs were set up to monitor throughput on these LSP tunnels. As shown in Figure 5-30, the average measured throughputs for the EF, AF1, and BE LSP tunnels are 602.9, 602.7, and 696.3 Kbps respectively. It should be noted that the summarisation period is 2 seconds for the one-way delay results (i.e., measured one-way delay values are averaged over 2-second intervals, each representing a data point in Figure 5-29), and the read-out period for retrieving the throughput values from the router is 10 seconds (each read-out representing a data point in Figure 5-30).

[Figure: line chart; x-axis: time of day (16:30:00-22:30:00); y-axis: throughput (kbps) (500-800); series: LSP1-EF (mean = 602.9 kbps), LSP2-AF1 (mean = 602.7 kbps), LSP3-BE (mean = 696.3 kbps).]

Figure 5-55: Throughput measured on three different LSPs.


5.4.4.2 Cost of Monitoring - SUITE3_2/GC/Mon/BC/4

The costs associated with the monitoring subsystem are the introduction of synthetic traffic into the network and the communication overhead of transferring node/network-level measurements to the related management entities. A further cost is the deployment of reliable and accurate clock synchronization technology for the PCs hosting the Node Monitors and their associated AMAs, required for one-way delay measurements. For the edge-to-edge method, only the edge Node Monitors need to be synchronized; for the hop-by-hop method, all pairs of Node Monitors that perform per-hop measurements need to be synchronized.

We encountered problems in time-synchronizing the NodeMons and AMAs to obtain accurate one-way delay results. NTP synchronization was inaccurate even on a network utilized at less than 5%, yielding "periodic and jittery" one-way delay results. We had to dedicate a separate Ethernet segment for NTP traffic (not shown in Figure 5-24), connected to all the PCs through second Ethernet cards. The NTP server is thus time-synchronized via a GPS time receiver, and NTP is used to synchronize the Node Monitors. This was necessary to ensure that NTP traffic was not subjected to the network load, packet loss and delay introduced by the DCSs. In a real-world environment, a practical solution would be to have a dedicated NTP server synchronized with a GPS time receiver and a dedicated NTP Ethernet segment at every location that hosts a number of routers.

5.4.5 Scalability – SUITE3_2/GC/Mon/SCAL/1,2,3

Scalability assessment of a monitoring sub-system must show the behaviour of the system when subjected to an extending network topological scope, increasing load, increasing measurement sampling rates, etc. To qualify how scalable the monitoring system is, the following scalability assessments are performed.

5.4.5.1 Scalability in terms of Amount of Synthetic traffic - SUITE3_2/GC/Mon/SCAL/1 (Test scenario 10)

The scalability assessment in terms of the load (packet rate) introduced by the hop-by-hop measurement method compared with the edge-to-edge method is shown in Figure 5-31. The network link interface used supports a maximum of eight PHBs, while a considerably larger number of LSPs can traverse the link. The injection rate is set to four packets per second in both the edge-to-edge and the hop-by-hop methods. The graph shows that the number of synthetic packets crossing the link is equal in both methods if the number of LSPs and PHBs is less than or equal to eight. Increasing the number of LSPs traversing the link increases the number of synthetic packets linearly in the edge-to-edge method. This figure shows that the scalability claim made for the hop-by-hop method over the edge-to-edge method, described in Section 3.6 of D1.4 - Scalability and Stability Analysis, is justified.
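The per-link synthetic load of the two methods can be sketched as follows (Python; the 4 PPS rate is from test scenario 10 and the eight-PHB cap is that of the interface used; this is a model, not measurement code):

RATE = 4        # synthetic packets per second per active monitor
MAX_PHBS = 8    # the link interface supports at most eight PHBs

def edge_to_edge_pps(n_lsps):
    # one active monitor per LSP crossing the link: grows linearly
    return n_lsps * RATE

def hop_by_hop_pps(n_phbs):
    # one active monitor per PHB on the link: capped by the interface
    return min(n_phbs, MAX_PHBS) * RATE

for n in (4, 8, 16, 50):
    print(n, edge_to_edge_pps(n), hop_by_hop_pps(n))

Up to eight LSPs/PHBs the two loads coincide; beyond that, the edge-to-edge load keeps growing linearly while the hop-by-hop load stays at 32 PPS, which is the behaviour plotted in Figure 5-31.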

[Figure: line chart, logarithmic y-axis; x-axis: number of PHBs per interface / number of LSPs crossing a link (0-50); y-axis: packets per second (PPS) (1-1000); series: LSPs, PHBs.]

Figure 5-56: Scalability of hop-by-hop versus edge-to-edge methods.


5.4.5.2 Scalability of Network Monitoring as Centralised Component for Hop-by-hop Result Aggregation - SUITE3_2/GC/Mon/SCAL/2

It was not possible to perform this test in a limited testbed environment. It might be possible to conduct this test in a simulation environment.

Regarding the centralised components of the monitoring subsystem, it should be noted that the CORBA notification service is a centralised network service hosted on a single node/PC; it is used to dispatch event information from the point of measurement to the monitoring clients. A single event channel is created for each network element (i.e., router). The number of events pushed to the notification service is similar for all nodes, hence the total event throughput processed by the notification server grows linearly with the physical network size. For very large networks, the performance of the PC hosting the notification server and the bandwidth of the links around it could become limiting factors. An alternative solution is a federated notification system, with a notification server per domain of nodes. In this case, monitoring clients would have to register with all the notification servers, which would share the overall event throughput, resulting in better scalability.

5.4.5.3 Scalability of Ingress/Egress Node Monitoring - SUITE3_2/GC/Mon/SCAL/3

For bi-directional operation, the processing effort (per second) required at each Node Monitor is of the order of $O\left(2\,(A_m P_r + P_m R_o)\right)$, where $A_m$ is the number of active monitors and $P_m$ the number of passive monitors in operation, $P_r$ is the mean number of synthetic packets injected per second by each active monitor, and $R_o$ is the number of read-outs per second for each passive monitor. $A_m$ is equal to the number of LSPs originating from an edge node in the edge-to-edge method, and to the number of PHBs in the node in the hop-by-hop method. This expresses that the processing effort is proportional to the above parameters; therefore, the Node Monitor operations scale if these parameters remain within defined boundaries. It is estimated that the typical number of monitors running on a single edge Node Monitor PC will be of the order of tens. Experiments indicate that this number of monitors would not place an unmanageable load on the Node Monitor PC and would not increase the response time. It is therefore concluded that the performance of individual Node Monitor PCs would not be a limiting factor in the scalability of the monitoring system.
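As an illustration with assumed values consistent with the tests above ($A_m = 10$ active monitors at $P_r = 4$ packets/s, and $P_m = 8$ passive monitors read out every 10 s, i.e. $R_o = 0.1$/s):

\[ 2\,(A_m P_r + P_m R_o) = 2\,(10 \cdot 4 + 8 \cdot 0.1) = 81.6 \]

measurement-handling operations per second, which is well within the capability of a single Node Monitor PC.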

5.4.6 Stability – SUITE3_2/GC/MON/STAB/1

Stability assessment must show that a system, given its specified dynamics/responsiveness, operates in a way that drives the network to a stable state. As traffic engineering reacts to the measurement information provided by the monitoring system, the monitoring system should employ a reliable data transport mechanism for reporting events and statistics, and must ensure that the network does not oscillate and become unstable as a consequence of its function or of lost measurement data.

The monitoring subsystem uses the CORBA notification server, which employs a reliable data transport mechanism, in order to address the above concern. The event notification method ensures that short-term oscillations (spikes) in the network are not passed to the traffic engineering components, where they might otherwise cause network oscillation and instability.


5.5 IP-TE Performance Tests and Results

The following table provides references for the tests conducted for the COPS-based dynamic provisioning scheme in the context of the IP-based TE sub-system.

Test ID | Purpose | Platform | Result
SUITE3_4/FTR&D/IPTE/DP/COPS/PERF/1 | Evaluate the time needed to configure devices enabling COPS entities and compare the results to what could be obtained by other tools (CLI…). | French testbed (see A.2) | Section 5.5.1
SUITE3_4/FTR&D/IPTE/DP/COPS/PERF/2 | Evaluate, when COPS is enabled in the device, the time needed for the router to process a request (the time between the reception of the DEC message and the sending of the RPT message related to this decision). | French testbed (see A.2) | Section 5.5.1
SUITE3_4/FTR&D/IPTE/DP/COPSPR/PERF/1 | Evaluate the conformance of the data sent by COPS entities with what could be configured manually with CLI commands. | French testbed (see A.2) | Section 5.5.1
SUITE3_4/FTR&D/IPTE/DP/COPSPR/PERF/2 | Evaluate the time needed to configure a network with n routers enabling COPS-PR. | French testbed (see A.2) | Section 5.5.1
SUITE3_4/FTR&D/IPTE/DP/COPSPR/PERF/3 | Evaluate the time for processing the PEP's requests by the PDP (or LPDP). | French testbed (see A.2) | Section 5.5.1
SUITE3_4/FTR&D/IPTE/DP/COPSPR/PERF/4 | Evaluate the time needed for a PEP-enabled device to inform the PDP about a state change. | French testbed (see A.2) | Section 5.5.1
SUITE3_4/FTR&D/IPTE/DP/COPSPR/PERF/5 | Evaluate the number of requests that can be treated by the PDP. | French testbed (see A.2) | Section 5.5.1

5.5.1 COPS-based Configuration Results

With reference to the French testbed and the COPS test set-up (see Figure 8-2), the times needed to process a request or to send a decision were measured and are presented in the table below.

Router | Operation | Entity | Δt REQ→DEC (s) | Δt →RPT (s)
Router n°1 | Initial configuration | PEP | 0.078 | 0.062
Router n°1 | Initial configuration | PDP | 0.422 | 0
Router n°1 | Configuration modification (70 commands) | PEP | - | 1.75
Router n°1 | Configuration modification (70 commands) | PDP | - | 1.906
Router n°2 | Initial configuration | PEP | 0.063 | 0.047
Router n°2 | Initial configuration | PDP | 0.125 | 0
Router n°2 | Configuration modification (50 commands) | PEP | - | 1.735
Router n°2 | Configuration modification (50 commands) | PDP | - | 0.842
Router n°3 | Initial configuration | PEP | 0 | 0.047
Router n°3 | Initial configuration | PDP | 0.109 | 0
Router n°3 | Configuration modification (30 commands) | PEP | - | 2.765
Router n°3 | Configuration modification (30 commands) | PDP | - | 2.859

Table 5-19: COPS configuration times.

As the above measurements indicate, the processing of 50 commands takes 1.735 s (PEP) / 0.842 s (PDP). Overall, the configuration operations take less than 5 minutes for 12 routers. It should be noted that the reported measurements depend on the time needed to open a 'telnet' connection.

5.6 Policy Management Performance Tests and Results

This section presents the assessment of the Policy Management sub-system. The following table provides references for the tests conducted for the Policy Management sub-system.

Test ID | Purpose | Platform | Result
SUITE3_5/UniS/POL/BC/1 | Assessing the benefits and costs of making our system policy-driven. | NS Simulator | Section 5.6.1
SUITE3_5/UniS/POL/SCAL/1 | Assessing the scalability of the policy management regarding the number of entries in PolSS. | NS Simulator | Section 5.6.2
SUITE3_5/UniS/POL/STAB/1 | Assessing the stability of the policy management (conflict detection). | NS Simulator | Section 5.6.3
SUITE3_5/UniS/POL/USAB/1 | Assessing the usability of the policy management by demonstrating the use of the high-level policy language. | NS Simulator | Section 5.6.4

5.6.1 Benefits/Cost - SUITE3_5/UniS/POL/BC/1

The benefits of making our system policy-driven stem mostly from the fact that policies in our system are in effect enhanced management logic for the components they influence, produced and executed on the fly by the policy consumers. Consequently, this logic is not hardwired in the components: the code realising the policies entered by the administrator is dynamically generated and interpreted, providing the flexibility to add, change or remove logic while the system is up and running. For example, the policy rule "If OA==EF and Ingress==4 and Egress==6 then Setup LSP 4-9-7-6 2 Mbps" is translated into a script described by the following pseudocode:

TT_OA: the set of TTs belonging to OA
for each tt_i ∈ TT_OA we have:
    v_ingress, v_egress: ingress and egress nodes of tt_i
    b(tt_i): bandwidth requirement of tt_i

for each tt_i ∈ TT_EF do
    if ((v_ingress == 4) and (v_egress == 6))
        add_LSP("4-9-7-6", 2000)
        b(tt_i) = b(tt_i) - 2000
    else
        policy not executed - TT not found


Note that in our implementation the policy consumer produces TCL code, because of the ease with which TCL interfaces with the C/C++ in which the influenced components are implemented. If our system were not policy-based, the logic of this policy would have to be hardwired in the TE components and parameterised in order to offer an interface to another component that could statically set these parameters according to the operator's requirements. Thus, for all the policies that can be entered in our system, the logic would have to be thought out in advance and hard-coded, resulting in a less flexible and monolithic system.
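The generate-and-interpret step can be sketched as follows (a minimal illustration in Python rather than the TCL our consumer emits; the rule encoding and the names generate_script and add_LSP are hypothetical):

POLICY = {"oa": "EF", "ingress": 4, "egress": 6,
          "route": "4-9-7-6", "bw_kbps": 2000}

def generate_script(p):
    # the consumer emits source text for the rule at run time
    return (f"if tt['ingress'] == {p['ingress']} and tt['egress'] == {p['egress']}:\n"
            f"    add_LSP('{p['route']}', {p['bw_kbps']})\n"
            f"    tt['bw'] -= {p['bw_kbps']}\n")

def add_LSP(route, bw_kbps):
    print(f"explicit LSP {route} set up with {bw_kbps} Kbps")

tt = {"ingress": 4, "egress": 6, "bw": 5000}  # a hypothetical EF traffic trunk
exec(generate_script(POLICY))  # interpreted on the fly, not hardwired in the component

Because the rule is turned into code only when it is consumed, it can be added, changed or removed while the system runs, which is the benefit argued above.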

On the other hand, with a policy-based approach the logic realised by policies does not go through the rigid analysis, design, implementation, testing and deployment cycle of "hard-wired" long-term logic; this flexibility can lead to inconsistencies due to conflicting policies. Conflict detection and resolution mechanisms should therefore be deployed in order to keep the system stable.

5.6.2 Scalability - SUITE3_5/UniS/POL/SCAL/1

In order to test whether the Policy Management sub-system can cope with a large number of entered policies, we ran a test checking the amount of memory consumed when they are stored in the policy repository. Since our system has a mechanism that takes advantage of reusable conditions and actions of policy rules, we performed this test twice. The first time we entered policies with rule-specific conditions and actions, creating new objects for every rule; the second time we entered policy rules containing reusable conditions or actions already stored in the repository, creating each time only an object with a pointer to the reusable one. The following graph depicts the memory allocation of the policy repository in both cases.

As can clearly be observed, in the case of policy rules with reusable components there is an increasing gain in memory allocation as the number of added policy rules increases.

[Figure: line chart; x-axis: number of policies (1-51); y-axis: memory allocation (KB) (15000-40000); series: rule-specific, reusable.]

Figure 5-57: Memory allocation with regard to the number of policies.


5.6.3 Stability - SUITE3_5/UniS/POL/STAB/1

This test examines the stability of the system when numerous policies are entered in the Policy Management Tool. Since policies are entered dynamically by the administrator, inconsistencies are likely to arise from policy rules with conflicting requirements. In our system we deployed a static conflict detection mechanism, presented in D1.2 (Sec 8.1.3.3.2, p. 148), in order to prevent the introduction of a new policy that conflicts with one already entered in the system. For example, suppose the following policy rule has already been added: if Ordered Aggregate == EF then Allocate Network Resources > 30%, and then another policy rule is to be added: if Ordered Aggregate == EF then Allocate Network Resources < 20%, which conflicts with the previous rule. Our conflict detection mechanism detects the inconsistency by searching the repository for all rules that belong to the same action class (here the AllocateNwResources action class) but with different parameters, and then checking the conditions of these rules for an overlap; in this case the conditions are identical, meaning that both actions would be executed at the same time (when the conditions are true). The mechanism reports to the administrator that the policy rule he/she is about to enter conflicts with the existing rule, so that appropriate action can be taken, i.e. change the new rule, delete the previous one, etc.
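A minimal sketch of this static check (Python; the rule encoding is hypothetical, and the condition-overlap test is simplified to equality, whereas the real mechanism checks overlap more generally):

def conflicts(rule_a, rule_b):
    # same action class, different parameters, overlapping conditions
    return (rule_a["action_class"] == rule_b["action_class"]
            and rule_a["params"] != rule_b["params"]
            and rule_a["conditions"] == rule_b["conditions"])

existing = {"action_class": "AllocateNwResources", "params": "> 30%",
            "conditions": {"OrderedAggregate": "EF"}}
candidate = {"action_class": "AllocateNwResources", "params": "< 20%",
             "conditions": {"OrderedAggregate": "EF"}}

if conflicts(existing, candidate):
    print("conflict detected: report to the administrator")  # this branch is taken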

Of course, our mechanism detects potential conflicts statically, while there remains a possibility for conflicts to appear at run-time. Such run-time conflicts cannot be detected at the stage where a policy rule is added, but only when the policies are executed, which makes the task even more difficult. Run-time conflict detection and resolution remains an open issue in the research community dealing with policy-based management.

5.6.4 Usability - SUITE3_5/UniS/POL/USAB/1

The usability tests demonstrate the usability of the policy management system by showing its behaviour when the administrator enters policy rules in the Policy Management Tool, and how this affects the operation of the Traffic Engineering components. In the following section, two examples of Traffic Engineering policies are presented, depicting their effect on the behaviour of the TE components.

In order to demonstrate the results of the enforcement of TE policies, we used a 10-node (nodes 0-9), 36-link random topology and a traffic load of 70% of the total throughput of the network. Our first example (P1) concerns a policy rule that creates an explicit LSP through the nodes 4, 9, 7, 6, with the bandwidth of the TT associated with this LSP being 2 Mbps. The policy rule is entered with the following syntax:

If OA==EF and Ingress==4 and Egress==6 then Setup LSP 4-9-7-6 2 Mbps (P1)

The second example (P2) of a policy rule concerns the effect of the cost function exponent on the capacity allocation of the network. As mentioned earlier, increasing the cost function exponent favours the optimisation objective that avoids overloading parts of the network. So, if the administrator would like to keep the load of every link below a certain point, he/she should enter the following policy rule, again using our policy notation:

If maxLinkLoad > 80% then Increase Exponent by 1 (P2)

As can be observed from Figure 5-33, the enforcement of the policy rule caused the optimisation algorithm to run four times, until the maximum link load utilisation at the final step dropped below 80%. The exponent value that achieved the policy objective was n = 4.
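The enforcement loop of P2 can be sketched as follows (Python; the per-exponent load values are hypothetical, chosen only to mirror the shape of Figure 5-33):

# hypothetical maximum link load (%) after re-running the ND
# optimisation with cost-function exponent n
MAX_LOAD = {1: 90.4, 2: 86.1, 3: 82.5, 4: 78.8, 5: 77.9, 6: 77.5}

n = 1
while MAX_LOAD[n] > 80:   # policy P2 condition: maxLinkLoad > 80%
    n += 1                # policy P2 action: Increase Exponent by 1
print(f"policy objective met at n = {n}")  # -> n = 4, as in the test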


[Chart: maximum link load (%) plotted against the cost function exponent n, for n = 1 to 6.]

Figure 5-58: Effect of the cost function exponent on the maximum link load utilisation

For the purpose of demonstrating the effects of the enforcement of policies in our system, we implemented a TE-GUI, shown in Figure 5-59. It depicts the topology of the network for which the ND component is calculating a new configuration. The GUI draws the links of the topology in different colours according to load utilisation, together with all the LSPs created for every OA. It can also display overall statistics of the load distribution for every link per OA, as well as statistics for every step of the ND algorithm, i.e. average link utilisation, link load standard deviation, maximum link load, running time, etc. In the following figure, two snapshots of the TE-GUI are depicted, one before and one after the enforcement of the above policies. As can be seen, the enforcement of the policies caused the link load to fall below 80% (before the enforcement of the policies, link 5->6 was loaded at over 90%), and the LSP created by P1 is also drawn.


Figure 5-59: TE-GUI snapshots (a) before and (b) after the enforcement of policies P1 and P2

5.7 IFT Performance Tests and Results

The following table provides references for the tests conducted to assess the performance of an IFT-based network element.


Test ID / Purpose / Platform / Results:

SUITE3_6/FTR&D/MPLSTE/IFT/PERF
Purpose: Verify that the different QoS flows are carried through the IFT network according to their DSCP characteristics. It includes the tests below:

SUITE3_6/FTR&D/MPLSTE/IFT/PERF/00 - Performance of the ATM Linux interfaces. Platform: French IFT testbed (see A.3). Results: Section 5.7.1

SUITE3_6/FTR&D/MPLSTE/IFT/PERF/01 - Performance of the network bypassing the IFTs. Platform: French IFT testbed (see A.3). Results: Section 5.7.2

SUITE3_6/FTR&D/MPLSTE/IFT/PERF/02 - Performance of the network with operational IFTs. Platform: French IFT testbed (see A.3). Results: Section 5.7.3

SUITE3_6/FTR&D/MPLSTE/IFT/PERF/03 - Performance of the network with operational IFTs in a congestion situation. Platform: French IFT testbed (see A.3). Results: Section 5.7.4

SUITE3_6/FTR&D/MPLSTE/IFT/COMP
Purpose: Compare the performance of the IFT and Linux technologies. It includes the tests below:

SUITE3_6/FTR&D/MPLSTE/IFT/COMP/1 - Performance of the Linux data path. Platform: French IFT testbed (see A.3). Results: Section 5.7.5

SUITE3_6/FTR&D/MPLSTE/IFT/COMP/2 - Performance of the IFT data path. Platform: French IFT testbed (see A.3). Results: Section 5.7.6

5.7.1 ATM Interfaces Performance – SUITE3_6/FTR&D/MPLSTE/IFT/PERF/00

Test Identifier: SUITE3_6/FTR&D/MPLSTE/IFT/PERF/00

Type of Test: Sub-system Performance Assessment

Version: 1.0

Test Summary: Measurement of the ATM fibre interfaces between hostA and hostB

Benefits: Provides reference values and validates the netperf script

Test Environment:

Test location: FTR&D/Issy-les-Moulineaux

Network Topology: LER-LSR-LER (see A.4, figure 8-14)

Traffic Load: The netperf script generates 5 TCP flows of different QoS from hostA to hostB

Test procedure:

Initial Conditions:

Direct full-duplex connection between ATM interfaces of the 2 end hosts

Checks to be performed in test:

Netperf script logs throughput values for each flow and different packet sizes

Verdict Criteria:

Metric: Netperf results

Results comments:

Page 166:  · D3.4: Final System Evaluation Page 1 of 204 TEQUILA Consortium – October 2002 Project Number : IST-1999-11253-TEQUILA Project Title : Traffic Engineering for Quality of Serv

D3.4: Final System Evaluation Page: 166 of 204

TEQUILA Consortium - October 2002

Flow     Mean throughput, 64/1472 range (Mbps)
BE       26.82
EF       26.62
AF11     26.73
AF12     26.88
AF13     27.02
Total    134.07

Total throughput:

The maximum throughput is 86.5% of the available bandwidth (155 Mbit/s).

Relative throughput:

The flows are generated equally by the netperf wrapper script (less than 1% variation).

Figure 5-60: Throughput by flows (IFT test perf/00).
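
A wrapper of the kind used here can be sketched as follows, in Python; the DSCP-to-ToS values and the way the ToS byte is applied depend on the netperf build and system configuration, so the marking step is only indicated, not shown as an actual netperf option:

import subprocess

# ToS bytes for the five flows (DSCP << 2); illustrative values.
FLOWS = {"BE": 0x00, "EF": 0xB8, "AF11": 0x28, "AF12": 0x30, "AF13": 0x38}

def run_flows(host="hostB", msg_sizes=(64, 512, 1472), secs=10):
    for name, tos in FLOWS.items():
        # ToS/DSCP marking of the flow is assumed to be configured separately
        # (e.g. via iptables), as the exact netperf option varies by build.
        for size in msg_sizes:
            out = subprocess.run(
                ["netperf", "-H", host, "-l", str(secs), "-t", "TCP_STREAM",
                 "--", "-m", str(size)],
                capture_output=True, text=True)
            print(name, hex(tos), size)   # log flow, marking and message size
            print(out.stdout)             # netperf throughput report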

5.7.2 Network Performance with Looped IFTs – SUITE3_6/FTR&D/MPLSTE/IFT/PERF/01

Test Identifier: SUITE3_6/FTR&D/MPLSTE/IFT/PERF/01

Type of Test: Sub-system Performance Assessment

Version: 1.0

Test Summary: Measurement of the network throughput when the IFTs are looped

Benefits: Provides reference values.

Test Environment:

Test location: FTR&D/Issy-les-Moulineaux

Network Topology: LER-LSR-LER (see A.4, figure 8-14)

Traffic Load: The netperf script generates 5 TCP flows of different QoS from hostA to hostB

Test procedure:


Initial Conditions:

The IFTs are statically configured: all input VCs are looped to output VCs.

No MPLS software is running; IP packets cross the network without any modification.

Checks to be performed in test:

Netperf script logs throughput values for each flow and different packet sizes

Verdict Criteria:

Metric: Netperf results

Results comments:

Flow     Mean throughput, 64/1472 range (Mbps)
BE       14.90
EF       15.74
AF11     14.53
AF12     15.72
AF13     15.94
Total    76.82

Total throughput:

The maximum throughput stays close to 77 Mbit/s.

Because of the server configuration, TCP frames and TCP ACKs are bottlenecked at each IFT input queue. Thus, losses of both useful data and control packets trigger retransmissions, lowering the throughput. Full-duplex bandwidth could be reached with a distributed configuration, but this requires twice the number of IFT boards. This value is taken as the reference maximum throughput.

Relative throughput:

All generated flows reaching the ingress node equally share a single 155 Mbit/s switch port. The IDT switch encounters no congestion and all flows are treated equally.

Figure 5-61: Throughput by flows (IFT test perf/01).


5.7.3 Network Performance with Operational IFTs – SUITE3_6/FTR&D/MPLSTE/IFT/PERF/02

Test Identifier: SUITE3_6/FTR&D/MPLSTE/IFT/PERF/02

Type of Test: Sub-system Performance Assessment

Version: 1.0

Test Summary: Measurement of the network throughput when the IFTs are operational

Benefits: To show that the IFTs induce no perturbation

Test Environment:

Test location: FTR&D/Issy-les-Moulineaux

Network Topology: LER-LSR-LER (see A.4, figure 8-14)

Traffic Load: The netperf script generates 5 TCP flows of different QoS from hostA to hostB

Test procedure:

Initial Conditions:

Two asymmetric paths are set up between hostA and hostB, and traffic is mapped onto them according to the flows generated by the netperf script. The running MPLS software dynamically configures the IFTs. The MPLS shim header is added and removed at both edges of the network.

Checks to be performed in test:

Netperf script logs throughput values for each flow and different packet sizes

Verdict Criteria:

Expected results: no perturbation induced by IFTs

Metric: Netperf results

Results comments:

Flow     Mean throughput, 64/1472 range (Mbps)
BE       14.86
EF       14.69
AF11     15.52
AF12     15.12
AF13     15.26
Total    75.46

Maximum throughput:

No perturbation induced.

Relative throughput:

No perturbation induced.


Figure 5-62: Throughput by flows (IFT test/perf/02).

5.7.4 Network Performance with Congestion – SUITE3_6/FTR&D/MPLSTE/IFT/PERF/03

Test Identifier: SUITE3_6/FTR&D/MPLSTE/IFT/PERF/03

Type of Test: Sub-system Performance Assessment

Version: 1.0

Test Summary: Measurement of the network throughput in several congestion situations

Benefits: To show that the implemented DiffServ mechanisms modify the individual throughput of the flows

Test Environment:

Test location: FTR&D/Issy-les-Moulineaux

Network Topology: LER-LSR-LER (see A.4, figure 8-14)

Traffic Load:

• The netperf script generates 5 TCP flows of different QoS from hostA to hostB.

• The SmartBits ATM board generates, at the core level (the LSR), one flow of 16000 MPLS UDP frames of 512 bytes length (= 62.5 Mbit/s), with four different QoS settings.

Test procedure:

Initial Conditions:

Two asymmetric paths are set up between hostA and hostB, and traffic is mapped onto them according to the flows generated by the netperf script. The running MPLS software dynamically configures the IFTs. The MPLS shim header is added and removed at both edges of the network.

Checks to be performed in test:

Netperf script logs throughput values for each flow and different packet sizes.

Verdict Criteria:

Expected results: Throughput of flows reflects the DSCP field

Metric: Netperf results


Results comments:

Mean throughput on the 64/1472 range (Mbps), per priority of the additional SmartBits flow:

Flow     L (BE)    M (AF12)   H (AF11)   V (EF)
BE        5.54      0.45       0.42       0.86
EF       19.06     29.04      54.42      49.01
AF11     19.95     28.95      15.72       1.23
AF12     20.34     13.31       1.07       1.28
AF13      4.84      0.50       0.57       0.90
Total    69.75     72.24      72.19      53.27

Relative throughput:

Flows are differentiated according to their DSCP attributes but, since there is no per-class bandwidth reservation, the throughput of each flow is the result of competition with all the others.

Figure 5-63: Throughput by flows (IFT test/perf/03); additional SmartBits flow of 62.5 Mbit/s with low priority (BE).


Figure 5-64: Throughput by flows (IFT test/perf/03); additional SmartBits flow of 62.5 Mbit/s with medium priority (AF12).

Figure 5-65: Throughput by flows (IFT test/perf/03); additional SmartBits flow of 62.5 Mbit/s with high priority (AF11).


Figure 5-66: Throughput by flows (IFT test/perf/03); additional SmartBits flow of 62.5 Mbit/s with very high priority (EF).

5.7.5 Performance of the Linux Data Path – SUITE3_6/FTR&D/MPLSTE/IFT/COMP/01

Test Identifier: SUITE3_6/FTR&D/MPLSTE/IFT/COMP/01

Type of Test: Comparison test; Sub-system Performance Assessment

Version: 1.0

Test Summary: Measurement of the Linux small-packet throughput

Benefits: Metrics of the Linux packet transfer rate

Test Environment:

Test location: FTR&D/Issy-les-Moulineaux (see A.4, figure 8-14)

Network Topology: Ingress node simulation

Traffic Load: Incremental traffic generation

Test procedure:

Verdict Criteria:

Metric: SmartBits results

Results comments:

The packet loss test shows that:

• With no firewall rule, a maximum of 30000 64-byte packets per second is reached before performance collapses.

• The number of firewall rules negatively affects this value.

Since the Linux box is not tuned from a networking standpoint, the results should be interpreted in relative terms, but they are of the same order of magnitude as those reported in other experiments (e.g. the Click Modular Router Project, http://www.pdos.lcs.mit.edu/click/).


At the ingress node, the RSVP-TE software uses firewalling rules to filter and mark traffic that is to be mapped onto a particular LSP. These rules can be entered directly with the firewall commands, so the ingress behaviour can be reproduced on a single node without setting up MPLS paths. This procedure does not take into account the resources used for the MPLS encapsulation of IP frames, so the results will be overestimated. When firewall rules are entered, the Linux box is configured so that incoming packets on the Ethernet interface can only be forwarded to the outgoing Ethernet interface after matching the last rule (worst case).

Figure 5-67: Ethernet packet loss test.

On the LER(1) side:

31. The IFT and RSVP-TE software are not running. The Ethernet 4 interface is configured with an IP address belonging to the same network address as hostB (192.168.25.100). No firewalling rule is entered and the Linux box acts as an IP router with the standard routing table.

On SmartBits:

32. SmartApplication is configured to generate 64-byte UDP frames from a pseudo-host (192.168.5.168, same network as hostA) to another pseudo-host (192.168.25.10, same network as hostB), and the standard packet loss test is enabled for 10 s at each rate.

On the LER(1) side:

33. 125 firewalling rules are entered, varying the destination address from 1 to 125, except for the SmartBits receive address, which is the last one entered.

/usr/local/sbin/iptables -A PREROUTING -t mangle -m dscp --dscp 0 -d 192.168.25.<VARIABLE>/32 -j MARK --set-mark 0x1

34. A new routing rule, table and route are entered, and the old standard route is deleted from the main routing table. Now a packet to the SmartBits receive interface must traverse the firewall rules to be correctly forwarded.

[root#] ip rule add fwmark 1 table 1
[root#] ip route add 192.168.25.0/24 via 192.168.25.100 table 1
[root#] ip route del 192.168.25.0/24 via 192.168.25.100

On SmartBits:

35. The packet loss test is enabled, and the same test is repeated with 250 rules.
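
The rule entry of steps 33 and 35 lends itself to scripting; a sketch in Python, reproducing the iptables invocation shown in step 33 (the address plan is the one described above):

import subprocess

def load_rules(n_rules=125, smartbits_addr="192.168.25.10"):
    addrs = [f"192.168.25.{i}" for i in range(1, n_rules + 1)]
    # The SmartBits receive address is entered last (worst case for matching).
    addrs = [a for a in addrs if a != smartbits_addr] + [smartbits_addr]
    for addr in addrs:
        subprocess.run(
            ["/usr/local/sbin/iptables", "-A", "PREROUTING", "-t", "mangle",
             "-m", "dscp", "--dscp", "0", "-d", f"{addr}/32",
             "-j", "MARK", "--set-mark", "0x1"],
            check=True)

load_rules(125)   # repeat with load_rules(250) for the second run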


Figure 5-68: Ethernet packet loss results (IFT test/comp/01).

5.7.6 Performance of the IFT Data Path – SUITE3_6/FTR&D/MPLSTE/IFT/COMP/02

Test Identifier: SUITE3_6/FTR&D/MPLSTE/IFT/COMP/02

Type of Test: Comparison test; Sub-system Performance Assessment

Version: 1.0

Test Summary: Measurement of the IFT small-packet throughput

Benefits: Metrics of the IFT packet transfer rate

Test Environment:

Test location: FTR&D/Issy-les-Moulineaux

Network Topology: Ingress node only

Traffic Load: Incremental traffic generation

Test procedure:

Verdict Criteria:

Metric: SmartBits results

Results comments:

The percentage of received frames is almost 100% at every rate, in particular at the highest configurable SmartBits rate (~86 Mbit/s with 32-byte frames). This value is not sensitive to the number of patterns (rules) written in memory.

The IFT is connected to the SmartBits ATM board. The same procedure as in the Linux test is used, except that the IFT monitor is running and enabled with a test parameter that forces an MPLS label value to be used, as if an LSP had been set up beforehand. In this case, the MPLS header is added to the incoming IP frame.


Figure 5-69: IFT packet loss test.

On the LER(1) side:

36. The IFT monitor is started with a label, and 125 firewalling rules are entered.

[root#] /usr/local/bin/iftkmon -d -v -c -l 168
/usr/local/sbin/iptables -A PREROUTING -t mangle -m dscp --dscp 0 -d 192.168.25.<VARIABLE>/32 -j MARK --set-mark 0x1

On SmartBits:

37. SmartWindow is configured to generate 32-byte UDP frames from a pseudo-host (192.168.5.168, same network as hostA) to another pseudo-host (192.168.25.10, same network as hostB). The frame size is chosen as short as possible so as to fit into a single ATM cell and thus maximise the frame generation rate. A trigger must be defined to count received frames. Each rate test lasts 100 s and must be started and stopped manually. The test is repeated with 250 rules. The first test loops the SmartBits interface back on itself to validate the counter information.

Figure 5-70: IFT packet loss results (IFT test/comp/02).
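
The configured rates in the tables below follow from the OC-3 cell rate; as a back-of-envelope check (our arithmetic, not taken from the source), a 32-byte UDP frame fits in a single ATM cell, so the maximum AAL5 frame rate equals the cell rate of the 155 Mbit/s link:

# SONET OC-3 payload is 149.76 Mbit/s, carried in 53-byte (424-bit) cells.
cell_rate = 149.76e6 / (53 * 8)      # ~353,207 cells/s, the 100% rate below
for pct in (5, 25, 50, 75, 100):     # the configured rates in Tables 5-20 to 5-22
    print(pct, round(cell_rate * pct / 100))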


Configured rate    Observed rate      Transmitted      Received     % rec/trans
(AAL5 frames/s)    (AAL5 frames/s)    (AAL5 frames)    trigger
353207             118198             9494538          9494533      99.9999

Table 5-20: SmartBits interface looped

Configured rate    Observed rate      Transmitted      Received     % rec/trans
(AAL5 frames/s)    (AAL5 frames/s)    (AAL5 frames)    trigger
17660              17655              1412566          1412566      100.0000
88302              88387              7110434          7110433      100.0000
176604             170781             13642541         13642404     99.9990
264905             198187             15919172         15918562     99.9962
353207             215562             17231885         17231341     99.9968

Table 5-21: IFT packet loss results, 125 rules

Configured rate    Observed rate      Transmitted      Received     % rec/trans
(AAL5 frames/s)    (AAL5 frames/s)    (AAL5 frames)    trigger
17660              17655              1410989          1410988      99.9999
88302              88387              7073074          7073073      100.0000
176604             170772             13710324         13710211     99.9992
264905             198190             15990375         15989789     99.9963
353207             215523             17248468         17247772     99.9960

Table 5-22: IFT packet loss results, 250 rules

5.7.6.1 Comparison of Data Paths

Looking at the packet transfer rate results, we can conclude that there is effectively no loss in the IFT data path.

Figure 5-71: IFT Packet loss comparison.


6 CONCLUSIONS

6.1 Overview

This part of Deliverable D3.4 presented the tests undertaken to assess the performance of the TEQUILA functionality and the results obtained. The tests and results presented are backed up by a theoretical analysis of the system scalability and stability, presented in Part C of this Deliverable.

Overall, based on the experimentation results obtained, the following conclusions can be drawn:

• The prototype implementation and the undertaken tests prove the validity of the TEQUILA functional model and the proposed algorithms/schemes; the specified functionality can be feasibly implemented within the desired levels of operation, as mandated by the time-criticality of the functionality.

• Measurements taken from the prototype implementation show that the proposed approach, algorithms and protocols scale: the data requirements and response times of the various functions of the system grow linearly with the external entities influencing the behaviour of the system. The measurements follow the expectations of the theoretical scalability analysis.

• Results indicate that the specified service admission and traffic engineering functions improve network performance as compared to ad-hoc configurations; furthermore, the specified functions yield favourable performance compared to alternative schemes wherever such comparisons were possible.

The following sections present the conclusions drawn for each of the major functional areas addressed by TEQUILA.

6.2 Service Management Conclusions

• The proposed service management functionality, including service request establishment/agreement and admission control, has been functionally validated. Throughout all experiments it was shown that the response times for handling service subscription and invocation requests are within the desired levels. Furthermore, its interfaces with the Monitoring and TE functions and with the external systems (customers, network routers) can be feasibly implemented.

• The proposed service management functionality scales as its complexity grows linearly with the external entities influencing the behaviour and performance of the involved algorithms/protocols. This was the outcome of the theoretical scalability analysis and was verified through experimentation.

• The response times of the Traffic Aggregation algorithm, executed at Resource Provisioning Cycle epochs to calculate anticipated traffic demand based on a population of subscriptions, grow linearly with the number of SLSs of the subscription population and the size of the network in terms of TTs. The latter is analogous to the (square of the) number of network edges and the number of supported QoS-classes (corresponding to PHBs). This behaviour is in accordance with the expectations from the corresponding theoretical scalability analysis.

• It has been observed that the effect of the number of TTs on the Traffic Aggregation execution time is much greater than the effect of the number of established SLSs. Moreover, the Traffic Aggregation execution time decreases as a function of the number of SLSs per SSS. It should be noted that even in the case of large networks (49,500 TTs), the total Traffic Aggregation execution time remains within reasonable levels (about 100 seconds), considering that the algorithm runs off-line (a worked example of the TT figure is sketched at the end of this section).

• The subscription processing time grows linearly with the number of subscriptions already established and the size of the network in terms of TTs, as expected by the scalability analysis and verified through experimentation. The same holds for the invocation processing times; the latter are orders of magnitude smaller than the subscription processing time.


• The most demanding function, in terms of completion time, involved in subscription processing is the 'Validate and Translate' function. This function analyses, translates and validates a received subscription from the format and parlance received/understood by the customers to the format and parlance understood by the service admission and provisioning and TE functions; furthermore, it performs a number of validations to ensure the uniqueness of customer/user identification and their flow specifications amongst subscriptions. The 'Admission Control' function of subscription processing is very light indeed (of the order of a few milliseconds, even for large networks). This is attributed to the 'first plan and then take care' feature of the TEQUILA system, substantiated through the concept of the Resource Provisioning Cycle.

• The subscription admission logic response time is independent of the number of established subscriptions; it primarily depends on the size of the network in terms of TTs.

• The admission control scheme improves network performance in terms of QoS traffic throughput (goodput). This may come at the expense of the number of agreed subscriptions, of accepted service invocations, and/or of the service rate actually admitted. These different types of penalty reflect the degrees of freedom that our scheme provides in resolving the well-known service admission control trade-off between network utilisation and QoS deterioration.

• Our admission control scheme offers a high degree of customisation to business and operational policies regarding service provisioning. Providers may control the expense (see above) incurred in resolving the service admission control trade-off through appropriate policy settings reflecting their own 'style' of service provisioning. To this end, the policy parameters satisfaction level, precaution level and fully/almost satisfied service rates (cf. Part B of this Deliverable) are provided to tune the effect of admission control as desired.

• It has been verified that, when the offered user traffic is within the estimated levels against which the network has been dimensioned, the network indeed delivers the offered traffic according to its QoS guarantees; the more comfortably the traffic sits within its estimates, the smaller the effect of (cost incurred by) the settings of the admission control scheme. When traffic is above its estimated levels (an under-provisioned network), at times of congestion our admission scheme allows QoS to be enjoyed, on an aggregate flow basis, at rates corresponding to those at which the network has been engineered to deliver QoS traffic, as calculated by the off-line TE functions at the beginning of the Resource Provisioning Cycle (RPC). This behaviour validates the TEQUILA approach to QoS delivery, which is built around the concept of the RPC.

• It has been shown that our admission control may restore the ability of the network to deliver the agreed QoS, should this be distorted; it can even, through appropriate policy settings, ensure that the network will always deliver the agreed QoS. Of course, this is achieved at the expense of service admissions (at subscription, invocation or packet level) and, as has already been pointed out, it is left to the policies of the provider to tune the effects of admission control.

• The performance of the specified service subscription and invocation handling functions, including admission control, largely depends on its implementation, the processing capability of the infrastructure and its deployment. Experimentation has shown that there is a difference in processing times on different platforms and on different implementations.
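
As a worked example of the TT figure quoted above (our reading, assuming TTs are counted per ordered pair of edge nodes and per QoS class; the node and class counts are illustrative):

# With 100 edge nodes and 5 QoS classes, counting one TT per ordered
# node pair and class gives the "large network" case quoted above.
edge_nodes, qos_classes = 100, 5
tts = edge_nodes * (edge_nodes - 1) * qos_classes
print(tts)   # 49500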

6.3 MPLS-based Traffic Engineering Conclusions

This section contains a summary of the key results for the TEQUILA MPLS Traffic Engineering system, as validated in the UK testbed.

• Creates traffic-engineered paths that meet user bandwidth requests, based on SLSs.

The TEQUILA MPLS-TE system creates network-wide traffic engineered paths that meet the bandwidth requirements of the network users, as described by the TEQUILA Traffic Matrix. These traffic-engineered paths are formed by the computation and configuration of explicit MPLS LSPs.

While the TEQUILA MPLS-TE system is capable of more sophisticated functions, as discussed below, this capability is fundamental. Network operators who wish to operate a best effort network may want to use traffic-engineering techniques to ensure that they are optimising the usage of their network resources and to avoid congestion hotspots. The experiments carried out in the UK testbed showed that the LSP computation and configuration capabilities of the MPLS-TE system performed as expected.

• Creates a differentiated services network, where qualitative performance guarantees are provided.


Building on the MPLS path control capability, the TEQUILA MPLS-TE system is also able to compute and configure the DiffServ PHBs at each node along the TE path. This enables the creation of a DiffServ network where traffic classes experience different edge-to-edge transport performance. A network operator can use this capability to offer a qualitatively guaranteed service, such as an Olympic (gold/silver/bronze) service, without explicitly specifying packet delay and loss characteristics for each class. The tests in the UK testbed, such as SUITE3_1/GC/TE/MPLS/BC/1, showed that the TEQUILA MPLS-TE system was capable of delivering this capability.

• Creates a differentiated services network, where quantified performance guarantees are provided.

The Network Dimensioning component of the TEQUILA MPLS-TE system is capable of accepting a delay constraint and calculating traffic-engineered paths that meet the delay constraint. This feature enables the provision of differentiated services where quantified guarantees are provided for certain traffic classes. Tests were carried out in the UK testbed where both high and low delay links were available and it was shown in SUITE3_1/GC/TE/MPLS/BC/2 that the MPLS-TE system avoided paths that would have violated the QoS constraint.

• Creates TE paths based on customer demand and forecast, meeting the QoS requirements.

The TEQUILA system creates network-wide TE paths that meet customer demands, as specified in the Traffic Matrix. It also employs techniques to forecast future customer demands and dimensions the network accordingly. This forecast feature avoids the need for frequent re-dimensioning and is an important part of the stability and scalability of the overall TEQUILA system.

Tests show that the system is capable of allocating resources up to a forecast limit, based on the physical capacity of the network, while meeting the customer QoS requirements.

• Creates a traffic-engineered network where traffic is loaded evenly across the network.

Network operators prefer traffic-engineering solutions that establish TE paths where the traffic load is distributed evenly across the network to solutions which load one part of the network heavily while leaving other parts lightly loaded. The even distribution increases the likelihood of being able to find a route for new paths and thereby minimises network disruption. Tests carried out in the UK testbed, such as SUITE3_1/GC/TE/MPLS/BC/2, show that the MPLS-TE system creates a well distributed load profile.

• Supports pipe and hose-based VPNs.

Previous work in TEQUILA had established that there were benefits for both the network operator and the network user if it was possible to create VPNs using hoses, in addition to the more conventional pipe model. In the performed tests the TEQUILA MPLS-TE system was used to compute and configure a hose-based VPN. The bandwidth savings when using hoses, compared to using pipes, are due to the fact that TEQUILA MPLS-TE allocates bandwidth to PHBs and not to LSPs and the test results demonstrate this.

• Value-added on commercial equipment providing MPLS-TE functionality.

The commercial routers used in the UK testbed have both MPLS-TE and DiffServ capabilities. However, the two sets of functions have not yet been sufficiently integrated for network operators to use them in combination to provide differentiated services. TEQUILA, or any similar performance management solution, depends on some low-level integration of MPLS and DiffServ, which must be done within the routers. We experienced problems in this area; details of these problems and of the restricted workarounds that had to be used are provided in Section 8.1.5. The equipment manufacturer is fixing the problem, but it can be concluded that not much previous work has been carried out in this area.

Current commercial routing equipment has algorithms and protocols to dynamically establish MPLS LSPs. These algorithms currently accept a bandwidth constraint and therefore will not establish an LSP if there is not sufficient bandwidth resource along the route. These algorithms do not currently attempt to accept a delay constraint for calculating and providing an end-to-end delay guarantee as TEQUILA algorithms do.


Currently, there is no commercially available equivalent of the TEQUILA system that provides close integration with customer demand as specified in the SLSs. This lack of integration makes it difficult to devise a QoS management scheme that provisions the network properly and incorporates a demand forecasting technique.

In the following, the key results for the TEQUILA MPLS Traffic Engineering algorithms, obtained from the conducted simulation studies, are presented.

• The choice of a 'good' QoS path construction algorithm is multi-dimensional; it is driven by multiple and diverse requirements e.g. optimality and low processing time. There exists no ultimate algorithm that presents the best performance in terms of cost of the multicast tree, execution time and maximum end-to-end delay.

• The two versions of the proposed algorithm for finding Steiner trees have good cost performance in most of the considered cases. For small to medium multicast groups, relative to the size of the network, the performance of both versions is equal to and sometimes better than that of the BSMA [BSMA] algorithm when the hop-count constraint is not tight for the given network. More importantly, in these cases they have much lower execution times than BSMA, so they are suitable for the traffic-engineering algorithm we are studying. For large multicast groups (more than 50-60% of the total network nodes), their performance deteriorates, especially when the hop-count constraint becomes stringent.

• Our Network Dimensioning algorithm, which finds appropriate QoS routes and dimensions network resources based on anticipated demand estimates, performs excellently, reducing the maximum link load in all our experiments to well below 100%, while shortest-path solutions yield utilisations of more than 300%; this means that the network can handle as much as three times more traffic than when simple shortest-path routing schemes are used. Furthermore, we saw that the average end-to-end delay achieved is very small, which makes the algorithm appropriate for demanding real-time traffic.

• Our Network Dimensioning algorithm can be adjusted with respect to its outcome in terms of link load distribution. Therefore, it can be tuned according to specific network operation policies.

• Our Network Dimensioning algorithm scales with the number of network nodes; experimentation showed that the execution time stays within acceptable timescales (less than 20 minutes) even for very large networks (up to 300 nodes).

• It has been shown that our Dynamic Resource Management algorithm, managing PHB scheduling parameters according to actual load conditions, improves network performance in terms of packet losses when compared to static link bandwidth allocation. Results indicate that it can provide up to 13% (of link capacity) less packet losses in cases where offered load fluctuates significantly and total offered load is as high as 95% of link capacity. The improvements are less marked under low load conditions and our simulations show no improvement when the average aggregate load was below around 60% of link capacity.

• Through experimentation we have gained insight into the most appropriate setting of the operational parameters of the Dynamic Resource Management algorithm according to different traffic cases. These parameters may be subject to policies regarding overall network operation for QoS delivery.

6.4 Monitoring Conclusions

This section contains a summary of the key results for the TEQUILA Monitoring subsystem, as validated in the UK testbed.

• One-way delay accuracy

The one-way delay measurements obtained by the TEQUILA Monitoring subsystem and by third-party test equipment are very similar, as verified through experimentation. This shows that the TEQUILA Monitoring subsystem provides very accurate results, even in the case where the monitoring agents are located outside the routers. It is concluded that the one-way delay measurements are accurate and suitable for use in the short- and long-term dynamic traffic engineering operations of the network, as well as for SLS Monitoring.

• One-way packet loss accuracy


The one-way packet loss measurements reported by third-party test equipment and those measured by the TEQUILA Monitoring subsystem, averaged over longer time intervals, are very close to each other, as verified through experimentation. As synthetic packets are generated randomly and packet losses are introduced randomly, the measured packet loss values exhibit significant variation over short intervals (e.g., 5 minutes). Hence, it can be inferred that one-way packet loss obtained from active measurements is suitable only for longer-term path monitoring and SLS monitoring, because of these variations over short time periods. We therefore recommend using passive monitoring of packet discards for the short-term dynamic management of PHBs.

• Edge-to-edge versus hop-by-hop measurement accuracy

By comparing the aggregated hop-by-hop and the edge-to-edge measurements of one-way delay and packet loss performed by the TEQUILA Monitoring subsystem, it is shown that comparable results are obtained by both methods. This makes the hop-by-hop proposal more attractive than the edge-to-edge method, because it enhances monitoring scalability.

• Benefits of Monitoring subsystem in the operation of MPLS traffic-engineered networks

The TEQUILA Monitoring subsystem's ability to monitor the configured classes of service, by measuring one-way delay, packet loss and throughput at the LSP level, has been verified through experimentation.

• Cost associated with Monitoring subsystem

The costs associated with the Monitoring subsystem, as identified in the tests carried out, are:

- The introduction of synthetic traffic to the network

- The communication overhead to transfer node/network level measurements to the related management entities.

- The deployment of reliable and accurate clock synchronization technology for PCs hosting Node Monitors for performing one-way delay measurements.

• Scalability of Monitoring subsystem in terms of synthetic traffic load

The scalability of the Monitoring subsystem is assessed in terms of the traffic load (i.e., the packet rate) introduced by the edge-to-edge method compared with the hop-by-hop method. It is deduced that increasing the number of LSPs traversing a link increases the number of synthetic packets linearly, while the number of PHBs per link cannot grow beyond a certain limit. It is shown that the scalability claim for the hop-by-hop method over the edge-to-edge method is justified (see the numeric sketch at the end of this section).

• Scalability of centralised components of Monitoring subsystem

The centralised CORBA Notification Service, hosted on a single node/PC, is used to dispatch event information to the monitoring clients. A single event channel is created for each network element. Hence, the total event throughput processed by the notification server grows linearly with the physical network size. Our analysis showed that, for very large networks, the performance of the PC hosting the notification server and the bandwidth of the links around it could become limiting factors. An alternative solution with better scalability is a federated notification system, with a notification server per domain of nodes.

• Scalability of ingress/egress Node Monitoring functions

The Node Monitoring processing effort is related to the following parameters: the number of active monitors, the number of passive monitors, the mean number of synthetic packets injected per second for each active monitor, the number of read-outs per second for each passive monitor. Therefore, the Node Monitor operations scale if these parameters remain within some defined boundaries. The typical number of monitors running on a single edge Node Monitor PC will be of the order of tens. Experiments indicate that this number of monitors would not place an unmanageable load on the Node Monitor PC and would not increase the response time. It is deduced from test results that the performance of individual Node Monitor PCs would not be a limiting factor in the scalability of the Monitoring subsystem.
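
Returning to the synthetic-load comparison above, the following sketch illustrates the claim numerically (our illustration; the counts are not taken from the tests):

# Synthetic probe streams needed on one link: edge-to-edge measures every
# LSP crossing the link, hop-by-hop measures once per PHB on the link.
def edge_to_edge_streams(lsps_on_link):
    return lsps_on_link

def hop_by_hop_streams(phbs_on_link):
    return phbs_on_link

for lsps in (10, 100, 1000):
    print(lsps, edge_to_edge_streams(lsps), hop_by_hop_streams(5))
# The edge-to-edge load grows linearly with the LSP count, while the
# hop-by-hop load stays bounded by the number of PHBs (here 5).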


6.5 IP-based Traffic Engineering Conclusions

• A COPS-based provisioning scheme for configuring routers in the context of IP traffic engineering policies can be feasibly implemented.

• The use of COPS is helpful, and the configuration operations are performed quickly (less than 5 minutes for 12 routers).

6.6 Policy Management Conclusions

• The flexibility to drive the behaviour of the TEQUILA system according to the operator's business objectives was verified.

In order to make the TEQUILA system flexible and guide its behaviour through policies, a Policy-based Management sub-system was deployed on all the policy-influenced components, providing in this way the infrastructure for policies to be dynamically introduced, stored and enforced, realising the business goals of the administrator. In all the policy-influenced components, policy parameters were identified that are set when policies are enforced, changing the behaviour of the components according to the new requirements. Furthermore, policies are entered into our system through a high-level language that relieves the administrator from having to know the low-level details of the system.

• Extensibility and adaptability of the TEQUILA system to new and emerging requirements was verified.

Policies in TEQUILA are seen as a means to achieve programmability in the system, as well as in the network devices, in order to adapt to changing requirements by adding, modifying or withdrawing logic on the fly, without having to rigorously re-test the resulting system on every modification. More specifically, by adding new policies to our system we enhance the management logic of the policy-influenced components; this logic is produced and executed on the fly by the policy consumers attached to every policy-driven component in the TEQUILA system.

• The Policy Management system is stable

The approach followed in TEQUILA for policies makes the existence of a conflict detection mechanism a necessity. Since policies are formally represented according to an information model and stored in the repository, every time a new policy is entered it can be checked against those already stored in the repository for conflicts. Of course, this mechanism detects potential conflicts only statically, not at run-time.

• The Policy Management system scales

Our system has the ability to cope, in terms of the computing resources required, with a large number of policy rules. This is based on a mechanism that takes advantage of the reusable conditions and actions of policy rules: it identifies which are reusable and creates objects with pointers to the reusable ones, instead of creating new objects afresh, thereby enabling the system to cope with a large number of policy rules (a sketch of this reuse mechanism follows).
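
A sketch of such a reuse mechanism, in Python (the names are hypothetical, not the actual TEQUILA classes): conditions and actions are interned, so that rules point at shared objects instead of duplicating them.

class Interner:
    """Pool of reusable policy conditions/actions, keyed by content."""
    def __init__(self):
        self._pool = {}
    def intern(self, obj):
        # Return the shared instance if an equal object was seen before.
        return self._pool.setdefault(obj, obj)

conditions = Interner()
c1 = conditions.intern(("OrderedAggregate", "EF"))
c2 = conditions.intern(("OrderedAggregate", "EF"))
assert c1 is c2   # the second rule reuses the first rule's condition object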

6.7 IFT-based Network Element Conclusions

• The concepts developed in the "IP Fast Translator (IFT) – TEQUILA white paper" were validated through experimentation. That is to say, it was verified that IFT-based network elements combine the performance of commercial network elements with the flexibility of a PC-based Linux router, by decoupling the control and transfer planes.

• The IFT-based network has favourable performance compared to pure Linux-based networks. A clear separation has been achieved between the forwarding and control planes. IFT flexibility on the control path is equal to that of Linux-based routers and benefits from the extensive developer support of the Linux community.

• The performance of IFT-based network elements compares well with the switching performance of commercial routers: a throughput of 150,000 pps per interface can be achieved in the current testbed (restricted to 155 Mbit/s interfaces). More than 10^6 pps have been achieved with the IFT running on a 622 Mbit/s interface in other experiments.


• IFT scalability is obtained through the use of an off-the-shelf switching fabric (currently ATM; Fast and Gigabit Ethernet later on). The testbed implements a multi-IFT configuration built on top of a shared ATM switch.


7 REFERENCES

[SHAL02] S. Shalunov, B. Teitelbaum, M. Zekauskas, "A One-way Delay Measurement Protocol", Internet Draft, draft-ietf-ippm-owdp-03.txt, work in progress, April 2002.

[MILLS92] D. L. Mills, "Network Time Protocol (Version 3) Specification, Implementation", RFC 1305, March 1992.

[RFC 2225] M. Laubach, J. Halpern, "Classical IP and ARP over ATM", RFC 2225, April 1998.

[IPTE] C. Jacquenet, "A COPS client-type for IP traffic engineering", draft-jacquenet-ip-te-cops-03.txt, work in progress, June 2002.

[IPTEPIB] M. Boucadair, C. Jacquenet, "An IP Traffic Engineering Policy Information Base", draft-jacquenet-ip-te-pib-02.txt, work in progress, June 2002.

[D1.4] TEQUILA Consortium, D1.4: Final Architecture, Protocols and Algorithms, 104/IMEC/b1, October 2001.

[Sala97] H. F. Salama, Douglas S. Reeves, Yannis Viniotis, “Evaluation of Multicast Routing Algorithms for Real-Time Communication on High-Speed Networks”, IEEE Journal on Selected Areas in Communications 15(3), 1997.

[Karp72] R. Karp, "Reducibility among Combinatorial Problems", Complexity of Computer Computations, Plenum Press, 1972.

[Ram96] S. Ramanathan, “An Algorithm for Multicast Tree Generation in Networks with Asymmetric Links”, Proceedings of IEEE INFOCOM ‘96, 1996.

[KMB81] L. Kou, G. Markowsky and L. Berman, “A Fast Algorithm for Steiner Trees”, Acta Informatica, vol. 15, no. 2, 1981.

[Steiner83] V. Rayward-Smith, “The Computation of nearly Minimal Steiner Trees in Graphs”, Mathematica Japonica, vol. 24, no. 6, January/February 1983.

[Dijkstra59] E. W. Dijkstra, "A note on two problems in connection with graphs", Numerische Mathematik, vol. 1, pp. 269-271, 1959.

[BSMA] Q. Zhu, M. Parsa, and J. Garcia-Luna-Aceves, “A source-based Algorithm for Delay-Constrained Minimum-Cost Multicasting”, Proceedings of IEEE INFOCOM ’95, 1995.

[KPP] V. Kompella, J. Pasquale, G. Polyzos, “Multicast Routing for Multimedia Communication”, IEEE/ACM Transactions on Networking, Vol. 1, No. 3, June 1993.

[Floyd62] R. W. Floyd, "Algorithm 97: Shortest Path", Comm. of the ACM, 1962.

[QDMR] I. Matta and L. Guo, “QDMR: An Efficient QoS Dependent Multicast Routing Algorithm”, Journal of Communications and Networks - Special Issue on QoS in IP Networks, June 2000.

[DDMC] A. Shaikh and K. Shin, “Destination-driven routing for low-cost multicast”, IEEE J. Select. Areas Commun., vol 15, April 1997.

[CDKS] Q. Sun and H. Langendoerfer, “Efficient Multicast Routing for Delay-Sensitive Applications”, Proceedings of the Second Workshop on Protocols for Multimedia Systems (PROMS), October 1995.

[Salama97b] Hussein Salama, Centre for Advanced Computing and Communication, North Carolina State University, September 1997.

[Eppstein98] D. Eppstein, "Finding k-shortest paths", SIAM J. on Computing, vol. 28, no. 2, pp. 652-673, 1998.

[Zegura96] E. W. Zegura, K. L. Calvert, and S. Bhattacharjee. “How to model an internetwork”, In Proceedings of IEEE INFOCOM 96, vol.2, pp. 594-602, San Francisco, March 1996

[KATZ] D. Katz, et al., "Traffic Engineering Extensions to OSPF", draft-katz-yeung-ospf-traffic-01.txt, work in progress, November 1999.

[RFC-1583] J. Moy, “OSPF Version 2”, RFC 1583, March 1994.


8 APPENDIX A TESTBED PLATFORM ENVIRONMENT

8.1 UK Testbed Specification

This section provides an overview of the UK testbed constructed at Thales Research, used for the integration and experimentation of the TEQUILA system, including the Service Management, Monitoring, MPLS-based TE and Cisco-based Router sub-systems. The testbed is based on typical intra- and inter-POP connectivity. In each POP, access/aggregation routers act as Provider Edge (PE) devices and are connected to core routers (P). PE devices are usually connected to two or more P devices for resiliency, although here the double connection is used because it creates richer connectivity for the network dimensioning algorithms. The testbed configuration is shown in figure 8-1. Details of the hardware and software used to construct the testbed are provided in the following sections.

8.1.1 Hardware

8.1.1.1 Network Elements (Routers)

The test network is constructed from six routers. PE1, PE2 and PE3 are Cisco 7204 routers and are deployed as Provider Edge routers. P1 is a Cisco 3640, P2 is a Cisco 3620 and P3 is a Cisco 7204; these routers are deployed as Provider (core) routers. Details of the router specifications are available at http://www.cisco.com/. The test network is constructed 'inside-out' (i.e. Cisco 7204 at the edges, Cisco 3640/3620/7204 in the core). The reason for this is to allow the MPLS experimental bits to be set at the edge-router input interfaces: the setting of this field is supported by 7204 routers, but not by 3640/3620 routers. The routers are connected via 2 Mbps serial links (X.21 serial, or E1). For details of the interconnections, refer to figure 8-1.

8.1.1.2 Tequila System PCs

One or more PCs are connected via Ethernet to each router to perform the tasks arising from the SLS Management, Traffic Engineering and Node Monitoring functionality. Further PCs are deployed to perform centralised functions, such as persistent database storage and network monitoring. All the PCs used to host the TEQUILA software components have the following specification:

• Intel Pentium-4 CPU 1.5 GHz

• 512 MB RAM

• 20 GB hard disk drive

As mentioned above, each router has a PC attached to it to perform the nodal tasks of the TEQUILA system. To achieve accurate one-way delay measurements, all PCs involved in node monitoring must be accurately time-synchronised. To achieve this, a GPS time receiver and the Network Time Protocol (NTP) are used. The PC designated as the NTP server (TEQ0001) is connected via an RS232 null-modem cable to an HP58503A GPS Time & Frequency Reference Receiver. This provides an accurate and stable reference clock. All NTP client PCs are connected to the NTP server via a dedicated NTP Ethernet, allowing synchronisation to the server to within about 50 microseconds. A dedicated Ethernet is used to ensure that NTP traffic is not subjected to the delay and packet loss introduced by the data channel simulators used in the testbed. In a real-world environment, a more practical solution may be to have an NTP server synchronised to a GPS receiver and connected to a dedicated NTP Ethernet segment at every location hosting a number of routers/node monitors.

8.1.1.3 Test Equipment

Two Adtech SX/12 Data Channel Simulators are used to introduce delay and packet loss into the network. More information is available at http://adtech.spirentcom.com/products/sx.asp. An HP WAN Advisor and an HP LAN Advisor are used for link monitoring and troubleshooting, in addition to the PC-based Ethereal software (see http://www.ethereal.com/).


8.1.1.4 Traffic Generator

A commercial traffic generator (SmartBits 2000) is used to inject user traffic into the network. Traffic can be originated from any PE router and directed to any other PE router. SmartBits is also used in a loopback mode to verify the accuracy of the one-way delay and one-way loss measurements made by the Tequila monitoring sub-system. More information about SmartBits is available from http://www.spirentcom.com/.

8.1.2 Operating Systems

8.1.2.1 Routers

All the testbed routers are running IOS version 12.2(5) enterprise. This provides a rich set of features, for example the ability to set the MPLS experimental field and create MPLS tunnels.

8.1.2.2 PCs

All the PCs used to host TEQUILA software components are running Linux Slackware version 8.0 (see http://www.slackware.org), with version 2.4.9 of the Linux kernel (see http://www.kernel.org/). The machines are installed with version 2.95.3 of the GNU Compiler Collection (GCC) (see http://gcc.gnu.org/).

8.1.3 Essential Software Packages

This section contains details of the software packages that must be installed in the testbed to successfully run the TEQUILA system.

8.1.3.1 ORBacus ORB / Notification Service

The Tequila sub-systems make use of CORBA for communicating with one another in a location and programming language independent fashion. All PCs in the testbed have ORBacus version 4.1.0 installed. More information is available at http://www.iona.com/products/orbacus_home.htm. The Tequila monitoring system uses the CORBA Notification Service to deliver monitoring events from the point of measurement to monitoring clients. ORBacus Notify version 2.0.0 is used as the Notification Service implementation, and this is also installed on all machines in the testbed. More information about ORBacus Notify is available at the following address: http://www.iona.com/products/orbacus/notify.htm.

8.1.3.2 Oracle 9i (9.0.1) Database

Several TEQUILA sub-systems rely on an Oracle database for persistent storage of data. One machine in the testbed is nominated as the database server (TEQ0001) and therefore requires a full server installation of Oracle. The other machines in the network are database clients and only require a client Oracle installation. The Oracle installation was particularly problematic on Slackware. More information about Oracle is available at http://www.oracle.com/.

8.1.3.3 Java 2 Runtime Environment

Some of the TEQUILA software components (specifically monitoring) are written in Java, and therefore require a Java runtime environment to be installed on the target machines. Sun's Java 2 Runtime Environment, Standard Edition (build 1.3.1_01) is installed on the machines in the testbed. More information is available at http://java.sun.com/j2se/.

8.1.3.4 Network Time Protocol (NTP)

An implementation of the Network Time Protocol is deployed in the testbed to time-synchronise the node monitoring PCs to an NTP server, which is itself synchronised to a GPS time receiver. The NTP package installed in the testbed is ntp-4.0.99k23. Using the NTP daemon (ntpd), we were unable to obtain the required synchronisation accuracy. However, by running the ntpdate utility once a second in a continuous loop, we were able to obtain it. The ntpdate utility immediately sets the local time by polling the NTP server. More information about NTP is available at http://www.ntp.org/.
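
The once-a-second loop can be as simple as the following sketch (using TEQ0001, the testbed's NTP server mentioned above, as the hostname is an assumption):

import subprocess, time

while True:
    subprocess.run(["ntpdate", "TEQ0001"])   # step the local clock from the NTP server
    time.sleep(1)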


8.1.4 Optional Network Services

Several services are deployed in the testbed network. Strictly speaking, these services are not required for an operational TEQUILA system. However, they are considered beneficial from the software development and network administration points of view. Details of these services are provided below.

8.1.4.1 Domain Name System (DNS)

The Domain Name System converts machine names to IP addresses and vice-versa. Although not required by the TEQUILA system, DNS is installed in the testbed as an aid to network administration and troubleshooting. More information can be found at http://www.tldp.org/HOWTO/DNS-HOWTO.html. The DNS package installed in the testbed is bind-9.1.2.

8.1.4.2 Network Information Service (NIS) The purpose of the Network Information Service (NIS) is to provide a lookup service for information that must be known throughout the network. NIS is installed in the testbed to provide a unified view of login names, passwords, and home directories throughout the network, enabling users to log on to any terminal in the network. More information can be found at http://www.tldp.org/HOWTO/NIS-HOWTO/index.html.

8.1.4.3 Network File System (NFS)

The Network File System (NFS) allows machines to mount a disk partition on a remote machine as if it were on a local hard drive. This is useful in the testbed for distributing software packages. More information about NFS can be found at http://www.tldp.org/HOWTO/NFS-HOWTO/index.html.

8.1.4.4 SAMBA The SAMBA package allows Linux machines to access Microsoft Windows network shares. This service is installed in the testbed to permit the movement of software between the testbed network and the main company network (which is Microsoft Windows based). The SAMBA package installed in the testbed is samba-2.2.0a. More information about SAMBA can be found at http://www.tldp.org/HOWTO/SMB-HOWTO.html.

8.1.5 Cisco Router Functionality Limitation Problem: In the Cisco-based UK testbed, we tried to use Policy-Based Routing (PBR) and MPLS EXP setting together, as both were needed. We wanted to use policy-based routing to direct specific traffic to a specific tunnel. In addition, the traffic belonging to each MPLS tunnel should receive a different PHB treatment along the tunnel path (e.g., by using LLQ-CBWFQ). As the MPLS Experimental (EXP) bits are the only useful field visible along the tunnel path, a different EXP value must be set for the traffic belonging to each tunnel, so that each tunnel's traffic can be directed to specific queues/schedulers. We tried the following configurations in Cisco routers using a number of IOS versions (c7200-p-mz.mvpn, c7200-js-mz.122-5.bin, c7200-js-mz.122-8.T4.bin):

1- It has been possible to set EXP to any value at the input interface of the edge routers (e.g., PE1), by using a "class-map" and attaching a "service policy" to the input interface.

2- It has also been possible to use PBR to direct traffic to a specific tunnel, by using a route-map and attaching a "policy route-map" to the input interface.

3- Unfortunately, the combination of both did not work (either traffic is not directed to the tunnel, or MPLS EXP is not set).

4- Normally, IOS maps the DSCP of an incoming packet to EXP. When PBR is used, this mapping is not performed.

We concluded that there is a problem with the PBR functionality. This was reported to Cisco, who acknowledged that we had found a bug in IOS with regard to the PBR functionality. Cisco raised bug CSCdx84421 for this problem, and design engineers are currently working on it. The bug status report can be found at: http://www.cisco.com/cgi-bin/Support/Bugtool/launch_bugtool.pl.


Limited Workaround: we worked around the problem by not using PBR. Instead, static routes can be used to direct traffic to specific tunnels. Static routes can be set at the granularity of destination host/network IP addresses. This is not the same granularity that the access-lists used by PBR provide, where the protocol, port numbers, etc. of the traffic destined to the tunnel can be specified. However, it solves our problem as long as each tunnel's traffic is destined for a distinct IP destination. When static routes are used, it is possible to set MPLS EXP at the input interface at the same time.
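A minimal sketch of this workaround in IOS configuration terms is given below; the interface name, access list, tunnel number and destination network are hypothetical, and the EXP value is arbitrary:

    ! Mark MPLS EXP for traffic entering the tunnel.
    class-map match-all TUNNEL1-TRAFFIC
     match access-group 101
    !
    policy-map MARK-EXP
     class TUNNEL1-TRAFFIC
      set mpls experimental 5
    !
    ! Attach the marking policy to the input interface; no
    ! "ip policy route-map" (PBR) is configured here.
    interface FastEthernet0/0
     service-policy input MARK-EXP
    !
    ! A static route replaces PBR: all traffic towards 10.1.1.0/24
    ! is directed into the tunnel.
    ip route 10.1.1.0 255.255.255.0 Tunnel1
    !
    access-list 101 permit ip any 10.1.1.0 0.0.0.255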


[Network diagram: the UK testbed topology, interconnecting the routers PE1, PE2, PE3, P1, P2 and P3, the PCs TEQ.0001 to TEQ.0012, the SmartBits chassis and its control PC, the ADTECH SX/12 data link simulators, the HP WAN/LAN Advisors, the HP 58503A GPS time receiver, and a bridge to the TRL internal network. Equipment legend: PE1, PE2, PE3, P3: Cisco 7204 routers (IOS Enterprise 12.2(5)); P1: Cisco 3640 router (IOS Enterprise 12.2(5)); P2: Cisco 2620 router (IOS Enterprise 12.2(5)); SX/12: ADTECH data link simulator; HP WAN/LAN Advisor: WAN/LAN protocol analysers; HP 58503A: GPS time and frequency reference receiver.]

Figure 8-1: TEQUILA UK Testbed.


8.2 FRENCH TESTBED SPECIFICATION

8.2.1 TEQUILA Network

[Network diagram: the French testbed topology, comprising the routers TE1 to TE12 in AS 65040, interconnected by the subnets 192.168.101.0/24 to 192.168.120.0/24 and 192.168.123.0/24, with external connectivity to AS 5511 (router R2) and AS 65001 (routers R1 and R2), and the support hosts Horus (.36), Carnot (.34; web server, Real server, MRTG), Bessel (.35) and OSIRIS (.37; DNS server). A mix of Cisco and Linux routers is used.]

Figure 8-2: French TEQUILA Testbed for IP TE experiments.

Note that in addition to the above testbed, a testbed comprising Cisco equipment was provided. In this testbed, the whole TEQUILA system was successfully integrated.

8.2.2 COPS Testing Set-up The system under test is the IP-TE sub-system, which integrates several PEPs, a PDP, and OSPF-TE-capable routers. The COPS entities comprise the following:

• PEP-Proxies: these entities connect to the related routers and apply the COPS decisions to them over Telnet connections.

• PDP: the policy server that provides TE decisions.

In the current tests, we focused on the IPInfusion (www.ipinfusion.com) TE-capable routers.

8.2.2.1 Brief description of the IPInfusion Solution

Name: ZebOS Advanced Routing Suite
Version: 3.0
Vendor: IPInfusion


ZebOS Advanced Routing Suite supports IETF-compliant IPv4 and IPv6 versions of OSPF, BGP, and RIP, as well as MPLS switching protocols. In particular, OSPFv2 and OSPFv3 protocol modules are offered for IPv4 and IPv6 support, respectively. Traffic engineering (TE) extensions, a Constrained Shortest Path First (CSPF) topology, and virtual routing are available for the OSPF protocol modules.

[Diagram: the OSPF command modes in IPInfusion: from router start-up, "enable" enters privileged exec mode; "configure terminal" enters configure mode, from which the configure router, configure interface, route-map and line modes are reached.]

Figure 8-3: The ospf command mode in IPINFUSION.

[Diagram: the ZebOS command modes in IPInfusion: from router start-up, "enable" enters privileged exec mode; "configure terminal" enters configure mode, from which "interface IFNAME" enters configure interface mode.]

Figure 8-4: The Zebos command mode in IPINFUSION.

8.2.2.2 Test Material The tools used to carry out the tests are:

• Network analysers: we used two analysers to visualise the COPS packets exchanged between the entities embedding the PEP and the PDP. Using two analysers makes it possible to compare the results obtained with each tool and to avoid bugs specific to a single analyser. Thus, we used:

• Ethereal: this software decodes the COPS protocol; the COPS fields supported by Ethereal are grouped in Table 2.

• Agilent 8.11: this analyser also supports COPS, and it can discover the nodes of the local network; it offers more details about the COPS messages than Ethereal does.

• Policy Server: the PDP (Policy Decision Point) used in this experimentation provides TE decisions.


8.2.2.3 Limitations The results of the tests concerning the support of the TE-OSPF extensions proved that the IPInfusion solution is not TE-capable. Thus, the impact of COPS on the stability of the network could not be measured.

8.2.2.4 COPS test-bed

Our test-bed is organized around twelve routers; nine of them implement the OSPF-TE (Open Shortest Path First Traffic Engineering) extensions.

[Diagram: the PDP (192.168.108.10) attached to the TEQUILA network of Figure 8-2.]

Figure 8-5: COPS test set-up.

8.2.2.5 Remarks and anomalies In this section, we list the remarks and anomalies that we did not resolve during the tests:

• There is no verification of the conformance of the configured bandwidth: for example, the reservable bandwidth can be greater than the available one.

• The configuration of TE information is made under both the OSPFD and ZEBRA processes; it would be preferable if this configuration could be done only under OSPFD.

• The COPS-PR architecture implemented does not provide real-time information about the state of the links; for example, a broken link, or the activation of the TE-OSPF process on an interface, is not reported.

8.2.3 Common Open Policy Service (COPS) This section summarises the COPS objects defined in RFC 2748.

[Diagram: a COPS message encapsulated in an IP packet: MAC header, IP header, TCP header, COPS header, data.]

Figure 8-6: The COPS packet encapsulated in an IP packet.

[Diagram: the COPS common header: Version, Reserved, S flag, OpCode, Client type, Message length, followed by the data.]

Figure 8-7: COPS Header.

Version (4 bits): COPS version number.

Reserved (3 bits): Must be set to 0.

S, Solicited Message Flag (1 bit): Set when the message is solicited by another COPS message.

Opcode (8 bits): all OpCode values defined in RFC 2748 (see Table 8-1).

Client type (16 bits): Identifies the policy client. Interpretation of all encapsulated objects is relative to the client type. Client types that set the most significant bit in the client-type field are enterprise specific (these are client-types 0x8000 - 0xFFFF). For Keep-Alive (KA) messages, the client-type in the header MUST always be set to 0, as the KA is used for connection verification.


Message length (32 bits): It contains the size of the message in bytes, which includes the standard COPS header and all encapsulated objects. Messages MUST be aligned on 4-byte intervals.

Data (Variable length): It contains one or more Objects (see below).

Opcode   Description
1        Request
2        Decision
3        Report State
4        Delete Request State
5        Synchronize State Req
6        Client-Open
7        Client-Accept
8        Client-Close
9        Keep-Alive
10       Synchronize Complete

Table 8-1: Values of the OpCode field.

[Diagram: a COPS object: Length, C-num, SubClass, followed by the data.]

Figure 8-8: A COPS object.

Object (Variable length): If the length in bytes does not fall on a 32-bit word boundary, padding MUST be added to the end of the object so that it is aligned to the next 32-bit boundary before the object can be sent on the wire. On the receiving side, a subsequent object boundary can be found by simply rounding up the previous stated object length to the next 32-bit boundary.
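As a worked example of this alignment rule (a sketch, with an arbitrary length value), the padded length is obtained by rounding the stated length up to the next multiple of 4:

    # Round a stated COPS object length up to the next 32-bit boundary.
    len=13
    padded=$(( (len + 3) & ~3 ))
    echo "$len bytes -> $padded bytes"   # prints: 13 bytes -> 16 bytes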

Length (16 bits): Size of the object in bytes.

Class (8 bits): Class of the object (see Table 8-2).

Data (Variable length): The content depends on the message.

Port: COPS uses TCP port 3288.

Class   Description
1       Handle
2       Context
3       In Interface
4       Out Interface
5       Reason code
6       Decision
7       LPDP Decision
8       Error
9       Client Specific Info
10      Keep-Alive Timer
11      PEP Identification
12      Report Type
13      PDP Redirect Address
14      Last PDP Address
15      Accounting Timer
16      Message Integrity

Table 8-2: COPS Classes.
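Since COPS runs over TCP port 3288, the PEP-PDP exchanges can be isolated with an ordinary capture filter and the resulting trace decoded by Ethereal's COPS dissector; a minimal sketch, assuming the capture interface is eth0:

    # Capture all COPS traffic on eth0 into a trace file for Ethereal.
    tcpdump -i eth0 -s 0 -w cops-trace.cap tcp port 3288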


8.3 French IFT Testbed Specification

8.3.1 Hardware architecture The IFT ATM board can be regarded as a simplex 622 Mbit/s ATM switch with dynamic set-up. It has to be combined with a real ATM switch to deploy an MPLS network. The platform uses an IDT 77950 switch with a 1.24 Gbit/s core and eight 155 Mbit/s interfaces. This switch provides four levels of priority and ATM cell reassembly capability. A priority queue must be empty before a lower one is serviced, and a round-robin algorithm is applied between the individual flows buffered in a given queue. The switch can be configured by application software running on Windows (http://www.idt.com/products/pages/SwitchStar.html).

The small number of IFT board prototypes imposes a server configuration, meaning that the same simplex board is used to analyse the incoming frames of all interfaces of a particular MPLS node. This has the following consequence: symmetric paths share the theoretical bandwidth of 155 Mbit/s. Note that this limit comes from the ATM switch, since the current IFT limit is 622 Mbit/s.

The switch is configured for the 3 Linux boxes, which act as 2 Label Edge Routers (LER) and 1 Label Switch Router (LSR), and which are able to set up two different kinds of paths:

• LER to LER

• LER to LER via LSR

Figure 8-9: Overview of IFT hardware architecture.

Since the IFT board cannot modify the length of frames, the MPLS shim header is written over the LLC/SNAP field at the ingress node and then restored at the egress node by the hardware.

8.3.2 Software architecture

8.3.2.1 Pre-requisites

8.3.2.1.1 RSVP-TE for DiffServ over MPLS

The current version is 0.61-rc3. Software and documentation can be downloaded at http://ds_mpls.atlantis.rug.ac.be.

Since March 2002, netlink sockets are no longer used in this software. An adaptation had to be made to generate the required messages by duplicating the ioctl operations.

8.3.2.1.2 ATM on Linux

The current version is 0.78. Software and documentation can be downloaded at http://linux-atm.sourceforge.net/. Classical IP is used to convey IP datagrams over ATM.

8.3.2.1.3 Iptables2

This library provides a means for user-space programs to be aware of all modifications to the iptables filtering administration, via netlink socket messages. The current version is 1.0.0. Software and documentation can be downloaded at: ftp://ftp.linux-sna.org/pub/netfilter/iptables2-1.0.0.tar.gz.

8.3.2.1.4 IFT low-level software

Software includes IFT PCI board driver and trie memory library. The current version is 2.2.


8.3.2.2 IFT event monitor: iftkmon

The IFT event monitor is an autonomous user program that listens to the netlink socket messages sent by the kernel as an echo of every networking-related modification. These events are then analysed and mapped to the IFT TRIE memory. Two channels are used:

1. Standard Linux netlink messages linked to IP routing events, and MPLS netlink messages generated by the modified RSVP-TE software or the MPLS administration tool (mplsadm).

2. Netfilter netlink messages linked to iptables administration.

As we cannot rely on the Ethertype field to determine the IP or MPLS nature of the incoming data of an interface (due to the MPLS/ATM standard), the software needs configuration files. This restriction has been extended to outgoing frames, and it leads the monitor to act exclusively as either a LER or an LSR. If all of the managed interfaces in the configuration file /etc/iftkint.conf are tagged LSR, the monitor processes netlink messages as a Label Switch Router; otherwise, as a Label Edge Router.

To spare hardware boards, the software was designed to work on logical IFTs, meaning that several monitor instances on different Linux boxes can drive a single IFT hardware board as long as the ATM VCs are different. This leads to a second configuration file, /etc/iftknet.conf, which describes the individual network links and determines the VC set attribution. The file is required even if this capability is not used, which is the case in the current testbed.

Figure 8-10: Overview of TEQUILA IFT software architecture.

The same monitor runs at start-up on each node of the IFT MPLS network, reading configuration files reflecting either the LSR or the LER role. Trees are written in the IFT memory along three models, which gather the different frame fields to analyse, and their order, before giving a final verdict. Models are always filled with a joker pattern if one or several field values are missing in the netlink messages. The joker pattern is a nibble range [0-F], meaning that all field values will match regardless of the real size. Initialisation consists of writing all models with joker patterns and a final status (VC) leading to the Linux box.

Field                        Ingress model        Core model     Egress model
ATM VC                       1                    1              1
MPLS LABEL                   -                    2              2
MPLS EXP                     -                    3              3
SNAP TYPE                    2                    -              -
IPV4 VERSion                 3                    -              4
IPV4 IHL                     4                    -              5
IPV4 DSCP                    5                    -              6
IPV4 PROTocol                6                    -              7
IPV4 SA                      7                    -              8
IPV4 DA                      8                    -              9
IPV4 TTL                     9                    -              10
IPV4 TCP/UDP source port     10                   -              -
IPV4 TCP/UDP dest port       11                   -              -
Hardware action end status   MPLS encapsulation   LABEL switch   MPLS decapsulation

Table 8-3: Tree fields.

The ingress model is filled from the firewall iptables messages. On a Linux box, ordered rules are examined sequentially and the first complete match stops the iteration. In IFT trees, the most precise match of each field determines the next step. This has to be kept in mind when entering overlapping traffic mapping rules. Label values, which are part of the hardware status, are found by successive readings of information in the /proc/net/mpls/ system directory.

The core model is filled with MPLS messages, which are duplicated from the system calls of the RSVP-TE software. A connection for each priority is established when a new cross-connect order is received.

The egress model is filled at initialisation time with routing messages.

Unlike Linux paths, IFT paths exist only when traffic is mapped on them.

Initialisation of iftkmon: no action at the ingress and core nodes; the egress node creates an egress model tree in IFT for each IP/Ethernet interface, using standard routing netlink messages.

Path creation/destruction with the rtest, rtest2 and rapi_recvauto tools: no action at the ingress and egress nodes; the core node creates/deletes a core model tree in IFT, using MPLS netlink messages.

Traffic mapping/unmapping on the created path with the tunnel tool: the ingress node creates/deletes an ingress model tree in IFT, using netfilter netlink messages; no action at the core and egress nodes.

Table 8-4: Set-up phases of an IFT LSP.

At ingress nodes, the EXP field of the outgoing MPLS frame is a direct mapping of the DSCP field of the incoming frame. This mapping uses the four IDT switch priority levels and is an arbitrary choice.

[Table: the mapping of each DSCP (BE, EF, AF11 to AF43) onto one of the four switch priority levels: V(ery) = 3, H(igh) = 2, M(edium) = 1, L(ow) = 0.]

Table 8-5: DSCP to EXP mapping

For each written tree, a packet counter is created. Sending a USR2 signal to the monitor dumps the counter statistics to a log file; sending a USR1 signal dumps the monitor's internal information.

LSR counter example:

n°=00000008 cpt=00000678 [0094/16][000a80/20][20/3]

means that counter n°8 found 678 packets matching the following core model values:

INPUT VC: 0x94 on 16 bits
MPLS label: 0x000a80 on 20 bits
MPLS EXP: 0x20 on 3 bits


Command syntax:

iftkmon [–d] [–v] [–c] [-l <label>]

-c: enables packet counters

-d: runs the monitor as a daemon

-v: shows more details of each tree creation/destruction operation

-l: forces output <label> to be used at the ingress node, for testing purposes

The monitor produces 3 log files in the directory /var/log/ift:

• iftklog.log: main trace file, always enabled

• iftkusr1.log: internal debugging information produced by the USR1 signal

• iftkusr2.log: counter information produced by the USR2 signal
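For illustration, a typical session combining the options and signals described above might look as follows (a sketch, assuming iftkmon is on the PATH and a single instance is running):

    # Start the monitor as a daemon with packet counters enabled.
    iftkmon -d -c

    # Dump the counter statistics to /var/log/ift/iftkusr2.log.
    kill -USR2 `pidof iftkmon`

    # Dump internal debugging information to /var/log/ift/iftkusr1.log.
    kill -USR1 `pidof iftkmon`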

8.3.3 Experimental Platform Set-up

Figure 8-11: Overview of the experimental IFT testbed.

Two Unix hosts can communicate via:

• an Ethernet-based MPLS network, or

• an ATM IFT-based MPLS network,

depending on their forwarding configuration.

• MPLS router nodes

Linux kernel 2.4.17 (modified), with ATM, RSVP-TE and IFT software installed. All nodes can act as IP routers, and the routed daemon is enabled.

o LER(1): thebe


Athlon 900 MHz / 256M RAM / 4+1 Eth 100Mbit/s NIC / 1 ATM 155Mbit/s NIC / ATM IFT PCI board

o LSR: jupiter

Athlon 900 MHz / 256M RAM / 4+1 Eth 100Mbit/s NIC / 1 ATM 155Mbit/s NIC / ATM IFT PCI board

o LER(2): titania

Pentium III 866MHz / 256M RAM / 4+1 Eth 100Mbit/s NIC / 1 ATM 155Mbit/s NIC / ATM IFT PCI board

• IDT 77950 switch

The switch has been configured:

o To enable path set-up from LER(1) to LER(2), directly or via the LSR

o To receive additional ATM traffic from the SmartBits

o To carry all IFT errors to one Linux box (titania) via its ATM interface (low VCs, from 0 to 0xF)

o With each level of priority served by a queue of 28 ATM cells

• HostA: europe

o Pentium II / 128M RAM / 2 Eth 100Mbit/s NIC / 1 ATM 155Mbit/s NIC

o Linux kernel 2.4.17 modified, ATM software installed

• HostB: helios

o Sun UltraSparc 5 / 640M RAM / 1 Eth 100Mbit/s NIC / 1 Eth 10Mbit/s NIC / 1 ATM 155Mbit/s NIC

o Solaris 5.7

• SmartBits 2000

Main board firmware version is 6.63.0004

o ATM 155Mbit/s board AT-9155S (firmware version 3.02.002)

This generator is used to inject additional ATM traffic into the IFT network and to measure the packet loss of the IFT network.

o Ethernet 100Mbit/s boards ML-7710 (same firmware version as the main board)

This generator is used to measure the packet loss of the Ethernet network.

8.3.4 Measurement tools

8.3.4.1 SmartBits 2000

SmartBits 2000 is a network performance analysis system from Spirent. The testbed generator is equipped with one ATM 155 Mbit/s board and two Ethernet 100 Mbit/s boards. Application software eases the measurement of metrics, depending on the board type.

• Ethernet

SmartApplication V2.22 software will measure packet loss.

• ATM

SmartWindow V6.51 software will measure packet loss.

The ATM SmartBits also generates four types of congestion traffic at the core level of the IFT network:

• Low priority traffic, injected at the lowest switch priority level 0 with DSCP set to 0x00

• Medium priority traffic, injected at the next switch priority level 1 with DSCP set to 0x0C


• High priority traffic, injected at the next switch priority level 2 with DSCP set to 0x0A

• Very high priority traffic, injected at the highest switch priority level 3 with DSCP set to 0x2E

To simulate traffic coming from another IFT LSR node, UDP frames are sent with the appropriate MPLS shim header overwritten in the LLC/SNAP fields, depending on the path configuration.

Figure 8-12: Original and modified UDP data frame example.

8.3.4.2 Netperf Hewlett-Packard freely provides the network benchmarking tool called "netperf". This client/server program works on Linux and Solaris, and it includes scripts for throughput measurements. The server side is installed on the Solaris host (hostB) and the client side on the Linux host (hostA). For QoS testing, we have to generate IP flows with different DSCP markings. This is done by marking the outgoing packets from the client host towards different destinations with iptables. The destinations are aliases of the same hostB server interface.

/usr/local/sbin/iptables -A OUTPUT -t mangle -d <dest> -j DSCP --set-dscp <DSCP>

Five instances of the netperf client are launched simultaneously for 20 s, inside a loop over different packet sizes. The TCP_STREAM script is used by default, and the receive socket size is set to 65536 bytes for the local host and to twice that value for the remote host.

/opt/netperf/netperf -l <delay> -H <dest> -- -m <packet size> -s <local sock size> -S <rem sock size>

Individual netperf results are then gathered in a spreadsheet file.
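The measurement procedure can be expressed as a small driver script; this is a sketch only, where the packet sizes and the destination aliases dest1 to dest5 are hypothetical (each alias is assumed to be marked with a distinct DSCP by the iptables rule shown above):

    #!/bin/sh
    # Run five parallel netperf clients for 20 s at each packet size.
    for size in 64 256 512 1024 1460; do
        for dest in dest1 dest2 dest3 dest4 dest5; do
            /opt/netperf/netperf -l 20 -H $dest -- \
                -m $size -s 65536 -S 131072 >> result-$dest.txt &
        done
        wait   # let all five instances finish before the next size
    done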


8.3.5 Configuration for Integration Tests

Figure 8-13: Overview of IFT integration tests configuration.

A path is set up from LER(1) to LER(2) via the LSR, together with the symmetric reverse path, so that a ping command can be sent from hostA to hostB. These operations use the RSVP-TE Linux tools developed by TEQUILA.

8.3.6 Configuration for Performance Tests The performance tests measure the throughput of the IFT network under several conditions of traffic load. These tests are preceded by calibration tests. TCP traffic is sent over asymmetric paths. The RSVP-TE and IFT software is running on every network node.

Figure 8-14: Overview of IFT performance tests configuration.

On the LER(1) side:

38. Path 500 from LER(1) to LER(2) via the LSR is set up as Shared Explicit (SE) with the rtest2 tool:
[root#] /usr/local/bin/rtest2 -f <datafile> >/dev/null 2>>/dev/null &
[root#] cat <datafile>

11.102.5.153 11.101.5.175 500 0 11.101.5.130:0 1

On the LER(2) side:

39. Path 503 from LER(2) directly to LER(1) is set up with the rtest2 tool:
[root#] /usr/local/bin/rtest2 -f <datafile> >/dev/null 2>>/dev/null &
[root#] cat <datafile>

11.103.5.175 11.103.5.153 503 0 0 1

Check the paths with the rstat tool on each node:

LER(1)

[root#] /usr/local/bin/rstat Iface Session Sender Neighbour


eth1 SE 11.102.5.153/500 11.101.5.175 !!too long 11.102.5.153 0/0 168/0 eth3 SE 11.103.5.153/502 11.103.5.175 !!too long 11.103.5.153 0/0 368/0 API SE 11.101.5.175/501 11.102.5.153 !!too long API 168/0 0/0 API SE 11.103.5.175/503 11.103.5.153 !!too long API 368/0 0/0

LSR [root#] /usr/local/bin/rstat Iface Session Sender Neighbour eth2 SE 11.102.5.153/500 11.101.5.175 !!too long 11.102.5.153 168/0 268/0 eth1 SE 11.101.5.175/501 11.102.5.153 !!too long 11.101.5.175 268/0 168/0

LER(2)

[root#] /usr/local/bin/rstat Iface Session Sender Neighbour API SE 11.102.5.153/500 11.101.5.175 !!too long API 268/0 0/0 API SE 11.103.5.153/502 11.103.5.175 !!too long API 368/0 0/0 eth2 SE 11.101.5.175/501 11.102.5.153 !!too long 11.101.5.175 0/0 268/0 eth3 SE 11.103.5.175/503 11.103.5.153 !!too long 11.103.5.175 0/0 368/0

8.4 Greek Development Platform Specification

8.4.1 PCs and Operating Systems A number of PCs are used to host the TEQUILA components. Two types of PCs were used: server PCs, hosting the database server and the related functionality of the TEQUILA system, and client PCs, hosting the rest of the TEQUILA components residing on a CORBA-based distributed processing environment.

The following configurations were used for the server PCs:

• Godzilla

CPU: AMD Athlon 1.2 GHz, RAM: 1.5 GB, Disk: 40 GB ATA100, running Linux Suse 7.2 on kernel 2.4.19

• Diego

CPU: 2 x AMD MP 2000+, RAM: 2 GB DDR 266, Disk: 120 GB ATA100 (8 MB cache), running Linux Suse 8.1 on kernel 2.4.19

• Xeonix

CPU: 2 x Intel Xeon 2.4 GHz (hyper-threading on), RAM: 4 GB, Disk: RAID1 2 x 36 GB 15k RPM, RAID10 4 x 36 GB 15k RPM, running Linux Suse 8.0 on kernel 2.4.19

• Athina (Sun Enterprise 3500)

CPU: 4 x 400 MHz, RAM: 2 GB, Disk: 4 x 18 GB internal, external disk storedge T3 9 x 18 GB, running Solaris 8

The client PCs have the following configuration: CPU: AMD XP 1800+, RAM: 1 GB DDR333, Disk: 80 GB ATA133, running Windows 2000 Pro.

8.4.2 Software Packages

8.4.2.1 Database The Oracle database (9i, releases 1 and 2) was used for data storage purposes. It should be noted that major parts of the subscription processing and off-line TE functions also run on the database server, to exploit the advanced data-warehouse capabilities of Oracle.

8.4.2.2 Web-server

• Apache 1.3 was used to implement the subscription and the statistics web-servers.

8.4.2.3 ORBacus ORB / Notification Service Similar to section 8.1.3.1.


8.4.2.4 Programming Tools

• PHP4 was used for creating the Web-based interface for accessing and viewing the results of the off-line TE functions.

• GD and libpng were used for on-line graphical representation.

• C/C++ development on gcc 2.95.3.

• Java development on JDK 1.3.1.

8.4.3 Traffic and Network Emulation Tool A traffic and network emulation platform has been designed and implemented to carry out the envisaged Service Management performance assessment tests. It serves to set up the experiment environment and traffic demand conditions, and to emulate the network behaviour during the experiment execution, according to the behaviour of the admission control scheme of the TEQUILA system.

The architecture of the developed platform is shown in figure 8-15.

[Diagram: the emulation platform architecture, comprising the Network Topology and Customer Generation function, the subscription generation (BSRGen), the Traffic Demand Event Generation function, the Emulation Engine with its Notification function (TT loads, state and invocation events), the Injected Network Traffic Calculation function (per LSP), the Network Performance Evaluation function, and RSIM, arranged around the Subscription, Invocation, Traffic Emulation and Network repositories (the latter holding the network topology, PHBs, LSPs and flow mappings). The experiment set-up functions populate the repositories; the experiment execution functions drive the emulation.]

Figure 8-15: Traffic and Network Emulation Platform Architecture.

The Network Topology and Customer Generation function generates arbitrarily large network topologies and potential customer populations, and stores its results in the corresponding repositories.

The subscription generation is performed by the BSRGen testing tool, which generates the subscription requests to be negotiated with service management during the experiment. The generation of subscription requests is based on the available network topology and customer population, and is regulated by appropriate intensity parameters per service type.

The Traffic Demand Event Generation function generates the invocation and offered load events throughout the entire experiment, for a given set of subscriptions and based on intensity parameters per service type.

The Emulation Engine drives the experiment execution. It operates in discrete time units. The workflow per time unit involves the following steps:


• RSIM instances are fed with the invocation events through the Notification function. When an invocation request is accepted, RSIM updates the traffic conditioning settings in the invocation repository accordingly.

• The offered load events are fed to the Injected Network Traffic Calculation function, which, using the RSIM traffic conditioning settings, calculates the traffic actually injected into each established LSP.

• Based on the injected traffic, the Network Performance Evaluation function deduces the performance state per LSP and generates the appropriate events, in terms of RED/GREEN flags.

• The performance events, together with the load update events generated at the specified Reporting Period, are fed to the RSIM instances by the Notification function. In the case of RSIM algorithm state transitions, the resulting actions are propagated to the established traffic conditioners.

The supported service types and the corresponding traffic generation configuration parameters are depicted in Table 8-6. Subscription generation parameters are given as input to BSRGen, and traffic demand generation parameters to the traffic demand generation function of the emulation platform.

The yellow-coloured cells denote the configuration parameters, while the bright blue-coloured cells denote the parameters calculated so as to generate traffic demand at the target intensity.

The target intensity is expressed through the Inflation Factor parameter. Inflation in this case refers to the factor by which the demand of the current subscription population is inflated relative to the anticipated demand for the same population, as calculated by the TEQUILA demand derivation and aggregation functions.

It should be noted that, although the Generalised Exponential, Pareto and Constant distributions are statically associated with invocation and load generation per service type, there is no such limitation in the traffic demand generation function.

[Table: service types and their traffic generation configuration parameters. Rows: Permanent VPN (OA: EF; load distribution: Pareto), Flexible VPN (OA: AF3; invocation distribution: Gen Exp; load distribution: Constant), Flexible Peer-to-Peer (OA: AF1; invocation distribution: Gen Exp; load distribution: Pareto), and On-Demand Peer-to-Peer (OA: AF1; invocation distribution: Gen Exp with min 240 and max 600; load distribution: Gen Exp with min 15 and max 45). The subscription generation columns cover the number of subscriptions, percentage, inflation factor, and upstream/downstream rates; the traffic demand generation columns cover, for invocation, the distribution, mean interarrival time, mean holding time and simultaneous users, and, for load, the distribution, mean ON time, mean OFF time and variance.]

Table 8-6: Service Types and Configuration Parameters
