A Fair and Dynamic Load Balancing Mechanism
F. Larroca and J.L. Rougier
International Workshop on Traffic Management and Traffic Engineering for the Future Internet
Porto, Portugal, 11-12 December, 2008
Agenda
Introduction
Utility Maximization Load-Balancing
Distributed Algorithm
Simulations
• Packet-Level Simulations
• Fluid-Level Comparison
Conclusions
F. Larroca and J.L. Rougier FITRAMEN 08, Dec. 2008
Introduction
Network Convergence:
• Traffic increasingly unpredictable and dynamic
Classic TE techniques (e.g. over-provisioning) inadequate:
• Ever-increasing access rates
• New emerging architectures with low link capacities
Possible answer: Dynamic Load-Balancing
• Origin-Destination (OD) pairs with several paths: how to distribute their traffic?
• Paths configured a priori; the distribution depends on the current TM and network conditions
Introduction
The network operator cares about the performance obtained by OD pairs:
• Why not state the problem in their terms?
Analogy with Congestion Control (TCP):
• End-hosts = OD pairs
• Rate = OD performance indicator
Differences:
• Decision variable: the portion of traffic sent through each path (the total traffic is given)
• Much larger time-scale
Introduction
Previous proposals:
• Define a link-cost function f_l for each link l = 1..L
• Minimize the total network cost Σ_l f_l
Limitations:
• Indirect way of proceeding
• Cannot prioritize an OD pair or enforce fairness
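To make the cost-based approach concrete, here is a minimal sketch (made-up loads and capacities; the M/M/1 delay is one common choice of link cost, the one MATE uses) that evaluates the total network cost to be minimized:

```python
# Sketch of the "previous proposals" approach: a per-link cost f_l and the
# total network cost sum_l f_l to minimize. The M/M/1 delay is used as f_l;
# the loads and capacities below are made-up numbers.

def mm1_cost(load, capacity):
    """M/M/1-style link cost; blows up as the link approaches saturation."""
    if load >= capacity:
        return float("inf")
    return load / (capacity - load)

links = [(6.0, 10.0), (3.0, 10.0), (9.0, 10.0)]   # (load rho_l, capacity c_l)
total_cost = sum(mm1_cost(rho, c) for rho, c in links)
```

Note how the heavily loaded third link dominates the sum: the cost says nothing about which OD pair suffers, which is exactly the limitation pointed out above.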
Example: [figure]
Utility Maximization Load-Balancing
Define a single performance indicator per OD pair:
• u_s(d): performance perceived by OD pair s when the traffic distribution is d
"Distribute" u_s(d) among OD pairs so as to maximize the total Utility (à la Congestion Control)
Notation:
• d_s = total demand of OD pair s (given)
• d_si = traffic sent through path i of OD pair s (Σ_i d_si = d_s)
• d = [d_11 d_12 .. d_S1 .. d_SnS]^T
How to define u_s(d)?
\[
\max_{d}\; \sum_{s=1}^{S} U_s\big(u_s(d)\big)
\]
Utility Maximization Load-Balancing
Our choice for u_s(d): the path's mean Available Bandwidth (ABW)
Assumptions:
• The majority of traffic is elastic (i.e. TCP)
• The path choice already took propagation delay into account
Advantages:
• The mean ABW is a rough approximation of the rate obtained by TCP flows (ABW is their most important indicator)
• Sudden increases in demand may be accommodated
\[
u_s(d) = \sum_i p_{si}\, ABW_{si},
\qquad
ABW_{si} = \min_{l \in s_i} ABW_l
\]
(p_{si}: fraction of OD pair s's traffic sent on path i; s_i: the set of links of that path)
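As an illustration (made-up link names and numbers), the sketch below computes u_s(d) for one OD pair with two paths from per-link ABW values:

```python
# Illustrative sketch: computing u_s(d), the mean available bandwidth (ABW)
# of an OD pair, from per-link ABW values. Link names and values are made up.

def path_abw(path_links, link_abw):
    """ABW of a path = ABW of its bottleneck (minimum over its links)."""
    return min(link_abw[l] for l in path_links)

def mean_abw(paths, probs, link_abw):
    """u_s(d) = sum_i p_si * ABW_si, with p_si the share of traffic on path i."""
    return sum(p * path_abw(links, link_abw) for p, links in zip(probs, paths))

link_abw = {"l1": 40.0, "l2": 10.0, "l3": 25.0}   # Mb/s left on each link
paths = [["l1", "l2"], ["l3"]]                     # two paths of one OD pair
probs = [0.3, 0.7]                                 # traffic split d_si / d_s

u_s = mean_abw(paths, probs, link_abw)             # 0.3*10 + 0.7*25 = 20.5
```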
Utility Maximization Load-Balancing
Final version of the problem:
If ABW_si were the rate obtained by a flow, the problem would be very similar to Multi-Path TCP:
• By changing only the ingress routers, users may be regarded as if they used MP-TCP: improved performance and more supported demand
\[
\max_{d}\; \sum_{s=1}^{S} U_s\!\left( \sum_{i=1}^{n_s} p_{si}\, ABW_{si} \right)
\quad \text{s.t.} \quad
Rd \le c,
\qquad d_{si} \ge 0,
\qquad \sum_{i=1}^{n_s} d_{si} = d_s
\]
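To make the formulation concrete, here is a brute-force sketch on a hypothetical instance (not from the slides): one OD pair with demand 10, two disjoint single-link paths of capacities 20 and 15, and U_s = log. It searches the split that maximizes the utility:

```python
import math

# Hypothetical instance: one OD pair, demand d_s = 10, two disjoint
# single-link paths of capacities 20 and 15, utility U_s = log.
# Grid-search the split p = d_s1 / d_s that maximizes U_s(u_s(d)).

d_s, c1, c2 = 10.0, 20.0, 15.0

def u_of_split(p):
    """u_s(d) = p*ABW_1 + (1-p)*ABW_2, with ABW_i = c_i - d_si."""
    d1, d2 = p * d_s, (1 - p) * d_s
    return p * (c1 - d1) + (1 - p) * (c2 - d2)

best_p = max((i / 200 for i in range(201)),
             key=lambda p: math.log(u_of_split(p)))
# best_p = 0.625: the larger-capacity path gets more traffic, but not all of
# it, since pushing traffic onto a path also eats its available bandwidth.
```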
Distributed Algorithm
The optimization problem is not convex
• However, it is not too "unconvex"
The distributed algorithm solves the dual problem and results in a good approximation
Based on the Arrow-Hurwicz method: greedy on path utility (PU) minus path cost (PC)
\[
PU_{si} = U_s'(\hat u_s)\, \widehat{ABW}_{si} - PC_{si},
\qquad
PC_{si} = \sum_{l \in s_i} \lambda_l
\]
where \(\hat u_s\) and \(\widehat{ABW}_{si}\) are measured estimates, and
\[
\lambda_l = \sum_{s}\sum_{i:\, l \in s_i}
\begin{cases}
d_{si}\, U_s'(\hat u_s) & \text{if } l = \arg\min_{l' \in s_i} ABW_{l'}\\
0 & \text{otherwise}
\end{cases}
\]
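A toy sketch of the greedy idea (one OD pair, two disjoint single-link paths with made-up capacities, U_s taken as the identity; an illustration in the spirit of the algorithm, not the authors' exact update): traffic is iteratively shifted toward the path whose marginal gain is higher.

```python
# Gradient-style greedy sketch: shift traffic toward the path with the
# higher marginal "path utility minus path cost". One OD pair with demand
# d = 10 over two single-link paths of capacities 20 and 15 (made up).

def solve_split(d=10.0, c=(20.0, 15.0), step=0.05, iters=2000):
    d1 = d / 2.0                        # start with an even split
    for _ in range(iters):
        d2 = d - d1
        # marginal gain of sending one more unit on each path:
        # d/dd_i [ (d_i/d) * (c_i - d_i) ] = (c_i - 2*d_i) / d
        g1 = (c[0] - 2 * d1) / d
        g2 = (c[1] - 2 * d2) / d
        d1 += step * (g1 - g2)          # greedy: move toward the better path
        d1 = min(max(d1, 0.0), d)       # keep 0 <= d_s1 <= d_s
    return d1

d1 = solve_split()                      # converges near d1 = 6.25
```

The fixed point equalizes the two marginal gains, the same balance condition a congestion-control algorithm reaches on rates.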
Packet-Level Simulations
A simple example: all links have the same capacity and probabilities are updated every 50 seconds
Fluid-Level Simulations

Comparison with two previous proposals, on two real topologies and TMs:
• MATE: minimize the total M/M/1 delay, \(\min \sum_l 1/ABW_l\)
• TeXCP: greedy on the path's maximum utilization
Two performance indicators:
• Mean ABW (u_s) (weighted mean, 10% quantile and minimum)
• Link Utilization (mean, 90% quantile and maximum)
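As an illustration of how such summary indicators can be computed (made-up sample values; a simple nearest-rank quantile is used):

```python
# Sketch of the summary indicators used in the comparison: demand-weighted
# mean, 10% quantile and minimum of the per-OD-pair mean ABW u_s.
# The sample values below are made up.

def quantile(values, q):
    """Simple empirical quantile (nearest-rank) over a list of values."""
    v = sorted(values)
    idx = min(int(q * len(v)), len(v) - 1)
    return v[idx]

abw = [20.5, 12.0, 35.0, 8.5, 15.0]     # mean ABW u_s of each OD pair
demand = [10.0, 5.0, 2.0, 8.0, 4.0]     # demand d_s of each OD pair (weights)

weighted_mean = sum(u * d for u, d in zip(abw, demand)) / sum(demand)
q10 = quantile(abw, 0.10)               # 10% quantile: near-worst OD pairs
worst = min(abw)                        # worst-case OD pair
```

Weighting by demand matters: an OD pair carrying most of the traffic should count for more than a tiny one, while the quantile and minimum expose the worst-treated pairs.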
Fluid-Level Simulations – Abilene

[Figures: Mean ABW (u_s) and Link Utilization, comparing UM/MATE, UM/TeXCP, TeXCP - MATE and TeXCP - UM]
Fluid-Level Simulations – Géant

[Figures: Mean ABW (u_s) and Link Utilization, comparing UM/MATE, UM/TeXCP, TeXCP - MATE and TeXCP - UM]
Conclusions
The performance perceived by OD pairs is always better with UM than with MATE or TeXCP:
• MATE: relatively small differences in the mean, but significant ones in the worst case
• TeXCP: more significant differences
Link utilization results for TeXCP and UM are very similar:
• MATE: although similar in mean and quantile, the maximum link utilization may increase significantly
Future Work:
• Stability
• Simpler methods or objective functions that obtain similar results