1Texas A&M University
Congestion Control Algorithms of TCPin Emerging Networks
Sumitha Bhandarkar Under the Guidance of Dr. A. L. Narasimha Reddy
September 16, 2005
2Texas A&M University
Why TCP Congestion Control ?
• Designed in the early ’80s
  – Still the most predominant protocol on the net
• Continuously evolves
  – IETF is developing an RFC to keep track of TCP changes !
• Has “issues” in emerging networks
  – We aim to identify problems and propose solutions for TCP in high-speed networks
Motivation
3Texas A&M University
• Link speeds have increased dramatically
  – 270 TB collected by PHENIX (Pioneering High Energy Nuclear Interaction eXperiment)
  – Data transferred from Brookhaven National Laboratory, NY to the RIKEN research center, Tokyo
  – Typical rate 250 Mbps, peak rate 600 Mbps
  – OC48 (2.4 Gbps) from Brookhaven to ESnet, transpacific line (10 Gbps) served by SINET to Japan
  – Used GridFTP (parallel connections with data striping)

Source : CERN Courier, Vol. 45, No. 7
Motivation
4Texas A&M University
• Historically, high-speed links were present only at the core
  – High levels of multiplexing (low per-flow rates)
  – New architectures for high-speed routers
• Now, high-speed links are available for transfers between two endpoints
  – Low levels of multiplexing (high per-flow rates)
Motivation
5Texas A&M University
Outline
• TCP on high-speed links with low multiplexing
  – Design, analysis and evaluation of aggressive probing mechanism (LTCP)
    • Impact of high RTT
    • Impact on router buffers and loss rates (LTCP-RCS)
• TCP on high-speed links with high multiplexing
  – Impact of packet reordering (TCP-DCR)
• Future Work
6Texas A&M University
• TCP on high-speed links with low multiplexing
  – Design, analysis and evaluation of aggressive probing mechanism (LTCP)
    • Impact of high RTT
    • Impact on router buffers and loss rates (LTCP-RCS)
• TCP on high-speed links with high multiplexing
  – Impact of packet reordering (TCP-DCR)
• Future Work
Where We are ...
7Texas A&M University
TCP in High-speed Networks
TCP’s one per RTT increase does not scale well
* For RTT = 100ms, Packet Size = 1500 Byte
Motivation
*Source : RFC 3649
8Texas A&M University
• Design Constraints
  – More efficient link utilization
  – Fairness among flows of similar RTT
  – RTT unfairness no worse than TCP
  – Retain AIMD behavior
TCP in High-speed Networks
9Texas A&M University
• Layered congestion control
  – Borrow ideas from layered video transmission
  – Increase layers if no losses for an extended period
  – Per-RTT window increase is more aggressive at higher layers
TCP in High-speed Networks
10Texas A&M University
• Layering
  – Start layering when window > WT
  – Associate each layer K with a step size δK
  – When the window has increased by δK since the previous addition of a layer, increment the number of layers
  – For each layer K, increase the window by K per RTT

Number of layers is determined dynamically based on current network conditions.
LTCP Concepts (Cont.)
TCP in High-speed Networks
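The layering rule above can be sketched in a few lines of Python. This is a hedged illustration only: the function name and the step-size form `delta` are invented for this example, not taken from the LTCP implementation.

```python
def update_layers(window, layer, entry_window, delta):
    """Add a layer each time the window has grown by the current layer's
    step size delta(K) since the last layer was added."""
    while window >= entry_window + delta(layer):
        entry_window += delta(layer)  # remember where the new layer began
        layer += 1
    return layer, entry_window

# Illustrative step size that grows with the layer number (assumed form).
delta = lambda k: 50 * k

# Window grew from 100 to 200 while at layer 2: one new layer is added.
layer, entry = update_layers(window=200, layer=2, entry_window=100, delta=delta)
```

Because the step size grows with the layer number, a flow that is already large must grow its window further before gaining another layer, which is what keeps the scheme convergent.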
11Texas A&M University
[Figure: layer number (K-1, K, K+1) vs. window; WK is the minimum window corresponding to layer K.]

Number of layers = K when WK ≤ W < WK+1
LTCP Concepts
TCP in High-speed Networks
12Texas A&M University
Constraint 1 :
Rate of increase for a flow at a higher layer should be lower than for a flow at a lower layer
Framework
TCP in High-speed Networks
[Figure: layer number (K-1 through K+2) vs. window, with minimum windows WK-1 through WK+2.]

Number of layers = K when WK ≤ W < WK+1

(δK1 > δK2, for all K1 > K2 ≥ 2)
13Texas A&M University
Constraint 2 :
After a loss, the recovery time for a larger flow should be longer than that of a smaller flow
Framework
TCP in High-speed Networks
(δK1 > δK2, for all K1 > K2 ≥ 2)

[Figure: after a loss, flow 1 recovers window reduction WR1 at slope K1′ over time T1; flow 2 recovers WR2 at slope K2′ over time T2.]
14Texas A&M University
• Decrease behavior :
  – Multiplicative decrease
• Increase behavior :
  – Additive increase with additive factor = layer number

    W = W + K/W
Design Choice
TCP in High-speed Networks
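The two rules can be written as a minimal sketch, in the per-ACK form of the per-RTT increase; β = 0.15 is the decrease factor chosen later in the talk, and the function names are illustrative:

```python
def on_ack(window, layer):
    # Additive increase: K per RTT at layer K, i.e. K/W per ACK.
    return window + layer / window

def on_loss(window, beta=0.15):
    # Multiplicative decrease by a fraction beta of the window.
    return window * (1 - beta)
```

At layer 1 this degenerates to TCP's standard one-packet-per-RTT increase, which is why LTCP remains AIMD.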
15Texas A&M University
• Analyze two flows operating at adjacent layers
  – Should hold for other cases through induction
• Ensure constraints are satisfied for the worst case
  – Should work in other cases
• After a loss, drop at most one layer
  – Ensures smooth layer transitions
Determining Parameters
TCP in High-speed Networks
16Texas A&M University
– Before loss : Flow 1 at layer K, Flow 2 at layer (K-1)
– After loss, four possible cases
– For the worst case to happen, W1 is close to WK+1 and W2 is close to WK-1
– Substitute the worst-case values into the constraint on decrease behavior
Determining Parameters(Cont.)
TCP in High-speed Networks
17Texas A&M University
– Analysis yields an inequality
  • The stronger the inequality, the slower the increase in aggressiveness
– We choose
– If layering starts at WT, by substitution,
Determining Parameters
TCP in High-speed Networks
18Texas A&M University
Since after a loss at most one layer is dropped,

By substitution and simplification,

We choose β = 0.15

Choice of β
TCP in High-speed Networks
19Texas A&M University
TCP in High-speed Networks
Other Analyses
• Time to claim bandwidth
  – Window corresponding to the BDP is at layer K
  – [equation omitted in the transcript]
  – For TCP : T(slowstart) + (W - WT) RTTs
    (assuming slowstart ends when window = WT)
20Texas A&M University
TCP in High-speed Networks
Other Analyses
• Packet recovery time
  – Window reduction is by βW
  – After a loss, the increase is at least (K-1) per RTT
  – Thus, the time to recover from a loss is βW/(K-1) RTTs
  – For TCP, it is W/2 RTTs
  – Speed-up in packet recovery time
21Texas A&M University
TCP in High-speed Networks
22Texas A&M University
where K' is the layer for steady state window
Steady State Throughput
BW = ND / TD
TCP in High-speed Networks
23Texas A&M University
Response Curve
TCP in High-speed Networks
[Figure: response curves of TCP and LTCP; window (packets/RTT) from 10^0 to 10^6 vs. loss rate p from 10^-8 to 10^-2, log-log scale.]
24Texas A&M University
• TCP on high-speed links with low multiplexing
  – Design, analysis and evaluation of aggressive probing mechanism (LTCP)
    • Impact of high RTT
    • Impact on router buffers and loss rates (LTCP-RCS)
• TCP on high-speed links with high multiplexing
  – Impact of packet reordering (TCP-DCR)
• Future Work
Where We are ...
25Texas A&M University
• Two-fold dependence on RTT
  – The smaller the RTT, the faster the window grows
  – The smaller the RTT, the faster the aggressiveness increases
• Easy to offset this
  – Scale K using an “RTT compensation factor” KR
  – Thus, the increase behavior is W = W + (KR * K) / W
  – The decrease behavior is still W = (1 - β) * W
Impact of RTT*
TCP in High-speed Networks
* In collaboration with Saurabh Jain
26Texas A&M University
– Throughput ratio in terms of RTT and KR is
– When KR ∝ RTT^(1/3) : TCP-like RTT-unfairness
– When KR ∝ RTT : linear RTT unfairness (window size independent of RTT)
Impact of RTT*
TCP in High-speed Networks
* In collaboration with Saurabh Jain
27Texas A&M University
Window Comparison
TCP in High-speed Networks
28Texas A&M University
TCP in High-speed Networks
– Highspeed TCP : Modifies AIMD parameters based on different response function (no longer AIMD)
– Scalable TCP : Uses MIMD
– FAST : Based on Vegas core
– BIC TCP : Uses Binary/Additive Increase, Multiplicative Decrease
– H-TCP : Modifies AIMD parameters based on “time since last drop” (no longer AIMD)
Related Work
29Texas A&M University
Link Utilization
TCP in High-speed Networks
30Texas A&M University
Dynamic Link Sharing
TCP in High-speed Networks
31Texas A&M University
Effect of Random Loss
TCP in High-speed Networks
32Texas A&M University
Interaction with TCP
TCP in High-speed Networks
33Texas A&M University
RTT Unfairness
TCP in High-speed Networks
34Texas A&M University
• Why LTCP ?
  – Current design remains AIMD
  – Dynamically changes the increase factor
  – Simple to understand/implement
  – Retains convergence and fairness properties
  – RTT unfairness similar to TCP
Summary
TCP in High-speed Networks
35Texas A&M University
• TCP on high-speed links with low multiplexing
  – Design, analysis and evaluation of aggressive probing mechanism (LTCP)
    • Impact of high RTT
    • Impact on router buffers and loss rates (LTCP-RCS)
• TCP on high-speed links with high multiplexing
  – Impact of packet reordering (TCP-DCR)
• Future Work
Where We are ...
36Texas A&M University
Increased aggressiveness increases congestion events
Summary of Bottleneck Link Buffer Statistics
TCP in High-speed Networks
Impact on Packet Losses
37Texas A&M University
Increased aggressiveness increases stress on router buffers
TCP in High-speed Networks
Instantaneous Queue Length at Bottleneck Link Buffers
Impact on Router Buffers
38Texas A&M University
• Important to be aggressive for fast convergence
  – When link is underutilized
  – When new flows join/leave
• In steady state, aggressiveness should be tamed
  – Otherwise, self-induced loss rates can be high
TCP in High-speed Networks
Impact on Buffers and Losses
Motivation
[Figure: link utilization (Mbps, 5-second average) of flow1 and flow2 vs. time over 1500 seconds.]
39Texas A&M University
• Proposed solution
  – In steady state, use less aggressive TCP algorithms
  – Use a control switch to turn aggressiveness on/off
• Switching Logic
  – ON when bandwidth is available
  – OFF when link is in steady state
  – ON when network dynamics change (sudden decrease or increase in available bandwidth)
TCP in High-speed Networks
Impact on Buffers and Losses
40Texas A&M University
Using the ack-rate for identifying steady state

Raw ack-rate signal for flow1
TCP in High-speed Networks
Impact on Buffers and Losses
41Texas A&M University
• Using the ack-rate for switching
  – Trend of the ack-rate works well for our purpose
  – If (gradient = 0) : Aggressiveness OFF
    If (gradient ≠ 0) : Aggressiveness ON
  – Responsiveness of the raw signal does not require large buffers
  – Noisy raw signal smoothed using EWMA
TCP in High-speed Networks
Impact on Buffers and Losses
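A minimal sketch of such a switch follows. The class name, EWMA gain and gradient tolerance are assumptions made for this illustration, not values from the LTCP-RCS implementation:

```python
class RateControlSwitch:
    """Turn aggressiveness ON while the smoothed ack-rate still trends,
    OFF once its gradient is (approximately) zero."""

    def __init__(self, gain=0.2, eps=1e-3):
        self.gain, self.eps = gain, eps   # EWMA gain, gradient tolerance
        self.smoothed = None
        self.prev = None

    def update(self, ack_rate):
        # Smooth the noisy raw ack-rate signal with an EWMA.
        if self.smoothed is None:
            self.smoothed = ack_rate
        else:
            self.smoothed += self.gain * (ack_rate - self.smoothed)
        aggressive = (self.prev is not None
                      and abs(self.smoothed - self.prev) > self.eps)
        self.prev = self.smoothed
        return aggressive
```

While the flow is in steady state the smoothed ack-rate flattens out, so the gradient test keeps aggressiveness OFF; a jump in available bandwidth moves the signal and turns it back ON.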
42Texas A&M University
Instantaneous Queue Length at Bottleneck Link Buffers
Without Rate-based Control Switch With Rate-based Control Switch
TCP in High-speed Networks
Impact on Buffers and Losses
43Texas A&M University
Summary of Bottleneck Link Buffer Statistics
TCP in High-speed Networks
Impact on Buffers and Losses
44Texas A&M University
Convergence Properties
TCP in High-speed Networks
Impact on Buffers and Losses
45Texas A&M University
• Other Results
– TCP Tolerance slightly improved
– RTT Unfairness slightly improved
– At higher number of flows, improvement in loss rate is about a factor of 2
– Steady reverse traffic does not impact performance
– Highly varying traffic reduces benefits, improvement in loss rate is about a factor of 2
TCP in High-speed Networks
Impact on Buffers and Losses
46Texas A&M University
• Use of the rate-based control switch
  – Provides improvement in loss rates ranging from orders of magnitude to a factor of 2
  – Low impact on the other benefits of high-speed protocols
  – Benefits extend to other high-speed protocols (verified for BIC and H-TCP)
• Whichever high-speed protocol emerges as the next standard, the rate-based control switch could safely be used with it
Summary
TCP in High-speed Networks
Impact on Buffers and Losses
47Texas A&M University
• TCP on high-speed links with low multiplexing
  – Design, analysis and evaluation of aggressive probing mechanism (LTCP)
    • Impact of high RTT
    • Impact on router buffers and loss rates (LTCP-RCS)
• TCP on high-speed links with high multiplexing
  – Impact of packet reordering (TCP-DCR)
• Future Work
Where We are ...
48Texas A&M University
• TCP behavior : on three dupacks
  – retransmit the packet
  – reduce cwnd by half
• Caveat : Not all 3-dupack events are due to congestion
  – channel errors in wireless networks
  – reordering, etc.
• Result : Sub-optimal performance
TCP with Non-Congestion Events
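The standard fast-retransmit behavior being described can be sketched as follows; the state layout and function name are invented for the example and come from no real TCP stack:

```python
DUPACK_THRESHOLD = 3  # standard fast-retransmit trigger

def on_dupack(state):
    """state: dict with 'dupacks' and 'cwnd' counters (illustrative)."""
    state["dupacks"] += 1
    if state["dupacks"] == DUPACK_THRESHOLD:
        state["cwnd"] = max(1, state["cwnd"] // 2)  # halve cwnd
        return "retransmit"   # fast retransmit of the missing packet
    return None

state = {"dupacks": 0, "cwnd": 10}
```

The problem the talk addresses is that this window reduction is applied even when the three dupacks were caused by reordering or channel errors rather than by congestion.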
49Texas A&M University
Impact of Packet Reordering
• Packet Reordering in the Internet
  – Originally thought to be pathological
    • caused only by route flapping, router pauses, etc.
  – Later results claim higher prevalence of reordering
    • attributed to parallelism in Internet components
  – Newer measurements show
    • low levels of reordering in most parts of the Internet
    • high levels of reordering localized to some links/sites
    • reordering is a function of network load
50Texas A&M University
• Proposed Solution
  – Delay the time to infer congestion by τ
  – Essentially a tradeoff between wrongly inferring congestion and promptness of response to congestion
  – τ is chosen to be one RTT, to allow maximum time while avoiding an RTO
Impact of Packet Reordering
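The tradeoff can be sketched as below. This is a hedged illustration: the function and its timestamp interface are invented for the example; a real implementation would run a timer inside the TCP stack rather than compare timestamps after the fact:

```python
def dcr_decision(third_dupack_time, cum_ack_time, rtt):
    """Delay the congestion response by tau = one RTT after the third
    dupack; if a cumulative ACK for the missing packet arrives first,
    treat the event as reordering, otherwise as congestion."""
    tau = rtt  # congestion response delayed by one RTT
    if cum_ack_time is not None and cum_ack_time <= third_dupack_time + tau:
        return "reordering"   # no retransmission or window reduction
    return "congestion"       # retransmit and reduce the window
```

Choosing tau = one RTT gives a reordered packet the longest possible grace period while still responding before a retransmission timeout would fire.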
51Texas A&M University
• Evaluation conducted for different scenarios
  – Networks with only packet reordering, only congestion, and both
• Evaluation at multiple levels
  – Flow level (throughput, relative fairness, response to dynamic changes in traffic, etc.)
  – Protocol level (packet delivery time, RTT estimates, etc.)
  – Network level (bottleneck link droprate, queue length, etc.)
Impact of Packet Reordering
52Texas A&M University
TCP-DCR maintains high throughput even when a large percentage of packets is delayed
Throughput Vs Percentage of Delayed Packets (Normally Distributed Packet Delay, mean 25ms, stddev 8ms)
[Figure: throughput (Mbps) vs. percentage of packets delayed (0 to 35%), for TCP-SACK and TCP-DCR.]
Packet Reordering Only
Impact of Packet Reordering
53Texas A&M University
TCP-DCR maintains high throughput when packets are delayed by up to 0.8 × RTT

Packet Reordering Only
Throughput Vs Packet Delay (2% of Packets Delayed)
[Figure: throughput (Mbps) vs. packet delay (fraction of RTT, 0 to 1.4), for TCP-SACK and TCP-DCR.]
Impact of Packet Reordering
54Texas A&M University
Congestion Only (Fairness)
Per-flow throughput of TCP-DCR is similar to that of competing TCP-SACK flows on congested links
Throughput Vs Link Droprate due to Congestion
[Figure: throughput (Mbps) vs. link droprate due to congestion (0 to 7%), for TCP-SACK and TCP-DCR.]
Impact of Packet Reordering
55Texas A&M University
TCP-DCR utilizes throughput given up by TCP-SACK flows; TCP-SACK flows are not starved for bandwidth

Congestion and Packet Reordering
Throughput Vs Percentage of Delayed Packets
(Normally Distributed Packet Delay, mean 25 ms, stddev 8 ms; Congestion Droprate : 0.2 to 2%)
[Figure: throughput (Mbps) vs. percentage of packets delayed (0 to 35%), for TCP-SACK and TCP-DCR.]
Impact of Packet Reordering
56Texas A&M University
• Other Results
– RTT Estimation not affected
– Packet delivery time increased only for packets recovered via retransmission
– Convergence properties not affected
– Bottleneck queue similar with both Droptail and RED
– Bottleneck droprates similar with Droptail and RED
– Evaluation on Linux testbed
Impact of Packet Reordering
57Texas A&M University
• TCP on high-speed links with low multiplexing
  – Design, analysis and evaluation of aggressive probing mechanism
    • Impact of high RTT
    • Impact on router buffers and loss rates
• TCP on high-speed links with high multiplexing
  – Impact of packet reordering
• Future Work
Where We are ...
58Texas A&M University
Future Work
• Further Evaluation of LTCP / LTCP-RCS
• Further Evaluation of TCP-DCR
• Exploring Congestion Avoidance Techniques
59Texas A&M University
Future Work (1)
• Further Evaluation of LTCP / LTCP-RCS
  – Impact of delaying congestion response in high-speed networks
  – Evaluate alternate metrics for aggressiveness control
    • Investigate different smoothing techniques for the ack-rate signal
  – Experimental evaluation on the Internet2 testbed
    • LTCP
    • Rate-based Control Switch
    • Extent of reordering
60Texas A&M University
• Further Evaluation of TCP-DCR
  – Exploiting the benefits of robustness to reordering
    • End-node multi-homing
    • Network multi-homing/load balancing with packet-level decisions instead of flow-level decisions
Future Work (2)
61Texas A&M University
Exploring Congestion Avoidance Techniques
Future Work (3)
[Diagram: an aggressiveness control (RCS) selects between the aggressive algorithm (LTCP) when ON and the non-aggressive algorithm (TCP-SACK) when OFF.]

Motivation
62Texas A&M University
• Focus on the “non-aggressive algorithm” component
– Several options available
• TCP-SACK (Loss)
• CARD (Delay Gradient)
• Tri-S (Throughput Gradient)
• DUAL (Delay)
• TCP-Vegas (Throughput)
• CIM (Delay)
Exploring Congestion Avoidance Techniques
Future Work (3)
63Texas A&M University
• Should we use existing options ?
  – They rely on changes in RTT for detecting congestion
  – Research shows low correlation between RTT and packet loss
  – High false positives reduce achieved throughput
    • False positives may be due to forward- or reverse-traffic changes
Exploring Congestion Avoidance Techniques
Future Work (3)
64Texas A&M University
• Improved delay-based metric possible ?
  – Two factors affect variations in RTT
    • Persistent congestion
    • Transient burstiness in traffic
  – Probabilistically determine whether an RTT increase is related to congestion ?
  – Modify the response to compensate for the unreliability of the signal ?
    • Probabilistic response
    • Proportional response
Exploring Congestion Avoidance Techniques
Future Work (3)
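One possible reading of the "proportional response" idea is sketched below. This is purely an illustration of the open question above; the confidence heuristic and the `max_beta` parameter are invented for the example, not part of any proposed design:

```python
def proportional_decrease(cwnd, rtt, base_rtt, max_beta=0.5):
    """Reduce cwnd in proportion to the estimated queuing delay, instead
    of a binary respond/not-respond decision. All names illustrative."""
    queuing = max(0.0, rtt - base_rtt)
    confidence = min(1.0, queuing / base_rtt)  # crude congestion belief
    return cwnd * (1 - max_beta * confidence)
```

A weak delay signal then causes only a small reduction, so a false positive costs little throughput, much as RED's probabilistic marking spreads out the response.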
65Texas A&M University
• Compensate for the unreliability of the signal by modifying the response
  – Deviate from the current philosophy of binary response (Respond or NOT Respond)
  – Resulting behavior similar to RED
• Response by end points eliminates deployment issues
Exploring Congestion Avoidance Techniques
Future Work (3)
66Texas A&M University
Exploring Congestion Avoidance Techniques
Future Work (3)
Topology
67Texas A&M University
Exploring Congestion Avoidance Techniques
Future Work (3)
[Figure: cumulative probability vs. RTT (0.06 to 0.14 seconds) for Cases 1 through 5.]
68Texas A&M University
Exploring Congestion Avoidance Techniques
Future Work (3)
69Texas A&M University
Conclusions
• TCP on high-speed links with low multiplexing
  – LTCP
    • Retains AIMD, good convergence properties and controlled RTT-unfairness
  – RCS
    • Controls aggressiveness to reduce loss rates; can be used with other loss-based high-speed protocols
  – Future work
    • Alternate non-aggressive algorithms
• TCP on high-speed links with high multiplexing
  – TCP-DCR
    • Simple, yet effective
70Texas A&M University
• LTCP / LTCP-RCS
  – Sumitha Bhandarkar and A. L. Narasimha Reddy, "Rate-based Control of the Aggressiveness of Highspeed Protocols", currently under submission.
  – Sumitha Bhandarkar, Saurabh Jain and A. L. Narasimha Reddy, "LTCP : Layered Congestion Control for Highspeed Networks", journal paper, currently under submission.
  – Sumitha Bhandarkar, Saurabh Jain and A. L. Narasimha Reddy, "Improving TCP Performance in High Bandwidth High RTT Links Using Layered Congestion Control", Proceedings of the PFLDNet 2005 Workshop, February 2005.
• TCP-DCR
  – Sumitha Bhandarkar and A. L. Narasimha Reddy, "TCP-DCR: Making TCP Robust to Non-Congestion Events", Proceedings of Networking 2004, May 2004. Also presented as a student poster at ACM SIGCOMM 2003, August 2003.
  – Sumitha Bhandarkar, Nauzad Sadry, A. L. Narasimha Reddy and Nitin Vaidya, "TCP-DCR: A Novel Protocol for Tolerating Wireless Channel Errors", accepted for publication in IEEE Transactions on Mobile Computing (Vol. 4, No. 5), September/October 2005.
  – Sumitha Bhandarkar and A. L. Narasimha Reddy, "Improving the Robustness of TCP to Non-Congestion Events", IETF draft, work in progress, May 2005. Status: preparing for WGLC.
List of Publications
71Texas A&M University
Thank You
Questions ?
72Texas A&M University
Supporting Slides
73Texas A&M University
• HS-TCP
  Sally Floyd, “HighSpeed TCP for Large Congestion Windows”, RFC 3649, Dec 2003.
• Scalable TCP
  Tom Kelly, “Scalable TCP: Improving Performance in HighSpeed Wide Area Networks”, ACM Computer Communications Review, April 2003.
• FAST
  Cheng Jin, David X. Wei and Steven H. Low, “FAST TCP: motivation, architecture, algorithms, performance”, IEEE Infocom, March 2004.
• BIC
  Lisong Xu, Khaled Harfoush, and Injong Rhee, “Binary Increase Congestion Control for Fast Long-Distance Networks”, IEEE Infocom, March 2004.
• HTCP
  R. N. Shorten, D. J. Leith, J. Foy, and R. Kilduff, “H-TCP Protocol for High-speed Long-Distance Networks”, PFLDnet 2004, February 2004.
Work Related to LTCP
74Texas A&M University
Work Related to LTCP-RCS
• TCP-AFRICARyan King, Richard Baraniuk, Rudolf Riedi, “TCP-Africa: An Adaptive and Fair Rapid Increase Rule for Scalable TCP”, IEEE Infocom, March 2005.
75Texas A&M University
Work Related to Reordering
[1] V. Paxson, "End-to-end Internet packet dynamics," IEEE/ACM Transactions on Networking, 7(3):277--292, 1999.
[2] Jon C. R. Bennett, Craig Partridge, and Nicholas Shectman. “Packet reordering is not pathological network behavior,” IEEE/ACM Transactions on Networking, 1999.
[3] D. Loguinov and H. Radha, "End-to-End Internet Video Traffic Dynamics: Statistical Study and Analysis," IEEE INFOCOM, June 2002.
[4] G. Iannaccone, S. Jaiswal and C. Diot, "Packet Reordering Inside the Sprint Backbone," Tech. Report, TR01-ATL-062917, Sprint ATL, Jun. 2001.
[5] S. Jaiswal, G. Iannaccone, C. Diot, J. Kurose and D. Towsley, "Measurement and Classification of Out-of-sequence Packets in Tier-1 IP Backbone," INFOCOM 2003
[6] Yi Wang, Guohan Lu, Xing Li, “A Study of Internet Packet Reordering,” Proc. ICOIN 2004: 350-359.
[7] Xiaoming Zhou, Piet Van Mieghem, “Reordering of IP Packets in Internet,” Proc. PAM 2004: 237-246
[8]Ladan Gharai, Colin Perkins, Tom Lehman, “Packet Reordering, High Speed Networks and Transport Protocol Performance,” ICCCN 2004: 73-78.
76Texas A&M University
Work Related to TCP-DCR
• Blanton/Allman
  E. Blanton and M. Allman, “On Making TCP More Robust to Packet Reordering,” ACM Computer Communication Review, January 2002.
• RR-TCP
  M. Zhang, B. Karp, S. Floyd, and L. Peterson, “RR-TCP: A Reordering-Robust TCP with DSACK,” ICSI Technical Report TR-02-006, Berkeley, CA, July 2002.
• TCP-DCR IETF Draft
  http://www.ietf.org/internet-drafts/draft-ietf-tcpm-tcp-dcr-05.txt
77Texas A&M University
• CARD
  Raj Jain, "A Delay-Based Approach for Congestion Avoidance in Interconnected Heterogeneous Computer Networks," ACM CCR, vol. 19, pp. 56-71, Oct 1989.
• Tri-S
  Zheng Wang and Jon Crowcroft, "A New Congestion Control Scheme: Slow Start and Search (Tri-S)," ACM Computer Communication Review, vol. 21, pp. 32-43, Jan 1991.
• DUAL
  Zheng Wang and Jon Crowcroft, "Eliminating Periodic Packet Losses in the 4.3-Tahoe BSD TCP Congestion Control Algorithm," ACM CCR, vol. 22, pp. 9-16, Apr 1992.
• TCP-Vegas
  Lawrence S. Brakmo and Sean W. O'Malley, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," SIGCOMM '94.
• CIM
  J. Martin, A. Nilsson, and I. Rhee, “Delay-Based Congestion Avoidance for TCP,” IEEE/ACM Transactions on Networking, vol. 11, no. 3, pp. 356-369, June 2003.
Delay-based Schemes
78Texas A&M University
References
• Measurement sites showing TCP predominance
  http://ipmon.sprint.com/packstat/viewresult.php?0:protobreakdown:sj-20.0-040206:
  http://www.aarnet.edu.au/network/trafficvolume.html
  http://www.caida.org/outreach/resources/learn/trafficworkload/tcpudp.xml
• TCP Roadmap
  http://tools.ietf.org/wg/tcpm/draft-ietf-tcpm-tcp-roadmap/draft-ietf-tcpm-tcp-roadmap-04.txt
79Texas A&M University
Delay-based Schemes : Issues
[1] Ravi S. Prasad, Manish Jain, Constantinos Dovrolis, “On the Effectiveness of Delay-Based Congestion Avoidance”, PFLDnet 2004
[2] S. Biaz and N. Vaidya, “Is the Round-Trip Time Correlated with the Number of Packets in Flight?,” Internet Measurement Conference (IMC), Oct. 2003
[3] J. Martin, A. Nilsson, and I. Rhee, “Delay-Based Congestion Avoidance for TCP,” IEEE/ACM Transactions on Networking, vol. 11, no. 3, pp. 356–369, June 2003.
[4] Les Cottrell, Hadrien Bullot and Richard Hughes-Jones, "Evaluation of Advanced TCP stacks on Fast Long-Distance production Networks” PFLDNet 2004
80Texas A&M University
References
• PHENIX Project
  http://www.phenix.bnl.gov/
• CERN Courier, Vol. 45, No. 7
  http://www.cerncourier.com/main/toc/45/7
81Texas A&M University
References
• AIMD
D.-M. Chiu and R. Jain, “Analysis of the increase and decrease algorithms for congestion avoidance in computer networks,” Computer Networks and ISDN Systems, 17(1):1--14, June 1989.
82Texas A&M University
Response Curve High-speed Protocols
83Texas A&M University
Topology
Dynamic Link Sharing
[Figure: topology with two 2.4 Gbps, 10 ms access links and a 1 Gbps, 40 ms bottleneck; flows 1-4 start in order at 300-second intervals (0 to 900 s) and then stop in reverse order, flow4 first (1200 to 2100 s).]
84Texas A&M University
Topology
RCS : Convergence Properties
[Figure: link utilization (Mbps, 5-second average) of flow1 and flow2 vs. time, with Regions 1, 2 and 3 marked.]
-fair convergence :
Time for allocation (B,0)
Jain Fairness Index :
85Texas A&M University
References
• -fair convergence : Deepak Bansal, Hari Balakrishnan, Sally Floyd and Scott Shenker, “Dynamic Behavior of Slowly-Responsive Congestion Control Algorithms”, ACM SIGCOMM 2001.
• Jain Fairness Index : R. Jain, D-M. Chiu and W. Hawe, "A Quantitative Measure of Fairness and Discrimination For Resource Allocation in Shared Computer Systems," Technical Report TR-301, DEC Research Report, September 1984.
86Texas A&M University
Instantaneous Queue Length at Bottleneck Link Buffers
with Rate-based Control Switch
TCP in High-speed Networks
Impact on Buffers and Losses
87Texas A&M University
Loss Events and Packet Loss Rate with the RCS
TCP in High-speed Networks
Impact on Buffers and Losses
88Texas A&M University
Impact on Router Buffers and Packet Loss Rates
TCP in High-speed Networks
Convergence Properties
89Texas A&M University
Convergence Properties (Cont.)
TCP in High-speed Networks
Impact on Buffers and Losses
90Texas A&M University
Behavior with multiple Flows
TCP in High-speed Networks
Impact on Buffers and Losses
[Figure: packet loss rate (10^-7 to 10^-3, log scale) vs. number of flows (0 to 15), for LTCP and LTCP-RCS with buffers of 1 BDP and 1/3 BDP.]
91Texas A&M University
Behavior with multiple Flows
TCP in High-speed Networks
Impact on Buffers and Losses
92Texas A&M University
TCP Tolerance
TCP in High-speed Networks
Impact on Buffers and Losses
93Texas A&M University
TCP Tolerance
TCP in High-speed Networks
Impact on Buffers and Losses
94Texas A&M University
RTT Unfairness
TCP in High-speed Networks
Impact on Buffers and Losses
95Texas A&M University
RTT Unfairness
TCP in High-speed Networks
Impact on Buffers and Losses
96Texas A&M University
Topology
RCS : Impact of Steady Reverse Traffic
[Figure: topology with two 2.4 Gbps, 10 ms access links and a 1 Gbps, 40 ms bottleneck; a high-speed flow shares the bottleneck with a UDP flow whose link utilization steps through 100%, 75%, 50% and 25% over 1000 seconds.]
97Texas A&M University
Impact of Reverse Traffic
TCP in High-speed Networks
Impact on Buffers and Losses
98Texas A&M University
Impact of Reverse Traffic
TCP in High-speed Networks
Impact on Buffers and Losses
99Texas A&M University
Impact of Background Traffic with High Variance
TCP in High-speed Networks
Impact on Buffers and Losses
100
Texas A&M University
Impact of Background Traffic with High Variance
TCP in High-speed Networks
Impact on Buffers and Losses
101
Texas A&M University
Delay-based Metric for Aggressiveness Control
TCP in High-speed Networks
Impact on Buffers and Losses
Buffer Size = 1/3 BDP (5000 Packets)

RCS : Rate-based Control Switch : OFF when (throughput gradient = 0)
DCS : Delay-based Control Switch : OFF when (queuing delay × sending rate) > a threshold (= 1.65)
102
Texas A&M University
Exploring Congestion Avoidance Techniques
Future Work (3)
Motivation
103
Texas A&M University
Fairness Among Multiple Flows
TCP in High-speed Networks
Jain Fairness Index :
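The formula itself did not survive the slide export; Jain's fairness index for per-flow throughputs x_1, …, x_n is:

```latex
J(x_1,\ldots,x_n) \;=\; \frac{\left(\sum_{i=1}^{n} x_i\right)^{2}}{n \sum_{i=1}^{n} x_i^{2}}
```

J equals 1 when all flows receive equal throughput and falls toward 1/n as the allocation becomes maximally unfair.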
104
Texas A&M University
Interaction With Non-responsive Traffic
TCP in High-speed Networks
105
Texas A&M University
TCP in High-speed Networks
Impact on Buffers and Losses
Related Work
• TCP-AFRICA
  – Uses a delay-based metric for reducing losses in HS-TCP
    • Requires high-resolution timers
  – Convergence behavior not examined
    • Could potentially increase convergence time drastically
• TCP-FAST
  – Based on the Vegas core
  – Research shows issues that make it less effective for practical deployment
106
Texas A&M University
Congestion Only (Sudden Changes in Traffic)
Time to reach (55%,45%) allocation :
TCP-SACK : 3.10 s
TCP-DCR : 3.67 s
Response of TCP-DCR to sudden changes in traffic is similar to that
of TCP-SACK
[Figures: response of TCP-SACK and of TCP-DCR to a sudden change in FTP traffic; throughput (Mbps, 1-second bins) vs. time over 200 seconds.]
Impact of Packet Reordering
107
Texas A&M University
Congestion Only (Effect of Web-like Traffic)
[Figures: interaction of TCP-SACK and of TCP-DCR with web-like traffic; throughput (Mbps, 1-second bins) vs. time over 200 seconds.]
                            TCP-SACK    TCP-DCR
Aggregate Throughput        4.76 Mbps   4.73 Mbps
Throughput of Web-Traffic   4.84 Mbps   4.82 Mbps
Bulk transfer due to TCP-DCR does not affect background web-like traffic
Impact of Packet Reordering
108
Texas A&M University
Congestion Only (Background UDP traffic)

[Figure: response to UDP traffic; throughput (Mbps, 1-second bins) of TCP-SACK, TCP-DCR and the UDP flow vs. time over 200 seconds.]
TCP-DCR and TCP-SACK maintain relative fairness with dynamically changing traffic
Impact of Packet Reordering
109
Texas A&M University
Congestion Only (Packet Delivery Time)

[Figures: packet delivery time (seconds) vs. packet sequence number for TCP-SACK and for TCP-DCR.]
Time to recover lost packets :
TCP-SACK : 182.7 ms
TCP-DCR : 201.3 ms
TCP-DCR has higher packet recovery time for lost packets. Packet delivery time similar to TCP-SACK
during times of no congestion.
Impact of Packet Reordering
110
Texas A&M University
Problem Description
[Diagram: a packet delayed in the network causes reordering; the resulting dupacks trigger a retransmission and window reduction at the sender within one end-to-end RTT.]
TCP with Non-Congestion Events
111
Texas A&M University
[Diagram: with the congestion response delayed by one RTT, the delayed packet arrives before the timer expires, the timer is cancelled, and there is no retransmission or window reduction.]
Proposed Solution
TCP with Non-Congestion Events