TRANSCRIPT
3: Transport Layer 3b-1
Principles of Congestion Control
Congestion:
informally: “too many sources sending too much data too fast for the network to handle”
different from flow control!
Manifestations (symptoms):
lost packets (buffer overflow at routers)
long delays (queueing in router buffers)
a top-10 problem!
3: Transport Layer 3b-2
Causes/costs of congestion: scenario 1
Assumptions:
two senders, two receivers
one router, infinite buffers
no retransmission
C: link capacity
(figures: per-connection throughput λ_out levels off at C/2; delay grows large when congested, i.e. as the offered load nears link capacity, at maximum achievable throughput)
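The delay behavior can be illustrated with a toy queueing model (this assumes an M/M/1-style average delay, which is not stated on the slide; the numbers are illustrative): delay grows without bound as the arrival rate approaches link capacity.

```python
# Toy illustration (assumes an M/M/1-style average delay, not from the slide):
# queueing delay blows up as arrival rate lam approaches link capacity C.
def queueing_delay(lam, C):
    """Average delay 1/(C - lam), for arrival rate lam < capacity C (pkts/sec)."""
    assert lam < C
    return 1.0 / (C - lam)

[round(queueing_delay(lam, C=100), 4) for lam in (50, 90, 99, 99.9)]
# delay grows as lam -> C: [0.02, 0.1, 1.0, 10.0]
```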
3: Transport Layer 3b-3
Causes/costs of congestion: scenario 2
one router, finite buffers
sender retransmission of lost packet
3: Transport Layer 3b-4
Causes/costs of congestion: scenario 2
Ideally, no loss: λ_in = λ_out (top line in fig. (a) below)
“perfect” retransmission, only when loss: λ'_in > λ_out (fig. (b); e.g. 1/3 of bytes retransmitted, so 2/3 of bytes received are original data, per host)
retransmission of delayed (not lost) packet makes λ'_in larger (than the perfect case) for the same λ_out (fig. (a), second line)
“costs” of congestion:
more work (retransmission) for a given “goodput” (rate of original data delivered)
unneeded retransmissions: link carries multiple copies of a packet
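The 1/3-retransmission example can be checked with a one-line calculation (the 600 kbps offered-load figure is illustrative, not from the slide):

```python
# If a fraction of all transmitted bytes are retransmissions, goodput
# (original data delivered) is the remainder. Numbers here are illustrative.
def goodput(offered_load_bps, retransmit_fraction):
    return offered_load_bps * (1 - retransmit_fraction)

goodput(600e3, 1/3)  # 600 kbps offered, 1/3 retransmitted -> ~400 kbps goodput
```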
3: Transport Layer 3b-5
Causes/costs of congestion: scenario 3
four senders
multihop paths
timeout/retransmit
Q: what happens as λ_in and λ'_in increase?
3: Transport Layer 3b-6
Causes/costs of congestion: scenario 3
Another “cost” of congestion: when a packet is dropped, any upstream transmission capacity used for that packet was wasted!
3: Transport Layer 3b-7
Approaches towards congestion control
Two broad approaches towards congestion control:

End-end congestion control:
no explicit feedback from network
congestion inferred from end-system observed loss, delay
approach taken by TCP (IP provides no feedback)

Network-assisted congestion control:
routers provide feedback to end systems:
single bit indicating congestion (SNA, DECnet, TCP/IP ECN, ATM)
explicit rate at which sender should send
3: Transport Layer 3b-8
Case study: ATM ABR congestion control
ABR: available bit rate:
“elastic service”
if sender’s path “underloaded”: sender should use available bandwidth
if sender’s path congested: sender throttled to minimum guaranteed rate

RM (resource management) cells:
sent by sender, interspersed with data cells
bits in RM cell set by switches (“network-assisted”):
NI bit: no increase in rate (mild congestion)
CI bit: congestion indication
RM cells returned to sender by receiver, with bits intact
(note: switch can also generate RM cell)
3: Transport Layer 3b-9
Case study: ATM ABR congestion control
two-byte ER (explicit rate) field in RM cell:
congested switch may lower the ER value in the cell
sender’s send rate is thus the minimum supportable rate of all switches on the path

EFCI (explicit forward congestion indication) bit in data cells: set to 1 by a congested switch
if the data cell preceding an RM cell has EFCI set, receiver sets CI bit in the returned RM cell
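The ER behavior reduces to a running minimum along the path. A minimal sketch (function name and rate values are hypothetical, not the ATM cell format):

```python
# Hypothetical sketch (not the ATM cell format): each switch on the path may
# only lower the ER field of a passing RM cell, so the value returned to the
# sender is the minimum supportable rate of all switches on the path.
def forward_rm_cell(er, switch_supportable_rates):
    for rate in switch_supportable_rates:
        er = min(er, rate)  # a congested switch lowers ER; it never raises it
    return er

forward_rm_cell(er=155.0, switch_supportable_rates=[100.0, 40.0, 80.0])  # -> 40.0
```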
3: Transport Layer 3b-10
TCP Congestion Control
end-end control (no network assistance)
transmission rate limited by congestion window size, Congwin, over segments
w segments, each with MSS bytes, sent in one RTT:

throughput = (w * MSS) / RTT  bytes/sec
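The throughput formula can be evaluated directly (the window, MSS, and RTT values below are illustrative):

```python
# Throughput of a fixed congestion window: w segments of MSS bytes per RTT.
# Parameter values are illustrative.
def tcp_throughput(w, mss_bytes, rtt_sec):
    return w * mss_bytes / rtt_sec

tcp_throughput(w=10, mss_bytes=1460, rtt_sec=0.1)  # 10 segments of 1460 B per 100 ms RTT
```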
3: Transport Layer 3b-11
TCP congestion control: two “phases”
slow start
congestion avoidance

important variables:
Congwin
threshold: defines boundary between slow start phase and congestion avoidance phase

“probing” for usable bandwidth:
ideally: transmit as fast as possible (Congwin as large as possible) without loss
increase Congwin until loss (congestion)
on loss: decrease Congwin, then begin probing (increasing) again
3: Transport Layer 3b-12
TCP Slowstart
exponential increase (per RTT) in window size (not so slow!)
loss event: timeout (Tahoe TCP) and/or three duplicate ACKs (Reno TCP)

Slowstart algorithm:
initialize: Congwin = 1
for (each segment ACKed)
    Congwin++
until (loss event OR Congwin > threshold)

(figure: Host A sends one segment, waits one RTT for Host B’s ACK, then sends two segments, then four, doubling each RTT)
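The per-RTT growth can be sketched in a few lines (a simplification that treats the per-ACK increment as doubling per RTT; the function name and values are illustrative, not a real TCP):

```python
# Sketch of slow start's per-RTT growth: Congwin += 1 per ACK means the window
# roughly doubles each RTT, until it reaches threshold. Illustrative only.
def slow_start_windows(threshold, rtts):
    congwin, sizes = 1, []
    for _ in range(rtts):
        sizes.append(congwin)
        congwin = min(2 * congwin, threshold)  # doubling, capped at threshold
    return sizes

slow_start_windows(threshold=16, rtts=6)  # -> [1, 2, 4, 8, 16, 16]
```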
3: Transport Layer 3b-13
TCP Congestion Avoidance

Congestion avoidance algorithm:
/* slowstart is over */
/* Congwin > threshold */
Until (loss event) {
    after every w segments ACKed:
        Congwin++
}
threshold = Congwin/2
Congwin = 1
perform slowstart (1)

(1) TCP Reno skips slowstart (fast recovery) after three duplicate ACKs
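The two phases plus the loss reaction combine into a characteristic sawtooth. A toy Tahoe-style trace (illustrative parameters, not a real TCP implementation):

```python
# Toy Tahoe-style window trace (illustrative only): slow start doubles Congwin
# up to threshold, congestion avoidance adds 1 per RTT, and a loss event sets
# threshold = Congwin/2 and Congwin = 1.
def tahoe_trace(threshold, loss_at, rtts):
    congwin, trace = 1, []
    for t in range(rtts):
        trace.append(congwin)
        if t == loss_at:                           # simulated loss event
            threshold, congwin = congwin // 2, 1
        elif congwin < threshold:
            congwin = min(2 * congwin, threshold)  # slow start
        else:
            congwin += 1                           # congestion avoidance
    return trace

tahoe_trace(threshold=8, loss_at=6, rtts=10)  # -> [1, 2, 4, 8, 9, 10, 11, 1, 2, 4]
```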
3: Transport Layer 3b-14
TCP Fairness
Fairness goal: if N TCP sessions share same bottleneck link, each should get 1/N of link capacity

TCP congestion avoidance (ignoring slow start):
AIMD (additive increase, multiplicative decrease):
increase window by 1 per RTT
decrease window by factor of 2 on loss event

(figure: TCP connections 1 and 2 share a bottleneck router with link capacity R)
3: Transport Layer 3b-15
Why is TCP fair?
Assume two competing sessions (same MSS and RTT):
additive increase gives a slope of 1 as throughput increases
multiplicative decrease decreases throughput proportionally

(figure: connection 1 throughput vs. connection 2 throughput, axes 0 to R, with the “equal bandwidth share” diagonal; each congestion-avoidance cycle of additive increase followed by a factor-of-2 decrease on loss moves the operating point closer to the equal-share line)
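The convergence argument can be checked numerically with a toy model (assumes synchronized losses and equal RTTs; names and values are illustrative):

```python
# Toy model of two AIMD flows sharing a link of capacity R (illustrative):
# both add 1 unit per RTT until the link is full, then both halve on loss.
# Additive increase preserves the gap between the two rates, while each
# multiplicative decrease halves it, so the flows converge to a fair share.
def aimd_fairness(x1, x2, R, cycles):
    for _ in range(cycles):
        while x1 + x2 < R:           # additive increase: +1 each per RTT
            x1, x2 = x1 + 1, x2 + 1
        x1, x2 = x1 / 2, x2 / 2      # multiplicative decrease on loss
    return x1, x2

aimd_fairness(x1=80, x2=10, R=100, cycles=20)  # rates end up nearly equal
```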
3: Transport Layer 3b-16
Effects of TCP latencies
Q: what is the client’s latency from requesting an object from a WWW server to receiving it?
TCP connection establishment
data transfer delay

Notation, assumptions:
assume fixed congestion window, W, giving throughput of R bps
S: MSS (bits)
O: object size (bits)
no retransmissions (no loss, no corruption)

Two cases to consider:
WS/R > RTT + S/R: server receives ACK for first segment in window before a window’s worth of data is sent
WS/R < RTT + S/R: server must wait for an ACK after sending a window’s worth of data
3: Transport Layer 3b-17
Effects of TCP latencies
(figures: W=4 with WS/R > RTT + S/R; W=2 with WS/R < RTT + S/R)

Case 1: latency = 2·RTT + O/R
(O is the number of bits in the entire object)

Case 2: latency = 2·RTT + O/R + (K-1)·[S/R + RTT - WS/R]
where K = O/(WS) is the number of windows needed to cover the object
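Both cases can be evaluated with a small function (parameter values below are illustrative; K is rounded up to a whole number of windows, per the slide's assumptions of a fixed window and no loss):

```python
import math

# Sketch of the two latency cases above (sizes O, S in bits; rate R in bps).
# Assumes a fixed window W and no loss; parameter values are illustrative.
def tcp_latency(O, S, R, rtt, W):
    if W * S / R > rtt + S / R:                  # case 1: window keeps pipe full
        return 2 * rtt + O / R
    K = math.ceil(O / (W * S))                   # number of windows for the object
    return 2 * rtt + O / R + (K - 1) * (S / R + rtt - W * S / R)

# e.g. 100 kbit object, 1 kbit segments, 1 Mbps link, 100 ms RTT, W = 4:
tcp_latency(O=100e3, S=1e3, R=1e6, rtt=0.1, W=4)  # ~2.628 seconds (case 2)
```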
3: Transport Layer 3b-18
Summary on Transport Layer
principles behind transport layer services:
multiplexing/demultiplexing
reliable data transfer
flow control
congestion control
instantiation and implementation in the Internet: UDP, TCP

Next: leaving the network “edge” (application and transport layers) and moving into the network “core”